Re: Turing Machines

2011-08-21 Thread Bruno Marchal


On 21 Aug 2011, at 08:48, Evgenii Rudnyi wrote:


I have browsed papers on Loebian embodiment, for example

Life, Mind, and Robots
The Ins and Outs of Embodied Cognition
Hybrid Neural Systems, 2000 - Springer
http://acs.ist.psu.edu/misc/dirk-files/Papers/EmbodiedCognition/Life,%20Mind%20and%20Robots_The%20Ins%20and%20Outs%20of%20Embodied%20Cognition%20.pdf

It turns out that they are about not Löb's theorem but rather  
the biologist Jacques Loeb.


Do you know why robotics people do not use the Löb theorem in  
practice?


Logicians tend to work in an ivory tower, and many despise  
applications. Discoveries move slowly from one field to another. When  
the comma was invented, it took three hundred years for it to come  
into use in applied science.
Löbianity is a conceptual matter, more important for religion and  
fundamental questions. I do not advocate implementing Löbianity. It  
makes more sense to let machines develop their Löbianity through  
learning and evolution.


Löbianity is equivalent to correct self-reference for any entity  
capable of adding and multiplying numbers. It is not useful for  
controlling a machine. Löbian machines are not typical slaves. They can  
develop a strong distaste for authority. They don't need users.


Bruno





Evgenii


On 20.08.2011 16:22 Bruno Marchal said the following:


On 19 Aug 2011, at 20:18, Evgenii Rudnyi wrote:


On 18.08.2011 16:24 Bruno Marchal said the following:


On 17 Aug 2011, at 20:07, meekerdb wrote:


On 8/17/2011 10:36 AM, Evgenii Rudnyi wrote:

On 16.08.2011 20:47 meekerdb said the following:

On 8/16/2011 11:03 AM, Evgenii Rudnyi wrote:

Yes, this is why in my first post, I said consider
God's Turing machine (free from our limitations). Then
it is obvious that with the appropriate tape, a
physical system can be approximated to any desired
level of accuracy so long as it is predictable. Colin
said such models of physics or chemistry are
impossible, so I hope he elaborates on what makes these
systems unpredictable.


I have to repeat that the current simulation technology
just does not scale. With it even God will not help. The
only way that I could imagine is that God's Turing
machine is based on completely different simulation
technology (this however means that our current knowledge
of physical laws and/or numerics is wrong).


Scale doesn't matter at the level of theoretical
possibility. Bruno's UD is the most inefficient possible
way to compute this universe - but he only cares that it's
possible. All universal Turing machines are equivalent so
it doesn't matter what God's is based on. Maybe you just
mean the world is not computable in the sense that it is
nomologically impossible to compute it faster than just
letting it happen.


I understand what you say. On the other hand however, it is
still good to look at the current level of simulation
technology, especially when people make predictions on what
happens in the future (in other messages the possibility of
brain simulation and talk about physico-chemical processes).

From such a viewpoint, even a one-cell simulation is not
reachable in the foreseeable future. Hence, in my view,
after the discussion about theoretical limits it would be
good to look at the reality. It might help to rethink the
assumptions.

I would say that it is small practical things that force us
to reconsider our conceptions.

Evgenii


I agree with that sentiment. That's why I often try to think
of consciousness in terms of what it would mean to provide a
Mars Rover with consciousness. According to Bruno the ones
we've sent to Mars were already conscious, since their
computers were capable of Lobian logic.


I don't remember having said this. I even doubt that Mars Rover
is universal, although that might be serendipitously possible
(universality is very cheap), in which case it would be as
conscious as a human being under a high dose of salvia (a form of
consciousness quite disconnected from terrestrial realities). But
it is very probable that it is not Löbian. I don't see why they
would have given the induction axioms to the Mars Rover (the
induction axioms are what give the Löbian self-referential
power).



But clearly they did not have human-like consciousness (or
intelligence). I think it much more likely that we could make
a Mars Rover with consciousness and intelligence somewhat
similar to humans using von Neumann computers or artificial
neural nets than by trying to actually simulate a brain.


I think consciousness might be attributed to the virgin (non-
programmed) universal machine, but such consciousness is really
the basic consciousness of everyone, before the contingent
differentiation along the histories. LUMs, on the contrary, have
self-consciousness, even when basically virgin: they make a
distinction between themselves and some possible independent or
transcendental reality.

No doubt the truth is a bit more subtle, if only because
there are intermediate stages between UMs and LUMs.

Re: Turing Machines

2011-08-21 Thread Bruno Marchal


On 19 Aug 2011, at 23:32, meekerdb wrote:


On 8/19/2011 2:18 AM, Bruno Marchal wrote:

So do you have a LISP program that will make my computer Lobian?


It would be easier to do it by hand:
1) develop a first-order logic specification for your computer  
(that is, a first-order axiomatization of its data structures, including  
the elementary manipulations that your computer can do on them)
2) add a scheme of induction axioms on those data structures. For  
example, for the combinators, it would be like this:
"if P(K) and P(S), and if for all X and Y, P(X) & P(Y) implies  
P((X,Y)), then for all combinators X, P(X)". And this for all "P"  
describable in your language.


Just to clarify: P is some predicate, i.e. a function that returns #T  
or #F, and X and Y are some data structures (e.g. lists), and ( , ) is  
a combinator, i.e. a function from DxD => D for D the domain of X and  
Y.  Right?


Predicates are more syntactical objects. They can be interpreted as  
functions or relations, but in logic we explicitly distinguish  
syntax and semantics. So an arithmetical predicate is just a  
formula written with the usual symbols. Its intended meaning will be  
true or false, relative to some model. For example, the predicate "x  
is greater than or equal to y" can be written "Ez(y+z = x)".
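As a bounded brute-force illustration (my sketch, not anything from the thread): over the natural numbers the formula Ez(y+z = x) is satisfied exactly when x >= y; strict "greater than" would instead need a witness for Ez(y + s(z) = x).

```python
# Bounded check that Ez(y+z = x) holds over the naturals iff x >= y.
def holds(x, y, bound=100):
    """Evaluate Ez(y+z = x), searching for the witness z up to a bound
    (a complete search for the x, y ranges tested below)."""
    return any(y + z == x for z in range(bound))

assert all(holds(x, y) == (x >= y) for x in range(20) for y in range(20))
print("Ez(y+z = x) agrees with x >= y on 0..19")
```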


The semantics of the combinators is rather hard, and it took time before  
mathematicians found one. D^D needs to be isomorphic to D, because  
there is only one domain (the collection of all combinators). But Dana  
Scott solved the problem, finding a notion of continuous  
function that makes D^D isomorphic with D. Recursion theory also provides  
an intuitive model, where a number can be seen both as a function and  
as a number: just define a new operation "@" on the natural numbers by i  
@ j = phi_i(j). It is a bit nasty, given that such an operation will  
be partial (in case phi_i(j) does not stop).
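A toy version of that "@" operation (purely illustrative, not a real acceptable numbering of the partial computable functions; the table and names are invented) might index a few unary "programs" by natural numbers:

```python
# Toy illustration: index a few unary "programs" by natural numbers and
# define i @ j = phi_i(j).
programs = [
    lambda n: n + 1,   # phi_0: successor
    lambda n: 2 * n,   # phi_1: doubling
    lambda n: None,    # phi_2: stands in for a diverging program
                       # (we return None instead of looping forever)
]

def at(i, j):
    """i @ j = phi_i(j); None marks the places where phi_i(j) is undefined."""
    if i >= len(programs):
        return None    # undefined outside our toy table
    return programs[i](j)

print(at(0, 5))  # 6
print(at(1, 5))  # 10
print(at(2, 5))  # None -- the partial case
```

In a genuine numbering the partiality is unavoidable: there is no total computable test for whether phi_i(j) halts, which is why "@" is only a partial operation.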


Bruno






Brent



It will be automatically Löbian. And, yes, it should not be too  
difficult to write a LISP program doing that: starting  
from a first-order logical specification of an interpreter,  
extending it into a Löbian machine.


Bruno


--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.




http://iridia.ulb.ac.be/~marchal/






Re: Turing Machines

2011-08-20 Thread Evgenii Rudnyi



When I search on Google Scholar

lobian robot

then there is only one hit (I guess that this is Bruno's thesis).
When I search however

loebian robot

there are some more hits, for example on "Loebian embodiment". I do
not know what it means, but in my view it would be interesting
to build a robot with a Löbian logic and study it. In my view,
it is not enough to state that there is already some consciousness
there. It would be necessary to investigate what that actually
means: say, whether it has visual conscious experience, feels pain,
or something else.

It would be interesting to see what people do in this area. For
example, "Loebian embodiment" sounds interesting and it would be
nice to find some review about it.


"Löbian machine" is an idiosyncrasy that I use 

Re: Turing Machines

2011-08-20 Thread Bruno Marchal




"Löbian machine" is an idiosyncrasy that I use as a shorter expression  
for what logicians usually describe as "a sufficiently rich  
theory".

I have not yet decided how exactly to define them.

I hesitate between a very weak sense: any belief system (machine,  
theory) closed under the Löb rule (which says that you can deduce p from  
Bp -> p).
A stronger sense is: any belief system having Löb's formula in  
it, so that it contains the "formal Löb rule": B(Bp -> p) -> Bp.


But my current favorite definition is: any universal machine which can  
prove p -> Bp
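As an aside (a sketch under the assumption that B is read as the box of the provability logic GL; the frame and helper names are invented for the example), Löb's formula B(Bp -> p) -> Bp can be model-checked on a small finite Kripke frame. On any finite transitive irreflexive frame it holds at every world under every valuation of p, which is exactly the frame condition for GL:

```python
from itertools import product

worlds = [0, 1, 2]
R = {(0, 1), (0, 2), (1, 2)}  # transitive and irreflexive: a finite chain

def box(t):
    """B phi: true at w iff phi is true at every world accessible from w."""
    return {w: all(t[v] for (u, v) in R if u == w) for w in worlds}

def implies(a, b):
    return {w: (not a[w]) or b[w] for w in worlds}

def lob_valid():
    """Check B(Bp -> p) -> Bp at every world, for every assignment to p."""
    for bits in product([False, True], repeat=len(worlds)):
        p = dict(zip(worlds, bits))
        lob = implies(box(implies(box(p), p)), box(p))
        if not all(lob.values()):
            return False
    return True

print(lob_valid())  # True on this frame
```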

Re: Turing Machines

2011-08-19 Thread meekerdb






Re: Turing Machines

2011-08-19 Thread Evgenii Rudnyi





Re: Turing Machines

2011-08-19 Thread Bruno Marchal


On 18 Aug 2011, at 20:02, meekerdb wrote:




You didn't say it explicitly.  It was my inference that the  
computer's learning algorithms would include induction.


Yes, and that makes them universal. To make them Löbian, you need  
them not just to *do* induction: they have to believe in  
induction.


Roughly speaking, if *i* = "obeys the induction rule", then for a UM  
*i* is true, but that's all. For a LUM it is not just that *i* is  
true: *i* is believed by the machine. For a UM, *i* is true but  
B*i* is false; for a LUM, both *i* and B*i* are true.


Of course, the induction here is basically induction on the  
numbers(*). It can be related to learning, anticipation, or  
inductive inference, but the relation is not identity.



(*) The infinite scheme of axioms:  F(0) & (for all n (F(n) -> F(s(n)))) ->  
for all n F(n),
with F any arithmetical formula, that is, a formula built with the  
logical symbols and the arithmetical symbols {0, s, +, *}.
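As a concrete, purely illustrative instance of this scheme (my sketch, not from the thread): take F(n) to be "0 + 1 + ... + n equals n(n+1)/2" and check both premises and the conclusion mechanically up to a bound. Genuine induction, of course, concludes F(n) for all n, which no bounded check can do:

```python
# One instance of the induction scheme
#   F(0) & (for all n, F(n) -> F(s(n))) -> for all n, F(n)
# with F(n): "0 + 1 + ... + n equals n*(n+1)/2".
def F(n):
    return sum(range(n + 1)) == n * (n + 1) // 2

N = 1000
base = F(0)                                           # premise 1
step = all((not F(n)) or F(n + 1) for n in range(N))  # premise 2 (bounded)
assert base and step
assert all(F(n) for n in range(N))                    # conclusion (bounded)
print("induction instance checked up to", N)
```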




Bruno








Re: Turing Machines

2011-08-18 Thread meekerdb



So do you have a LISP program that will make my computer Lobian?

Brent




Re: Turing Machines

2011-08-18 Thread Bruno Marchal


On 18 Aug 2011, at 19:05, meekerdb wrote:


On 8/18/2011 7:24 AM, Bruno Marchal wrote:
I agree with that sentiment.  That's why I often try to think of  
consciousness in terms of what it would mean to provide a Mars  
Rover with consciousness.  According to Bruno the ones we've sent  
to Mars were already conscious, since their computers were capable  
of Lobian logic.


I don't remember having said this. I even doubt that Mars Rover is  
universal, although that might be serendipitously possible  
(universality is very cheap), in which case it would be as  
conscious as a human being under a high dose of salvia (a form of  
consciousness quite disconnected from terrestrial realities). But  
it is very probable that it is not Löbian. I don't see why they  
would have given the induction axioms to Mars Rover (the induction  
axioms is what gives the Löbian self-referential power).


You didn't say it explicitly.  It was my inference that the  
computer's learning algorithms would include induction.


Yes, and that makes them universal. To make them Löbian, you need them  
to not just *do* induction, but they have to believe in induction.


Roughly speaking. If *i* =  "obeys the induction rule", For a UM *i*  
is true, but that's all. For a LUM is is not just that *i* is true,  
but *i*is believed by the machine. For a UM *i* is true but B*i* is  
false. For a LUM we have both *i* is true, and B*i* is true.


Of course the induction here is basically the induction on numbers(*).  
It can be related to learning, anticipating or doing inductive  
inference, but the relation is not identity.



(*) The infinity of axioms:  F(0) & for all n (P(n) -> P(s(n)) ->.   
for all n P(n).
With F any arithmetical formula, that is a formula build with the  
logical symbol, and the arithmetical symbols {0, s, +, *}.




Brent

--
You received this message because you are subscribed to the Google  
Groups "Everything List" group.

To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/everything-list?hl=en.




http://iridia.ulb.ac.be/~marchal/






Re: Turing Machines

2011-08-18 Thread meekerdb

On 8/18/2011 7:24 AM, Bruno Marchal wrote:
I agree with that sentiment.  That's why I often try to think of 
consciousness in terms of what it would mean to provide a Mars Rover 
with consciousness.  According to Bruno the ones we've sent to Mars 
were already conscious, since their computers were capable of Lobian 
logic.


I don't remember having said this. I even doubt that Mars Rover is 
universal, although that might be serendipitously possible 
(universality is very cheap), in which case it would be as conscious 
as a human being under a high dose of salvia (a form of consciousness 
quite disconnected from terrestrial realities). But it is very 
probable that it is not Löbian. I don't see why they would have given 
the induction axioms to the Mars Rover (the induction axioms are what give 
the Löbian self-referential power).


You didn't say it explicitly.  It was my inference that the computer's 
learning algorithms would include induction.


Brent




Re: Turing Machines

2011-08-18 Thread Bruno Marchal


On 17 Aug 2011, at 20:07, meekerdb wrote:


On 8/17/2011 10:36 AM, Evgenii Rudnyi wrote:

On 16.08.2011 20:47 meekerdb said the following:

On 8/16/2011 11:03 AM, Evgenii Rudnyi wrote:

Yes, this is why in my first post, I said consider God's Turing
machine (free from our limitations). Then it is obvious that
with the appropriate tape, a physical system can be approximated
to any desired level of accuracy so long as it is predictable.
Colin said such models of physics or chemistry are impossible, so
I hope he elaborates on what makes these systems unpredictable.


I have to repeat that the current simulation technology just does
not scale. With it even God will not help. The only way that I
could imagine is that God's Turing machine is based on completely
different simulation technology (this however means that our
current knowledge of physical laws and/or numerics is wrong).


Scale doesn't matter at the level of theoretical possibility. Bruno's
UD is the most inefficient possible way to compute this universe -
but he only cares that it's possible. All universal Turing machines
are equivalent, so it doesn't matter what God's is based on. Maybe you
just mean the world is not computable in the sense that it is
nomologically impossible to compute it faster than just letting it
happen.
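The equivalence claim is easy to see in miniature: any concrete Turing machine is just a transition table, and a universal machine is the same fetch-dispatch loop with a table read off the tape itself. The sketch below is an illustrative toy (the names `run_tm` and `succ` are mine, not anything from the thread); it runs a two-rule machine computing the unary successor.

```python
# Minimal Turing-machine simulator: a dict-based transition table is
# enough to run any concrete machine; a universal machine is just this
# same loop with the table itself encoded on the tape.

def run_tm(table, tape, state="q0", head=0, max_steps=10_000):
    """Run a TM until it enters the 'halt' state; return the tape as a string."""
    cells = dict(enumerate(tape))            # sparse tape, blank = '_'
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        new_symbol, move, state = table[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example machine: scan right over a block of 1s, append one more 1
# (unary successor), then halt. '_' is the blank symbol.
succ = {
    ("q0", "1"): ("1", "R", "q0"),    # skip existing 1s
    ("q0", "_"): ("1", "R", "halt"),  # write the extra 1, halt
}

print(run_tm(succ, "111"))  # -> 1111
```

Which universal machine executes this table is irrelevant to the result, only to the running time — which is the whole point of the "God's machine" framing above.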


I understand what you say. On the other hand, it is still good to look
at the current level of simulation technology, especially when people
make predictions about what will happen in the future (in other
messages, the possibility of brain simulation and talk about
physico-chemical processes).


From such a viewpoint, even the simulation of a single cell is not
reachable in the foreseeable future. Hence, in my view, after the
discussion about theoretical limits it would be good to look at
reality. It would probably help to rethink the assumptions.


I would say that it is small practical things that force us to  
reconsider our conceptions.


Evgenii


I agree with that sentiment.  That's why I often try to think of  
consciousness in terms of what it would mean to provide a Mars Rover  
with consciousness.  According to Bruno the ones we've sent to Mars  
were already conscious, since their computers were capable of Lobian  
logic.


I don't remember having said this. I even doubt that Mars Rover is  
universal, although that might be serendipitously possible  
(universality is very cheap), in which case it would be as conscious  
as a human being under a high dose of salvia (a form of consciousness  
quite disconnected from terrestrial realities). But it is very  
probable that it is not Löbian. I don't see why they would have given  
the induction axioms to the Mars Rover (the induction axioms are what give  
the Löbian self-referential power).



But clearly they did not have human-like consciousness (or  
intelligence).  I think it much more likely that we could make a  
Mars Rover with consciousness and intelligence somewhat similar to  
humans using von Neumann computers or artificial neural nets  than  
by trying to actually simulate a brain.


I think consciousness might be attributed to the virgin (non-programmed)
universal machine, but such consciousness is really the basic
consciousness of everyone, before the contingent differentiation of
the histories. LUMs, on the contrary, have a self-consciousness, even
when basically virgin: they make a distinction between themselves and
some possible independent or transcendental reality.


No doubt the truth is a bit more subtle, if only because there are
intermediate stages between UMs and LUMs.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: Turing Machines

2011-08-18 Thread Bruno Marchal


On 17 Aug 2011, at 19:49, Evgenii Rudnyi wrote:


On 17.08.2011 02:01 Jason Resch said the following:

On Tue, Aug 16, 2011 at 1:03 PM, Evgenii Rudnyi
wrote:


On 15.08.2011 23:42 Jason Resch said the following:


...


But all of this is an aside from point that I was making
regarding the power and versatility of Turing machines.  Those
who think Artificial Intelligence is not possible with computers
must show what about the brain is unpredictable or unmodelable.



Why that? I guess that you should prove first that consciousness
is predictable and could be modeled.



Everyone (except perhaps the substance dualists, mysterians, and
solipsists -- each a non-scientific or anti-scientific philosophy)
believes that the brain (at the lowest levels) operates according to simple
and predictable rules. Also note, the topic of the above was not
consciousness, but intelligence.



The matter is not about our beliefs (though it would be interesting
to look at the theology that Bruno develops).


Yes, the point was about intelligence, but the argument for success
(if I have understood it correctly) was that it is possible to
simulate even the whole universe. To this end, in my view, it would
be good first to develop a theory of consciousness. Here, however,
the theory is missing (I do not know if you agree with Bruno's
theory). As concerns dualism, let me quote Jeffrey Gray:


p. 73. “If conscious experiences are epiphenomenal, like the melody
whistled by the steam engine, there is not much more, scientifically
speaking, to say about them. So to adopt epiphenomenalism is a way
of giving up on the Hard Problem. But it is too early to give up.
Science has only committed itself to serious consideration of the
problem within the last couple of decades. To find causal powers for
conscious events will not be easy. But the search should be
continued. And, if it leads us back to dualism, so be it.”


Well, with the comp hyp it is "just" a coming back to Plato. We keep
monism, but abandon materialism/physicalism. Advantage: this solves
the mind problem with the usual computer science, and above all, it
gives a realm where we can see where the laws of physics come from,
and why there is an appearance of matter.
This goes toward a unification of all science (forces and loves
included), which is then 100% theological, and 99.99...9% mathematical.


Bruno

http://iridia.ulb.ac.be/~marchal/






Re: Turing Machines

2011-08-17 Thread Craig Weinberg
On Aug 17, 1:30 pm, meekerdb  wrote:

>
> But they are not all consciousness = awareness-of-awareness.  And the
> decision to act precedes the awareness of the decision - which is
> evidence against the idea that consciousness is in control of one's
> decisions, cf. Grey Walter's carousel experiment.  Even in common
> experience one makes many decisions without being aware of them, even
> decisions that require perception.  

But it is still you making the decisions. If you are driving a car,
'you' are 'aware' of driving the car and 'you' decide how to drive it.
If you are asleep, you would not be conscious and not be driving the
car.

If you find that you have been daydreaming while driving, you are
experiencing being aware of other awarenesses - imagining dinner,
murdering your neighbors, etc, while your driving has been pushed down
further on the awareness stack. You are still passively aware that you
are in fact in a car and driving, just as you are passively aware of
where you were born and how much money is in your wallet, but you are
not actively aware of that awareness. It's nothing like a computer
which will either have something in memory or not, in storage or not.
You may not know if you remember something until you try, and what you
think you remember may change over time.

So awareness as it pertains to milliseconds disqualifies consciousness
as awareness of awareness, because that is a much larger participation
of entities and it takes longer. What you see as the earliest neuron
spikes are the initiation of that impulse, but it takes a while for
all of the emotional, cognitive, and motor skills to join in. The
process as a whole however is one single event from the first person
perspective. There is no early neuron spike without the total event.
The problem is that you are describing a 1p event in 3p terms, so that
time is a Newtonian t. But that is not the level at which human
sensorimotive awareness occurs. It's like trying to watch a TV show by
putting your eyeball right up against the TV screen.

>So it is not plausible that
> consciousness makes the decisions.

Whatever you call what is making the decisions, it is your proprietary
sensorimotive awareness and not generic probabilistic
electromagnetism. Decisions are semantic and meaningful, whether they
are fully in the front of one's active awareness, or in the 'back of
one's mind'. This is the important distinction. The alternative is
absurd. It means that you have no choice but to read this, and that
what it says makes no difference to your neurology, so therefore there
is no point in reading anything. You're just a puppet of random
genetic permutations who has the accidental misfortune of thinking
that it is alive. It's just silly. It makes transubstantiation seem
scientific in comparison.

>  Consciousness may indeed occur in
> parallel and sometimes correlate with decisions and sometimes not.  But
> the correlation is due to a common, subconscious, cause.

Sure. Your high-level verbal 'consciousness' is not necessarily able
to push its will down the spinal cord. There are all kinds of
protocols. You have to be in the mood to do something; you can't
always talk yourself into it.


> > Not like an assembly line - like a living, flowing interaction amongst
> > multiple layers of external relations and internal perceptions, the
> > parts and the wholes. Without perception and relativity, there are
> > only parts.
>
> >> The rest of the above paragraph seems to be an
> >> attempt to save dualism by saying why the causal spirit comes after the
> >> motor effect.  I have no problem being alive and conscious with
> >> consciousness coming after the decision.  The decision was still made by
> >> me.  I just don't conceive "me" as being so small as my consciousness.
>
> > You're applying a broad definition of consciousness at the beginning
> > and a narrow definition to consciousness at the end and using the
> > mismatch to beg the question.
>
> I didn't refer to "consciousness" at the beginning.  I said what happens
> first is the activity of neurons - not necessarily conscious.  You are
> attributing inconsistencies to me to create a strawman.  At the end I'm
> using your definition of consciousness "awareness of awareness".

Sorry, not intentionally. I was just rushing. It still seems
inconsistent in how you are using consciousness, sometimes as
awareness of awareness (verbal let's say) and sometimes as decision
maker (not necessarily verbal but instinctual, habitual,
whatever...not asleep or comatose).

> > I have no problem with recognition
> > coming after cognition after awareness after detection, but I have a
> > problem with conflating all of those as 'consciousness' and then
> > making a special case for electromagnetic activity in the brain not
> > corresponding to anything experiential out of anthropomorphic
> > superstition. Just because 'you' don't think you feel anything doesn't
> > mean that what you actually are doesn't dete

Re: Turing Machines

2011-08-17 Thread meekerdb

On 8/17/2011 10:36 AM, Evgenii Rudnyi wrote:

On 16.08.2011 20:47 meekerdb said the following:

On 8/16/2011 11:03 AM, Evgenii Rudnyi wrote:

Yes, this is why in my first post, I said consider God's Turing
machine (free from our limitations). Then it is obvious that
with the appropriate tape, a physical system can be approximated
to any desired level of accuracy so long as it is predictable.
Colin said such models of physics or chemistry are impossible, so
I hope he elaborates on what makes these systems unpredictable.


I have to repeat that the current simulation technology just does
not scale. With it even God will not help. The only way that I
could imagine is that God's Turing machine is based on completely
different simulation technology (this however means that our
current knowledge of physical laws and/or numerics is wrong).


Scale doesn't matter at the level of theoretical possibility. Bruno's
UD is the most inefficient possible way to compute this universe -
but he only cares that it's possible. All universal Turing machines
are equivalent so it doesn't matter what God's is based on. Maybe you
just mean the world is not computable in the sense that it is
nomologically impossible to compute it faster than just letting it
happen.


I understand what you say. On the other hand, it is still good to look 
at the current level of simulation technology, especially when people 
make predictions about what will happen in the future (in other 
messages, the possibility of brain simulation and talk about 
physico-chemical processes).


From such a viewpoint, even the simulation of a single cell is not 
reachable in the foreseeable future. Hence, in my view, after the 
discussion about theoretical limits it would be good to look at 
reality. It would probably help to rethink the assumptions.


I would say that it is small practical things that force us to 
reconsider our conceptions.


Evgenii


I agree with that sentiment.  That's why I often try to think of 
consciousness in terms of what it would mean to provide a Mars Rover 
with consciousness.  According to Bruno the ones we've sent to Mars were 
already conscious, since their computers were capable of Lobian logic.  
But clearly they did not have human-like consciousness (or 
intelligence).  I think it much more likely that we could make a Mars 
Rover with consciousness and intelligence somewhat similar to humans 
using von Neumann computers or artificial neural nets  than by trying to 
actually simulate a brain.


Brent




Re: Turing Machines

2011-08-17 Thread Evgenii Rudnyi

On 17.08.2011 02:01 Jason Resch said the following:

On Tue, Aug 16, 2011 at 1:03 PM, Evgenii Rudnyi
wrote:


On 15.08.2011 23:42 Jason Resch said the following:


...


But all of this is an aside from point that I was making
regarding the power and versatility of Turing machines.  Those
who think Artificial Intelligence is not possible with computers
must show what about the brain is unpredictable or unmodelable.



Why that? I guess that you should prove first that consciousness
is predictable and could be modeled.



Everyone (except perhaps the substance dualists, mysterians, and
solipsists -- each a non-scientific or anti-scientific philosophy)
believes that the brain (at the lowest levels) operates according to simple
and predictable rules. Also note, the topic of the above was not
consciousness, but intelligence.



The matter is not about our beliefs (though it would be interesting to 
look at the theology that Bruno develops).


Yes, the point was about intelligence, but the argument for success (if I 
have understood it correctly) was that it is possible to simulate even 
the whole universe. To this end, in my view, it would be good first to 
develop a theory of consciousness. Here, however, the theory is missing 
(I do not know if you agree with Bruno's theory). As concerns dualism, 
let me quote Jeffrey Gray:


p. 73. “If conscious experiences are epiphenomenal, like the melody 
whistled by the steam engine, there is not much more, scientifically 
speaking, to say about them. So to adopt epiphenomenalism is a way of 
giving up on the Hard Problem. But it is too early to give up. Science 
has only committed itself to serious consideration of the problem within 
the last couple of decades. To find causal powers for conscious events 
will not be easy. But the search should be continued. And, if it leads 
us back to dualism, so be it.”


Evgenii
--
http://blog.rudnyi.ru




Re: Turing Machines

2011-08-17 Thread Evgenii Rudnyi

On 16.08.2011 20:47 meekerdb said the following:

On 8/16/2011 11:03 AM, Evgenii Rudnyi wrote:

Yes, this is why in my first post, I said consider God's Turing
machine (free from our limitations). Then it is obvious that
with the appropriate tape, a physical system can be approximated
to any desired level of accuracy so long as it is predictable.
Colin said such models of physics or chemistry are impossible, so
I hope he elaborates on what makes these systems unpredictable.


I have to repeat that the current simulation technology just does
not scale. With it even God will not help. The only way that I
could imagine is that God's Turing machine is based on completely
different simulation technology (this however means that our
current knowledge of physical laws and/or numerics is wrong).


Scale doesn't matter at the level of theoretical possibility. Bruno's
UD is the most inefficient possible way to compute this universe -
but he only cares that it's possible. All universal Turing machines
are equivalent so it doesn't matter what God's is based on. Maybe you
just mean the world is not computable in the sense that it is
nomologically impossible to compute it faster than just letting it
happen.


I understand what you say. On the other hand, it is still good to look 
at the current level of simulation technology, especially when people 
make predictions about what will happen in the future (in other messages, 
the possibility of brain simulation and talk about physico-chemical 
processes).


From such a viewpoint, even the simulation of a single cell is not 
reachable in the foreseeable future. Hence, in my view, after the 
discussion about theoretical limits it would be good to look at 
reality. It would probably help to rethink the assumptions.


I would say that it is small practical things that force us to 
reconsider our conceptions.


Evgenii
--
http://blog.rudnyi.ru




Re: Turing Machines

2011-08-17 Thread meekerdb

On 8/17/2011 4:53 AM, Craig Weinberg wrote:

On Aug 17, 12:01 am, meekerdb  wrote:
   

On 8/16/2011 6:57 PM, Craig Weinberg wrote:

 

Consciousness is a very broad term, with different meanings especially
in different contexts; medical vs philosophical vs vernacular,
macrocosmic vs microcosmic, legal, ethical, etc. For the mind/body
question and Turing emulation I try to use 'consciousness'
specifically to mean 'awareness of awareness'. The other relevant
concept though is perceptual frame of reference, or PRIF. In this
case, when you put awareness under a microscope, the monolithic sense
of 'consciousness' is discarded in favor of a more granular sense of
multiple stages of awarenesses feeding back on each other.
   

AKA "subconscious".
 

Yes. The basement level of consciousness, not unconscious.

   

When you
look at electrical transmission in the brain over milliseconds and
microseconds, you have automatically shifted outside of the realm of
vernacular consciousness and into microconscious territories.
   
 

Just as the activity of cells as a whole is beyond the scope of what
can be understood by studying molecules alone, the study of the
microconscious is too short term to reveal the larger, slower pattern
of our ordinary moment to moment awareness of awareness. Raw awareness
is fast, but awareness of awareness is slower; the ability of
awareness of awareness to be communicated through motor channels is
slower still, and the propagation of motor intention through the
efferent nerves through the spinal cord is quite a bit slower. It's
really not comparing apples to apples then if you look at the very
earliest fraction of a second of an experience and compare it with the
time it takes for the experience to be fully explicated through all of
the various perceptual and cognitive resources. It's completely
misleading and mischaracterizes awareness in yet another attempt to
somehow prove for the sake of validating our third person
observations, that in fact we cannot really be alive and conscious, we
just think we are. I think it's like a modern equivalent of 'angels
dancing on the head of a pin'.
   

So you admit that what determines your behavior occurs
before you are aware of it, i.e. conscious.
 

No. Your behavior correlates directly with your awareness of the
stimuli and with the earliest neurological activity.Your awareness of
your behavior, and your awareness of your ability to report on it, and
the reporting itself (and their neural correlates) come later.

   

And what happens first is
the activity of neurons.
 

It's all neuron activity, and it's all different scales of detection-
sense-awareness-cognition experience. You are not able to let go of
the idea that it's a sequence where first the physical happens and
then the 'consciousness' happens. It's two parallel sequences which
can and do run inductively. Your motive current of intention pushes
the electric current in the brain - they are the same thing. Like
this: http://www.splung.com/fields/images/induction/transformer.svg
   


But they are not all consciousness = awareness-of-awareness.  And the 
decision to act precedes the awareness of the decision - which is 
evidence against the idea that consciousness is in control of one's 
decisions, cf. Grey Walter's carousel experiment.  Even in common 
experience one makes many decisions without being aware of them, even 
decisions that require perception.  So it is not plausible that 
consciousness makes the decisions.  Consciousness may indeed occur in 
parallel and sometimes correlate with decisions and sometimes not.  But 
the correlation is due to a common, subconscious, cause.



Not like an assembly line - like a living, flowing interaction amongst
multiple layers of external relations and internal perceptions, the
parts and the wholes. Without perception and relativity, there are
only parts.

   

The rest of the above paragraph seems to be an
attempt to save dualism by saying why the causal spirit comes after the
motor effect.  I have no problem being alive and conscious with
consciousness coming after the decision.  The decision was still made by
me.  I just don't conceive "me" as being so small as my consciousness.
 

You're applying a broad definition of consciousness at the beginning
and a narrow definition to consciousness at the end and using the
mismatch to beg the question.


I didn't refer to "consciousness" at the beginning.  I said what happens 
first is the activity of neurons - not necessarily conscious.  You are 
attributing inconsistencies to me to create a strawman.  At the end I'm 
using your definition of consciousness "awareness of awareness".



I have no problem with recognition
coming after cognition after awareness after detection, but I have a
problem with conflating all of those as 'consciousness' and then
making a special case for electromagnetic activity in the brain not
corresponding to anything experientia

Re: Turing Machines

2011-08-17 Thread benjayk


Jason Resch-2 wrote:
> 
> On Tue, Aug 16, 2011 at 9:32 AM, benjayk
> wrote:
> 
>>
>>
>> Jason Resch-2 wrote:
>> >
>> > On Tue, Aug 16, 2011 at 7:03 AM, benjayk
>> > wrote:
>> >
>> >>
>> >>
>> >> Craig Weinberg wrote:
>> >> >
>> >> > On Aug 15, 10:43 pm, Jason Resch  wrote:
>> >> >> I am more worried for the biologically handicapped in the future.
>> >> >>  Computers
>> >> >> will get faster, brains won't.  By 2029, it is predicted $1,000
>> worth
>> >> of
>> >> >> computer will buy a human brain's worth of computational power.  15
>> >> years
>> >> >> later, you can get 1,000 X the human brain's power for $1,000.
>> >> Imagine:
>> >> >> the
>> >> >> simulated get to experience 1 century for each month the humans
>> with
>> >> >> biological brains experience.  Who will really be alive then?
>> >> >
>> >> > Speed and power is for engines, not brains. Good ideas don't come
>> from
>> >> > engines.
>> >> >
>> >> > Craig
>> >> >
>> >> I agree. It is very narrow to think computational power is the key
>> >> to rich experience and high intelligence. The real magic is what is
>> >> done with the hardware. And honestly I see no reason to believe that
>> >> we somehow magically develop amazingly intelligent software.
>> >
>> >
>> > Neural imaging/scanning rates are also doubling every year.  The hope
>> is
>> > that we can reverse engineer the brain, by scanning it and making a map
>> > all
>> > the connections between the neurons.  Then if the appropriate hardware
>> can
>> > run a few brains at 1,000 or 1,000,000  times faster than the
>> biological
>> > brain, we can put our best scientists or AI researchers inside and they
>> > can
>> > figure it out in a few of our months.
>> >
>> > http://www.kurzweilai.net/the-law-of-accelerating-returns
>> There are *so* many problems with that. We are naive, a bit like a
>> 7-year-old wanting to build a time machine. We know little about the
>> brain. Who says there are no quantum effects going on? There doesn't
>> even have to be substantial entanglement. Chaos theory tells us that
>> even minuscule quantum effects could have major impacts on the thing.
>> ESP and telepathy suggest that we are to some extent entangled. There
>> are *major* problems reproducing this with computers.
>>
>> Neural imaging and scanning cannot pick up the major information in the
>> brain. Not by a long stretch.
> 
> 
> Automated serial sectioning of brains is already fairly advanced, and is
> doubling in performance and accuracy each year.
> http://www.mcb.harvard.edu/lichtman/ATLUM/ATLUM_web.htm
That's pretty impressive, but it is far from sufficient ("0.01mm³"), and we
don't know how well it will scale up.
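The chaos-theory point quoted above is standard sensitive dependence on initial conditions, and a toy system shows it directly. The sketch below uses the logistic map (chosen purely for illustration; it has nothing to do with brains specifically): two orbits starting 1e-10 apart become macroscopically different within a few dozen iterations.

```python
# Sensitive dependence in the logistic map x -> r*x*(1-x), chaotic at
# r = 4: a 1e-10 perturbation of the starting point is amplified by
# roughly a factor of 2 per iteration.

def logistic_orbit(x0, r=4.0, steps=60):
    """Iterate the logistic map and return the whole orbit."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)          # tiny perturbation

gap = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {gap[0]:.1e}, largest gap over 60 steps: {max(gap):.3f}")
```

A simulation of such a system must therefore resolve the initial state (and any ongoing noise) to an accuracy that grows exponentially with the prediction horizon — which is why even tiny quantum-scale uncertainties could matter for long-horizon prediction.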


Jason Resch-2 wrote:
> 
>> It is like having a picture of a RAM and
>> thinking this is enough to recover the information on it.
>>
>> What use are fast brains?
> 
> 
> A million years of human technological progress in the time frame of one
> year seems highly useful.
But technological progress is not exclusively made in our brains. Also, the
amount of useful technological progress that our brains can deliver may be
intrinsically limited.


Jason Resch-2 wrote:
> 
>> Our brains alone are of little use. We also need a
>> rich environment and a body.
>>
> 
> I'm not sure bodies are necessary, but in the context of a simulation you
> could have any body you wanted, or no body at all.  (Like in second life)



Jason Resch-2 wrote:
> 
>>
>> You presuppose that AI researchers have the potential ability to build
>> superintelligent AI. Why should we suspect this more than we suspect that
>> >> gorillas can build humans? I'd like to hear arguments that make it
>> >> plausible that it is possible to engineer something more generally
>> >> intelligent than yourself.
>>
> 
> If there was someone just like me, but thought at twice the speed, I am
> sure he would score more highly on some general intelligence tests.
Of course, if only because he effectively would have twice the time. But
that's not what I am referring to when I say superintelligent. Imagine he
would have 1,000 times more time. Would that make him 1,000 times more
intelligent? Of course not. 


Jason Resch-2 wrote:
> 
>   If we can
> find a gene or genes that make the difference between Newton and the
> average
> person, and then switch them on in the average person through gene
> therapy,
> would that count as engineering something more intelligent than yourself?
Ultimately, no. What you say may well be possible, but we are essentially just
using the intelligence that is already there and copying it. But even then I
doubt that we can get the kind of deeply creative intelligence, one that
includes wisdom, which is the essential driver of progress. I don't buy at
all that intellectual intelligence is what drives us forward.
Intellect can be used for selfish and destructive purposes as well. Real
intelligence consists in clear awareness of yourself and the world, which
also leads to moral intelligence. This is what I say can't be eng

Re: Turing Machines

2011-08-17 Thread Craig Weinberg
On Aug 17, 12:01 am, meekerdb  wrote:
> On 8/16/2011 6:57 PM, Craig Weinberg wrote:
>
> > Consciousness is a very broad term, with different meanings especially
> > in different contexts; medical vs philosophical vs vernacular,
> > macrocosmic vs microcosmic, legal, ethical, etc. For the mind/body
> > question and Turing emulation I try to use 'consciousness'
> > specifically to mean 'awareness of awareness'. The other relevant
> > concept though is perceptual frame of reference, or PRIF. In this
> > case, when you put awareness under a microscope, the monolithic sense
> > of 'consciousness' is discarded in favor of a more granular sense of
> > multiple stages of awarenesses feeding back on each other.
>
> AKA "subconscious".

Yes. The basement level of consciousness, not unconscious.

> > When you
> > look at electrical transmission in the brain over milliseconds and
> > microseconds, you have automatically shifted outside of the realm of
> > vernacular consciousness and into microconscious territories.
>
> > Just as the activity of cells as a whole is beyond the scope of what
> > can be understood by studying molecules alone, the study of the
> > microconscious is too short term to reveal the larger, slower pattern
> > of our ordinary moment to moment awareness of awareness. Raw awareness
> > is fast, but awareness of awareness is slower, the ability for
> > awareness of awareness to be communicated through motor channels is
> > slower still, and the propagation of motor intention through the
> > efferent nerves through the spinal cord is quite a bit slower. It's
> > really not comparing apples to apples then if you look at the very
> > earliest fraction of a second of an experience and compare it with the
> > time it takes for the experience to be fully explicated through all of
> > the various perceptual and cognitive resources. It's completely
> > misleading and mischaracterizes awareness in yet another attempt to
> > somehow prove for the sake of validating our third person
> > observations, that in fact we cannot really be alive and conscious, we
> > just think we are. I think it's like a modern equivalent of 'angels
> > dancing on the head of a pin'.
>
> So you admit that what happens that determines your behavior occurs
> before you are aware of it, i.e. conscious.

No. Your behavior correlates directly with your awareness of the
stimuli and with the earliest neurological activity. Your awareness of
your behavior, and your awareness of your ability to report on it, and
the reporting itself (and their neural correlates) come later.

>And what happens first is
> the activity of neurons.  

It's all neuron activity, and it's all different scales of detection-
sense-awareness-cognition experience. You are not able to let go of
the idea that it's a sequence where first the physical happens and
then the 'consciousness' happens. It's two parallel sequences which
can and do run inductively. Your motive current of intention pushes
the electric current in the brain - they are the same thing. Like
this: http://www.splung.com/fields/images/induction/transformer.svg

Not like an assembly line - like a living, flowing interaction amongst
multiple layers of external relations and internal perceptions, the
parts and the wholes. Without perception and relativity, there are
only parts.

>The rest of the above paragraph seems to be an
> attempt to save dualism by saying why the causal spirit comes after the
> motor effect.  I have no problem being alive and conscious with
> consciousness coming after the decision.  The decision was still made by
> me.  I just don't conceive "me" as being so small as my consciousness.

You're applying a broad definition of consciousness at the beginning
and a narrow definition to consciousness at the end and using the
mismatch to beg the question. I have no problem with recognition
coming after cognition after awareness after detection, but I have a
problem with conflating all of those as 'consciousness' and then
making a special case for electromagnetic activity in the brain not
corresponding to anything experiential out of anthropomorphic
superstition. Just because 'you' don't think you feel anything doesn't
mean that what you actually are doesn't detect it as a first person
experience.

> >>> If moving my arm is like reading a book, I can't tell you what the
> >>> book is about until I actually have read it, but I still am initiating
> >>> the reading of the book, and not the book forcing me to read it.
>
> >> Another non-analogy.  Is this sentence making you think of a dragon?
>
> > A dragon? No. Why would it? Why is it 'another' non-analogy? Is this
> > 'another' ad hominem non-argument?
>
> It's a non-analogy because no one proposed that your actions were
> determined by a book or other external effect. The hypothesis was that
> they are determined by neural processes of which you are not aware.

They are determined by neural experiences of which you, at the .1Hz
level of 'Brent' 

Re: Turing Machines

2011-08-17 Thread Craig Weinberg
On Aug 16, 10:24 pm, Stathis Papaioannou  wrote:
> On Wed, Aug 17, 2011 at 3:16 AM, Craig Weinberg  wrote:
> > On Aug 16, 10:08 am, Stathis Papaioannou  wrote:
>
> >> Our body precisely follows the deterministic biochemical reactions
> >> that comprise it. The mind is generated as a result of these
> >> biochemical reactions; a reaction occurs in your brain which causes
> >> you to have a thought to move your arm and move your arm. How could it
> >> possibly be otherwise?
>
> > It's not only possible, it absolutely is otherwise. I move my arm. I
> > determine the biochemical reactions that move it. Me. For my personal
> > reasons which are knowable to me in my own natural language and are
> > utterly unknowable by biochemical analysis. It's hard for me to accept
> > that you cannot see the flaw in this reasoning.
>
> It's hard for me to accept that you can possibly think that your mind
> determines the biochemistry in your brain. It's like saying that the
> speed and direction your car goes in determines the activity of the
> engine and the brakes.

It does determine the activity of your engine and brakes. If you are
going too slow you hit the accelerator and the engine speeds up. If
you are going too fast you hit the brakes. It's how you drive the car.

> > "Why did the chicken cross the road?" For deterministic biochemical
> > reactions.
> > "Why did the sovereign nation declare war?" For deterministic
> > biochemical reactions.
> > "What is the meaning of f=ma"? For deterministic biochemical
> > reactions.
>
> > Biochemistry is just what's happening on the level of cells and
> > molecules. It is an entirely different perceptual-relativistic
> > inertial frame of reference. Are they correlated? Sure. You change
> > your biochemistry in certain ways in your brain, and you will
> > definitely feel it. Can you change your biochemistry in certain ways
> > by yourself? Of course. Think about something that makes you happy and
> > your cells will produce the proper neurotransmitters. YOU OWN them.
> > They are your servant. To believe otherwise is to subscribe to a faith
> > in the microcosm over the macrocosm, in object phenomenology over
> > subject phenomenology to the point of imagining that there is no
> > subject. The subject imagines it is nothing but an object. It's
> > laughably tragic.
>
> > In order to understand how the universe creates subjectivity, you have
> > to stop trying to define it in terms of its opposite. Objectivity
> > itself is a subjective experience. There is no objective experience of
> > subjectivity - it looks like randomness and self-similarity feedback.
> > That's a warning. It means - 'try again but look in the other
> > direction'.
>
> I feel happy because certain things happen in my environment that
> affect the biochemistry in my brain, and that is experienced as
> happiness. I can also feel happy if I take certain drugs which cause
> release of neurotransmitters such as dopamine, even if nothing in my
> environment is particularly joy-inducing. On the other hand, I can be
> depressed due to underactivity of serotonergic neurotransmission, so
> that even if happy things happen they don't cheer me up, and this can
> be corrected by pro-serotonergic drugs.
>
> I don't doubt the subjective, I just can't see how it could be due to
> anything other than physical processes in the brain.

I can see it clearly. Your mind is a physical process OF the brain.
Just not the part of the brain you see on an MRI. It's the big picture
of the aggregate interiority of the brain as a whole rather than the
fine grained particles of the exterior of the mind as separate parts.
The key is that the mind cannot directly be translated into the brain,
but they both overlap through correlation at the physiological level.
It's bi-directional, so mind controls brain controls body and body
controls brain controls mind. It's what you experience every waking
moment. Nothing magical or weird.

>The physical
> process comes first, and the feeling or thought follows as a result.

The feeling and thought are a physical process as well. They can come
in any sequence. If you have an idea, you might feel like writing it
down, so you actualize that feeling by moving your brain to move your
spinal cord to move your writing hand. You might have a feeling first
- you are tired, which motivates your thinking to remember you have
some vacation time left, so you actualize that thought by moving your
spinal cord to move your emailing hand to notify your boss.

Your view would require that all thoughts and feelings originate first
as biochemistry so that if your serotonin is low, you feel something,
and then get an idea. That happens too, but it's completely
superstitious to insist that it can only happen that one way. All
three modes are experiential physical processes, one of detection-
sense (molecular-cellular physics), one of awareness-emotion (somatic-
limbic), and one of cognition (cerebral-psychological). Separat

Re: Turing Machines

2011-08-16 Thread meekerdb

On 8/16/2011 6:57 PM, Craig Weinberg wrote:

On Aug 16, 7:35 pm, meekerdb  wrote:
   

On 8/16/2011 12:37 PM, Craig Weinberg wrote:


On Aug 16, 1:44 pm, meekerdb wrote:
   
 

On 8/16/2011 10:16 AM, Craig Weinberg wrote:
 
 

It's not only possible, it absolutely is otherwise. I move my arm. I
determine the biochemical reactions that move it. Me. For my personal
reasons which are knowable to me in my own natural language and are
utterly unknowable by biochemical analysis. It's hard for me to accept
that you cannot see the flaw in this reasoning.
   
 

It's not a flaw in his reasoning, it's description at a different
level.  While it is no doubt true that you, the whole you, determine to
move your arm; it seems not to be the case that the *conscious* you does
so.  Various experiments starting with Libet show that the biochemical
reactions that move it occur before you are conscious of the decision to
move it.
 
 

You make the decision before the reporting part of you can report it
is all. It's still you that is consciously making the decision. It's
just because we are applying naive realism to how the self works and
assuming that the narrative voice which accompanies consciousness and
can answer questions or push buttons is the extent of consciousness.
   

Now you're changing the definitions of words again.  What does
"conscious" mean, if not "the part of your thinking that you can report
on."  I would never claim that you didn't make the decision - it's just
that "you" is a lot bigger than your consciousness.
 

Consciousness is a very broad term, with different meanings especially
in different contexts; medical vs philosophical vs vernacular,
macrocosmic vs microcosmic, legal, ethical, etc. For the mind/body
question and Turing emulation I try to use 'consciousness'
specifically to mean 'awareness of awareness'. The other relevant
concept though is perceptual frame of reference, or PRIF. In this
case, when you put awareness under a microscope, the monolithic sense
of 'consciousness' is discarded in favor of a more granular sense of
multiple stages of awarenesses feeding back on each other.


AKA "subconscious".


When you
look at electrical transmission in the brain over milliseconds and
microseconds, you have automatically shifted outside of the realm of
vernacular consciousness and into microconscious territories.

Just as the activity of cells as a whole is beyond the scope of what
can be understood by studying molecules alone, the study of the
microconscious is too short term to reveal the larger, slower pattern
of our ordinary moment to moment awareness of awareness. Raw awareness
is fast, but awareness of awareness is slower, the ability for
awareness of awareness to be communicated through motor channels is
slower still, and the propagation of motor intention through the
efferent nerves through the spinal cord is quite a bit slower. It's
really not comparing apples to apples then if you look at the very
earliest fraction of a second of an experience and compare it with the
time it takes for the experience to be fully explicated through all of
the various perceptual and cognitive resources. It's completely
misleading and mischaracterizes awareness in yet another attempt to
somehow prove for the sake of validating our third person
observations, that in fact we cannot really be alive and conscious, we
just think we are. I think it's like a modern equivalent of 'angels
dancing on the head of a pin'.
   


So you admit that what happens that determines your behavior occurs 
before you are aware of it, i.e. conscious. And what happens first is 
the activity of neurons.  The rest of the above paragraph seems to be an 
attempt to save dualism by saying why the causal spirit comes after the 
motor effect.  I have no problem being alive and conscious with 
consciousness coming after the decision.  The decision was still made by 
me.  I just don't conceive "me" as being so small as my consciousness.



   
 

If moving my arm is like reading a book, I can't tell you what the
book is about until I actually have read it, but I still am initiating
the reading of the book, and not the book forcing me to read it.
   

Another non-analogy.  Is this sentence making you think of a dragon?
 

A dragon? No. Why would it? Why is it 'another' non-analogy? Is this
'another' ad hominem non-argument?
   


It's a non-analogy because no one proposed that your actions were 
determined by a book or other external effect. The hypothesis was that 
they are determined by neural processes of which you are not aware.


Brent

--
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.

Re: Turing Machines

2011-08-16 Thread Stathis Papaioannou
On Wed, Aug 17, 2011 at 3:16 AM, Craig Weinberg  wrote:
> On Aug 16, 10:08 am, Stathis Papaioannou  wrote:
>
>> Our body precisely follows the deterministic biochemical reactions
>> that comprise it. The mind is generated as a result of these
>> biochemical reactions; a reaction occurs in your brain which causes
>> you to have a thought to move your arm and move your arm. How could it
>> possibly be otherwise?
>
> It's not only possible, it absolutely is otherwise. I move my arm. I
> determine the biochemical reactions that move it. Me. For my personal
> reasons which are knowable to me in my own natural language and are
> utterly unknowable by biochemical analysis. It's hard for me to accept
> that you cannot see the flaw in this reasoning.

It's hard for me to accept that you can possibly think that your mind
determines the biochemistry in your brain. It's like saying that the
speed and direction your car goes in determines the activity of the
engine and the brakes.

> "Why did the chicken cross the road?" For deterministic biochemical
> reactions.
> "Why did the sovereign nation declare war?" For deterministic
> biochemical reactions.
> "What is the meaning of f=ma"? For deterministic biochemical
> reactions.
>
> Biochemistry is just what's happening on the level of cells and
> molecules. It is an entirely different perceptual-relativistic
> inertial frame of reference. Are they correlated? Sure. You change
> your biochemistry in certain ways in your brain, and you will
> definitely feel it. Can you change your biochemistry in certain ways
> by yourself? Of course. Think about something that makes you happy and
> your cells will produce the proper neurotransmitters. YOU OWN them.
> They are your servant. To believe otherwise is to subscribe to a faith
> in the microcosm over the macrocosm, in object phenomenology over
> subject phenomenology to the point of imagining that there is no
> subject. The subject imagines it is nothing but an object. It's
> laughably tragic.
>
> In order to understand how the universe creates subjectivity, you have
> to stop trying to define it in terms of its opposite. Objectivity
> itself is a subjective experience. There is no objective experience of
> subjectivity - it looks like randomness and self-similarity feedback.
> That's a warning. It means - 'try again but look in the other
> direction'.

I feel happy because certain things happen in my environment that
affect the biochemistry in my brain, and that is experienced as
happiness. I can also feel happy if I take certain drugs which cause
release of neurotransmitters such as dopamine, even if nothing in my
environment is particularly joy-inducing. On the other hand, I can be
depressed due to underactivity of serotonergic neurotransmission, so
that even if happy things happen they don't cheer me up, and this can
be corrected by pro-serotonergic drugs.

I don't doubt the subjective, I just can't see how it could be due to
anything other than physical processes in the brain. The physical
process comes first, and the feeling or thought follows as a result.
Remove the brain and the feeling or thought is also removed.


-- 
Stathis Papaioannou




Re: Turing Machines

2011-08-16 Thread Craig Weinberg
On Aug 16, 7:35 pm, meekerdb  wrote:
> On 8/16/2011 12:37 PM, Craig Weinberg wrote:
>
>
>
>
>
>
>
>
>
> > On Aug 16, 1:44 pm, meekerdb  wrote:
>
> >> On 8/16/2011 10:16 AM, Craig Weinberg wrote:
>
> >>> It's not only possible, it absolutely is otherwise. I move my arm. I
> >>> determine the biochemical reactions that move it. Me. For my personal
> >>> reasons which are knowable to me in my own natural language and are
> >>> utterly unknowable by biochemical analysis. It's hard for me to accept
> >>> that you cannot see the flaw in this reasoning.
>
> >> It's not a flaw in his reasoning, it's description at a different
> >> level.  While it is no doubt true that you, the whole you, determine to
> >> move your arm; it seems not to be the case that the *conscious* you does
> >> so.  Various experiments starting with Libet show that the biochemical
> >> reactions that move it occur before you are conscious of the decision to
> >> move it.
>
> > You make the decision before the reporting part of you can report it
> > is all. It's still you that is consciously making the decision. It's
> > just because we are applying naive realism to how the self works and
> > assuming that the narrative voice which accompanies consciousness and
> > can answer questions or push buttons is the extent of consciousness.
>
> Now you're changing the definitions of words again.  What does
> "conscious" mean, if not "the part of your thinking that you can report
> on."  I would never claim that you didn't make the decision - it's just
> that "you" is a lot bigger than your consciousness.

Consciousness is a very broad term, with different meanings especially
in different contexts; medical vs philosophical vs vernacular,
macrocosmic vs microcosmic, legal, ethical, etc. For the mind/body
question and Turing emulation I try to use 'consciousness'
specifically to mean 'awareness of awareness'. The other relevant
concept though is perceptual frame of reference, or PRIF. In this
case, when you put awareness under a microscope, the monolithic sense
of 'consciousness' is discarded in favor of a more granular sense of
multiple stages of awarenesses feeding back on each other. When you
look at electrical transmission in the brain over milliseconds and
microseconds, you have automatically shifted outside of the realm of
vernacular consciousness and into microconscious territories.

Just as the activity of cells as a whole is beyond the scope of what
can be understood by studying molecules alone, the study of the
microconscious is too short term to reveal the larger, slower pattern
of our ordinary moment to moment awareness of awareness. Raw awareness
is fast, but awareness of awareness is slower, the ability for
awareness of awareness to be communicated through motor channels is
slower still, and the propagation of motor intention through the
efferent nerves through the spinal cord is quite a bit slower. It's
really not comparing apples to apples then if you look at the very
earliest fraction of a second of an experience and compare it with the
time it takes for the experience to be fully explicated through all of
the various perceptual and cognitive resources. It's completely
misleading and mischaracterizes awareness in yet another attempt to
somehow prove for the sake of validating our third person
observations, that in fact we cannot really be alive and conscious, we
just think we are. I think it's like a modern equivalent of 'angels
dancing on the head of a pin'.

>
> > If moving my arm is like reading a book, I can't tell you what the
> > book is about until I actually have read it, but I still am initiating
> > the reading of the book, and not the book forcing me to read it.
>
> Another non-analogy.  Is this sentence making you think of a dragon?

A dragon? No. Why would it? Why is it 'another' non-analogy? Is this
'another' ad hominem non-argument?

Craig




Re: Turing Machines

2011-08-16 Thread Jason Resch
On Tue, Aug 16, 2011 at 9:32 AM, benjayk wrote:

>
>
> Jason Resch-2 wrote:
> >
> > On Tue, Aug 16, 2011 at 7:03 AM, benjayk
> > wrote:
> >
> >>
> >>
> >> Craig Weinberg wrote:
> >> >
> >> > On Aug 15, 10:43 pm, Jason Resch  wrote:
> >> >> I am more worried for the biologically handicapped in the future.
> >> >>  Computers
> >> >> will get faster, brains won't.  By 2029, it is predicted $1,000 worth
> >> of
> >> >> computer will buy a human brain's worth of computational power.  15
> >> years
> >> >> later, you can get 1,000 X the human brain's power for $1,000.
> >> Imagine:
> >> >> the
> >> >> simulated get to experience 1 century for each month the humans with
> >> >> biological brains experience.  Who will really be alive then?
> >> >
> >> > Speed and power is for engines, not brains. Good ideas don't come from
> >> > engines.
> >> >
> >> > Craig
> >> >
> >> I agree. It is a very narrow to think computational power is the key to
> >> rich
> >> experience and high intelligence. The real magic is what is done with
> the
> >> hardware. And honestly I see no reason to believe that we somehow
> >> magically develop amazingly intelligent software.
> >
> >
> > Neural imaging/scanning rates are also doubling every year.  The hope is
> > that we can reverse engineer the brain, by scanning it and making a map
> > of all
> > the connections between the neurons.  Then if the appropriate hardware
> can
> > run a few brains at 1,000 or 1,000,000  times faster than the biological
> > brain, we can put our best scientists or AI researchers inside and they
> > can
> > figure it out in a few of our months.
> >
> > http://www.kurzweilai.net/the-law-of-accelerating-returns
> There are *so* many problems with that. We are naive, a bit like a
> 7-year-old wanting to build a time machine. We know little about the brain.
> Who says there are no quantum effects going on? There doesn't even have to
> be substantial entanglement. Chaos theory tells us that even minuscule
> quantum effects could have major impacts on the thing. ESP and telepathy
> suggest that we are to some extent entangled. There are *major* problems
> reproducing this with computers.
>
> Neural imaging and scanning cannot pick up the major information in the
> brain. Not by a long stretch.


Automated serial sectioning of brains is already fairly advanced, and is
doubling in performance and accuracy each year.
http://www.mcb.harvard.edu/lichtman/ATLUM/ATLUM_web.htm


> It is like having a picture of a RAM and
> thinking this is enough to recover the information on it.
>
> What use are fast brains?


A million years of human technological progress in the time frame of one
year seems highly useful.


> Our brains alone are of little use. We also need a
> rich environment and a body.
>

I'm not sure bodies are necessary, but in the context of a simulation you
could have any body you wanted, or no body at all.  (Like in second life)


>
> You presuppose that AI researchers have the potential ability to build
> superintelligent AI. Why should we suspect this more than we suspect that
> gorillas can build humans? I'd like to hear arguments that make it
> plausible
> that it is possible to engineer something more generally intelligent than
> yourself.
>

If there was someone just like me, but thought at twice the speed, I am sure
he would score more highly on some general intelligence tests.  If we can
find a gene or genes that make the difference between Newton and the average
person, and then switch them on in the average person through gene therapy,
would that count as engineering something more intelligent than yourself?
What about taking Nootropics ( http://en.wikipedia.org/wiki/Nootropic )?
There are many plausible scenarios for making ourselves more intelligent, or
more creative than our current state.


>
>
> Jason Resch-2 wrote:
> >
> >> Software development is
> >> slow, no comparison to the exponential progress of hardware.
> >>
> >
> > As I mentioned to Craig who complained his computer takes longer to start
> > up
> > now than ever, the complexity of software is in many cases outpacing even
> > the exponential growth in the power of computer hardware.
> That may quite well be. But even if we have software that can render a
> 99^99-dimensional Mandelbrot set, this will not be of much use. The point
> is that the usefulness of software is not progressing exponentially.
>
>
> Jason Resch-2 wrote:
> >
> >> I believe that it is inherently impossible to design intelligence. It
> can
> >> just self-organize itself through becoming aware of itself.
> >
> >
> > A few genes separate us from chimps, and all of our intelligence.
> I don't think our intelligence is reducible to genes. Memes seem even more
> important. And just because we can't really research it scientifically at
> the moment does not mean there are no subtler things that determine our
> general
> intelligence than genes and culture. Many subjective experiences hint at
> something like a more subtle layer, ca

Re: Turing Machines

2011-08-16 Thread Jason Resch
On Tue, Aug 16, 2011 at 1:03 PM, Evgenii Rudnyi  wrote:

> On 15.08.2011 23:42 Jason Resch said the following:
>
>> On Mon, Aug 15, 2011 at 1:17 PM, Evgenii Rudnyi
>> wrote:
>>
>>  On 15.08.2011 07:56 Jason Resch said the following:
>>>
>>> ...
>>>
>>>
>>> Can we accurately simulate physical laws or can't we?  Before you
>>> answer, take a few minutes to watch this amazing video, which
>>> simulates the distribution of mass throughout the universe on
>>> the largest scales:
>>> http://www.youtube.com/watch?v=W35SYkfdGtw
>>> (Note each point of light represents a galaxy, not a star)


>>> The answer to your question depends on what you mean by accurately
>>> and what by physical laws. I am working with finite elements (more
>>> specifically with ANSYS Multiphysics) and I can tell for sure that
>>> if you speak of simulation of the universe, then the current
>>> simulation technology does not scale. Nowadays one could solve a
>>> linear system reaching dimension of 1 billion but this will not
>>> help you. I would say that either contemporary numerical methods
>>> are deadly wrong, or simulated equations are not the right ones.
>>> In this respect, you may want to look how simulation is done for
>>> example in Second Life.
>>>
>>> Well, today numerical simulation is a good business
>>> (computer-aided engineering is about a billion per year) and it
>>> continues to grow. Yet, if you look in detail, then there are some
>>> areas where it can be employed nicely and some where it is better to
>>> forget about simulation.
>>>
>>> I understand that you speak "in principle".
>>>
>>
>>
>> Yes, this is why in my first post, I said consider God's Turing
>> machine (free from our limitations).  Then it is obvious that with
>> the appropriate tape, a physical system can be approximated to any
>> desired level of accuracy so long as it is predictable.  Colin said
>> such models of physics or chemistry are impossible, so I hope he
>> elaborates on what makes these systems unpredictable.
>>
>
> I have to repeat that the current simulation technology just does not
> scale. With it even God will not help. The only way that I could imagine is
> that God's Turing machine is based on completely different simulation
> technology (this however means that our current knowledge of physical laws
> and/or numerics is wrong).
>
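The earlier claim that a predictable physical system "can be approximated to any desired level of accuracy" can be illustrated with a toy sketch (my illustration, not from the thread): Euler integration of dy/dt = y over [0, 1], whose exact value is e. Refining the step size drives the error down, which is the sense in which accuracy is limited only by the resources spent.

```python
import math

# Toy illustration of "approximated to any desired level of accuracy"
# for a predictable (deterministic, smooth) system: Euler integration
# of dy/dt = y from t=0 to t=1. The exact answer is e; smaller steps
# give a smaller error.

def euler_exp(steps):
    y, dt = 1.0, 1.0 / steps
    for _ in range(steps):
        y += y * dt          # one Euler step for dy/dt = y
    return y

for steps in (10, 100, 1000, 10000):
    err = abs(euler_exp(steps) - math.e)
    print(f"{steps:6d} steps -> error {err:.6f}")
```

Each tenfold refinement shrinks the error by roughly a factor of ten, as expected for a first-order method; the point is only the trend, not the efficiency of Euler's method in practice.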
>
I think Brent's comment addressed this well.  It is not a question of scale
or different types of Turing machines.  All Turing machines are equivalent.
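"All Turing machines are equivalent" here is the universality point: any machine is just a transition table, so one short interpreter can run them all. A minimal sketch (my illustration, not from the thread), with a toy machine that appends a '1' to a unary string:

```python
# Minimal table-driven Turing machine interpreter. A machine is a dict
# mapping (state, read_symbol) -> (next_state, write_symbol, move);
# the same few lines of interpreter run any such table.

def run(transitions, tape, state="start", blank="_", max_steps=10_000):
    cells = dict(enumerate(tape))        # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, blank)
        state, write, move = transitions[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Toy machine: scan right over the unary input, then write one more '1'.
increment = {
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("halt", "1", "R"),
}

print(run(increment, "111"))  # -> 1111
```

The interpreter is fixed while the table varies, which is the intuition behind the equivalence claim: differences between machines are differences of data, not of kind.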



>
>  Yet, I am not sure if extrapolation too far away from the current
>>> knowledge makes sense, as eventually we are coming to
>>> "philosophical controversies".
>>>
>>>
>> We're already simulating pieces of brain tissue on the order of fruit
>> fly brains (10,000 neurons).  Computers double in power/price every
>> year, so 6 years later we could simulate mouse brains, another 6 we
>> can simulate cat brains, and in another 6 we can simulate human
>> brains. (By 2030)
>>
>> But all of this is an aside from point that I was making regarding
>> the power and versatility of Turing machines.  Those who think
>> Artificial Intelligence is not possible with computers must show what
>> about the brain is unpredictable or unmodelable.
>>
>
> Why that? I guess that you should prove first that consciousness is
> predictable and could be modeled.
>
>
Everyone (except perhaps the substance dualists, mysterians, and solipsists
-- each non-scientific or anti-scientific philosophies) believes the brain
(on the lowest levels) operates according to simple and predictable rules.
Also note, the topic of the above was not consciousness, but intelligence.
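The doubling arithmetic quoted above ("6 years later we could simulate mouse brains...") is easy to check as arithmetic; a sketch (the ~8.6e10 human-neuron estimate is my added assumption, not a figure from the post, and whether a given neuron count suffices for a given animal is the post's claim, not the arithmetic's):

```python
import math

# Back-of-envelope check of yearly doubling, starting from the post's
# fruit-fly-scale figure of ~10,000 simulated neurons.

start_neurons = 10_000

for years in (6, 12, 18):
    print(f"after {years:2d} years: ~{start_neurons * 2 ** years:,} neurons")

# Doublings needed to reach a human brain, taking ~8.6e10 neurons as a
# commonly cited estimate (an assumption added here, not from the post).
doublings = math.log2(8.6e10 / start_neurons)
print(f"human scale after ~{doublings:.0f} yearly doublings")  # ~23
```

Six years of doubling gives a 64x factor (640,000 neurons from 10,000); reaching human scale from the same starting point takes about 23 doublings under these assumptions.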

Jason




Re: Turing Machines

2011-08-16 Thread meekerdb

On 8/16/2011 12:37 PM, Craig Weinberg wrote:
> On Aug 16, 1:44 pm, meekerdb wrote:
>>> It's not only possible, it absolutely is otherwise. I move my arm. I
>>> determine the biochemical reactions that move it. Me. For my personal
>>> reasons which are knowable to me in my own natural language and are
>>> utterly unknowable by biochemical analysis. It's hard for me to accept
>>> that you cannot see the flaw in this reasoning.
>>
>> It's not a flaw in his reasoning, it's a description at a different
>> level.  While it is no doubt true that you, the whole you, determine to
>> move your arm; it seems not to be the case that the *conscious* you does
>> so.  Various experiments starting with Libet show that the biochemical
>> reactions that move it occur before you are conscious of the decision to
>> move it.
>
> You make the decision before the reporting part of you can report it,
> is all. It's still you that is consciously making the decision. It's
> just that we are applying naive realism to how the self works and
> assuming that the narrative voice which accompanies consciousness and
> can answer questions or push buttons is the extent of consciousness.

Now you're changing the definitions of words again.  What does 
"conscious" mean, if not "the part of your thinking that you can report 
on"?  I would never claim that you didn't make the decision - it's just 
that "you" is a lot bigger than your consciousness.

> If moving my arm is like reading a book, I can't tell you what the
> book is about until I actually have read it, but I still am initiating
> the reading of the book, and not the book forcing me to read it.

Another non-analogy.  Is this sentence making you think of a dragon?

Brent





Re: Turing Machines

2011-08-16 Thread Craig Weinberg
On Aug 16, 1:44 pm, meekerdb  wrote:
> On 8/16/2011 10:16 AM, Craig Weinberg wrote:
>
> > It's not only possible, it absolutely is otherwise. I move my arm. I
> > determine the biochemical reactions that move it. Me. For my personal
> > reasons which are knowable to me in my own natural language and are
> > utterly unknowable by biochemical analysis. It's hard for me to accept
> > that you cannot see the flaw in this reasoning.
>
> It's not a flaw in his reasoning, it's a description at a different
> level.  While it is no doubt true that you, the whole you, determine to
> move your arm; it seems not to be the case that the *conscious* you does
> so.  Various experiments starting with Libet show that the biochemical
> reactions that move it occur before you are conscious of the decision to
> move it.

You make the decision before the reporting part of you can report it,
is all. It's still you that is consciously making the decision. It's
just that we are applying naive realism to how the self works and
assuming that the narrative voice which accompanies consciousness and
can answer questions or push buttons is the extent of consciousness.

If moving my arm is like reading a book, I can't tell you what the
book is about until I actually have read it, but I still am initiating
the reading of the book, and not the book forcing me to read it.

Craig




Re: Turing Machines

2011-08-16 Thread meekerdb

On 8/16/2011 11:31 AM, benjayk wrote:
> meekerdb wrote:
>>>> And the problem with the reductionist view is?
>>>
>>> It seeks to dissect reality into pieces,
>>
>> And also to explain how the pieces interact in reality.
>
> Right, otherwise there is little use in dissecting. But the very concept of
> interacting pieces has its limits in describing reality. Two quantum
> entangled particles cannot properly be described as two pieces interacting.
>
> benjayk

Sure they can, if you allow FTL interactions, as in Bohmian QM.  But 
even if you don't, the state of the two particles is described by a ray 
in Hilbert space or a density matrix - which is pretty reductive.
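
Brent's point about the joint description can be made concrete. The following is a hedged sketch (my own example using NumPy and standard QM conventions, not anything from the thread): the Bell state (|00> + |11>)/sqrt(2) is a single ray in the joint Hilbert space, yet tracing out either particle leaves only a maximally mixed 2x2 density matrix.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) as a vector in C^4.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)

# Joint density matrix: the 4x4 projector onto that ray.
rho = np.outer(phi, phi)

# Partial trace over the second particle: reshape to indices
# (a, b, a', b') and sum over b = b'.  The result describes
# particle A considered alone.
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(rho_A)  # the 2x2 maximally mixed state, diag(0.5, 0.5)
```

The reduced state carries no trace of the correlations, which is the sense in which the pair resists description as two separate interacting pieces.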


Brent




Re: Turing Machines

2011-08-16 Thread meekerdb

On 8/16/2011 11:03 AM, Evgenii Rudnyi wrote:

Yes, this is why in my first post, I said consider God's Turing
machine (free from our limitations).  Then it is obvious that with
the appropriate tape, a physical system can be approximated to any
desired level of accuracy so long as it is predictable.  Colin said
such models of physics or chemistry are impossible, so I hope he
elaborates on what makes these systems unpredictable.


I have to repeat that the current simulation technology just does not 
scale. With it even God will not help. The only way that I could 
imagine is that God's Turing machine is based on completely different 
simulation technology (this however means that our current knowledge 
of physical laws and/or numerics is wrong).


Scale doesn't matter at the level of theoretical possibility.  Bruno's 
UD is the most inefficient possible way to compute this universe - but 
he only cares that it's possible.  All universal Turing machines are 
equivalent so it doesn't matter what God's is based on.  Maybe you just 
mean the world is not computable in the sense that it is nomologically 
impossible to compute it faster than just letting it happen.
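
The equivalence claim can be illustrated with a toy machine. Below is a minimal sketch of my own (not from the thread): a tiny Turing-machine interpreter plus a transition table that increments a binary number. Any universal machine can run the same table, which is the sense in which the substrate of "God's Turing machine" does not matter, only its speed.

```python
def run_tm(table, tape, state="start", blank="_", max_steps=10_000):
    """Run transition `table` on the string `tape`.

    table maps (state, symbol) -> (new_state, write_symbol, move),
    where move is "L" or "R".  Returns the final tape contents once
    the 'halt' state is reached, with surrounding blanks stripped."""
    cells = dict(enumerate(tape))
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = cells.get(pos, blank)
        state, write, move = table[(state, sym)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Binary increment: walk right past the number, then add 1 with carry.
INC = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "L"),
    ("carry", "_"): ("halt", "1", "L"),
}

print(run_tm(INC, "1011"))  # 1011 + 1 = 1100
```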


Brent




Re: Turing Machines

2011-08-16 Thread benjayk


meekerdb wrote:
>>> And the problem with the reductionist view is?
>>
>> It seeks to dissect reality into pieces,
>
> And also to explain how the pieces interact in reality.

Right, otherwise there is little use in dissecting. But the very concept of
interacting pieces has its limits in describing reality. Two quantum
entangled particles cannot properly be described as two pieces interacting.

benjayk
-- 
View this message in context: 
http://old.nabble.com/Turing-Machines-tp32259675p32274111.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Turing Machines

2011-08-16 Thread Evgenii Rudnyi

On 16.08.2011 16:08 Stathis Papaioannou said the following:

On Tue, Aug 16, 2011 at 11:23 PM, Craig
Weinberg  wrote:


If the brain does something not predictable by modelling its
biochemistry that means it works by magic.


Then you are saying that whether you accept what I'm
writing here or not is purely predictable through biochemistry
alone or else must be 'magic'. So in order for you to change your
mind, some substance needs to cross your blood brain barrier, and
that the content of your mind - the meaning of what you are
choosing to think about right now can only be magic. I think my
approach is much more scientific. I'm not prejudging what the
solution can or cannot be in advance.

If you want to call psychology magic, that's ok with me, but it
certainly drives biochemistry as much as it is driven by
biochemistry. Why is it so hard to accept that both levels of
reality are in fact real? Our body doesn't seem to have a problem
taking commands from our mind. Why should I deny that those
commands have a source which cannot be adequately described in
terms of temperature and pressure or voltage? To presume that we
can only know what the mind is by studying its shadow in the brain
is, I think, catastrophically misguided and ultimately unworkable.
If not for our own experiences of the mind, biochemistry would not
tell us that such a thing could possibly exist.


Our body precisely follows the deterministic biochemical reactions
that comprise it. The mind is generated as a result of these
biochemical reactions; a reaction occurs in your brain which causes
you to have a thought to move your arm and move your arm. How could
it possibly be otherwise?


If I understand Bruno correctly, then his position is that this happens 
exactly otherwise.


Evgenii




Re: Turing Machines

2011-08-16 Thread Evgenii Rudnyi

On 15.08.2011 23:42 Jason Resch said the following:

On Mon, Aug 15, 2011 at 1:17 PM, Evgenii Rudnyi
wrote:


On 15.08.2011 07:56 Jason Resch said the following:

...


Can we accurately simulate physical laws or can't we?  Before you

answer, take a few minutes to watch this amazing video, which
simulates the distribution of mass throughout the universe on
the largest scales:
http://www.youtube.com/watch?v=W35SYkfdGtw
(Note each point of light represents a galaxy, not a star)



The answer on your question depends on what you mean by accurately
and what by physical laws. I am working with finite elements (more
specifically with ANSYS Multiphysics) and I can tell for sure that
if you speak of simulation of the universe, then the current
simulation technology does not scale. Nowadays one could solve a
linear system reaching dimension of 1 billion but this will not
help you. I would say that either contemporary numerical methods
are deadly wrong, or simulated equations are not the right ones.
In this respect, you may want to look how simulation is done for
example in Second Life.

Well, today numerical simulation is a good business
(computer-aided engineering is about a billion per year) and it
continues to grow. Yet, if you look in detail, there are some
areas where it can be employed nicely and some where it is better to
forget about simulation.

I understand that you speak "in principle".



Yes, this is why in my first post, I said consider God's Turing
machine (free from our limitations).  Then it is obvious that with
the appropriate tape, a physical system can be approximated to any
desired level of accuracy so long as it is predictable.  Colin said
such models of physics or chemistry are impossible, so I hope he
elaborates on what makes these systems unpredictable.


I have to repeat that the current simulation technology just does not 
scale. With it even God will not help. The only way that I could imagine 
is that God's Turing machine is based on completely different simulation 
technology (this however means that our current knowledge of physical 
laws and/or numerics is wrong).



Yet, I am not sure if extrapolation too far away from the current
knowledge makes sense, as eventually we are coming to
"philosophical controversies".



We're already simulating pieces of brain tissue on the order of fruit
fly brains (10,000 neurons).  Computers double in power/price every
year, so 6 years later we could simulate mouse brains, another 6 we
can simulate cat brains, and in another 6 we can simulate human
brains. (By 2030)
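
The arithmetic behind that projection can be sketched. The neuron counts below are my own ballpark assumptions (roughly 7e7 for a mouse, 8e8 for a cat, 8.6e10 for a human; they are not from the post), combined with the 10,000-neuron figure quoted above and the yearly-doubling premise:

```python
import math

# With capacity doubling every year, reaching a target neuron count
# from the current one takes log2(target / current) doublings.
current = 1e4  # neurons simulable today, per the figure in the thread

for name, target in [("mouse", 7e7), ("cat", 8e8), ("human", 8.6e10)]:
    years = math.ceil(math.log2(target / current))
    print(f"{name}: ~{years} yearly doublings")
```

Under these assumed counts the gap to a human brain is about 24 doublings, so the timeline depends heavily on which neuron figures and doubling rate one accepts.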

But all of this is an aside from the point that I was making regarding
the power and versatility of Turing machines.  Those who think
Artificial Intelligence is not possible with computers must show what
about the brain is unpredictable or unmodelable.


Why that? I guess that you should prove first that consciousness is 
predictable and could be modeled.


Evgenii




Re: Turing Machines

2011-08-16 Thread meekerdb

On 8/16/2011 10:16 AM, Craig Weinberg wrote:
> It's not only possible, it absolutely is otherwise. I move my arm. I
> determine the biochemical reactions that move it. Me. For my personal
> reasons which are knowable to me in my own natural language and are
> utterly unknowable by biochemical analysis. It's hard for me to accept
> that you cannot see the flaw in this reasoning.

It's not a flaw in his reasoning, it's a description at a different 
level.  While it is no doubt true that you, the whole you, determine to 
move your arm; it seems not to be the case that the *conscious* you does 
so.  Various experiments starting with Libet show that the biochemical 
reactions that move it occur before you are conscious of the decision to 
move it.


Brent




Re: Turing Machines

2011-08-16 Thread meekerdb

On 8/16/2011 7:50 AM, benjayk wrote:
>> And the problem with the reductionist view is?
>
> It seeks to dissect reality into pieces,

And also to explain how the pieces interact in reality.

Brent




Re: Turing Machines

2011-08-16 Thread meekerdb

On 8/16/2011 7:08 AM, Stathis Papaioannou wrote:

Our body precisely follows the deterministic biochemical reactions
that comprise it. The mind is generated as a result of these
biochemical reactions; a reaction occurs in your brain which causes
you to have a thought to move your arm and move your arm. How could it
possibly be otherwise?
   


That's approximately true, but it overstates the determinism a little.  
First, the system isn't closed, so one's thoughts and behavior are 
continually modified by stuff that happens on your past light cone.  
Second, there are quantum random events within your brain, e.g. decays of 
radioactive potassium atoms, that could influence your thoughts and actions.


Brent




Re: Turing Machines

2011-08-16 Thread meekerdb

On 8/15/2011 11:08 PM, Colin Geoffrey Hales wrote:


On 8/15/2011 7:08 PM, Jason Resch wrote:

just like you can simulate flight if you simulate the environment
you are flying in.


But do we need to simulate the entire atmosphere in order to simulate 
flight, or just the atmosphere in the immediate area around the 
surfaces of the plane?  Likewise, it seems we could take shortcuts in 
simulating the environment surrounding a mind and get the behavior we 
are after.



Why simulate?  Why not create a robot with sensors so it can interact 
with the natural environment?


Brent

[Colin]

Hi Brent,

There seems to be another confusion operating here. What makes you 
think I am not creating a robot with sensors? What has this got to do 
with simulation?


1) Having sensors is not simulation. Humans have sensors, e.g. the retina.

2) The use of sensors does not connect the robot to the environment in 
any unique way. The incident photon could have come across the room or 
the galaxy. Nobody tells a human which, yet the brain sorts it out.




Or makes it up. :-)


3) A robot brain based on replication uses sensors like any other robot.

4) What I am saying is that the replication approach will handle the 
sensors like a human brain handles sensors.


Of course we don't have to simulate the entire universe to simulate 
flight. The fact is we simulate _/some/_ of the environment in order 
that flight simulation works. /It's a simulation./ *It's not flight*. 
This has nothing to do with the actual problem of real embedded 
embodied cognition of an unknown external environment by an AGI. You 
don't know it! You are 'cognising' to find out about it. You can't 
simulate it and the sensors don't give you enough info. If a human 
supplies that info then you're grounding the robot in the human's 
cognition, not supplying the robot with its own cognition.


In replication there is no simulating going on! There are inorganic, 
artificially derived natural processes identical to what is going on 
in a natural brain. Literally. A brain has action potential comms. A 
brain has EM comms. Therefore a replicated brain will have the SAME 
action potentials mutually interacting with the same EM fields. The 
replicant chips will have an EEG/MEG signature like a human. There is 
no computing of anything. There is an inorganic version of the identical 
processes going on in a real brain.


I hope we're closer to being on the same page.



Yes, I agree with the above, except maybe the EM.  The brain is 
essentially electrically neutral.  The chemical reactions change the 
local fields as electrons are moved but these are very short range, 
atomic scale fields.  The overall fields don't seem to matter; otherwise 
your thoughts would get scrambled every time you got near an electric 
motor or a fluorescent light.  So when you refer to an "inorganic 
version of the identical process going on in a real brain" it's not 
clear at what level you mean "identical" - apparently not at the quark 
and lepton level.  If it's not identical at that level (the lowest 
possible), then in what sense is it identical?  Computationalism says it 
only has to be identical at the level of computing the input/output 
function - it's a specific version of functionalism.


Brent


Colin






Re: Turing Machines

2011-08-16 Thread Craig Weinberg
On Aug 16, 10:08 am, Stathis Papaioannou  wrote:

> Our body precisely follows the deterministic biochemical reactions
> that comprise it. The mind is generated as a result of these
> biochemical reactions; a reaction occurs in your brain which causes
> you to have a thought to move your arm and move your arm. How could it
> possibly be otherwise?

It's not only possible, it absolutely is otherwise. I move my arm. I
determine the biochemical reactions that move it. Me. For my personal
reasons which are knowable to me in my own natural language and are
utterly unknowable by biochemical analysis. It's hard for me to accept
that you cannot see the flaw in this reasoning.

"Why did the chicken cross the road?" For deterministic biochemical
reactions.
"Why did the sovereign nation declare war?" For deterministic
biochemical reactions.
"What is the meaning of f=ma"? For deterministic biochemical
reactions.

Biochemistry is just what's happening on the level of cells and
molecules. It is an entirely different perceptual-relativistic
inertial frame of reference. Are they correlated? Sure. You change
your biochemistry in certain ways in your brain, and you will
definitely feel it. Can you change your biochemistry in certain ways
by yourself? Of course. Think about something that makes you happy and
your cells will produce the proper neurotransmitters. YOU OWN them.
They are your servant. To believe otherwise is to subscribe to a faith
in the microcosm over the macrocosm, in object phenomenology over
subject phenomenology to the point of imagining that there is no
subject. The subject imagines it is nothing but an object. It's
laughably tragic.

In order to understand how the universe creates subjectivity, you have
to stop trying to define it in terms of its opposite. Objectivity
itself is a subjective experience. There is no objective experience of
subjectivity - it looks like randomness and self-similarity feedback.
That's a warning. It means - 'try again but look in the other
direction'.

Craig




Re: Turing Machines

2011-08-16 Thread benjayk


Stathis Papaioannou-2 wrote:
> 
> On Tue, Aug 16, 2011 at 10:03 PM, benjayk
>  wrote:
> 
>> Also, we have no reliable way of measuring the computational power of the
>> brain, not to speak of the possibly existing subtle energies that go
>> beyond
>> the brain, that may be essential to our functioning. The way that
>> computational power of the brain is estimated now relies on a quite
>> reductionistic view of what the brain is and what it does.
> 
> And the problem with the reductionist view is?
> 
It seeks to dissect reality into pieces, while if you have some sense of
spirituality, you see that this is not how reality functions (as it is a
whole). It works reasonably well for simple things like motors, but that's
it.
Even if you just look at science, it shows that the reductionist view is
fundamentally flawed. In quantum mechanics you have one interconnected wave
function, not neatly separable pieces. The reductionists do a bit of
hand-waving and say that this is not relevant at the macro-scale, but they
haven't shown this yet. Just because newtonian physics is a good
approximation on the surface, doesn't mean that it isn't fundamentally
insufficient to explain the workings of complex systems.



Stathis Papaioannou-2 wrote:
> 
>  It certainly seems to
> be the case that if you throw some chemical elements together in a
> particular way, you get intelligence and consciousness.
It may seem that way to some people. It may seem that the earth is flat as
well.
They are just jumping to conclusions from some vague understanding of what
is happening. We see a correlation between brain function and human
consciousness? Well, that obviously means that brains produces consciousness
(or that consciousness is equivalent to the firing of neurons, and its
subjective nature is an illusion). But, wait, no it doesn't, not AT ALL.
Correlations are fine, but they don't suggest by a long stretch that the one
thing (brain) that correlates to some extent with the other thing (human
consciousness) *produces* a broad generalization of the other thing
(consciousness as such).


Stathis Papaioannou-2 wrote:
> 
>  The elements
> obey well-understood chemical laws, even though they constitute a
> complex system with difficult to predict behaviour.
Do we understand them well? OK, good enough to make a host of good
predictions, but we have no remotely complete understanding of them. Also,
that biology is reducible to chemistry is an assumption, and that assumption
is itself just reductionistic faith. They can say that once they manage to derive
biology from chemistry.

-- 
View this message in context: 
http://old.nabble.com/Turing-Machines-tp32259675p32272468.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Turing Machines

2011-08-16 Thread benjayk


Jason Resch-2 wrote:
> 
> On Tue, Aug 16, 2011 at 7:03 AM, benjayk
> wrote:
> 
>>
>>
>> Craig Weinberg wrote:
>> >
>> > On Aug 15, 10:43 pm, Jason Resch  wrote:
>> >> I am more worried for the biologically handicapped in the future.
>> >>  Computers
>> >> will get faster, brains won't.  By 2029, it is predicted $1,000 worth
>> of
>> >> computer will buy a human brain's worth of computational power.  15
>> years
>> >> later, you can get 1,000 X the human brain's power for $1,000. 
>> Imagine:
>> >> the
>> >> simulated get to experience 1 century for each month the humans with
>> >> biological brains experience.  Who will really be alive then?
>> >
>> > Speed and power is for engines, not brains. Good ideas don't come from
>> > engines.
>> >
>> > Craig
>> >
>> I agree. It is a very narrow to think computational power is the key to
>> rich
>> experience and high intelligence. The real magic is what is done with the
>> hardware. And honestly I see no reason to believe that we somehow we
>> magically develop amazingly intelligent software.
> 
> 
> Neural imaging/scanning rates are also doubling every year.  The hope is
> that we can reverse engineer the brain, by scanning it and making a map
> all
> the connections between the neurons.  Then if the appropriate hardware can
> run a few brains at 1,000 or 1,000,000  times faster than the biological
> brain, we can put our best scientists or AI researchers inside and they
> can
> figure it out in a few of our months.
> 
> http://www.kurzweilai.net/the-law-of-accelerating-returns
There are *so* many problems with that. We are naive, a bit like a 7-year-old
wanting to build a time machine. We know little about the brain. Who says
there are no quantum effects going on? There doesn't even have to be
substantial entanglement. Chaos theory tells us that even minuscule quantum
effects could have major impacts on the thing. ESP and telepathy suggest
that we are to some extent entangled. There are *major* problems reproducing
this with computers.
 
Neural imaging and scanning cannot pick up the major information in the
brain. Not by a long stretch. It is like having a picture of a RAM chip and
thinking this is enough to recover the information on it.

What use are fast brains? Our brains alone are of little use. We also need a
rich environment and a body. 

You presuppose that AI researchers have the potential ability to build
superintelligent AI. Why should we suspect this more than we suspect that
gorillas can build humans? I'd like to hear arguments that make it plausible
that it is possible to engineer something more generally intelligent than
yourself.


Jason Resch-2 wrote:
> 
>> Software development is
>> slow, no comparison to the exponential progress of hardware.
>>
> 
> As I mentioned to Craig who complained his computer takes longer to start
> up
> now than ever, the complexity of software is in many cases outpacing even
> the exponential growth in the power of computer hardware.
That may quite well be. But even if we have a software that can render a
99^99 dimensional mandelbrot this will not be of much use. The point is that
the usefulness of software is not progressing exponentially.


Jason Resch-2 wrote:
> 
>> I believe that it is inherently impossible to design intelligence. It can
>> just self-organize itself through becoming aware of itself.
> 
> 
> A few genes separate us from chimps, and all of our intelligence.
I don't think our intelligence is reducible to genes. Memes seem even more
important. And just because we can't really research it scientifically at
the moment, does not mean there are no subtler things that determine our general
intelligence than genes and culture. Many subjective experiences hint at
something like a more subtle layer, call it "soul" if you will.
All of what we understand about biology may just be the tiny top of a
pyramid that is buried in the sand. 


Jason Resch-2 wrote:
> 
>   If we can
> determine which, and see what these genes do then perhaps we can
> extrapolate
> and find out how our DNA is able to make some brains better than others.
But this is not how intelligence works. You don't just extrapolate a bit and
have more intelligence. If this were the case, we would already have
superintelligence. Development / evolution of intelligence, learning and
consciousness are highly non-trivial, and non-linear.


Jason Resch-2 wrote:
> 
>> I am not even
>> sure anymore whether this will have to do very much to do with
>> technology.
>> Technology might have an fundamental restriction to being a tool of
>> intelligence, not the means to increase intelligence at the core (just
>> relative, superficial intelligence like intellectual knowledge).
>>
> 
> I think the existence of Google and Wikipedia makes me more intelligent. 
> If
> I could embed a calculator chip into my brain my mental math skills would
> improve markedly.
This is exactly the kind of intelligence I am NOT talking about. It's
useful, sure. But it doesn't lead to unimaginable c

Re: Turing Machines

2011-08-16 Thread Craig Weinberg
On Aug 16, 8:10 am, Stathis Papaioannou  wrote:
> On Tue, Aug 16, 2011 at 10:03 PM, benjayk
>
>  wrote:
> > Also, we have no reliable way of measuring the computational power of the
> > brain, not to speak of the possibly existing subtle energies that go beyond
> > the brain, that may be essential to our functioning. The way that
> > computational power of the brain is estimated now relies on a quite
> > reductionistic view of what the brain is and what it does.
>
> And the problem with the reductionist view is? It certainly seems to
> be the case that if you throw some chemical elements together in a
> particular way, you get intelligence and consciousness. The elements
> obey well-understood chemical laws, even though they constitute a
> complex system with difficult to predict behaviour.

The reductionist view is great for certain kinds of problems, just as
a hammer is great for things that resemble nails. It's not the
appropriate tool to do brain surgery with though. It's not accurate to
say that if you throw chemical elements together in a particular way
you get intelligence and consciousness. There could be intelligence
and consciousness of a sort to begin with. Certainly in order for
there to be a recipe for awareness, that potential must either be
built into the elements themselves or the universe as a whole. What
elements you have and how they put themselves together may only
determine the range of awareness it is capable of and not some binary
distinction of yes-conscious or no-unconscious.

Craig

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Turing Machines

2011-08-16 Thread Bruno Marchal


On 16 Aug 2011, at 08:08, Colin Geoffrey Hales wrote:



On 8/15/2011 7:08 PM, Jason Resch wrote:
just like you can simulate flight if you simulate the environment  
you are flying in.


But do we need to simulate the entire atmosphere in order to  
simulate flight, or just the atmosphere in the immediate area around  
the surfaces of the plane?  Likewise, it seems we could take  
shortcuts in simulating the environment surrounding a mind and get  
the behavior we are after.


Why simulate?  Why not create a robot with sensors so it can
interact with the natural environment?


Brent

[Colin]

Hi Brent,
There seems to be another confusion operating here. What makes you  
think I am not creating a robot with sensors? What has this got to  
do with simulation?


1)  Having sensors is not simulation. Humans have sensors, e.g. the
retina.
2)  The use of sensors does not connect the robot to the  
environment in any unique way. The incident photon could have come  
across the room or the galaxy. Nobody tells a human which, yet the  
brain sorts it out.
3)  A robot brain based on replication uses sensors like any  
other robot.
4)  What I am saying is that the replication approach will  
handle the sensors like a human brain handles sensors.


Of course we don’t have to simulate the entire universe to simulate  
flight. The fact is we simulate _some_ of the environment in order  
that flight simulation works. It’s a simulation. It’s not flight.  
This has nothing to do with the actual problem of real embedded  
embodied cognition of an unknown external environment by an AGI. You  
don’t know it! You are ‘cognising’ to find out about it. You can’t  
simulate it and the sensors don’t give you enough info. If a human  
supplies that info then you’re grounding the robot in the human’s  
cognition, not supplying the robot with its own cognition.


In replication there is no simulating going on! There are inorganic,
artificially derived natural processes identical to what is going on
in a natural brain. Literally. A brain has action potential comms. A
brain has EM comms. Therefore a replicated brain will have the SAME
action potentials mutually interacting with the same EM fields. The
replicant chips will have an EEG/MEG signature like a human. There
is no computing of anything. There is an inorganic version of the
identical processes going on in a real brain.


I hope we’re closer to being on the same page.

Colin



OK. But now you clearly depart from Craig's non-comp theory. In your
approach you just make the comp substitution level low, and you do this
from an intuition which has been explained to be a consequence of the
existence of a substitution level (that is, comp).


Congratulations for the PhD, Colin.

Bruno

http://iridia.ulb.ac.be/~marchal/






Re: Turing Machines

2011-08-16 Thread Stathis Papaioannou
On Tue, Aug 16, 2011 at 11:23 PM, Craig Weinberg  wrote:

>> If the brain does something not predictable by modelling its
>> biochemistry that means it works by magic.
>
> Then you are saying that whether you accept what I'm writing
> here or not is purely predictable through biochemistry alone or else
> must be 'magic'. So in order for you to change your mind, some
> substance needs to cross your blood-brain barrier, and the
> content of your mind - the meaning of what you are choosing to think
> about right now - can only be magic. I think my approach is much more
> scientific. I'm not prejudging what the solution can or cannot be in
> advance.
>
> If you want to call psychology magic, that's ok with me, but it
> certainly drives biochemistry as much as it is driven by biochemistry.
> Why is it so hard to accept that both levels of reality are in fact
> real? Our body doesn't seem to have a problem taking commands from our
> mind. Why should I deny that those commands have a source which cannot
> be adequately described in terms of temperature and pressure or
> voltage? To presume that we can only know what the mind is by studying
> its shadow in the brain is, I think, catastrophically misguided and
> ultimately unworkable. If not for our own experiences of the mind,
> biochemistry would not tell us that such a thing could possibly exist.

Our body precisely follows the deterministic biochemical reactions
that comprise it. The mind is generated as a result of these
biochemical reactions; a reaction occurs in your brain which causes
you to have a thought to move your arm, and your arm moves. How could it
possibly be otherwise?


-- 
Stathis Papaioannou




Re: Turing Machines

2011-08-16 Thread Craig Weinberg
On Aug 16, 8:03 am, benjayk  wrote:
> Craig Weinberg wrote:
>
> > On Aug 15, 10:43 pm, Jason Resch  wrote:
> >> I am more worried for the biologically handicapped in the future.
> >>  Computers
> >> will get faster, brains won't.  By 2029, it is predicted $1,000 worth of
> >> computer will buy a human brain's worth of computational power.  15 years
> >> later, you can get 1,000 X the human brain's power for $1,000.  Imagine:
> >> the
> >> simulated get to experience 1 century for each month the humans with
> >> biological brains experience.  Who will really be alive then?
>
> > Speed and power is for engines, not brains. Good ideas don't come from
> > engines.
>
> > Craig
>
> I agree. It is a very narrow view to think computational power is the key
> to rich experience and high intelligence. The real magic is what is done
> with the hardware. And honestly I see no reason to believe that we will
> somehow magically develop amazingly intelligent software. Software
> development is slow, no comparison to the exponential progress of hardware.
> I believe that it is inherently impossible to design intelligence. It can
> only self-organize through becoming aware of itself. I am not even
> sure anymore whether this will have very much to do with technology.
> Technology might have a fundamental restriction to being a tool of
> intelligence, not the means to increase intelligence at the core (just
> relative, superficial intelligence like intellectual knowledge).

I agree. Although technology could help us increase our own
intelligence by modifying the brain's behavior. I can't say that it is
inherently impossible to design intelligence, but like Colin says, it
might have to be designed through recombinant replication.

> Also, we have no reliable way of measuring the computational power of the
> brain, not to speak of the possibly existing subtle energies that go beyond
> the brain, that may be essential to our functioning. The way that
> computational power of the brain is estimated now relies on a quite
> reductionistic view of what the brain is and what it does.

I think of 'energy' in a different way now. It is nothing more than a
perceived event, which is typically shared. Energy has no independent
physical existence, it's not a glowing stuff in a vacuum of space or
an invisible forcefield hovering around metal wires. Energy is just
how physical phenomena are aware of change and the possibility of
change. It is an insistence, and what is insisting depends on what is
existing, but it is not limited to that. A stove can insist that a
cast iron skillet make itself hot, but it can't insist that the
skillet recite the Pledge of Allegiance.

So yes, I think that the psyche is overflowing with subtle awareness
that will never show up on an MRI as anything (or anything more than
some meaningless fuzz that may or may not be related), because some of
what our minds can do can only insist within the context of neurology,
or psychology, or anthropology. Some of what can insist through the
mind can be received through language, or math, or computer chip
logic, but the totality of the psyche and the Self is far greater than
any literal schema it can use to describe itself.

Craig




Re: Turing Machines

2011-08-16 Thread Jason Resch
On Tue, Aug 16, 2011 at 8:23 AM, Craig Weinberg wrote:

> On Aug 16, 3:22 am, Stathis Papaioannou  wrote:
> > On Tue, Aug 16, 2011 at 12:18 AM, Craig Weinberg 
> wrote:
> > > You can simulate it as far as being able to model the aspects of its
> > > behavior that you can observe, but you can't necessarily predict that
> > > behavior over time, any more than you can predict what other people
> > > might say to you today. The chemistry and physics of the brain are
> > > partially determined by the experiences of the environment through the
> > > body, and partially determined by the sensorimotive agenda of the
> > > mind, which are both related to but not identical with the momentum
> > > and consequences of its neurological biochemistry. All three are
> > > woven together as an inseparable whole.
> >
> > If the brain does something not predictable by modelling its
> > biochemistry that means it works by magic.
>
> Then you are saying that whether you accept what I'm writing
> here or not is purely predictable through biochemistry alone or else
> must be 'magic'. So in order for you to change your mind, some
> substance needs to cross your blood-brain barrier, and the
> content of your mind - the meaning of what you are choosing to think
> about right now - can only be magic. I think my approach is much more
> scientific. I'm not prejudging what the solution can or cannot be in
> advance.
>
> If you want to call psychology magic, that's ok with me, but it
> certainly drives biochemistry as much as it is driven by biochemistry.
> Why is it so hard to accept that both levels of reality are in fact
> real? Our body doesn't seem to have a problem taking commands from our
> mind. Why should I deny that those commands have a source which cannot
> be adequately described in terms of temperature and pressure or
> voltage? To presume that we can only know what the mind is by studying
> its shadow in the brain is, I think, catastrophically misguided and
> ultimately unworkable. If not for our own experiences of the mind,
> biochemistry would not tell us that such a thing could possibly exist.
>
>
>
Do high-level patterns in the brain explain to some extent what occurs on
lower levels?  I think so.  Are the laws of physics or chemistry violated by
these higher level processes?  I think not.

I have explained that if we can model particle interactions, and the
brain does not violate the laws of particle interactions, then intelligence
can be found in some programs running on a Turing machine.  If you disagree
you need to give a reason why either a particle simulation is not possible,
or why the brain's behavior is determined by things other than particle
interactions.
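[Editor's note: "some programs running on a Turing machine" sounds exotic, but a Turing machine itself is tiny to sketch. The simulator below is purely illustrative; the `run_tm` function and the toy unary-increment rule table are invented for this example, not anything proposed in the thread.]

```python
def run_tm(tape, rules, state="start", pos=0, blank="_", max_steps=1000):
    """Run a deterministic Turing machine until it reaches the 'halt' state.

    `rules` maps (state, symbol) -> (symbol_to_write, head_move, next_state),
    where head_move is -1 (left) or +1 (right).
    """
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells read as blank
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(pos, blank)
        write, move, state = rules[(state, symbol)]
        cells[pos] = write
        pos += move
    return "".join(cells.get(i, blank) for i in range(min(cells), max(cells) + 1))

# Toy machine: append one '1' to a unary number (i.e. increment n -> n+1).
increment = {
    ("start", "1"): ("1", +1, "start"),  # scan right past the existing 1s
    ("start", "_"): ("1", +1, "halt"),   # write a 1 on the first blank, halt
}
print(run_tm("111", increment))  # -> 1111
```

The point of the sketch is only that "running on a Turing machine" means nothing more exotic than a lookup table driving a read/write head over a tape.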

For example, Colin said that particle simulation of intelligence is
impossible because one's model would have to include every particle in the
universe for the simulation to be accurate.  While I disagree, he at least
proposed a reason.  What is yours?

Jason






Re: Turing Machines

2011-08-16 Thread Craig Weinberg
On Aug 16, 3:22 am, Stathis Papaioannou  wrote:
> On Tue, Aug 16, 2011 at 12:18 AM, Craig Weinberg  
> wrote:
> > You can simulate it as far as being able to model the aspects of its
> > behavior that you can observe, but you can't necessarily predict that
> > behavior over time, any more than you can predict what other people
> > might say to you today. The chemistry and physics of the brain are
> > partially determined by the experiences of the environment through the
> > body, and partially determined by the sensorimotive agenda of the
> > mind, which are both related to but not identical with the momentum
> > and consequences of its neurological biochemistry. All three are
> > woven together as an inseparable whole.
>
> If the brain does something not predictable by modelling its
> biochemistry that means it works by magic.

Then you are saying that whether you accept what I'm writing
here or not is purely predictable through biochemistry alone or else
must be 'magic'. So in order for you to change your mind, some
substance needs to cross your blood-brain barrier, and the
content of your mind - the meaning of what you are choosing to think
about right now - can only be magic. I think my approach is much more
scientific. I'm not prejudging what the solution can or cannot be in
advance.

If you want to call psychology magic, that's ok with me, but it
certainly drives biochemistry as much as it is driven by biochemistry.
Why is it so hard to accept that both levels of reality are in fact
real? Our body doesn't seem to have a problem taking commands from our
mind. Why should I deny that those commands have a source which cannot
be adequately described in terms of temperature and pressure or
voltage? To presume that we can only know what the mind is by studying
its shadow in the brain is, I think, catastrophically misguided and
ultimately unworkable. If not for our own experiences of the mind,
biochemistry would not tell us that such a thing could possibly exist.

Craig

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Turing Machines

2011-08-16 Thread Jason Resch
On Tue, Aug 16, 2011 at 7:03 AM, benjayk wrote:

>
>
> Craig Weinberg wrote:
> >
> > On Aug 15, 10:43 pm, Jason Resch  wrote:
> >> I am more worried for the biologically handicapped in the future.
> >>  Computers
> >> will get faster, brains won't.  By 2029, it is predicted $1,000 worth of
> >> computer will buy a human brain's worth of computational power.  15
> years
> >> later, you can get 1,000 X the human brain's power for $1,000.  Imagine:
> >> the
> >> simulated get to experience 1 century for each month the humans with
> >> biological brains experience.  Who will really be alive then?
> >
> > Speed and power is for engines, not brains. Good ideas don't come from
> > engines.
> >
> > Craig
> >
> I agree. It is a very narrow view to think computational power is the key
> to rich experience and high intelligence. The real magic is what is done
> with the hardware. And honestly I see no reason to believe that we will
> somehow magically develop amazingly intelligent software.


Neural imaging/scanning rates are also doubling every year.  The hope is
that we can reverse engineer the brain, by scanning it and making a map of
all the connections between the neurons.  Then if the appropriate hardware
can run a few brains at 1,000 or 1,000,000 times faster than the biological
brain, we can put our best scientists or AI researchers inside and they can
figure it out in a few of our months.

http://www.kurzweilai.net/the-law-of-accelerating-returns
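[Editor's note: the subjective-time arithmetic behind these claims is easy to check. The snippet below only reuses the figures quoted in this thread (a century per month, 1,000,000x hardware) and says nothing about whether they will come true.]

```python
# Subjective years experienced over a stretch of real time at a given speedup.
def subjective_years(real_days, speedup):
    return real_days * speedup / 365.25

# "One century for each month" implies a speedup of roughly 1200x:
print(100 * 365.25 / 30)  # -> 1217.5

# At 1,000,000x, a single real hour is over a century of subjective time:
print(subjective_years(1 / 24, 1_000_000))  # ~114 years
```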


> Software development is
> slow, no comparison to the exponential progress of hardware.
>

As I mentioned to Craig, who complained his computer takes longer to start
up now than ever, the complexity of software is in many cases outpacing even
the exponential growth in the power of computer hardware.


> I believe that it is inherently impossible to design intelligence. It can
> only self-organize through becoming aware of itself.


A few genes separate us from chimps, and with them all of our additional
intelligence.  If we can determine which genes these are, and see what they
do, then perhaps we can extrapolate and find out how our DNA is able to make
some brains better than others.


> I am not even
> sure anymore whether this will have very much to do with technology.
> Technology might have a fundamental restriction to being a tool of
> intelligence, not the means to increase intelligence at the core (just
> relative, superficial intelligence like intellectual knowledge).
>

I think the existence of Google and Wikipedia makes me more intelligent.  If
I could embed a calculator chip into my brain my mental math skills would
improve markedly.


>
> Also, we have no reliable way of measuring the computational power of the
> brain, not to speak of the possibly existing subtle energies that go beyond
> the brain, that may be essential to our functioning. The way that
> computational power of the brain is estimated now relies on a quite
> reductionistic view of what the brain is and what it does.
>

As I've mentioned before on this list, neuroscientists have succeeded in
creating biologically realistic simulated neurons.  The CPU requirements of
these neurons are well understood:

http://www.youtube.com/watch?v=LS3wMC2BpxU&t=7m30s
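[Editor's note: to give a concrete sense of what "simulating a neuron" costs computationally, here is a leaky integrate-and-fire model, one of the simplest standard spiking-neuron models. It is far cruder than the biologically realistic models mentioned above, and all parameter values are generic textbook-style choices, not taken from the linked talk.]

```python
def lif_spike_times(current, t_total=100.0, dt=0.1, tau=10.0,
                    v_rest=-65.0, v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Leaky integrate-and-fire neuron, forward-Euler integration.

    Membrane equation: tau * dV/dt = (v_rest - V) + r_m * current.
    The neuron 'fires' when V crosses v_thresh, then resets to v_reset.
    Times in ms, voltages in mV; parameters are illustrative only.
    """
    v, t, spikes = v_rest, 0.0, []
    while t < t_total:
        v += dt * ((v_rest - v + r_m * current) / tau)
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes

# A weak input (steady state below threshold) never fires;
# a strong one fires periodically.
print(len(lif_spike_times(1.0)))  # -> 0
print(len(lif_spike_times(2.0)))  # several spikes in 100 ms
```

Even this toy model needs about 10,000 floating-point updates per simulated second per neuron at a 0.1 ms timestep, which is why the per-neuron CPU cost of a fixed model is straightforward to estimate.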

Jason




Re: Turing Machines

2011-08-16 Thread Stathis Papaioannou
On Tue, Aug 16, 2011 at 10:03 PM, benjayk
 wrote:

> Also, we have no reliable way of measuring the computational power of the
> brain, not to speak of the possibly existing subtle energies that go beyond
> the brain, that may be essential to our functioning. The way that
> computational power of the brain is estimated now relies on a quite
> reductionistic view of what the brain is and what it does.

And the problem with the reductionist view is? It certainly seems to
be the case that if you throw some chemical elements together in a
particular way, you get intelligence and consciousness. The elements
obey well-understood chemical laws, even though they constitute a
complex system with difficult to predict behaviour.


-- 
Stathis Papaioannou




Re: Turing Machines

2011-08-16 Thread benjayk


Craig Weinberg wrote:
> 
> On Aug 15, 10:43 pm, Jason Resch  wrote:
>> I am more worried for the biologically handicapped in the future.
>>  Computers
>> will get faster, brains won't.  By 2029, it is predicted $1,000 worth of
>> computer will buy a human brain's worth of computational power.  15 years
>> later, you can get 1,000 X the human brain's power for $1,000.  Imagine:
>> the
>> simulated get to experience 1 century for each month the humans with
>> biological brains experience.  Who will really be alive then?
> 
> Speed and power is for engines, not brains. Good ideas don't come from
> engines.
> 
> Craig
> 
I agree. It is a very narrow view to think computational power is the key to
rich experience and high intelligence. The real magic is what is done with
the hardware. And honestly I see no reason to believe that we will somehow
magically develop amazingly intelligent software. Software development is
slow, no comparison to the exponential progress of hardware.
I believe that it is inherently impossible to design intelligence. It can
only self-organize through becoming aware of itself. I am not even
sure anymore whether this will have very much to do with technology.
Technology might have a fundamental restriction to being a tool of
intelligence, not the means to increase intelligence at the core (just
relative, superficial intelligence like intellectual knowledge).

Also, we have no reliable way of measuring the computational power of the
brain, not to speak of the possibly existing subtle energies that go beyond
the brain, that may be essential to our functioning. The way that
computational power of the brain is estimated now relies on a quite
reductionistic view of what the brain is and what it does.

benjayk
-- 
View this message in context: 
http://old.nabble.com/Turing-Machines-tp32259675p32271222.html
Sent from the Everything List mailing list archive at Nabble.com.




Re: Turing Machines

2011-08-16 Thread Stathis Papaioannou
On Mon, Aug 15, 2011 at 5:06 PM, Colin Geoffrey Hales
 wrote:

> 1) simulation of the chemistry or physics underlying the brain is impossible
>
> It’s quite possible, just irrelevant! ‘Chemistry’ and ‘physics’ are terms
> for models of the natural world used to describe how natural processes
> appear to an observer inside the universe. You can simulate (compute
> physics/chem. models) until you turn blue, and be as right as you want: all
> you will do is predict how the universe appears to an observer.
>
>
>
> This has nothing to do with creating  artificial intelligence.

If you predict how the universe will appear to an observer you can
predict what a human will say when presented with a particular
problem, and isn't that a human-level AI by definition?


-- 
Stathis Papaioannou




Re: Turing Machines

2011-08-16 Thread Stathis Papaioannou
On Tue, Aug 16, 2011 at 12:18 AM, Craig Weinberg  wrote:

> You can simulate it as far as being able to model the aspects of its
> behavior that you can observe, but you can't necessarily predict that
> behavior over time, any more than you can predict what other people
> might say to you today. The chemistry and physics of the brain are
> partially determined by the experiences of the environment through the
> body, and partially determined by the sensorimotive agenda of the
> mind, which are both related to but not identical with the momentum
> and consequences of its neurological biochemistry. All three are
> woven together as an inseparable whole.

If the brain does something not predictable by modelling its
biochemistry that means it works by magic.


-- 
Stathis Papaioannou




RE: Turing Machines

2011-08-15 Thread Colin Geoffrey Hales
 

On 8/15/2011 7:08 PM, Jason Resch wrote: 

just like you can simulate flight if you simulate the
environment you are flying in.


But do we need to simulate the entire atmosphere in order to simulate
flight, or just the atmosphere in the immediate area around the surfaces
of the plane?  Likewise, it seems we could take shortcuts in simulating
the environment surrounding a mind and get the behavior we are after.


Why simulate?  Why not create a robot with sensors so it can interact
with the natural environment?

Brent

 

[Colin]

 

Hi Brent,

There seems to be another confusion operating here. What makes you think
I am not creating a robot with sensors? What has this got to do with
simulation?

 

1)  Having sensors is not simulation. Humans have sensors, e.g. the
retina.

2)  The use of sensors does not connect the robot to the environment
in any unique way. The incident photon could have come across the room
or the galaxy. Nobody tells a human which, yet the brain sorts it out.

3)  A robot brain based on replication uses sensors like any other
robot.

4)  What I am saying is that the replication approach will handle
the sensors like a human brain handles sensors.

 

Of course we don't have to simulate the entire universe to simulate
flight. The fact is we simulate _some_ of the environment in order that
flight simulation works. It's a simulation. It's not flight. This has
nothing to do with the actual problem of real embedded embodied
cognition of an unknown external environment by an AGI. You don't know
it! You are 'cognising' to find out about it. You can't simulate it and
the sensors don't give you enough info. If a human supplies that info
then you're grounding the robot in the human's cognition, not supplying
the robot with its own cognition.

 

In replication there is no simulating going on! There are inorganic,
artificially derived natural processes identical to what is going on in
a natural brain. Literally. A brain has action potential comms. A brain
has EM comms. Therefore a replicated brain will have the SAME action
potentials mutually interacting with the same EM fields. The replicant
chips will have an EEG/MEG signature like a human. There is no computing
of anything. There is an inorganic version of the identical processes going
on in a real brain.

 

I hope we're closer to being on the same page.

 

Colin

 

 

 

 

 




Re: Turing Machines

2011-08-15 Thread Craig Weinberg
On Aug 15, 10:43 pm, Jason Resch  wrote:
> I am more worried for the biologically handicapped in the future.  Computers
> will get faster, brains won't.  By 2029, it is predicted $1,000 worth of
> computer will buy a human brain's worth of computational power.  15 years
> later, you can get 1,000 X the human brain's power for $1,000.  Imagine: the
> simulated get to experience 1 century for each month the humans with
> biological brains experience.  Who will really be alive then?

Speed and power is for engines, not brains. Good ideas don't come from
engines.

Craig




Re: Turing Machines

2011-08-15 Thread Craig Weinberg
On Aug 15, 10:08 pm, Jason Resch  wrote:

> It would be a very surprising theoretical result.

Only if you have a very sentimental attachment to the theory. It
wouldn't surprise me at all.

> > Who cares? The main thing is *we can do it using replication*.
>
> What is the difference between simulation and replication?  Perhaps all our
> disagreement stems from this difference in definitions.

The difference is that simulation assumes that something can
really be something that it is not. Replication doesn't assume that,
but rather says that you can only be sure that something is what it
is.

> > We are in precisely the same position the Wright Bros were when making
> > artificial flight. 
>
> > ** **
>
> > This situation is kind of weird. Insisting that simulation/computation is
> > the only way to solve a problem is like saying ‘*all buildings must be
> > constructed out of paintings of bricks and only people doing it this way
> > will ever build a building.’*. For 60 years every building made like this
> > falls down.
>
> Its not that all brains are computers, its that the evolution of all finite
> processes can be determined by a computer.  There is a subtle difference
> between saying the brain is a computer, and saying a computer can determine
> what a brain would do.
>
> I think your analogy is a little off.  It is not that proponents of strong
> AI suggest that houses need to be made of paintings of bricks; it is that
> the anti-strong-AI position suggests that there are some bricks whose image
> cannot be depicted by a painting.

I have no problem with AI brick images making AI building images, but
an image is not a brick or a building.

> A process that cannot be predicted by a computer is like a sound that cannot
> be replicated by a microphone, or an image that can't be captured by a
> painting or photograph.  It would be very surprising for such a thing to
> exist.

That's where you're making a strawman of consciousness and awareness.
You're assuming that it's a 'process'. It isn't. Charge is not a
process, nor is mass. It's an experiential property of energy over
time. It is not like a sound or a microphone, it is the listener. Not
an image but the seer of images, the painter, the photographer. It
would be very surprising for such a thing to exist because it doesn't
ex-ist. It in-sists. It persists within. Within the brain, within
cells, within whales and cities, within microprocessors even but all
do not insist with the same bandwidth of awareness. The microprocessor
doesn't understand its program. If it did, it would make up a new one
by itself. If you pour water on your motherboard though, it will
figure out some very creative and unpredictable ways of responding.

> You can build your buildings out of bricks, but don't tell the artists that
> it is impossible for some bricks to be painted (or that they have to paint
> every brick in the universe for their painting to be look right!), unless
> you have some reason or evidence why that would be so.

No, Colin is right. It's the strong AI position that is asserting that
painted bricks must be real if they are painted well enough. That's
your entire position. If you paint a brick perfectly, it can only be a
brick and not a zombie brick (painting). All we are pointing out is
that there is a difference between a painting of a brick and a brick,
and if you actually want the brick to function as a brick, the
painting isn't going to work, no matter how amazingly detailed the
painting is.

Craig

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To post to this group, send email to everything-list@googlegroups.com.
To unsubscribe from this group, send email to 
everything-list+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/everything-list?hl=en.



Re: Turing Machines

2011-08-15 Thread Craig Weinberg
On Aug 15, 8:21 pm, Colin Geoffrey Hales 
wrote:
> On Mon, Aug 15, 2011 at 2:06 AM, Colin Geoffrey Hales

> The solution is: there is/can be no simulation in an artificial
> cognition. It has to use the same processes a brain uses: literally.
> This is the replication approach.
>
> Is it really such a big deal that you can't get AGI with computation?
> Who cares? The main thing is we can do it using replication. We are in
> precisely the same position the Wright Bros were when making artificial
> flight.
>
> This situation is kind of weird. Insisting that simulation/computation
> is the only way to solve a problem is like saying 'all buildings must be
> constructed out of paintings of bricks and only people doing it this way
> will ever build a building.'. For 60 years every building made like this
> has fallen down.
>
> Meanwhile I want to build a building out of bricks, and I have to
> justify my position?
>
> Very odd.

Y E S. You've nailed it.

> I literally just found out my PhD examination passed! Woohoo!
>
> So that's .
>
> Very odd.

Congratulations Dr.!




Re: Turing Machines

2011-08-15 Thread Craig Weinberg
On Aug 15, 7:18 pm, Jason Resch  wrote:
> On Mon, Aug 15, 2011 at 5:22 PM, Craig Weinberg wrote:

> Try this one, it is among the best I have 
> found:http://www.ivona.com/online/editor.php

It's nicer, but still not significantly more convincing than the
oldest version to me.

> I think you will be surprised by the progress of the next 30 years.

That's exactly what I might have said 20 years ago. I could never have
prepared myself for how disappointing the future turned out to be, so
yes, if in 2041 we aren't living in a world that makes Idiocracy or
Soylent Green seem naively optimistic, then I will be pleasantly
surprised. If you compare the technological advances from 1890-1910 to
those of 1990-2010 I think you will see what I mean. We're inventing
cell phones that play games instead of replacements for cars,
electricity grids, moving pictures, radio, aircraft, etc etc.

> > This is just mapping vocal cord vibrations to digital logic -
> > a minuscule achievement compared to mapping even the simplest
> > neurotransmitter interactions. Computers double in power/price, but
> > they also probably halve in efficiency/memory. It takes longer now to
> > boot up and shut down the computer, longer to convert a string of text
> > into voice.
>
> Lines of code (code complexity) have been found to grow even more quickly
> than Moore's law.  (At least in the example of Microsoft Word that I read
> about at one point)

Exactly. There isn't an exponential net improvement.

> > Like CGI, despite massive increases in computing power, it still only
> > superficially resembles what it's simulating. IMO, there has been
> little or no ground gained even in simulating the appearance of genuine
> > feeling, let alone in producing something which itself feels.
>
> That is the property of exponential processes and progress, looking back the
> curve seems flat, look to see where it is going and you'll see an
> overwhelming spike.
>
> Have you seen the recent documentary "Transcendent Man"?
>
> You seem to accept that computing power is doubling every year.  The fruit
> fly has 10^5 neurons, a mouse 10^7, a cat 10^9, and a human 10^11.  It's
> only a matter of time (and not that much) before a $10 thumb drive will have
> enough memory to store a complete mapping of all the neurons in your brain.
> People won't need to freeze themselves to be immortal at that point.

Look at the interface that we're using to have this conversation.
Hunching over a monitor and keyboard to type plain text. Using ">>>"
characters like it was 1975 being printed out on a dot matrix printer
over an acoustic coupler. The quantitative revolution has turned out
to be as much of a mirage as space travel. An ever receding promise
with ever shorter intervals of satisfaction. Our new toys are only fun
for a matter of days or weeks now before we feel them lacking.
Facebook means less interest in old friendships. Streaming music and
video means disposable entertainment. All of our appetites are dulled
yet amplified under the monotonous influence of infoporn on demand.
Sure, it has its consolations, but, to quote Jim Morrison, "No eternal
reward will forgive us now for wasting the dawn." We may not need to
freeze ourselves, but we will wish we had frozen some of our reasons
for wanting to be immortal.

Craig




Re: Turing Machines

2011-08-15 Thread meekerdb

On 8/15/2011 7:08 PM, Jason Resch wrote:


just like you can simulate flight if you simulate the environment
you are flying in.


But do we need to simulate the entire atmosphere in order to simulate 
flight, or just the atmosphere in the immediate area around the 
surfaces of the plane?  Likewise, it seems we could take shortcuts in 
simulating the environment surrounding a mind and get the behavior we 
are after.


Why simulate?  Why not create a robot with sensors so it can interact 
with the natural environment?


Brent




Re: Turing Machines

2011-08-15 Thread Jason Resch
I am more worried for the biologically handicapped in the future.  Computers
will get faster, brains won't.  By 2029, it is predicted that $1,000 worth of
computing hardware will buy a human brain's worth of computational power.  15
years later, you can get 1,000x the human brain's power for $1,000.  Imagine: the
simulated get to experience 1 century for each month the humans with
biological brains experience.  Who will really be alive then?
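
The arithmetic behind these figures can be sketched in a few lines (the
18-month doubling period is my assumption; formulations of Moore's law vary):

```python
# Back-of-envelope check of the claims above.
# Assumption (not from the post): performance per dollar doubles every 18 months.
doubling_months = 18
years = 15

doublings = years * 12 / doubling_months   # 10 doublings in 15 years
factor = 2 ** doublings
print(factor)                              # 1024.0 -- roughly the "1,000 X" figure

# A mind running ~1,000x faster than real time experiences about
# 1,024 subjective months (~85 years, roughly a century) per calendar month:
print(round(factor / 12))                  # 85
```

At a yearly doubling instead, 15 years would give 2^15 = 32,768x, so the
"1,000x" figure is the conservative, 18-month-doubling reading.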

Jason

On Mon, Aug 15, 2011 at 9:22 PM, meekerdb  wrote:

> On 8/15/2011 4:18 PM, Jason Resch wrote:
>
>> You seem to accept that computing power is doubling every year.  The fruit
>> fly has 10^5 neurons, a mouse 10^7, a cat 10^9, and a human 10^11.  It's
>> only a matter of time (and not that much) before a $10 thumb drive will have
>> enough memory to store a complete mapping of all the neurons in your brain.
>>  People won't need to freeze themselves to be immortal at that point.
>>
>
> But they'll have to be rich enough to afford super-computer time if they
> want to really live.  :-)
>
> Brent
>
>
>




Re: Turing Machines

2011-08-15 Thread meekerdb

On 8/15/2011 4:18 PM, Jason Resch wrote:
You seem to accept that computing power is doubling every year.  The 
fruit fly has 10^5 neurons, a mouse 10^7, a cat 10^9, and a human 
10^11.  It's only a matter of time (and not that much) before a $10 
thumb drive will have enough memory to store a complete mapping of all 
the neurons in your brain.  People won't need to freeze themselves to 
be immortal at that point.


But they'll have to be rich enough to afford super-computer time if they 
want to really live.  :-)


Brent




Re: Turing Machines

2011-08-15 Thread Jason Resch
On Mon, Aug 15, 2011 at 7:21 PM, Colin Geoffrey Hales <
cgha...@unimelb.edu.au> wrote:

> On Mon, Aug 15, 2011 at 2:06 AM, Colin Geoffrey Hales <
> cgha...@unimelb.edu.au> wrote:
>
> Read all your comments... cutting/snipping to the chase...
>
> It is a little unfortunate you did not answer all of the questions.  I hope
> that you will answer both questions (1) and (2) below.
>
>
> Yeah sorry about that... I’m really pressed at the moment.
>
>
No worries.


>  
>
> [Jason ]
>
>
> Your belief that AGI is impossible to achieve through computers depends on
> at least one of the following propositions being true:
> 1. Accurate simulation of the chemistry or physics underlying the brain is
> impossible
> 2. Human intelligence is something beyond the behaviors manifested by the
> brain
> Which one(s) do you think is/are correct and why? 
>
>
> Thanks,
>
> Jason
>
>  
>
> [Colin] 
>
> I think you’ve misunderstood the position in ways that I suspect are
> widespread...
>
>  
>
> 1) simulation of the chemistry or physics underlying the brain is
> impossible
>
>
> Question 1:
>
> Do you believe correct behavior, in terms of the relative motions of
> particles is possible to achieve in a simulation?  
>
>
> [Colin] 
>
>
> YES, BUT *Only if you simulate the entire universe*. Meaning you already
> know everything, so why bother?
>
>

Interesting idea.  But do you really think the happenings of some asteroid
floating in interstellar space in the Andromeda galaxy make any difference
to your intelligence?  Could we get away with only simulating the light cone
for a given mind instead of the whole universe?


>
> So NO, in the real practical world of computing an agency X that is
> ignorant of NOT_X.
>
>
> For a computed cognitive agent X, this will come down to how much impact
> the natural processes of NOT_X (the external world) involves itself in the
> natural processes of X. 
>
>
> I think there is a nonlocal direct impact of NOT_X on the EM fields inside
> X. The EM fields are INPUT, not OUTPUT.
>
> But this will only be settled experimentally. I aim to do that.
>

I think I have a faint idea of what you are saying, but it is not fully
clear.  Are you hypothesizing there are non-local effects between every
particle in the universe which are necessary to explain the EM fields, and
these EM fields are necessary for intelligent behavior?


> 
>
>
> 
>
> For example, take the example of the millennium run.  The simulation did
> not produce dark matter, but the representation of dark matter behaved like
> dark matter did in the universe (in terms of relative motion).  If we can
> simulate accurately the motions of particles, to predict where they will be
> in time T given where they are now, then we can peek into the simulation to
> see what is going on.
>
> Please answer if you agree the above is possible.  If you do not, then I do
> not see how your viewpoint is consistent with the fact that we can build
> simulations like the millennium run, or test aircraft designs before building
> them, etc.
>
> Question 2:
>
> Given the above (that we can predict the motions of particles in relation
> to each other) then we can extract data from the simulation to see how
> things are going inside.  Much like we had to convert a large array of
> floating point values representing particle positions in the Millennium
> simulation in order to render a video of a fly-through.  If the only
> information we can extract is the predicted particle locations, then even
> though the simulation does not create EM fields or fire in this universe, we
> can at least determine how the different particles will be arranged after
> running the simulation.
>
> Therefore, if we simulated a brain answering a question in a standardized
> test, we can peer into the simulation to determine in which bubble the
> graphite particles are concentrated (from the simulated pencil, controlled
> by the simulated brain in the model of particle interactions within an
> entire classroom).  Therefore, we have a model which tells us what an
> intelligent person would do, based purely on positions of particles in a
> simulation.
>
> What is wrong with the above reasoning?  It seems to me if we have a model
> that can be used to determine what an intelligence would do, then the model
> could stand in for the intelligence in question.
>
>
> [Colin] 
>
> I think I already answered this. You can simulate a human if you already
> know everything,
>

We would need to know everything to be certain it is an accurate simulation,
but we don't need to know everything to attempt to build a model based on
our current knowledge.  Then see whether or not it works.  If the design
fails, then we are missing something, if it does work like a human mind
does, then it would appear we got the important details right.


> just like you ca

RE: Turing Machines

2011-08-15 Thread Colin Geoffrey Hales
On Mon, Aug 15, 2011 at 2:06 AM, Colin Geoffrey Hales
 wrote:

Read all your comments... cutting/snipping to the chase...

It is a little unfortunate you did not answer all of the questions.  I
hope that you will answer both questions (1) and (2) below.

 

Yeah sorry about that... I'm really pressed at the moment.

 

[Jason ]


Your belief that AGI is impossible to achieve through computers
depends on at least one of the following propositions being true:
1. Accurate simulation of the chemistry or physics underlying
the brain is impossible
2. Human intelligence is something beyond the behaviors
manifested by the brain
Which one(s) do you think is/are correct and why? 


Thanks,

Jason

 

[Colin] 

I think you've misunderstood the position in ways that I suspect
are widespread...

 

1) simulation of the chemistry or physics underlying the brain
is impossible


Question 1:

Do you believe correct behavior, in terms of the relative motions of
particles is possible to achieve in a simulation?  

 

[Colin] 

 

YES, BUT only if you simulate the entire universe. Meaning you already
know everything, so why bother?

 

So NO, in the real practical world of computing an agency X that is
ignorant of NOT_X.

 

For a computed cognitive agent X, this will come down to how much impact
the natural processes of NOT_X (the external world) involves itself in
the natural processes of X. 

 

I think there is a nonlocal direct impact of NOT_X on the EM fields
inside X. The EM fields are INPUT, not OUTPUT.

But this will only be settled experimentally. I aim to do that.

 

For example, take the example of the millennium run.  The simulation did
not produce dark matter, but the representation of dark matter behaved
like dark matter did in the universe (in terms of relative motion).  If
we can simulate accurately the motions of particles, to predict where
they will be in time T given where they are now, then we can peek into
the simulation to see what is going on.

Please answer if you agree the above is possible.  If you do not, then I
do not see how your viewpoint is consistent with the fact that we can
build simulations like the millennium run, or test aircraft designs
before building them, etc.

Question 2:

Given the above (that we can predict the motions of particles in
relation to each other) then we can extract data from the simulation to
see how things are going inside.  Much like we had to convert a large
array of floating point values representing particle positions in the
Millennium simulation in order to render a video of a fly-through.  If
the only information we can extract is the predicted particle locations,
then even though the simulation does not create EM fields or fire in
this universe, we can at least determine how the different particles
will be arranged after running the simulation.

Therefore, if we simulated a brain answering a question in a
standardized test, we can peer into the simulation to determine in which
bubble the graphite particles are concentrated (from the simulated
pencil, controlled by the simulated brain in the model of particle
interactions within an entire classroom).  Therefore, we have a model
which tells us what an intelligent person would do, based purely on
positions of particles in a simulation.

What is wrong with the above reasoning?  It seems to me if we have a
model that can be used to determine what an intelligence would do, then
the model could stand in for the intelligence in question.

 

[Colin] 

I think I already answered this. You can simulate a human if you already
know everything, just like you can simulate flight if you simulate the
environment you are flying in. In the equivalent case applied to human
cognition, you have to simulate the entire universe in order that the
simulation is accurate. But we are trying to create an artificial
cognition that can be used to find out about the universe outside the
artificial cognition ... like humans, you don't know what's outside...so
you can't do the simulation. The reasoning fails at this point, IMO.

 

The above issue about the X/NOT_X interrelationship stands, however.

 

The solution is: there is/can be no simulation in an artificial
cognition. It has to use the same processes a brain uses: literally.
This is the replication approach.

 

Is it really such a big deal that you can't get AGI with computation?
Who cares? The main thing is we can do it using replication. We are in
precisely the same position the Wright Bros were when making artificial
flight. 

 

This situation is kind of weird. Insisting that simulation/computation
is the only way to solve a problem is like saying 'all buildings must be
constructed out of paintings of bricks and only people doing it this way
will ever build a building.'. For 60 years every building made like this
has fallen down. 

 

Meanwhile I want to build a buil

Re: Turing Machines

2011-08-15 Thread Jason Resch
On Mon, Aug 15, 2011 at 5:22 PM, Craig Weinberg wrote:

> On Aug 15, 5:42 pm, Jason Resch  wrote:
>
> > We're already simulating pieces of brain tissue on the order of fruit fly
> > brains (10,000 neurons).  Computers double in power/price every year, so
> 6
> > years later we could simulate mouse brains, another 6 we can simulate cat
> > brains, and in another 6 we can simulate human brains. (By 2030)
>
> If you have a chance to listen and compare the following:
>
> http://www.retrobits.net/atari/downloads/samg.mp3  Done in 1982 with a
> program 6k in size. Six. thousand. bytes. on the Atari BASIC operating
> system that was 8k ROM.
>
> http://www.acapela-group.com/text-to-speech-interactive-demo.html
> (for side by side comparison paste:
>
>
Try this one, it is among the best I have found:
http://www.ivona.com/online/editor.php



> Four score and seven years ago our fathers brought forth on this
> continent, a new nation, conceived in Liberty, and dedicated to the
> proposition that all men are created equal.
>
> into the text box and choose English (US) - Ryan for the voice.
>
> So in 29 years of computing progress, on software that is orders of
> magnitude more complex and resource-heavy, we can definitely hear a
> strong improvement, however, at this rate, in another 30 years, we are
> still not going to have anything that sounds convincingly like natural
> speech.


I think you will be surprised by the progress of the next 30 years.


> This is just mapping vocal cord vibrations to digital logic -
> a minuscule achievement compared to mapping even the simplest
> neurotransmitter interactions. Computers double in power/price, but
> they also probably halve in efficiency/memory. It takes longer now to
> boot up and shut down the computer, longer to convert a string of text
> into voice.
>

Lines of code (code complexity) have been found to grow even more quickly
than Moore's law.  (At least in the example of Microsoft Word that I read
about at one point)


>
> Like CGI, despite massive increases in computing power, it still only
> superficially resembles what it's simulating. IMO, there has been
> little or no ground gained even in simulating the appearance of genuine
> feeling, let alone in producing something which itself feels.
>
>
That is the property of exponential processes and progress, looking back the
curve seems flat, look to see where it is going and you'll see an
overwhelming spike.

Have you seen the recent documentary "Transcendent Man"?

You seem to accept that computing power is doubling every year.  The fruit
fly has 10^5 neurons, a mouse 10^7, a cat 10^9, and a human 10^11.  It's
only a matter of time (and not that much) before a $10 thumb drive will have
enough memory to store a complete mapping of all the neurons in your brain.
People won't need to freeze themselves to be immortal at that point.
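
As a sanity check on the storage side of this claim, here is a rough sketch;
every number in it is an assumption of mine (synapse counts and bytes-per-synapse
estimates vary widely):

```python
import math

# Assumed figures (illustrative only): 10^11 neurons, ~10^4 synapses per
# neuron, 4 bytes (one 32-bit weight) to record each synapse.
neurons = 10**11
synapses_per_neuron = 10**4
bytes_per_synapse = 4

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(total_bytes / 10**12)   # 4000.0 -- about 4 petabytes

# Years until a $10 drive holds that, assuming ~10 GB of flash per $10
# today and capacity per dollar doubling yearly:
years = math.log2(total_bytes / (10 * 10**9))
print(round(years))           # 19
```

So a connectivity-plus-weights map is petabyte-scale, and under a clean
yearly doubling it reaches thumb-drive prices in roughly two decades.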

Jason




Re: Turing Machines

2011-08-15 Thread Craig Weinberg
On Aug 15, 5:42 pm, Jason Resch  wrote:

> We're already simulating peices of brain tissue on the order of fruit fly
> brains (10,000 neurons).  Computers double in power/price every year, so 6
> years later we could simulate mouse brains, another 6 we can simulate cat
> brains, and in another 6 we can simulate human brains. (By 2030)

If you have a chance to listen and compare the following:

http://www.retrobits.net/atari/downloads/samg.mp3  Done in 1982 with a
program 6k in size. Six. thousand. bytes. on the Atari BASIC operating
system that was 8k ROM.

http://www.acapela-group.com/text-to-speech-interactive-demo.html
(for side by side comparison paste:

Four score and seven years ago our fathers brought forth on this
continent, a new nation, conceived in Liberty, and dedicated to the
proposition that all men are created equal.

into the text box and choose English (US) - Ryan for the voice.

So in 29 years of computing progress, on software that is orders of
magnitude more complex and resource-heavy, we can definitely hear a
strong improvement, however, at this rate, in another 30 years, we are
still not going to have anything that sounds convincingly like natural
speech. This is just mapping vocal cord vibrations to digital logic -
a minuscule achievement compared to mapping even the simplest
neurotransmitter interactions. Computers double in power/price, but
they also probably halve in efficiency/memory. It takes longer now to
boot up and shut down the computer, longer to convert a string of text
into voice.

Like CGI, despite massive increases in computing power, it still only
superficially resembles what it's simulating. IMO, there has been
little or no ground gained even in simulating the appearance of genuine
feeling, let alone in producing something which itself feels.

Craig







Re: Turing Machines

2011-08-15 Thread Jason Resch
On Mon, Aug 15, 2011 at 1:17 PM, Evgenii Rudnyi  wrote:

> On 15.08.2011 07:56 Jason Resch said the following:
>
> ...
>
>
>  Can we accurately simulate physical laws or can't we?  Before you
>> answer, take a few minutes to watch this amazing video, which
>> simulates the distribution of mass throughout the universe on the
>> largest scales: 
>> http://www.youtube.com/watch?v=W35SYkfdGtw (Note each
>> point of light represents a galaxy, not a star)
>>
>
> The answer to your question depends on what you mean by accurately and what
> by physical laws. I am working with finite elements (more specifically with
> ANSYS Multiphysics) and I can tell for sure that if you speak of simulation
> of the universe, then the current simulation technology does not scale.
> Nowadays one can solve a linear system of dimension one billion, but
> this will not help you. I would say that either contemporary numerical
> methods are dead wrong, or the simulated equations are not the right ones. In
> this respect, you may want to look how simulation is done for example in
> Second Life.
>
> Well, today numerical simulation is a good business (computer-aided
> engineering is about a billion per year) and it continues to grow. Yet, if
> you look in detail, there are some areas where it can be employed
> nicely and some where it is better to forget about simulation.
>
> I understand that you speak "in principle".


Yes, this is why in my first post, I said consider God's Turing machine
(free from our limitations).  Then it is obvious that with the appropriate
tape, a physical system can be approximated to any desired level of accuracy
so long as it is predictable.  Colin said such models of physics or
chemistry are impossible, so I hope he elaborates on what makes these
systems unpredictable.
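
The "any desired level of accuracy" step is worth making concrete. A toy
sketch (mine, and deliberately trivial): simulate a predictable system with a
discrete time step, and the error can be driven as low as you like by
shrinking the step.

```python
import math

def simulate(dt, t_end=1.0):
    """Forward-Euler simulation of the predictable system dx/dt = -x, x(0) = 1."""
    x = 1.0
    for _ in range(round(t_end / dt)):
        x += dt * (-x)
    return x

exact = math.exp(-1.0)   # true value of x at t = 1
for dt in (0.1, 0.01, 0.001):
    print(dt, abs(simulate(dt) - exact))
# Each tenfold cut in the step size shrinks the error roughly tenfold.
```

Nothing about the argument depends on this particular system; the point is
only that for a predictable system, accuracy is a budget question, not an
in-principle barrier.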



> Yet, I am not sure if extrapolation too far away from the current knowledge
> makes sense, as eventually we are coming to "philosophical controversies".
>
>
We're already simulating pieces of brain tissue on the order of fruit fly
brains (10,000 neurons).  Computers double in power/price every year, so 6
years later we could simulate mouse brains, another 6 we can simulate cat
brains, and in another 6 we can simulate human brains. (By 2030)
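
For what it's worth, the cadence of that extrapolation can be written down
explicitly (a sketch under the post's own assumptions: ~10^4 simulated
neurons today and a clean yearly doubling; treat the dates as illustrative):

```python
import math

current_neurons = 10**4   # roughly today's simulated-tissue scale, per the post
brains = {"fruit fly": 10**5, "mouse": 10**7, "cat": 10**9, "human": 10**11}

# With capacity doubling yearly, each 100x jump in neuron count costs
# log2(100) ~ 6.6 years -- the source of the "another 6 years" cadence.
for name, n in brains.items():
    print(f"{name}: ~{math.log2(n / current_neurons):.1f} years away")
```

The human figure comes out at ~23 years (the mid-2030s), slightly later than
2030, because six years of doubling buys 64x rather than a full 100x.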

But all of this is an aside from the point I was making regarding the power
and versatility of Turing machines.  Those who think Artificial Intelligence
is not possible with computers must show what about the brain is
unpredictable or unmodelable.

Jason




Re: Turing Machines

2011-08-15 Thread Craig Weinberg
see if this helps..

http://s33light.org/post/8963930299




Re: Turing Machines

2011-08-15 Thread Evgenii Rudnyi

On 15.08.2011 07:56 Jason Resch said the following:

...


Can we accurately simulate physical laws or can't we?  Before you
answer, take a few minutes to watch this amazing video, which
simulates the distribution of mass throughout the universe on the
largest scales: http://www.youtube.com/watch?v=W35SYkfdGtw (Note each
point of light represents a galaxy, not a star)


The answer to your question depends on what you mean by accurately and 
what by physical laws. I am working with finite elements (more 
specifically with ANSYS Multiphysics) and I can tell for sure that if 
you speak of simulation of the universe, then the current simulation 
technology does not scale. Nowadays one can solve a linear system of 
dimension one billion, but this will not help you. I would say that 
either contemporary numerical methods are dead wrong, or the simulated 
equations are not the right ones. In this respect, you may 
want to look how simulation is done for example in Second Life.


Well, today numerical simulation is a good business (computer-aided 
engineering is about a billion per year) and it continues to grow. Yet, 
if you look in detail, there are some areas where it can be 
employed nicely and some where it is better to forget about simulation.


I understand that you speak "in principle". Yet, I am not sure that 
extrapolating too far from current knowledge makes sense, as 
eventually we come to "philosophical controversies".


Evgenii




Re: Turing Machines

2011-08-15 Thread Jason Resch
On Mon, Aug 15, 2011 at 2:06 AM, Colin Geoffrey Hales <
cgha...@unimelb.edu.au> wrote:

> Read all your comments... cutting/snipping to the chase...
>

It is a little unfortunate you did not answer all of the questions.  I hope
that you will answer both questions (1) and (2) below.


>
> [Jason ]
>
> Your belief that AGI is impossible to achieve through computers depends on
> at least one of the following propositions being true:
> 1. Accurate simulation of the chemistry or physics underlying the brain is
> impossible
> 2. Human intelligence is something beyond the behaviors manifested by the
> brain
> Which one(s) do you think is/are correct and why? 
>
>
> Thanks,
>
> Jason
>
>
> [Colin] 
>
> I think you’ve misunderstood the position in ways that I suspect are
> widespread...
>
>
> 1) simulation of the chemistry or physics underlying the brain is
> impossible
>

Question 1:

Do you believe correct behavior, in terms of the relative motions of
particles, is achievable in a simulation?  Take, for example, the
Millennium Run.  The simulation did not produce dark matter, but the
representation of dark matter behaved like dark matter did in the
universe (in terms of relative motion).  If we can accurately simulate the
motions of particles, predicting where they will be at time T given where
they are now, then we can peek into the simulation to see what is going on.

Please answer whether you agree the above is possible.  If you do not, then I do
not see how your viewpoint is consistent with the fact that we can build
simulations like the Millennium Run, or test aircraft designs before building
them, etc.
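The prediction being described can be sketched in miniature: given positions and velocities now, a deterministic integrator computes where the particles will be at time T. (Plain Python, two bodies, kick-drift-kick leapfrog; the Millennium Run itself used far more sophisticated tree/particle-mesh gravity codes, so this only illustrates the principle.)

```python
# Toy 2-D gravitational N-body integration: deterministic prediction of
# particle positions at a later time from their state now.

G = 1.0  # gravitational constant in toy units

def accelerations(pos, masses):
    """Pairwise Newtonian gravity on each particle (2-D)."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r2 = dx * dx + dy * dy + 1e-12  # softening avoids division by zero
            inv_r3 = r2 ** -1.5
            acc[i][0] += G * masses[j] * dx * inv_r3
            acc[i][1] += G * masses[j] * dy * inv_r3
    return acc

def leapfrog(pos, vel, masses, dt, steps):
    """Advance the system by steps * dt using kick-drift-kick leapfrog."""
    for _ in range(steps):
        a = accelerations(pos, masses)
        vel = [[w[0] + 0.5 * dt * ai[0], w[1] + 0.5 * dt * ai[1]]
               for w, ai in zip(vel, a)]
        pos = [[p[0] + dt * w[0], p[1] + dt * w[1]] for p, w in zip(pos, vel)]
        a = accelerations(pos, masses)
        vel = [[w[0] + 0.5 * dt * ai[0], w[1] + 0.5 * dt * ai[1]]
               for w, ai in zip(vel, a)]
    return pos, vel

# Two equal masses on a mutual circular orbit (separation 1, speed 1/sqrt(2)).
masses = [1.0, 1.0]
pos = [[-0.5, 0.0], [0.5, 0.0]]
v = 0.5 ** 0.5
vel = [[0.0, -v], [0.0, v]]

pos, vel = leapfrog(pos, vel, masses, dt=0.001, steps=1000)
dx = pos[1][0] - pos[0][0]
dy = pos[1][1] - pos[0][1]
sep = (dx * dx + dy * dy) ** 0.5
print(sep)  # separation stays ~1.0: the predicted orbit remains circular
```

"Peeking into the simulation" is then just reading off `pos` at whatever time T one has integrated to, exactly as the Millennium fly-through video was rendered from arrays of predicted particle positions.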

Question 2:

Given the above (that we can predict the motions of particles in relation to
each other) then we can extract data from the simulation to see how things
are going inside.  Much like we had to convert a large array of floating
point values representing particle positions in the Millennium simulation in
order to render a video of a fly-through.  If the only information we can
extract is the predicted particle locations, then even though the simulation
does not create EM fields or fire in this universe, we can at least
determine how the different particles will be arranged after running the
simulation.

Therefore, if we simulated a brain answering a question in a standardized
test, we can peer into the simulation to determine in which bubble the
graphite particles are concentrated (from the simulated pencil, controlled
by the simulated brain in the model of particle interactions within an
entire classroom).  Therefore, we have a model which tells us what an
intelligent person would do, based purely on positions of particles in a
simulation.

What is wrong with the above reasoning?  It seems to me if we have a model
that can be used to determine what an intelligence would do, then the model
could stand in for the intelligence in question.

Jason




Re: Turing Machines

2011-08-15 Thread Craig Weinberg
Jason & Colin, I'm going to just try to address everything in one
reply.

I agree with Colin pretty much down the line. My position assumes that
worldview as axiomatic and then adds some hypotheses on top of that.
Jason, your original list of questions is predicated on the very
assumption that I've challenged all along but can't seem to get you
(or others) to look at. I have experienced this many, many times
before, so it doesn't surprise me, and I can't be sure that it's even
possible for a mind so well versed in 'right hand' logic to be
able to shift into a 'left hand' mode, even if it wanted to. I have not
seen it happen yet.

As Colin says, the assumption is that the logic behind the Turing
machine has something to do with the reality of the world we are
modeling through it. If you make a universe based upon Turing
computations alone, there is no gravity or fusion, no biological
molecules, etc. There are only meaningless patterns of 1 and 0 through
which we can plot out whatever abstract coordinates we wish to keep
track of. It means nothing to us until it is converted into physical
changes which we can sense with our eyes, like ink on tape or
illuminated pixels on a screen.

On Aug 15, 3:06 am, Colin Geoffrey Hales 
wrote:
> Read all your comments... cutting/snipping to the chase...
>
> [Jason ]
> Your belief that AGI is impossible to achieve through computers depends
> on at least one of the following propositions being true:
> 1. Accurate simulation of the chemistry or physics underlying the brain
> is impossible

You can simulate it as far as being able to model the aspects of its
behavior that you can observe, but you can't necessarily predict that
behavior over time, any more than you can predict what other people
might say to you today. The chemistry and physics of the brain are
partially determined by the experiences of the environment through the
body, and partially determined by the sensorimotive agenda of the
mind, both of which are related to, but not identical with, the momentum
and consequences of its neurological biochemistry. All three are
woven together as an inseparable whole.

> 2. Human intelligence is something beyond the behaviors manifested by
> the brain

Any intelligence is something beyond the behaviors of matter. It's not
as if a Turing machine is squirting out omnipotent toothpaste; you are
inferring that there is some world being created (metaphysically)
which can be experienced somewhere else, beyond the behavior of the pen
and tape, motors and guides, chips and wires.

> Which one(s) do you think is/are correct and why?
>
> Thanks,
>
> Jason
>
> [Colin]
>
> I think you've misunderstood the position in ways that I suspect are
> widespread...
>
> 1) simulation of the chemistry or physics underlying the brain is
> impossible
>
> It's quite possible, just irrelevant! 'Chemistry' and 'physics' are
> terms for models of the natural world used to describe how natural
> processes appear to an observer inside the universe. You can simulate
> (compute physics/chem. models) until you turn blue, and be as right as
> you want: all you will do is predict how the universe appears to an
> observer.
>
> This has nothing to do with creating  artificial intelligence.
>
> Natural intelligence is a product of the actual natural world, and is
> not a simulation. Logic dictates that, just like the wheel, fire, steam
> power, light and flight, artificial cognition involves the actual
> natural processes found in brains. This is not a physics model of the
> brain implemented in any sense of the word. Artificial cognition will be
> artificial in the same way that artificial light is light. Literally. In
> brains we know there are action potentials coupling/resonating with a
> large unified EM field system, poised on/around the cusp of an unstable
> equilibrium.

Colin, here is where you can consider my idea of sensorimotive
electromagnetism if you want. What really is an EM field? What is it
made of and how do we know? My hypothesis is that we actually don't
know, and that the so called EM field is a logical inference of causal
phenomenon to which matter (organic molecules within a neuron in this
case) reacts to. Instead, I think that it makes more sense as sense. A
sensorimotive synchronization shared amongst molecules and cells alike
(albeit in different perceptual frames of reference - PRIFs). If two
or more people share a feeling and they act in synchrony, from a
distance it could appear as if they are subject to an EM field which
informs them from outside their bodies and exists in between their
bodies when in fact the synchronization arises from within, through
semantic sharing of sense. It's reproduced or imitated locally in each
body as a feeling - the same feeling figuratively but separate
instantiations literally in separate brains (or cells, molecules, as
the case may be).

All of our inferences of electromagnetism come through observing the
behaviors of matter with matter. In orde

RE: Turing Machines

2011-08-15 Thread Colin Geoffrey Hales
Read all your comments... cutting/snipping to the chase...

 

[Jason ]
Your belief that AGI is impossible to achieve through computers depends
on at least one of the following propositions being true:
1. Accurate simulation of the chemistry or physics underlying the brain
is impossible
2. Human intelligence is something beyond the behaviors manifested by
the brain
Which one(s) do you think is/are correct and why? 


Thanks,

Jason

 

[Colin] 

I think you've misunderstood the position in ways that I suspect are
widespread...

 

1) simulation of the chemistry or physics underlying the brain is
impossible

It's quite possible, just irrelevant! 'Chemistry' and 'physics' are
terms for models of the natural world used to describe how natural
processes appear to an observer inside the universe. You can simulate
(compute physics/chem. models) until you turn blue, and be as right as
you want: all you will do is predict how the universe appears to an
observer.

 

This has nothing to do with creating  artificial intelligence. 

 

Natural intelligence is a product of the actual natural world, and is
not a simulation. Logic dictates that, just like the wheel, fire, steam
power, light and flight, artificial cognition involves the actual
natural processes found in brains. This is not a physics model of the
brain implemented in any sense of the word. Artificial cognition will be
artificial in the same way that artificial light is light. Literally. In
brains we know there are action potentials coupling/resonating with a
large unified EM field system, poised on/around the cusp of an unstable
equilibrium. So real artificial cognition will have, you guessed it,
action potential coupling resonating with a large unified EM field
system, poised on/around the cusp of an unstable equilibrium. NOT a
model of it computed on something. Such inorganic cognition will
literally have an EEG signature like humans. If you want artificially
instantiated fire you must provide fuel, oxygen and heat/spark. In the
same way, if you want artificial cognition you must provide an equivalent
minimal set of necessary physical ingredients.

 

 

2. Human intelligence is something beyond the behaviors manifested by
the brain
This sounds very strange to me. Human intelligence (an ability to
observe and produce the models called 'physics and chemistry') resulted
from the natural processes (as apparent to us) described by us as
physics and chemistry, not the models called physics & chemistry. It's
confusingly self-referential...but logically sound.

 

= = = = = = = = = = = = = = = =

The fact that you posed the choices the way you did indicates a profound
confusion of natural processes with computed models of natural
processes. The process of artificial cognition that uses natural
processes in an artificial context is called 'brain tissue replication'.
In replication there is no computing and no simulation. This is the way
to explore/understand and develop artificial cognition in exactly
the way we used artificial flight to figure out the physics of flight.
We FLEW. We did not examine a physics model of flying (we didn't have
one at the time!). Does a computed physics model of flight fly? NO. Does
a computed physics model of combustion burn? NO. Is a computed physics
model of a hurricane a hurricane? NO. 

 

So how can a computed physics model of cognition be cognition?

 

I hope you can see the distinction I am trying to make clear.
Replication is not simulation.

 

Colin

 

 

 




Re: Turing Machines

2011-08-14 Thread Jason Resch
On Mon, Aug 15, 2011 at 12:13 AM, Colin Geoffrey Hales <
cgha...@unimelb.edu.au> wrote:

>
> Colin and Craig,
>
> Imagine that God has such a machine on his desk, which he uses to compute
> the updated positions of each particle in some universe over each unit of
> Planck time.  Would you agree it is possible for the following to occur in
> the simulation:
>
> 1. Stars to coalesce due to gravity and begin fusion?
> 2. Simple biological molecules to form?
> 3. Simple single-celled life forms to evolve?
> 4. More complex multi-cellular life forms to evolve?
> 5. Intelligent life forms to evolve (at least as intelligent as humans)?
> 6. Intelligent life in the simulation to solve problems and develop culture
> and technology?
> 7. For that intelligent life to question qualia?
> 8. For that intelligent life to define the hard problem?
> 9. For those beings to create an interconnected network of computers and
> debate this same topic?
>
> If you disagree with any of the numbered possibilities, please state which
> ones you disagree with.
>
>
> Colin = 
>
> I don’t know about Craig...but I disagree with all of them. 
>
> Your premise, that the God’s-Desk Turing machine is relevant, is misplaced.
>

It was to avoid any distraction on the topics of run time, resources, tape
length, etc.


> 
>
> A) The Turing Machine in the video is inside this (our reality) reality. It
> uses reality (whatever it is) to construct the Turing machine. All
> expectations of the machine are constructed on this basis. It is the only
> basis for expectations of creation of AGI within our reality.
>

Does it matter where a Turing machine is for it to be a Turing machine?  Do
you think it matters from the program's point of view what is providing the
basis for its computation?

In any case, if you find it problematic then assume the Turing machine is
run by some advanced civilization instead of on God's desk.


> 
>
> B) The Turing machine on your God’s desk is not that (A) at all. You could
> be right or wrong or merely irrelevant... and it would change nothing in (A)
> perspective.
>
> Until you de-confuse these 2 points of view, your 9 points have no meaning.
>
Can we accurately simulate physical laws or can't we?  Before you answer,
take a few minutes to watch this amazing video, which simulates the
distribution of mass throughout the universe on the largest scales:
http://www.youtube.com/watch?v=W35SYkfdGtw
(Note each point of light represents a galaxy, not a star)



> The whole idea that computation is necessarily involved in intelligence is
> also likewise taken along for the ride. There’s no (A)-style  Turing
> computation going on in a brain.
>
Either the brain follows predictable laws or it does not.  If it does follow
predictable laws, then a model of the brain's behavior can be created.  The
future evolution of this model can then be determined by a Turing machine.
The evolution of the model would be as generally intelligent as the brain
its model was based upon.

You must believe in some randomness, magic, infinities or undecidability
somewhere in the physics of this universe that are relevant to the behavior
of the brain.  Otherwise there is no reason for such a model to not be
possible.


> (A)-style Turing-Computing a model of a brain is not a brain for the same
> reason (A)-style  computing a model of fire is not fire.
>
But the question here is whether or not the model is intelligent?  Not what
"style" of intelligence it happens to be.  I don't see how the "style" of
intelligence can make any meaningful difference.  The intelligence of the
model could drive the same behaviors, it would react the same way in the
same situations, answer the same questions with the same answers, fill out
the bubbles in a standardized test in the same way, so how is this
"A-intelligence" different from "B-intelligence"?  I think you are
manufacturing a difference where there is none.  (Does that make it an
artificial difference?)


> 
>
> To me, 
>
> (i) reality-as-computation
>
> (ii) computation of a model of reality within the reality
> 
>
> (iii) to be made of/inside an actual reality, and able to make a
> model of it from within
>
> (iv) an actual reality
>
> are all different things. The video depicts a bit of a (iv) doing (iii),
> from the perspective of an observer within (iv). I’m not interested in
> simulating anything. I want to create artificial cognition (AGI) the same
> way artificial flight is flight.
>
>
Your belief that AGI is impossible to achieve through computers depends on
at least one of the following propositions being true:
1. Accurate simulation of the chemistry or physics underlying the brain is
impossible
2. Human intelligence is something beyond the behaviors manifested by the
brain
Which one(s) do you think is/are correct and why?

Thanks,

Jason


RE: Turing Machines

2011-08-14 Thread Colin Geoffrey Hales

Colin and Craig,

Imagine that God has such a machine on his desk, which he uses to
compute the updated positions of each particle in some universe over
each unit of Planck time.  Would you agree it is possible for the
following to occur in the simulation:

1. Stars to coalesce due to gravity and begin fusion?
2. Simple biological molecules to form?
3. Simple single-celled life forms to evolve?
4. More complex multi-cellular life forms to evolve?
5. Intelligent life forms to evolve (at least as intelligent as humans)?
6. Intelligent life in the simulation to solve problems and develop
culture and technology?
7. For that intelligent life to question qualia?
8. For that intelligent life to define the hard problem?
9. For those beings to create an interconnected network of computers and
debate this same topic?

If you disagree with any of the numbered possibilities, please state
which ones you disagree with.


Colin = 

I don't know about Craig...but I disagree with all of them. 

Your premise, that the God's-Desk Turing machine is relevant, is
misplaced.

A) The Turing Machine in the video is inside this (our reality) reality.
It uses reality (whatever it is) to construct the Turing machine. All
expectations of the machine are constructed on this basis. It is the
only basis for expectations of creation of AGI within our reality.

B) The Turing machine on your God's desk is not that (A) at all. You
could be right or wrong or merely irrelevant... and it would change
nothing in (A) perspective.

Until you de-confuse these 2 points of view, your 9 points have no
meaning. The whole idea that computation is necessarily involved in
intelligence is also likewise taken along for the ride. There's no
(A)-style  Turing computation going on in a brain. (A)-style
Turing-Computing a model of a brain is not a brain for the same reason
(A)-style  computing a model of fire is not fire.

To me, 

(i) reality-as-computation

(ii) computation of a model of reality within the
reality 

(iii) to be made of/inside an actual reality, and able to make a
model of it from within

(iv) an actual reality

are all different things. The video depicts a bit of a (iv) doing (iii),
from the perspective of an observer within (iv). I'm not interested in
simulating anything. I want to create artificial cognition (AGI) the
same way artificial flight is flight.

Colin

 




Re: Turing Machines

2011-08-14 Thread Jason Resch
On Sun, Aug 14, 2011 at 7:18 PM, Colin Geoffrey Hales <
cgha...@unimelb.edu.au> wrote:

>
> -Original Message-
> From: everything-list@googlegroups.com [mailto:
> everything-list@googlegroups.com] On Behalf Of Craig Weinberg
> Sent: Monday, 15 August 2011 10:07 AM
> To: Everything List
> Subject: Re: Turing Machines
>
> On Aug 14, 7:29 pm, Colin Geoffrey Hales 
> wrote:
> > Great video ... a picture of simplicity
> >
> > Q. 'What is it like to be a Turing Machine?" = Hard Problem.
> >
> > A. It's like being the pile of gear in the video, NO MATTER WHAT IS ON
> > THE TAPE.
>
> Why doesn't it matter what's on the tape? If I manually move the tape
> under the scanner myself, will the gear as a whole know the
> difference? If I dismantle the machine or turn it off will it care?
>
> Craig
>
> Colin 
>
> Precisely. How can it possibly 'care'? If the machine was (1) spread across
> the entire solar system, or (2) miniaturized to the size of an atom, (3)
> massively parallel, (4) quantum, (5) digital, (6) analog or (7)
> whatever... it doesn't matter; it will always be "what it is like to be
> the physical object (1), (2), (3), (4), (5), (6), (7)", respectively, no matter
> what is on the tape. I find the idea that the contents of the tape somehow
> magically delivers a first-person experience to be intellectually moribund.
>
> The point is: what assumed magic in the contents of the tape, fiddled
> with 'Turing-ly', delivers first-person content? Legions of folks out
> there will say "it's all information processing!", to which I add... the
> brain, which is the 100% origin of the only 'what it is like' description
> we know of, is NOT doing what the video does.
>
> So good question. I wish others would ask it.
>
>
Colin and Craig,

Imagine that God has such a machine on his desk, which he uses to compute
the updated positions of each particle in some universe over each unit of
Planck time.  Would you agree it is possible for the following to occur in
the simulation:

1. Stars to coalesce due to gravity and begin fusion?
2. Simple biological molecules to form?
3. Simple single-celled life forms to evolve?
4. More complex multi-cellular life forms to evolve?
5. Intelligent life forms to evolve (at least as intelligent as humans)?
6. Intelligent life in the simulation to solve problems and develop culture
and technology?
7. For that intelligent life to question qualia?
8. For that intelligent life to define the hard problem?
9. For those beings to create an interconnected network of computers and
debate this same topic?

If you disagree with any of the numbered possibilities, please state which
ones you disagree with.

Thanks,

Jason




RE: Turing Machines

2011-08-14 Thread Colin Geoffrey Hales

-Original Message-
From: everything-list@googlegroups.com 
[mailto:everything-list@googlegroups.com] On Behalf Of Craig Weinberg
Sent: Monday, 15 August 2011 10:07 AM
To: Everything List
Subject: Re: Turing Machines

On Aug 14, 7:29 pm, Colin Geoffrey Hales 
wrote:
> Great video ... a picture of simplicity
>
> Q. 'What is it like to be a Turing Machine?" = Hard Problem.
>
> A. It's like being the pile of gear in the video, NO MATTER WHAT IS ON
> THE TAPE.

Why doesn't it matter what's on the tape? If I manually move the tape
under the scanner myself, will the gear as a whole know the
difference? If I dismantle the machine or turn it off will it care?

Craig

Colin 

Precisely. How can it possibly 'care'? If the machine was (1) spread across the 
entire solar system, or (2) miniaturized to the size of an atom, (3) massively 
parallel, (4) quantum, (5) digital, (6) analog or (7) whatever... it doesn't 
matter; it will always be "what it is like to be the physical object (1), 
(2), (3), (4), (5), (6), (7)", respectively, no matter what is on the tape. 
I find the idea that the contents of the tape somehow magically delivers a 
first-person experience to be intellectually moribund.

The point is: what assumed magic in the contents of the tape, fiddled 
with 'Turing-ly', delivers first-person content? Legions of folks out there will 
say "it's all information processing!", to which I add... the brain, which is 
the 100% origin of the only 'what it is like' description we know of, is NOT 
doing what the video does.

So good question. I wish others would ask it.

Colin





Re: Turing Machines

2011-08-14 Thread Craig Weinberg
On Aug 14, 7:29 pm, Colin Geoffrey Hales 
wrote:
> Great video ... a picture of simplicity
>
> Q. 'What is it like to be a Turing Machine?" = Hard Problem.
>
> A. It's like being the pile of gear in the video, NO MATTER WHAT IS ON
> THE TAPE.

Why doesn't it matter what's on the tape? If I manually move the tape
under the scanner myself, will the gear as a whole know the
difference? If I dismantle the machine or turn it off will it care?

Craig




RE: Turing Machines

2011-08-14 Thread Colin Geoffrey Hales
Great video ... a picture of simplicity

 

Q. 'What is it like to be a Turing Machine?" = Hard Problem.

A. It's like being the pile of gear in the video, NO MATTER WHAT IS ON
THE TAPE.

 

Colin

 

 

From: everything-list@googlegroups.com
[mailto:everything-list@googlegroups.com] On Behalf Of Jason Resch
Sent: Monday, 15 August 2011 1:50 AM
To: everything-list@googlegroups.com
Subject: Re: Turing Machines

 

Craig,

Thanks for the video, it is truly impressive.

Jason

On Sun, Aug 14, 2011 at 9:38 AM, Craig Weinberg 
wrote:

http://www.youtube.com/watch?v=E3keLeMwfHY

Does the idea of this machine solve the Hard Problem of Consciousness,
or are qualia something more than ideas?


 




Re: Turing Machines

2011-08-14 Thread Craig Weinberg
On Aug 14, 1:39 pm, Bruno Marchal  wrote:
> On 14 Aug 2011, at 16:38, Craig Weinberg wrote:
>
> >http://www.youtube.com/watch?v=E3keLeMwfHY
>
> > Does the idea of this machine solve the Hard Problem of Consciousness,
> > or are qualia something more than ideas?
>
> Quite a cute little physical implementation of a Turing machine.

So good. Wow.

> Read Sane04; it explains how a slight variant of that machine, or how
> some program you can give to that machine, will develop qualia, and
> develop a discourse about them similar to ours, so that you have to
> treat them as zombies if you want to have them without qualia. They can
> even understand that their solution is partial, and necessarily partial.
> Their theories are clear, transparent and explicit,

They aren't clear to me at all. I keep trying to read it but I don't
get why feeling should ever result from logic, let alone be an
inevitable consequence of any particular logic.

> unlike yours, where
> it seems to be hard to guess what you assume and what you derive.
>
> But then you admit yourself not trying to really convey your  
> intuition, and so it looks just like "racism": "you will not tell me  
> that this (pointing on silicon or a sort of clock) can think?" I don't  
> take such move as argument.

It might think, but you can't tell me that it thinks it's a clock or
that it's telling time, let alone that it has feelings about that or
free will to change it. I'm open to being convinced of that, but it
doesn't make sense that we would perceive a difference between biology
and physics if there weren't in fact some kind of significant
difference. I don't see that comp provides for such a difference.

Craig




Re: Turing Machines

2011-08-14 Thread Craig Weinberg
On Aug 14, 11:50 am, Jason Resch  wrote:
> Craig,
>
> Thanks for the video, it is truly impressive.
>
> Jason

Oh glad you liked it. I agree, what a beautifully engineered project.

Craig




Re: Turing Machines

2011-08-14 Thread Bruno Marchal


On 14 Aug 2011, at 16:38, Craig Weinberg wrote:


http://www.youtube.com/watch?v=E3keLeMwfHY

Does the idea of this machine solve the Hard Problem of Consciousness,
or are qualia something more than ideas?


Quite a cute little physical implementation of a Turing machine.

Read Sane04; it explains how a slight variant of that machine, or how 
some program you can give to that machine, will develop qualia, and 
develop a discourse about them similar to ours, so that you have to 
treat them as zombies if you want to have them without qualia. They can 
even understand that their solution is partial, and necessarily partial. 
Their theories are clear, transparent and explicit, unlike yours, where 
it seems to be hard to guess what you assume and what you derive.


But then you admit yourself that you are not really trying to convey your
intuition, and so it looks just like "racism": "you will not tell me
that this (pointing at silicon or a sort of clock) can think?" I don't
take such a move as an argument.


Bruno


http://iridia.ulb.ac.be/~marchal/






Re: Turing Machines

2011-08-14 Thread Jason Resch
Craig,

Thanks for the video, it is truly impressive.

Jason

On Sun, Aug 14, 2011 at 9:38 AM, Craig Weinberg wrote:

> http://www.youtube.com/watch?v=E3keLeMwfHY
>
> Does the idea of this machine solve the Hard Problem of Consciousness,
> or are qualia something more than ideas?
>




RE: turing machines = boolean algebras ?

2002-11-26 Thread Ben Goertzel

Essentially, you can consider a classic Turing machine to consist of a
data/input/output tape, and a program consisting of

-- elementary tape operations
-- boolean operations

I.e., a Turing machine is a tape plus a program expressed in a
Boolean algebra that includes some tape-control primitives.
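Ben's decomposition can be sketched in a few lines (an illustrative toy, not from the thread; `run_tm`, `flip`, and the symbol conventions are all made up for the example):

```python
# A minimal Turing-machine sketch of the decomposition above: a data tape
# plus a program built from elementary tape operations (read, write, move)
# and Boolean-style tests on the current symbol.

def run_tm(tape, program, state="start", blank="_", max_steps=1000):
    """program maps (state, symbol) -> (new_symbol, move, new_state)."""
    cells = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)              # elementary read
        new_symbol, move, state = program[(state, symbol)]
        cells[head] = new_symbol                     # elementary write
        head += 1 if move == "R" else -1             # elementary move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# Example program: flip every bit until the first blank
# (a Boolean NOT applied cell by cell).
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm("1011_", flip))  # -> 0100_
```

The transition table plays the role of the "program expressed in a Boolean algebra"; the read/write/move lines are the tape-control primitives.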

-- Ben G


> -Original Message-
> From: Stephen Paul King [mailto:[EMAIL PROTECTED]]
> Sent: Tuesday, November 26, 2002 9:25 AM
> To: [EMAIL PROTECTED]
> Subject: Re: turing machines = boolean algebras ?
>
>
> Dear Ben and Bruno,
>
> Your discussions are fascinating! I have one related and perhaps even
> trivial question: What is the relationship between the class of Turing
> Machines and the class of Boolean Algebras? Is one a subset of the other?
>
> Kindest regards,
>
> Stephen
>
>




Re: turing machines = boolean algebras ?

2002-11-26 Thread Stephen Paul King
Dear Ben and Bruno,

Your discussions are fascinating! I have one related and perhaps even
trivial question: What is the relationship between the class of Turing
Machines and the class of Boolean Algebras? Is one a subset of the other?

Kindest regards,

Stephen





Re: Turing Machines Have no Real Time Clock (Was The Game of Life)

2000-05-22 Thread Jacques Mallah

--- [EMAIL PROTECTED] wrote:
> >  > > Turing Machines have no real time clock ...
> >  > > If we assume the comp hypothesis
> >  > > (purely based on Turing machines) and the
> >  > > anthropic principle, then the flow of
> >  > > consciousness can only be
> >  > > constrained by the logical nature of the
> >  > > links permitting transitions from one observer
> >  > > moment to the next. Time therefore is an
> >  > > illusion derived from such a logical flow.

> Please!!! Of course Turing Machines have clocks
> [...] But they don't have REAL TIME 
> CLOCKS, Jacques! You know the kind that tells
> computers the time of day and the date...

OK, so you admit time is real but unknown.  I
guess your "illusion" claim was due to schizophrenia
on your part.

=
- - - - - - -
   Jacques Mallah ([EMAIL PROTECTED])
 Physicist  /  Many Worlder  /  Devil's Advocate
"I know what no one else knows" - 'Runaway Train', Soul Asylum
 My URL: http://hammer.prohosting.com/~mathmind/

__
Do You Yahoo!?
Send instant messages & get email alerts with Yahoo! Messenger.
http://im.yahoo.com/




Re: Turing Machines Have no Real Time Clock (Was The Game of Life)

2000-05-21 Thread GSLevy

In a message dated 05/21/2000 3:21:33 PM Pacific Daylight Time, 
[EMAIL PROTECTED] writes:

> > [EMAIL PROTECTED] wrote:
>  > > Turing Machines have no real time clock and no
>  > > interrupt. If we assume the comp hypothesis
>  > > (purely based on Turing machines) and the
>  anthropic
>  > > principle, then the flow of consciousness can only
>  > > be constrained by the logical nature of the links
>  > > permitting transitions from one observer moment
>  > > to the next. Time therefore is an illusion derived
>  > > from such a logical flow.
>  
>  I just noticed this claim that TMs are not
>  clocked, and as far as I could tell it is self
>  evidently false, so I don't see how someone could make
>  it.  The very definition of a TM involves actions of
>  the head at each clock step.

Please!!! Of course Turing Machines have clocks! They perform their
operations sequentially and need a "clock signal" to move from one step to
the next. The duration between the clock pulses can vary and can be entirely
arbitrary, one picosecond or ten millennia. But they don't have REAL TIME
CLOCKS, Jacques! You know the kind that tells computers the time of day
and the date... And of course they also don't have interrupts!
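George's point — that the interval between logical clock steps is irrelevant to the computation — can be illustrated with a toy sketch (my addition, not from the thread; `step` and `run` are hypothetical names):

```python
# A machine's computation is defined by its sequence of logical steps,
# not by wall-clock time. Inserting an arbitrary real-time delay between
# steps changes nothing about the result.
import time

def step(state):
    """One logical 'clock tick': here, just increment a counter."""
    return state + 1

def run(n_steps, delay=0.0):
    state = 0
    for _ in range(n_steps):
        time.sleep(delay)  # one picosecond or ten millennia -- irrelevant
        state = step(state)
    return state

assert run(5) == run(5, delay=0.001)  # same result regardless of step duration
```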

George




Re: Turing Machines Have no Real Time Clock (Was The Game of Life)

2000-01-25 Thread David Lloyd-Jones

Hal Finney writes:


> Russell Standish, <[EMAIL PROTECTED]>, writes:
> > Why do you think the only possibilities are that the universe is
> > either discrete or continuous? For example, the space Q^4 (4-D space
> > built from rational numbers) is neither.
>
> Rational numbers are continuous, by the typical definition.  Between
> any two rational numbers there is another (and therefore, an infinite
> number of others).

Hunh? This is certainly true, but on the other hand between any two rational
numbers there are also an infinite number of irrationals.

But even if this were not the case, the fact that any two rationals have
other rationals in between would not make Hal's claim of continuity true;
rather it would prove the opposite, discontinuity.

Seems to me we have here a demonstration that, as in physical reality,
continuity cannot exist. What could it possibly mean?

   -dlj.






Re: Turing Machines Have no Real Time Clock (Was The Game of Li

2000-01-17 Thread Marchal

[EMAIL PROTECTED] wrote:

>If the world was not quantized the comp hypothesis would not hold.

Only if my generalised brain is the entire universe.
Look at my discussion with Niklas Thisel. Comp entails
that, from the first person perspective, some
universal feature of our observable neighborhood will
appear non-finite, non-discrete, non-computable, etc.

So I agree with Russell here!
When you say:

>A continuous universe would not be emulable by a Turing
>Machine

You are right! For the same reason you cannot emulate
real randomness with a UTM. But it is easy to explain
why UTMs are right to identify real randomness with
the result of self-localisation after self-duplication-like
experiments.

To sum up roughly:
3-determinism => 1-indeterminism
3-locality => 1-non-locality
3-discreteness => 1-continuum (existence of)
3-computability => 1-uncomputability

Bruno




Re: Turing Machines Have no Real Time Clock (Was The Game of Li

2000-01-17 Thread Marchal

Hal:

>Rational numbers are continuous, by the typical definition.  Between
>any two rational numbers there is another (and therefore, an infinite
>number of others).

This is density. Q is dense indeed, but highly discontinuous.
Continuity means either that all Dedekind cuts define numbers, or that
all Cauchy sequences define numbers.

Note that in classical analysis these two definitions are equivalent,
but in intuitionistic mathematics they are not!

Unlike computability, continuity, like provability, is a relative
concept.
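A standard example makes the density/completeness distinction concrete (my addition, not Bruno's):

```latex
% Density: between any two rationals there is a third.
\forall\, p, q \in \mathbb{Q}\ (p < q)\ \exists\, r \in \mathbb{Q}:
  p < r < q, \quad \text{e.g. } r = \tfrac{p+q}{2}.
% Failure of completeness: a Cauchy sequence in Q with no rational limit
% (Newton's iteration for the square root of 2):
a_1 = 1, \qquad a_{n+1} = \frac{a_n}{2} + \frac{1}{a_n}
  \;\longrightarrow\; \sqrt{2} \notin \mathbb{Q}.
```

So Q satisfies the "between any two" property Hal cites, yet leaves a "hole" at every irrational; completeness, not density, is what continuity requires.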

Bruno.




Re: Turing Machines Have no Real Time Clock (Was The Game of Life)

2000-01-15 Thread GSLevy

In a message dated 01/14/2000 1:48:25 PM Pacific Standard Time, 
[EMAIL PROTECTED] writes:

> Your first sentence is complete codswallop, and your second sentence
>  is bizarre. Prove it!
>  
>  > 
>  > In a message dated 01/13/2000 5:58:18 PM Pacific Standard Time, 
>  > [EMAIL PROTECTED] writes:
>  > 
>  > > Who say's the world is quantized?
>  > 
>  > If the world was not quantized the comp hypothesis would not hold. In 
fact,
>  
>  > It would be impossible for physical constants to have any definite 
value, 
>  > since there would not be any reference to anchor them with. 
>  > 
>  > George Levy
>  > 
>  > 
>  
>  
I looked up codswallop in the dictionary and I was very surprised to find
that it is a recent British word coined around 1963. It means "nonsense."
OK. This is your opinion.
First sentence: The comp hypothesis depends on Turing Machines, which are
inherently discrete. A continuous universe would not be emulable by a Turing
Machine. Read Bruno's latest post. He has a much better grasp of this issue
than me.

Second sentence: To prove that if physical constants are to take any definite
value, the universe must be quantized.

Let us say that there exists a TOE based on one single physical constant X
(for example Planck's constant). Without loss of generality, we can say that
the value of X is 1, since there is no other constant to compare it to.
Assuming that a Turing machine is used to apply this TOE to solve problems and
calculate any quantity in the world, then any quantity derived from this TOE
would have to belong to the set of integers -- including space, time, and
energy.
We can extend this reasoning to TOEs that include n arbitrary physical
constants.

George Levy




Re: Turing Machines Have no Real Time Clock (Was The Game of Life)

2000-01-14 Thread Russell Standish

Your first sentence is complete codswallop, and your second sentence
is bizarre. Prove it!

> 
> In a message dated 01/13/2000 5:58:18 PM Pacific Standard Time, 
> [EMAIL PROTECTED] writes:
> 
> > Who says the world is quantized?
> 
> If the world was not quantized the comp hypothesis would not hold. In fact, 
> It would be impossible for physical constants to have any definite value, 
> since there would not be any reference to anchor them with. 
> 
> George Levy
> 
> 




Dr. Russell StandishDirector
High Performance Computing Support Unit,
University of NSW   Phone 9385 6967
Sydney 2052 Fax   9385 6965
Australia   [EMAIL PROTECTED]
Room 2075, Red Centre   http://parallel.hpc.unsw.edu.au/rks





Re: Turing Machines Have no Real Time Clock (Was The Game of Life)

2000-01-13 Thread GSLevy

In a message dated 01/13/2000 5:58:18 PM Pacific Standard Time, 
[EMAIL PROTECTED] writes:

> Who says the world is quantized?

If the world was not quantized the comp hypothesis would not hold. In fact, 
It would be impossible for physical constants to have any definite value, 
since there would not be any reference to anchor them with. 

George Levy