Dear folks,

Overall, I agree with Gordana. I have one, perhaps very large, correction though. I will go through this bit by bit (no pun intended). I have been planning to jump into this discussion, but a visit here (UFBA) by Stuart Newman has kept me busy. He gave lots of examples of non-computable processes in development, showing that developmental units can be retained through changes in both function and genes. Gerd Müller had told me this almost ten years ago, but Stuart has some remarkable new cases, some of which have implications for evolution and phylogeny.


At 12:54 PM 2012/11/19, Bruno Marchal wrote:
Dear Joe,

On 19 Nov 2012, at 12:26, joe.bren...@bluewin.ch wrote:


Dear Bruno, Gordana and All,

What I am resisting is any form of numerical-computational totalitarianism.

Joe, if you have an idea of an effective procedure that is not captured by the Turing machine / recursive function model, and you can make clear why it is not, then I will listen to that complaint. Turing did, in fact, give an idea of how this might work in unpublished papers of his. He was also the originator of reaction-diffusion processes (I would not call them mechanisms, but they are commonly so called in the literature, since it has become fashionable to call processes that are not mechanical in the classical sense "mechanisms"). This sort of process is fundamental to Stuart's arguments about the non-reducibility of differences in developmental units to either genetic differences or to functional differences. Note, in passing, the role of differences here: if information is a difference that makes a difference, then we are talking about types of information. Actual systems realizing Turing's mathematics were only produced some time later, with the Brusselator (an autocatalytic reaction scheme devised by Prigogine, though it is more than that), materially instantiated in the Belousov-Zhabotinsky (BZ) reaction from the 1950s. Such systems are very common in nature, for example in forming the stripes on a zebra, or in regenerating the correct end of a severed hydra.

I have some remarks about computation to make next; they agree in general with Gordana's remarks, but add another element that I think is essential.

Computationalism would be totalitarian if there were a computable universal program for the total computable functions, and only them. But it is easy to prove that such a "total" universal machine cannot exist.
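
Bruno's claim here is just the classical diagonal argument. A minimal sketch in Python against a finite stand-in enumeration (the list fs is purely illustrative; a real total universal machine would have to enumerate all total computable functions, and that is exactly what fails):

    # A finite stand-in for an enumeration f_0, f_1, ... of total functions.
    # (Illustrative only: no computable enumeration of exactly the total
    # computable functions can exist, which is the point being made.)
    fs = [lambda n: 0, lambda n: n, lambda n: n * n, lambda n: 2 ** n]

    def diagonal(n):
        # Differs from f_n at the argument n, so it cannot appear in fs.
        return fs[n](n) + 1

    # The diagonal function escapes any proposed enumeration:
    for k in range(len(fs)):
        assert diagonal(k) != fs[k](k)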

Such a total machine is often called an 'oracle'. Greg Chaitin, who originated algorithmic information theory independently of Kolmogorov, constructed a number omega (my email programme won't take the Greek capital letter). If we knew that number, we could tell whether any program halts. Knowing the first n digits of omega would also let us settle the halting of, and hence compute the output of, every program of at most n bits (assuming the same programming language). The digits of omega, though, are provably random, which means that no program appreciably shorter than n bits can compute the first n of them. So there are no oracles. We can get this from the unsolvability of the halting problem, but Chaitin's approach shows what an oracle would have to look like.
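
To make the flavour of omega concrete, here is a toy sketch in Python. The machine, its instruction set, and the encoding are all mine and deliberately tiny (this is not Chaitin's construction): programs are bitstrings of 2-bit instructions ending in a single halt instruction, which makes the set prefix-free, so the weights 2^-len(p) obey the Kraft inequality. The sum approximates this machine's halting probability from below; programs that exceed the step budget are simply omitted, so the result only ever under-approximates the true value, and no finite computation certifies any digit, because weight from longer halting programs can always arrive later.

    from itertools import product

    # Toy counter machine over 2-bit instructions:
    # 00 = increment counter, 01 = decrement (floor at 0),
    # 10 = jump to start if counter != 0, 11 = halt.
    # Valid programs have exactly one '11' instruction, at the very end.

    def halts_within(bits, max_steps):
        instrs = [bits[i:i + 2] for i in range(0, len(bits), 2)]
        c, pc = 0, 0
        for _ in range(max_steps):
            op = instrs[pc]
            if op == '11':
                return True
            if op == '00':
                c += 1; pc += 1
            elif op == '01':
                c = max(0, c - 1); pc += 1
            else:  # '10'
                pc = 0 if c != 0 else pc + 1
        return False  # budget exhausted: omitted from the sum

    def omega_lower_bound(max_len=12, max_steps=10_000):
        total = 0.0
        for n_instr in range(1, max_len // 2 + 1):
            for body in product(['00', '01', '10'], repeat=n_instr - 1):
                p = ''.join(body) + '11'
                if halts_within(p, max_steps):
                    total += 2.0 ** (-len(p))
        return total

    print(omega_lower_bound())  # a strictly-from-below approximation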


The price of having a universal machine is that it will be a partial machine, undefined on many arguments. Such a machine can then be shown to be able to defeat all reductionist or totalitarian theories about its own behavior.
That is why I say that computationalism is a vaccine against totalitarianism or reductionism.

This is where I part company. There are two notions of computation. The first, and most widely used, is that of an algorithm (specifically a Knuth algorithm); it is equivalent, for all intents and purposes, to a program that halts on every input. The second covers programs that need not halt; these compute partial functions, for which there is no algorithm that gives a value for every input. A program that could compute such a function outright would be an oracle; an actual program can compute the first n values on which the function is defined, for any n -- it just can't compute them all. For example, the three-body problem has no general closed-form solution. However, with a powerful enough computer and any epsilon > 0, we could compute the later state of the system to within epsilon for any arbitrary finite time t.
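
To illustrate the epsilon point, a sketch in Python using scipy. The masses and initial conditions are arbitrary illustrative values, and the solver tolerances stand in for epsilon (they control local error; a guaranteed global epsilon takes more care, e.g. interval methods, but the principle is the same):

    import numpy as np
    from scipy.integrate import solve_ivp

    G = 1.0
    m = np.array([1.0, 1.0, 1.0])            # arbitrary illustrative masses

    def rhs(t, y):
        # State: 3 planar positions, then 3 planar velocities, flattened.
        pos = y[:6].reshape(3, 2)
        vel = y[6:].reshape(3, 2)
        acc = np.zeros_like(pos)
        for i in range(3):
            for j in range(3):
                if i != j:
                    d = pos[j] - pos[i]
                    acc[i] += G * m[j] * d / np.linalg.norm(d) ** 3
        return np.concatenate([vel.ravel(), acc.ravel()])

    # Arbitrary starting configuration (illustrative, not a special orbit).
    y0 = np.array([1.0, 0.0,  -0.5, 0.8,  -0.5, -0.8,     # positions
                   0.0, 0.3,  -0.3, -0.2,  0.3, -0.1])    # velocities

    eps = 1e-9                                # the desired accuracy
    t_final = 10.0                            # any finite time
    sol = solve_ivp(rhs, (0.0, t_final), y0, rtol=eps, atol=eps)
    print(sol.y[:, -1])                       # state at t_final, to within ~eps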

What we can't do is solve reaction-diffusion systems. Why is that? Because dissipation is an essential part of their dynamics -- they need an energy input to work. (This is true of all dissipative systems in the Prigogine sense. I say "in the Prigogine sense" because there are systems that self-organize through dissipation that are not dissipative systems; they are only self-reorganizing, not spontaneously self-organizing. Some people confuse the two, or are careless about distinguishing them, such as Stuart Kauffman.) In any case, in reaching a steady state in finite time (e.g., the BZ reaction oscillations) they do the noncomputable in finite time. Harmonics in the Solar System are similar cases, such as the 1:1 harmonic of the Moon's rotation to its revolution, and the 3:2 harmonic of Mercury's rotation to its revolution. The dynamical equations in these cases are not solvable.
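
None of this prevents numerical simulation of the equations themselves, of course; whether the simulation captures the real dissipative process is exactly what is at issue. For concreteness, a standard Gray-Scott reaction-diffusion sketch in Python (a common stand-in for Turing's morphogenesis equations, not the BZ chemistry itself; the parameter values are commonly used pattern-forming ones, and everything here is illustrative):

    import numpy as np

    N = 128                                   # grid size
    Du, Dv, F, k = 0.16, 0.08, 0.035, 0.060   # common pattern-forming values
    u = np.ones((N, N))
    v = np.zeros((N, N))
    # Seed a perturbed square in the middle so patterns can nucleate.
    r = slice(N // 2 - 8, N // 2 + 8)
    u[r, r], v[r, r] = 0.50, 0.25
    u += 0.02 * np.random.random((N, N))

    def laplacian(a):
        # Periodic five-point stencil.
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    for _ in range(5000):                     # explicit Euler steps
        uvv = u * v * v
        u += Du * laplacian(u) - uvv + F * (1 - u)
        v += Dv * laplacian(v) + uvv - (F + k) * v

    # u now holds a Turing-style spatial pattern (view with e.g. matplotlib).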

So to Gordana's argument I would add dissipation (energy coming into the system from within would work as well, but we don't know of any such case with certainty -- it would violate conservation of energy, or the non-decrease of entropy, or both). In other words, the system has to (a) produce entropy within, and (b) dissipate it outwards (as heat or some other form of lesser order).
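
In Prigogine's notation, this is the standard entropy balance for an open system, stated here just for completeness:

    % total change = internal production + exchange with the surroundings
    \frac{dS}{dt} = \frac{d_i S}{dt} + \frac{d_e S}{dt},
    \qquad \frac{d_i S}{dt} \ge 0
    % at a non-equilibrium steady state dS/dt = 0, so
    \frac{d_e S}{dt} = -\frac{d_i S}{dt} \le 0
    % i.e. the entropy produced inside (a) must be exported (b).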


If Gordana writes "models" then there is a world that is modeled, and its logic and rules can be quite different. This is the world whose processes I am trying to describe. A computational theory of models is fine.

Similarly, when Bruno writes: "But 'I am not Turing emulable' would be as hypothetical, and, in my opinion, much more speculative, especially with the weak form of computationalism I use as a working hypothesis", why not let the "two flowers bloom"?

I am open to that idea, but I have never seen a non-comp theory, and for good reason: to assume non-comp, you have to diagonalize against the partial computable functions. Strictly speaking this does not work, so you have to assume something irrational in the picture -- and why not, but this seems premature to me, given the range and power of the comp hypothesis, when well understood and not reduced to its total-function part.
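
Bruno's "strictly speaking this does not work" can be made concrete: where the universal machine diverges, the diagonal diverges too, so the diagonal function is not forced out of the enumeration. A toy sketch in Python, with None standing in for divergence (the list is illustrative):

    # Partial functions: None models divergence (non-halting).
    fs = [lambda n: n, lambda n: None, lambda n: n * n]

    def diagonal(n):
        v = fs[n](n)
        return None if v is None else v + 1   # diverges wherever f_n does

    # At n = 1 both sides "diverge", so diagonal does NOT differ from f_1 there:
    assert diagonal(1) is None and fs[1](1) is None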

I disagree here as well. I won't go into this in depth, as I have to do my taxes urgently, and this sort of thing takes up a lot of my time. Robert Rosen distinguishes between what he calls synthetic and analytic models. Synthetic models can be broken down into parts (like inputs and outputs, or the dynamics of pairwise parts) and summed to get a total model of the system; they can be reduced logically to their data, if we have enough of it. Not all analytic models are synthetic: given some data, the possible synthetic models lie in the set of logical sums of the data sets, whereas the possible analytic models lie in the cross-product (logical product) of the data set with itself. The latter set is bigger, so there are in general more possible analytic models than synthetic models. Analytic models that are not synthetic are noncomputable for infinite possible data, but I won't prove that here; it is either obvious or you need to learn more logic. (Rosen's account isn't much better.)
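
A toy cardinality count makes the gap vivid (this is my own back-of-envelope illustration, not Rosen's formalism): treat candidate synthetic models as sums over subsets of the data, and candidate analytic models as living among the relations on the product of the data set with itself.

    # For a data set with n observations (illustrative counting only):
    n = 5
    synthetic_candidates = 2 ** n         # one per subset to be summed
    analytic_candidates = 2 ** (n * n)    # one per relation on data x data
    print(synthetic_candidates, analytic_candidates)   # 32 vs 33554432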


Assuming something irrational at the start can only hide the irrationality which somehow already exists for the machine, and deprive comp of its explanatory power in some arbitrary way. If comp is refuted, then we will have a way to localize where something irrational, and not comp-derivable, occurs.

I am not sure what you mean by irrational, Bruno. It could mean effectively noncomputable in the Turing sense, in which case it would be effectively random (within the constraints of the whole model). Or it could mean something else, in which case it is beyond my current powers of imagination to understand. I don't see analytic-but-not-synthetic models as irrational, though some of the data would be unpredictable, and not fully understandable.



All of his remarks apply to that part of the world to which his weak form of computationalism applies, but not to the entire world.

But what is the world? Computationalism can answer that, and can guarantee that the world (whatever it is) is NOT computable or emulable by a computer. Comp is a hypothesis on me (or you), not on the world, which, assuming comp, is not a computable object, if it is an object at all. A summing-up slogan: if my body is a machine, then consciousness and matter are not Turing emulable. Or: if my body is a machine, my soul cannot be a machine.

Seth Lloyd argues rather convincingly that the world is a quantum computer, though I worry about the role decoherence plays in making it really like a computer nonetheless. Newman's cases are not Turing emulable if by that you mean emulable by a halting machine (a Knuth algorithm); if you mean by a programme in general (or a Rosen analytic model), the issue is different.


If someone wants me to explain more on why the Church thesis protects us against reductionism, and why there is no total universal machine, please just ask. Keep in mind, also, that we can send only two posts per week.

As I have tried to argue above, to avoid reductionism in reality, as opposed to in logic and mathematics, I think we need the additional condition of dissipation (what I call non-Hamiltonian mechanics elsewhere -- the usual conservation condition breaks down due to the loss of free energy to the system).
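
The simplest instance of what I mean by non-Hamiltonian behaviour is the textbook damped oscillator, included just to fix ideas:

    % Hamiltonian H = p^2/(2m) + V(q); add a friction force -\gamma \dot q:
    m \ddot q = -V'(q) - \gamma \dot q
    % then the would-be conserved energy decays:
    \frac{dH}{dt} = -\gamma \dot q^{2} \le 0
    % so conservation fails precisely through dissipation.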

Best,
John


Professor John Collier                                     colli...@ukzn.ac.za
Philosophy and Ethics, University of KwaZulu-Natal, Durban 4041 South Africa
T: +27 (31) 260 3248 / 260 2292       F: +27 (31) 260 3031
http://web.ncf.ca/collier