People interested in the subject of this thread may want to read a
paper we wrote some years ago, published by World Scientific:

---
Hector Zenil, Francisco Hernandez-Quiroz, "On the possible
Computational Power of the Human Mind", WORLDVIEWS, SCIENCE AND US,
edited by Carlos Gershenson, Diederik Aerts and Bruce Edmonds, World
Scientific, 2007.

available online: http://arxiv.org/abs/cs/0605065

Abstract
The aim of this paper is to address the question: Can an artificial
neural network (ANN) model be used as a possible characterization of
the power of the human mind? We will discuss what might be the
relationship between such a model and its natural counterpart. A
possible characterization of the different power capabilities of the
mind is suggested in terms of the information contained (in its
computational complexity) or achievable by it. Such characterization
takes advantage of recent results based on natural neural networks
(NNN) and the computational power of arbitrary artificial neural
networks (ANN). The possible acceptance of neural networks as the
model of the human mind's operation makes the aforementioned quite
relevant.

Presented as a talk at the Complexity, Science and Society Conference,
2005, University of Liverpool, UK.
---

On the other hand, Goedelian-type arguments (such as
http://www.osl.iu.edu/~kyross/pub/new-godelian.pdf) have been widely
regarded as refuted at least since Hofstadter's Goedel, Escher, Bach
(1979), if not earlier.

I consider myself someone within the busy beaver field, since my own
research on what we call experimental algorithmic information theory
is closely related to it. I don't see how either Solomonoff induction
or the busy beaver problem can be used as evidence for, or conceived
as an explanation of, the human mind as a hypercomputer. I see nothing
in the development of either field that is not Turing computable.

The busy beaver values are known only up to 4-state 2-symbol
Turing machines (although it seems they claim to have calculated up to
6 states...). Determining whether a Turing machine with that few
states halts is a relatively easy task using entirely computable
tricks (including the Christmas tree method).
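To make the small cases concrete, here is a minimal sketch (mine, not from the paper) that brute-forces the 2-state 2-symbol busy beaver by enumerating every such machine and simulating each under a generous step cutoff. It assumes the common convention in which a halting transition also writes a symbol, moves the head, and counts as a step:

```python
import itertools

def run(machine, cutoff):
    """Simulate a 2-state 2-symbol Turing machine started in state 0 on a
    blank tape; return (steps, ones) if it halts within `cutoff` steps,
    else None."""
    tape = {}
    head, state, steps = 0, 0, 0
    while steps < cutoff:
        sym = tape.get(head, 0)
        write, move, nxt = machine[(state, sym)]
        tape[head] = write
        head += move
        steps += 1
        if nxt == 'H':                      # halting transition taken
            return steps, sum(tape.values())
        state = nxt
    return None                             # did not halt within cutoff

def busy_beaver_2():
    """Max steps and max ones over all halting 2-state 2-symbol machines."""
    # Each table entry: (symbol to write, head move, next state or halt).
    entries = [(w, m, n) for w in (0, 1) for m in (-1, 1) for n in (0, 1, 'H')]
    keys = [(0, 0), (0, 1), (1, 0), (1, 1)]
    best_steps = best_ones = 0
    for combo in itertools.product(entries, repeat=4):  # 12^4 machines
        result = run(dict(zip(keys, combo)), 50)
        if result:
            s, o = result
            best_steps, best_ones = max(best_steps, s), max(best_ones, o)
    return best_steps, best_ones
```

Under this convention the enumeration recovers the known values S(2) = 6 and Sigma(2) = 4; for larger n such brute force is exactly where the "computable tricks" above come in, since the cutoff can no longer be justified so easily.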

I think their main argument is: (a) once the busy beaver value for n
states is known, one learns how to crack the set of (n+1)-state
machines and eventually obtains that value too. (i) They then use a
kind of mathematical induction to prove that any given Turing machine
with a fixed number of states will eventually fail, while the human
mind can go on. However, it seems pretty clear that the method
evidently fails for n large enough, which disproves their claim. Now
suppose their claim (a) is right, and consider the following method:
(b) each time we learn how to crack n+1 states, we build a Turing
machine T that computes the busy beaver value for n+1. By their own
argument (i), Turing machines are then hypercomputers!
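As a trivial illustration of (b): once a particular value has been cracked, hardcoding it is perfectly Turing computable, so "knowing each cracked value" confers no hypercomputational power. The sketch below (the function name is my own) uses the known step-count values S(1) through S(4):

```python
# Known busy beaver step counts S(n) for 2-symbol machines,
# under the standard convention counting the halting transition.
KNOWN_S = {1: 1, 2: 6, 3: 21, 4: 107}

def s(n):
    """A perfectly ordinary computable function: looking up a finite
    table of already-cracked values requires no hypercomputation."""
    return KNOWN_S[n]
```

Each time a new value is cracked, extending the table yields a new, equally computable machine; what no single machine can do is compute S(n) for all n.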

I might be missing something; if so, please feel free to point it out.

Best regards,



-- 
Hector Zenil    http://zenil.mathrix.org

