Re: [agi] Learning without Understanding?

2008-06-17 Thread J Storrs Hall, PhD
The only thing I find surprising in that story is:

"The findings go against one prominent theory that says children can only show
smart, flexible behavior if they have conceptual knowledge -- knowledge about
how things work..."

I don't see how anybody who's watched human beings at all can come up with such
a theory. People -- not just children -- do so much by rote ("because that's
the way we do things here"), come up with totally clueless scientific theories
like this, and so forth.

Joe and Bob are carpenters, working on a house. Joe is hammering and Bob is 
handing him the nails. 

Bob says, "Hey, wait a minute, half of these nails are defective." He takes
out a nail and holds it up, and sure enough, the head is toward the wall and
the point is toward the hammer.

Joe retorts, "Those aren't defective, you idiot. They're for the other side of
the house."

Josh


Re: [agi] Roadrunner PetaVision

2008-06-17 Thread Richard Loosemore

Derek Zahn wrote:
 
Brain modeling certainly does seem to be in the news lately.  Checking
out nextbigfuture.com, I was reading about that petaflop computer
Roadrunner, and articles about it say that they are, or soon will be,
emulating the entire visual cortex -- a billion neurons.  I'm sure I'm
not the only one who thinks that knowing what the cortex does, and
roughly how it does it, could be quite inspiring for AGI, so I was
surprised by this news.
 
Does anybody have links to more information (besides the short recent
mainstream news story)?  Are they just being enthusiastic about their
big computer, or do they have a sophisticated theory?



I don't have more information, but I would counsel caution.

In my past experience with claims of this sort (i.e., "they are or will
soon be emulating the entire visual cortex"), it turns out that when you
ask for the exact details of the project, you find that "entire visual
cortex" means something like "we are going to sample one neuron in 10,000
and measure 10% of its connections, then extrapolate from this to an
entire visual cortex."


Unless someone can convince me that they are going to scan a complete 
visual cortex in such detail that they can track all connections, right 
down to the individual synaptic boutons, and then translate that into a 
precise computational model that takes account of all the molecular 
mechanisms that play some role in signal transmission, I am not going to 
buy it.


In fact, claims like this have become so outrageously exaggerated that, 
these days, I cannot even be bothered to move the mouse far enough to 
click on a link and go find out what the real story is.


Their big computer might be able to model *something*, but it sounds
like marketing hype to call that something "the entire visual cortex."





Richard Loosemore


Re: [agi] Learning without Understanding?

2008-06-17 Thread Richard Loosemore

Brad Paulsen wrote:

Hear Ye, Hear Ye...

CHILDREN LEARN SMART BEHAVIORS WITHOUT KNOWING WHAT THEY KNOW
http://www.physorg.com/news132839991.html


It's garbage science.  Or at least, it is a garbage headline.

There is a whole body of experiments done with adults in which subjects
are asked to learn several conceptual categories from seeing only
exemplars of those categories, without ever being told explicitly why a
given instance belongs in one category or another.


These adults can easily pick up the categories even when they cannot 
easily articulate what the criteria are.  This is concept building, and 
it is one of the most fundamental activities of the human mind.


Is it surprising or new that children do the same thing?  It should be 
stupidly obvious that they do the same thing.  Children spend all their 
time voraciously separating the world out into categories, using almost 
nothing but exemplar-based learning.
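
As an aside, this kind of exemplar-based categorization is easy to
caricature in a few lines (a toy sketch, loosely in the spirit of
exemplar models such as Nosofsky's GCM, and not anything from the
article): the program classifies correctly while containing no
statement of the category criteria anywhere.

import math

# Toy feature vectors; the features and numbers are invented for
# illustration only.
exemplars = {
    "bird":   [(0.9, 0.8), (0.8, 0.9), (0.7, 0.8)],
    "mammal": [(0.1, 0.1), (0.2, 0.0), (0.1, 0.3)],
}

def similarity(x, e):
    # Similarity decays exponentially with distance, GCM-style.
    return math.exp(-math.dist(x, e))

def classify(x):
    # Sum similarity to every stored exemplar of each category; no
    # explicit rule for category membership ever appears.
    scores = {c: sum(similarity(x, e) for e in es)
              for c, es in exemplars.items()}
    return max(scores, key=scores.get)

print(classify((0.85, 0.75)))   # -> "bird"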


Just because I believe there is much of value in cognitive science
doesn't mean I will defend everything done in its name.





Richard Loosemore


Re: [agi] the uncomputable

2008-06-17 Thread Abram Demski
Mike A.:

Well, if you're convinced that infinity and the uncomputable are
imaginary things, then you've got a self-consistent view that I can't
directly argue against. But are you really willing to say that
seemingly understandable notions such as the problem of deciding
whether a given Turing machine will eventually halt are nonsense,
simply because we would need infinite time to verify that one doesn't
halt?

Ben J.:

Step 3 requires human society to invent new concepts and techniques,
and to thereby perform hypercomputation. I don't think that a
computable nonmonotonic logic really solves this problem.

I agree that nonmonotonic logic is not enough -- not nearly. The point
is just that, since there are computable approximations of
hypercomputers, it is not unreasonable to allow an AGI to reason about
uncomputable objects.

My own interpretation of the work is that an individual person is no more
powerful than a Turing machine (though this point isn't discussed in the
paper), but that society as a whole is capable of hypercomputation because
we can keep drawing upon more resources to solve a problem: we build
machines, we reproduce, we interact with and record our thoughts in the
environment. Effectively, society as a whole becomes somewhat like a Zeus
machine -- faster and more complex with each moment.
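
To make the Zeus-machine idea concrete (a standard accelerating-machine
gloss, not an argument made in the paper): if the machine performs its
k-th step in 2^-k seconds, its total running time is

    sum over k >= 1 of 2^-k  =  1 second,

so infinitely many steps complete in finite time, and a question like
"does this Turing machine ever halt?" can be settled by running it on
the Zeus machine and waiting one second.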

Something like this is mentioned in the paper as objection #4. But
personally, I'd respond as follows: if a society of AGIs can
hypercompute, then why not a single AGI with a society-of-mind style
architecture? It is difficult to distinguish between a closely-linked
society and a loosely-knit individual, where AI is concerned. So I
argue that if a society can (and should) hypercompute, there is no
reason to suspect that an individual can't (or shouldn't).

On Mon, Jun 16, 2008 at 11:37 PM, Mike Archbold [EMAIL PROTECTED] wrote:
 I'm not sure that I'm responding to your intended meaning, but: all
 computers are in reality finite-state machines, including the brain
 (granted we don't think the real-number calculations on the cellular
 level are fundamental to intelligence). However, the finite state
 machines we call PCs are so large that it is convenient to pretend
 they have infinite memory; and when we do this, we get a machine that
 is equivalent in power to a Turing machine. But a Turing machine has
 an infinite tape, so it cannot really exist (the real computer
 eventually runs out of memory). Similarly, I'm arguing that the human
 brain is so large in particular ways that it is convenient to treat it
 as an even more powerful machine (perhaps an infinite-time Turing
 machine), despite the fact that such a machine cannot exist (we only
 have a finite amount of time to think). Thus a spurious infinity is
 not so spurious.

 Abram,

 Thanks for responding.  You know, I might be in a bit over my head with
 some of the terminology in your paper, so I apologize in advance, but
 just to clarify: "spurious infinity" according to Hegel is the sleight of
 hand that happens when quantity transitions surreptitiously into a quality.
 At some point counting up, we are simply not talking about any number at
 all, but about a quality of being REALLY SUPER BIG, as we make a kind of
 leap.

 According to him, when we talk about infinity we are talking about some
 idea of a huge number (in this case, of calculations) and, to use a phrase
 he liked, "imaginary being."  So since I am kind of a Hegelian of sorts,
 when I scanned the paper it looked like it argued that it is not possible
 to compute something that I had become convinced was imaginary anyway.
 That would be true if you bought into Hegel's definition of infinity, and I
 realize there aren't a lot of Hegelians around.  But tomorrow I will read
 further.

 Mike



 On Mon, Jun 16, 2008 at 9:19 PM, Mike Archbold [EMAIL PROTECTED] wrote:
 I previously posted here claiming that the human mind (and therefore an
 ideal AGI) entertains uncomputable models, counter to the
 AIXI/Solomonoff model. There was little enthusiasm about this idea. :)
 Anyway, I hope I'm not being too annoying if I try to argue the point
 once again. This paper also argues the point:

 http://www.osl.iu.edu/~kyross/pub/new-godelian.pdf


 It looks like the paper hinges on:
 "None of this prior work takes account of Gödel's intuition, repeatedly
 communicated to Hao Wang, that human minds converge to infinity in their
 power, and for this reason surpass the reach of ordinary Turing machines."

 The thing to watch out for here is what Hegel described as the "spurious
 infinity," which is just the imagination thinking some imaginary quantity
 really big; but no matter how big, you can always envision +1, and the
 result is always just another imaginary big number, to which you can add
 another +1... the point being that infinity is an idealistic quality, not
 a computable numeric quantity at all, i.e., not numerical; we are talking
 about thought as such.

 I didn't read the whole paper, but the point I 

Re: [agi] the uncomputable

2008-06-17 Thread Vladimir Nesov
On Tue, Jun 17, 2008 at 9:10 PM, Abram Demski [EMAIL PROTECTED] wrote:
 Mike A.:

 Well, if you're convinced that infinity and the uncomputable are
 imaginary things, then you've got a self-consistent view that I can't
 directly argue against. But are you really willing to say that
 seemingly understandable notions such as the problem of deciding
 whether a given Turing machine will eventually halt are nonsense,
 simply because we would need infinite time to verify that one doesn't
 halt?


Everything that you understand is imaginary: your understanding
itself is an image in your mind, which could have gotten there by
reflecting reality through a limited number of steps (or so physicists
keep telling us), or could have been generated by an overly vivid
finite imagination.

No nonsense, just finite sense. What is this about verifying that a
machine doesn't halt? One can't do it, so what is the problem?

-- 
Vladimir Nesov
[EMAIL PROTECTED]


Re: [agi] the uncomputable

2008-06-17 Thread Abram Demski
No nonsense, just finite sense. What is this about verifying that a
machine doesn't halt? One can't do it, so what is the problem?

The idea would be (if Mike is really willing to go that far): "It
makes sense to say that a given Turing machine DOES halt; I know what
that means. But to say that one DOESN'T halt? How can I make sense of
that? Either a given machine has halted, or it has not halted yet. But
to say that it never halts requires infinity, a nonsensical concept."

An AI that understood only computable concepts would agree with the
above. What I am saying is that such a view is... inhuman.
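
For what it's worth, the reason "DOESN'T halt" is a meaningful (if
undecidable) notion fits in a few lines of Python -- a sketch of
Turing's diagonal argument, not anything from this thread:

def halts(program, arg):
    # Pretend this is a total decider: True iff program(arg) halts.
    raise NotImplementedError("no such total decider can exist")

def d(p):
    if halts(p, p):
        while True:   # if halts(d, d) says True, d(d) loops forever
            pass
    return            # if halts(d, d) says False, d(d) halts

# d(d) halts if and only if halts(d, d) answers that it doesn't --
# a contradiction, so no correct, always-halting halts() is possible.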


Re: [agi] Learning without Understanding?

2008-06-17 Thread Matt Mahoney
--- On Tue, 6/17/08, Brad Paulsen [EMAIL PROTECTED] wrote:

 CHILDREN LEARN SMART BEHAVIORS WITHOUT KNOWING WHAT THEY KNOW
 http://www.physorg.com/news132839991.html

Another example: children learn to form grammatically correct sentences before 
they understand the difference between a noun and a verb.
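
A toy illustration of the point (a sketch, not anything Matt described):
a bigram model reproduces plausible local word order learned purely from
exemplars, with no notion of "noun" or "verb" anywhere in the code.

import random
from collections import defaultdict

# Learn which word follows which, straight from exemplar sentences.
corpus = "the dog chased the cat . the cat saw the dog . the dog ran .".split()
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

# Generate: grammatical-looking output without any grammar labels.
word, out = "the", ["the"]
while word != "." and len(out) < 12:
    word = random.choice(follows[word])
    out.append(word)
print(" ".join(out))   # e.g. "the cat saw the dog ."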

-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] the uncomputable

2008-06-17 Thread Vladimir Nesov
On Tue, Jun 17, 2008 at 10:14 PM, Abram Demski [EMAIL PROTECTED] wrote:
 No nonsense, just finite sense. What is this about verifying that a
 machine doesn't halt? One can't do it, so what is the problem?

 The idea would be (if Mike is really willing to go that far): "It
 makes sense to say that a given Turing machine DOES halt; I know what
 that means. But to say that one DOESN'T halt? How can I make sense of
 that? Either a given machine has halted, or it has not halted yet. But
 to say that it never halts requires infinity, a nonsensical concept."

 An AI that understood only computable concepts would agree with the
 above. What I am saying is that such a view is... inhuman.


It wasn't worded correctly: there are many machines that you can prove
don't halt, but also others for which you can't prove that. Why would
it be inhuman not to be able to do the impossible?
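
Two concrete cases of that distinction (illustrative examples, not
Vladimir's):

def loops_forever():
    while True:   # non-halting, and trivially provably so
        pass

def collatz(n):
    # Halts for every n anyone has tried, but whether it halts for all
    # n > 0 is the open Collatz problem -- no proof either way is known.
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2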

-- 
Vladimir Nesov
[EMAIL PROTECTED]


Re: [agi] the uncomputable

2008-06-17 Thread Mike Archbold
 Mike A.:

 Well, if you're convinced that infinity and the uncomputable are
 imaginary things, then you've got a self-consistent view that I can't
 directly argue against. But are you really willing to say that
 seemingly understandable notions such as the problem of deciding
 whether a given Turing machine will eventually halt are nonsense,
 simply because we would need infinite time to verify that one doesn't
 halt?


Abram,

I don't think I really disagree with you, and like I said, I need to go
back and really read the whole paper.  It looks interesting.  I will make
a comment that is not a counterargument but rather something to think
about.

By "imaginary" I mean that it cannot really be pointed at.

I was thinking about this... you know, suppose we took a rule base, a
finite set of rules, and let's say we start with that -- clearly finite.
We can identify each rule and literally point to the rule on the screen
(think of Cyc, say).  We have a QUANTITY of rules.

Then suppose we have determined that, in order to simulate human thought,
we must make the rule base infinite, as our mathematical/philosophical
investigations have shown that to achieve AGI the infinite is necessary --
let's say infinite time and quantity.

At that point the finite quantity of rules would disappear, and we would
cross over strictly into the imaginary idea of a really huge number of
rules and a huge imaginary time: that is, we are no longer speaking of a
QUANTITY of rules and time -- we have surreptitiously shifted from
discussing quantities to qualities.

Whereas before we were talking about a finite set of rules, now we fold
our hands and say, well, it's an infinite set of rules and infinite time
now.  We have shifted from a finite quantity that could be examined to an
infinity-quality that cannot be examined and is wholly imaginary -- yet
we are using that as a proof.

When we make this transition, it seems to me that the shift is so radical
that it is impossible to justify making the step, because as I mentioned
it involves a surreptitious shift from quantity to quality.

Incidentally, Hegel held that the true infinite (as opposed to the
spurious infinite, which is the unwarranted transition from quantity to
quality) was human thought.  I've been working through a book by David
Carlson, a law professor, which makes clear some of Hegel's very obscure
writing.

Mike


Re: [agi] the uncomputable

2008-06-17 Thread Abram Demski
V. N.,
What is inhuman to me is the claim that the halting problem is no
problem, on the basis that the statement "Turing machine X does not
halt" is true only of Turing machines that are *provably* non-halting.
And this is the view we are forced into if we abandon the reality of
the uncomputable.

A. D.


Re: [agi] the uncomputable

2008-06-17 Thread Hector Zenil
People interested in the subject of this thread might want to read a
paper we wrote some years ago, published by World Scientific:

---
Hector Zenil and Francisco Hernandez-Quiroz, "On the Possible
Computational Power of the Human Mind," in Worldviews, Science and Us,
edited by Carlos Gershenson, Diederik Aerts and Bruce Edmonds, World
Scientific, 2007.

available online: http://arxiv.org/abs/cs/0605065

Abstract
The aim of this paper is to address the question: Can an artificial
neural network (ANN) model be used as a possible characterization of
the power of the human mind? We will discuss what might be the
relationship between such a model and its natural counterpart. A
possible characterization of the different power capabilities of the
mind is suggested in terms of the information contained (in its
computational complexity) or achievable by it. Such characterization
takes advantage of recent results based on natural neural networks
(NNN) and the computational power of arbitrary artificial neural
networks (ANN). The possible acceptance of neural networks as the
model of the human mind's operation makes the aforementioned quite
relevant.

Presented as a talk at the Complexity, Science and Society Conference,
2005, University of Liverpool, UK.
---

On the other hand, Gödelian-type arguments (such as
http://www.osl.iu.edu/~kyross/pub/new-godelian.pdf) have been widely
regarded as refuted since Hofstadter's Gödel, Escher, Bach in the late
70s, or even before.

I consider myself someone within the busy beaver field, since my own
research on what we call experimental algorithmic information theory is
closely related to it. I don't see how either Solomonoff induction or
the busy beaver problem can be used as evidence for, or conceived as an
explanation of, the human mind as a hypercomputer. I don't see anything
in the development of the two fields that is not Turing computable.

The values of the busy beaver function are known up to 4-state, 2-symbol
Turing machines (although it seems they claim to have calculated up to
6 states...). Determining whether a Turing machine with that many states
halts is a relatively easy task using very computable tricks (including
the Christmas Tree method).
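
The kind of finite search involved is easy to sketch (an illustration
of the general idea, not Hector's method): enumerate every 2-state,
2-symbol machine, run each under a step bound, and keep the best
halter. The hand-picked bound is exactly the weak spot -- for n this
small a safe bound is known, but no computable bound works for every n,
which is why the busy beaver function itself is uncomputable.

from itertools import product

MAX_STEPS = 100   # comfortably above the known S(2) = 6

def run(table):
    # Simulate one machine; each rule is (write, move, next_state),
    # with next_state -1 encoding the halt state.
    tape, pos, state = {}, 0, 0
    for _ in range(MAX_STEPS):
        write, move, nxt = table[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == -1:
            return sum(tape.values())   # number of 1s left on the tape
        state = nxt
    return None                         # treated as non-halting

actions = [(w, m, s) for w in (0, 1) for m in (-1, 1) for s in (0, 1, -1)]
best = 0
for rules in product(actions, repeat=4):   # one rule per (state, symbol)
    table = dict(zip([(0, 0), (0, 1), (1, 0), (1, 1)], rules))
    score = run(table)
    if score is not None:
        best = max(best, score)
print(best)   # 4, the known Sigma(2)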

I think their main argument is that (a) once the value of the busy
beaver for n states is known, one learns how to crack the set of
(n+1)-state machines and eventually obtains bb(n+1). (i) They then use
a kind of mathematical induction to prove that any given Turing machine
with a fixed number of states will eventually fail, while the human
mind can go on. However, it seems pretty clear that the method fails
for n large enough, which disproves their claim. Now suppose their
claim (a) is right, and conceive the following method: (b) each time we
learn how to crack n+1, we build a Turing machine T that computes
bb(n+1). By their own argument (i), Turing machines are then
hypercomputers!

I might be missing something; if so, please feel free to point it out.

Best regards,



-- 
Hector Zenil    http://zenil.mathrix.org


Re: [agi] the uncomputable

2008-06-17 Thread Vladimir Nesov
On Tue, Jun 17, 2008 at 11:38 PM, Abram Demski [EMAIL PROTECTED] wrote:
 V. N.,
 What is inhuman to me is the claim that the halting problem is no
 problem, on the basis that the statement "Turing machine X does not
 halt" is true only of Turing machines that are *provably* non-halting.
 And this is the view we are forced into if we abandon the reality of
 the uncomputable.


Why, you can also mark up the remaining territory with "true" and
"false"; the labels just won't mean anything there. Set up two sets,
T and F, place all true things in T, all false things in F, and all
unknown things however you like, but don't tell anybody how. Some
people like to place all unknown things in F; that's their call.
Mathematically it can be convenient, but really, even of computable
things you can't really compute that much, so the argument is void for
all practical concerns anyway.

-- 
Vladimir Nesov
[EMAIL PROTECTED]


Re: [agi] the uncomputable

2008-06-17 Thread Hector Zenil
On Tue, Jun 17, 2008 at 5:58 PM, Abram Demski [EMAIL PROTECTED] wrote:


 Hector Zenil,

 I do not think I understand you. Your argument seems similar to the following:

 "I do not see why Turing machines are necessary. If we can compute a
 function f(x) by some Turing machine, then we could compute it up to
 some value x=n. But we could construct a lookup table of all values
 f(0), f(1), f(2), ..., f(n) which contains just as much information."

 Obviously the above is a silly argument, but I don't know how else to
 interpret you. A Turing machine can capture a finite number of the
 outputs of a hypercomputer. Does that in any way make the
 hypercomputer reducible to the Turing machine?


This nicely boils the fallacy down from 20 pages to a few lines.
Merely providing the lookup table, or adding more states, is not
sufficient to turn a Turing machine into a hypercomputer, as would
follow from the paper's main argument: that humans can always find
bb(n+1) once bb(n) is calculated, and that humans are therefore capable
of hypercomputation (modulo other strong assumptions).

In fact the paper acknowledges that more information is needed at each
jump, so eventually one would reach either a physical or a feasibility
limit unless the brain/mind is infinite in its capabilities -- which
falls back on the traditional claims about hypercomputation, not
necessarily a new one.

I recall that my suggestion was (reductio ad absurdum) to encode (or
provide the program of) an n-state Turing machine T_n after knowing
bb(n), so that at every moment when people are working on bb(n+1) there
is always a T_n behind, able to calculate bb(n). Once the hyperhuman
finds bb(n+1), he encodes T_{n+1} to compute bb(n+1) while the
hyperhuman H computes bb(n+2); but one knows that at the next step one
will be able to encode T_{n+2} to calculate bb(n+2), just as H does.
Following their argument, if there is always a machine able to
calculate bb(n+1) for any n once bb(n) is calculated (just as there is
a hyperhuman, according to their claim), then T (the universal Turing
machine that emulates all those T_i for all i) would turn into a
hypercomputer -- absurd, since it would collapse the classes of
computability!
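
The bookkeeping in this reductio is easy to make concrete (a sketch;
the table holds the known 2-symbol Sigma values, and the names are
mine):

KNOWN_BB = {1: 1, 2: 4, 3: 6, 4: 13}   # Sigma(n) for n = 1..4

def T(n):
    # T_n stands for the machine encoded after bb(n) was found: a plain
    # lookup, i.e., an ordinary Turing-computable step.
    return KNOWN_BB[n]

def universal_bb(n):
    # The "universal" dispatcher over all the T_i is still one ordinary
    # program; growing the table each time bb(n+1) is found never turns
    # it into a hypercomputer.
    if n in KNOWN_BB:
        return T(n)
    raise NotImplementedError("each new n needs genuinely new information")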

Notice that my use of "hypercomputer" is the traditional one: a machine
able to compute at a Turing degree other than the first.

I still might be missing something, but hope this clarifies my objection.

People might also be interested in the work of Kevin Kelly,
"Uncomputability: The Problem of Induction Internalized," Theoretical
Computer Science 317 (2004), pp. 227-249, as an epistemological
approach to traditional computability, since some in this thread have
suggested induction as evidence for hypercomputability.


-- 
Hector Zenil    http://zenil.mathrix.org


Re: [agi] the uncomputable

2008-06-17 Thread Mike Archbold

 Mike Archbold,

 It seems you've made a counterargument without meaning to.

 When we make this transition, it seems to me that the shift is so radical
 that it is impossible to justify making the step, because as I mentioned
 it involves a surreptitious shift from quantity to quality.

 I maintain that the jump is justified. To me it is like observing the
 sequence 1, 2, 4, 8, 16, 32... and concluding that each number is
 twice the previous. It is a jump from several quantities to a single
 quality.



Fair enough.  I think what we are saying is that the transition from
quantity to quality -- as in your example -- is a kind of appeal to the
infinite, i.e., an instance of hypercomputation?  That does have a
Gödelian sound to it; as I understand it, we appeal beyond the data in
question, although I have only seen Gödel's writings (not read them).

I read more of the paper.  I like the part about the Zeus Machine.  Cool.

I guess I am a bit more aligned with the philosophy side than the
Turing-Gödel-computational side of the house.  In my studies of Hegel's
Logic, as I mentioned, there is a constant interplay between quantity
and quality, given as measure -- measure here being the result of
quantity and quality intermixing.

I guess measure in this sense is roughly equivalent to hypercomputation,
if I have my Gödels and Hegels lined up in a row.  Hegel's philosophy was
of course totally predicated on the mind, which as I said he held to be
infinite.  Although we have to be careful, inasmuch as there exist
multiple definitions of the infinite.

Mike Archbold


Re: [agi] Learning without Understanding?

2008-06-17 Thread Jim Bromer
----- Original Message -----

From: Richard Loosemore [EMAIL PROTECTED]

Brad Paulsen wrote: 
 CHILDREN LEARN SMART BEHAVIORS WITHOUT KNOWING WHAT THEY KNOW
 http://www.physorg.com/news132839991.html

It's garbage science.  Or at least, it is a garbage headline.

There is a whole body of experiments done with adults in which subjects
are asked to learn several conceptual categories from seeing only
exemplars of those categories, without ever being told explicitly why a
given instance belongs in one category or another.

These adults can easily pick up the categories even when they cannot 
easily articulate what the criteria are.  This is concept building, and 
it is one of the most fundamental activities of the human mind.

Is it surprising or new that children do the same thing?  It should be 
stupidly obvious that they do the same thing.  Children spend all their 
time voraciously separating the world out into categories, using almost 
nothing but exemplar-based learning.

Just because I believe there is much of value in cognitive science
doesn't mean I will defend everything done in its name.

Richard Loosemore
--

Well, cognitive science progresses by questioning other conclusions and then 
devising new experiments that can produce more insightful results.

One of the problems with this kind of experiment is that children in the
(relatively) more affluent communities of the industrialized world already
have a (relatively) sophisticated capability to assess certain aspects of
images on a video screen.  The fact that a group of cognitive scientists
might be totally unaware of the potential significance of this kind of
complex awareness is an oopsie that can only be due to the innocence of
youth.  I wonder what the average age of the researchers was, and whether
they fully realized what they were doing.

But the issue is so important that the experiment does deserve some
attention.  If a more sophisticated set of experiments could provide more
detail about how implicit knowledge is acquired and becomes explicit, then
the results might be important.

Jim Bromer


[agi] Have you hugged a cephalopod today?

2008-06-17 Thread Brad Paulsen

From the "More stuff we already know" department...

NEW RESEARCH ON OCTOPUSES SHEDS LIGHT ON MEMORY
http://www.physorg.com/news132920831.html

Cheers,

Brad

