J. Andrew Rogers wrote:
> On Apr 6, 2008, at 11:58 AM, Richard Loosemore wrote:
>> Artificial Intelligence research does not have a credible science
>> behind it. There is no clear definition of what intelligence is;
>> there is only the living example of the human mind that tells us
>> that some things are "intelligent".
> The fact that the vast majority of AGI theory is pulled out of
> /dev/ass notwithstanding, your above characterization would appear
> to reflect your limitations, which you have chosen to project onto
> the broader field of AGI research. Just because most AI researchers
> are misguided fools and you do not fully understand all the relevant
> theory does not imply that this is a universal (even if it were).
Ad hominem. Shameful.
>> This is not about mathematical proof; it is about having a
>> credible, accepted framework that allows us to say that we have
>> already come to an agreement that intelligence is X, and so,
>> starting from that position, we are able to do some engineering to
>> build a system that satisfies the criteria inherent in X, so we can
>> build an intelligence.
> I do not need anyone's "agreement" to prove that system Y will have
> property X, nor do I have to accommodate pet theories to do so. AGI
> is mathematics, not science.
AGI *is* mathematics?
Oh dear.
I'm sorry, but if you can make a statement such as this, and if you are
already starting to reply to points of debate by resorting to ad
hominems, then it would be a waste of my time to engage.
I will just note that if this point of view is at all widespread - if
there really are large numbers of people who agree that "AGI is
mathematics, not science" - then this is a perfect illustration of
just why no progress is being made in the field.
Richard Loosemore
> Plenty of people can agree on what X is and are satisfied with the
> rigor of whatever derivations were required. There are even multiple
> Xs out there depending on the criteria you are looking to satisfy --
> the label of "AI" is immaterial.
> What seems to have escaped you is that there is nothing about an
> agreement on X that prescribes a real-world engineering design. We
> have many examples of tightly defined Xs in theory that took many
> decades of R&D to reduce to practice, or which in some cases have
> never been reduced to real-world practice even though we can very
> strictly characterize them in the mathematical abstract. (Shannon's
> 1948 channel coding theorem, for one, proved that capacity-achieving
> codes exist, yet practical codes approaching that bound did not
> arrive until the 1990s.) There are many AI researchers who could be
> accurately described as having no rigorous framework or foundation
> for their implementation work, but conflating this group with those
> stuck solving the implementation theory problems of a well-specified
> X is a category error.
> There are two unrelated difficult problems in AGI: choosing a
> rigorous X with satisfactory theoretical properties, and designing a
> real-world system implementation that expresses X with satisfactory
> properties. There was a time when most credible AGI research was
> stuck working on the former, but today an argument could be made
> that most credible AGI research is stuck working on the latter. I
> would question the credibility of opinions offered by people who
> cannot discern the difference.
>> And in case you are tempted to do what (e.g.) Russell and Norvig
>> do in their textbook...
> I'm not interested in lame classical AI, so this is essentially a
> strawman. To the extent I am personally in a "theory camp", I have
> been in the broader algorithmic information theory camp since before
> it was on anyone's radar.
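> To make that concrete with one example from that camp, offered only
> as an illustration and not necessarily the specific theory meant
> here: Legg and Hutter's universal intelligence measure (2007)
> defines the intelligence of a policy \pi as
>
>     \Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} V_\mu^\pi
>
> where E is the set of computable environments, K(\mu) is the
> Kolmogorov complexity of environment \mu, and V_\mu^\pi is the
> expected cumulative reward \pi earns in \mu. The definition is
> fully rigorous, yet K is uncomputable, so the measure cannot be
> evaluated exactly; that is the gap between a well-specified X and
> a real-world implementation in miniature.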
>> It is not that these investors understand the abstract ideas I
>> just described; it is that they have a gut feel for the rate of
>> progress, the signs of progress, and the type of talk that they
>> should be encountering if AGI had mature science behind it.
>> Instead, what they get is a feeling from AGI researchers that each
>> one is doing the following:
>> 1) Resorting to a bottom line that amounts to "I have a really
>> good personal feeling that my project really will get there", and
>> 2) Offering examples of progress that look like an attempt to
>> dress a doughnut up as a wedding cake.
> Sure, but what does this have to do with the topic at hand? The
> problem is that investors lack any ability to discern a doughnut
> from a wedding cake.
> J. Andrew Rogers