I don't like to insult US academia too severely, because I feel it's been
one of the most productive intellectual establishments in the history
of the human race.

However, in my 8 years as a professor I did find it frequently
frustrating, and one of the many reasons was the narrow-mindedness
of most academic researchers -- which the experience you cite
exemplifies.

Here is a more extreme example.  I once had a smart young
colleague who had relocated to the US from mainland China,
spent 4 years here, and done his PhD thesis on neural
networks at a US university.  His research was fairly
interesting, showing how to make a certain class of neural
nets learn more efficiently to approximate a certain class
of nonlinear functions.
And, he was quite surprised and fascinated when I explained to him that
the word "neural" referred to neurons in the brain.  No one
had ever explained to him what "neural" meant -- he thought it
was just a label for the type of mathematical network his
advisor had told him to study.
(This was in the early 90's, when neural networks were not
yet so widely known.)

As undergraduates, most folks who study AI are actually interested
in "AI in the grand sense", along with more narrowly focused,
short-term, practical stuff.  But as part of the process of being
taught to become professional researchers, during grad school and
the pre-tenure years of professordom, one learns to distinguish
"real science and engineering" from "fantastical fluff".

Similarly, even if you start out your career in biology interested
in life extension and immortality, you soon learn that this is
culturally unacceptable, and that if you want to be a real
scientist you need to take a different approach -- say, spending
the next decade of your career trying to understand everything
possible about one particular gene or protein.

Obviously, science evolved in this way in order to protect itself
against the natural human tendency toward self-delusion and
collective delusion.  But it has swung too far in the conservative,
paranoid, innovation-unfriendly direction!  Even though much
historical progress in science was made via large leaps rather
than tiny, conservative, incremental steps, the current scientific
community is strongly biased, culturally, against anyone who wants
to make a large leap.  Science has drifted into a cultural
configuration obsessed with making incremental progress with a
very small increment size.

This, of course, creates many exciting opportunities for
individuals who are willing to put up with some cultural
marginalization and take larger risks based on intuitive insights
(and who also have the perseverance to do the long, tedious
legwork needed to validate those insights, making their large
leaps real rather than merely hypothetical).

-- Ben

Joshua Fox wrote:
Singularitarians often accuse the AI establishment of a certain
closed-mindedness. I always suspected that this was the usual
biased accusation of rebels against the old guard.

Although I have nothing new to add, I'd like to give some confirmatory
evidence on this from one who is not involved in AGI research.

When I recently met a PhD in AI from one of the top three programs
in the world, I expected some wonderful nuggets of knowledge on the
future of AGI, if only as speculations. But I just heard the
establishment line as described by Kurzweil et al.: AI will
continue to solve narrow problems, always working in tandem with
humans, who will handle the important parts of the tasks. There is
no need, and no future, for human-level AI. (I responded, by the
way, that there is obviously demand for human-level intelligence,
given the salaries that we knowledge workers are paid.)

I was quite surprised, even though I had been prepped for exactly this.

Joshua

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983

