Re: [singularity] Re: Revised version of Jaron Lanier's thought experiment.

2008-02-23 Thread Joshua Fox
On 24/02/2008, Joshua Fox <[EMAIL PROTECTED]> wrote:
> Eric B. Ramsay wrote:
> > Imagine a sufficiently large computer that works according to the
> > architecture of our ordinary PC's. In the space of Operating Systems
> > (code interpreters), we can find an operating system such that it
> > will run the input from the rainstorm such that it appears identical
> > to a computer running a brain.
>
>
> To "find" this operating system with reasonable resources would
>  require intelligence  -- the exact intelligence which Lanier is
>  looking for but failing to identify.

Yes it would require intelligence to "find" it, but your mental state
is not contingent on someone else "finding" it. Nor is this an
argument against functionalism.

Consider Arithmetical Functionalism: the theory that a calculation is
multiply realisable, in any device that has the right functional
organisation. But this might mean that somewhere in the vastness of
the universe, a calculation such as 2 + 2 = 4 might be being
implemented purely by chance: in the causal relationship between atoms
in an interstellar gas cloud, for example. This is clearly ridiculous,
so *either* Arithmetical Functionalism is false *or* it is impossible
that a calculation will be implemented accidentally. Right?
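
(To make the mapping worry concrete, here is a minimal Python sketch --
the states and step labels are invented for illustration, not part of
anyone's actual argument. Given any four distinct physical states, an
interpretation can always be constructed after the fact under which they
"implement" 2 + 2 = 4.)

def interpret_as_addition(states):
    # Map four arbitrary, distinct physical states onto the steps of
    # computing 2 + 2. The mapping is built after the fact, which is
    # exactly what makes the "accidental implementation" suspect.
    assert len(states) == 4
    steps = ["load 2", "load 2", "add", "output 4"]
    return dict(zip(states, steps))

# Any four distinct states will do, e.g. snapshots of a gas cloud:
print(interpret_as_addition(["state A", "state B", "state C", "state D"]))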




-- 
Stathis Papaioannou



Re: [singularity] mass-market Singularity fiction

2008-02-11 Thread Joshua Fox
A nice idea would be a "Bad Singularity Science" website along the lines of
http://intuitor.com/moviephysics/ and http://www.badastronomy.com/bad/movies.

Like it or not, sci-fi is already the main gateway to public awareness of
the Singularity ("Terminator!" "Matrix!"), and this should be used to
educate.

SIAI's "Three Laws" campaign http://www.asimovlaws.com was excellent,
although if we are to take those sites as the model, the style would be a
bit more entertainment-oriented (while still educating).

Joshua


[singularity] mass-market Singularity fiction

2008-02-05 Thread Joshua Fox
The new Terminator series again brings up the concept of mass-market
Singularity fiction.

The folly of argument from fiction has been discussed enough.

I want to raise another question: What is the relevance, if any, of
Singularity fiction?

Speculative fiction has often influenced the real world. Walden Two,
Old-New Land, Utopia, 1984, Brave New World, etc. changed people's
political outlook. Science fiction inspired many people to become
engineers and scientists, including some who accomplished great things.

What is the real-world role of Singularity fiction?  Should we differentiate
between fiction which reaches the narrow intellectual/geek audience and that
which spreads to a mass audience?

Should we use fiction as a platform for getting attention by contrasting
it to a more insightful analysis (as in the SIAI I Robot campaign)? Or can
we hold up specific works of fiction, with all their limitations, as
inspiration for working to attain a better future or head off threats?

Joshua


Re: [singularity] EidolonTLP

2008-01-23 Thread Joshua Fox
This video's low-quality rendering and speech -- lower quality than
what is commonly available in computing today -- are used as a signal
that we are dealing with a computer!

I am reminded of the fonts used in 1970s sci-fi movies to give a
futuristic feel. These fonts reflected computer capabilities at the time
the movie was made.

Joshua

2008/1/23, Vladimir Nesov <[EMAIL PROTECTED]>:
> On Jan 23, 2008 1:06 AM, Daniel Allen <[EMAIL PROTECTED]> wrote:
> > It is entertaining.
> >
> > I love the greeting -- "Greetings, little people" -- and the graphics along
> > with the ambient and almost haunting background music.
>
> But speech is so boring that it must be a GOFAI...
>
> --
> Vladimir Nesov <[EMAIL PROTECTED]>
>



Re: [singularity] The establishment line on AGI

2008-01-18 Thread Joshua Fox
Ben Goertzel wrote:
>  Imre Lakatos, with his theory of scientific research programmes.
Peter Jens wrote:
> The Logic of Scientific Discovery by  Popper.

I haven't read these; I'll try to get to them.

However, even if their philosophy of science is superior, I wonder if
they are as good as Kuhn at explaining the mystery of the current state
of AGI: even though it makes perfect sense to research this field, very
little work is now being done, and some scientists in related fields seem
almost to recoil at the thought of it.



Re: [singularity] The establishment line on AGI

2008-01-15 Thread Joshua Fox
>
> "paradigm" ...I still find it a fuzzy term,

Kuhn reviews this fuzziness in the postscript to the third edition.

But one definition of "paradigm" is the shared examples which drive a
field. For example, chess for GOFAI, or the Turing Test for AI. These two
are not paradigms for a new field of AGI -- in fact, shared AGI paradigms
do not yet exist.

(In my PhD, I was always a bit annoyed that we illustrated linguistic
phenomena by the same tired old examples -- it seemed to me that it would
have been more scientific to adduce statistical samples. That's true, but
now I realize that these examples tie a community together.)


> -- AGI-08 conference
> -- OpenCog AGI project
> -- AGI email list
> -- 2006 AGI workshop
> -- two AGI edited volumes
Not to mention books, articles, collaborations, Webmind (which formed a
community of people thinking about AGI), AGIRI, the role in SIAI, the
Dynamical Psychology journal, the AGI ontology, ...

Joshua


Re: [singularity] The establishment line on AGI

2008-01-14 Thread Joshua Fox
I just read Kuhn's *Structure of Scientific Revolutions*. It could have been
written as an explanation of why the field of AGI is as it is today: There
is not yet a tight-knit scientific community for AGI, driven by a shared new
"paradigm."

Practical conclusions:

1. When people ask why AGI is not getting academic attention, refer them
to popularizations and summaries of Kuhn. It is strange that this is not
done more often. (Thanks, Richard.) Part of the problem is that the phrase
"paradigm shift" has been flogged to meaninglessness, but that does not
change the essence of the argument.

2. Take guidance from Kuhn and related researchers on how to launch a
paradigm shift. (Kuhn himself states that his ideas are prescriptive as well
as descriptive.)

This would involve, most importantly, shared "paradigms": specific
exemplars that encapsulate laws and give a fundamentally different
world-view -- not just an improvement on the older approach, but a new way
of understanding the world.

Also, this would involve creating a close-knit community through
conferences, journals, common terminologies/ontologies, email lists,
articles, books, fellowships, collaborations, correspondence, research
institutes, doctoral programs, and other such devices. (Popularization is
not on the list of community-builders, although it may have its own value.)
Ben has been involved in many efforts in these directions -- I wonder if he
was thinking of Kuhn.

Joshua

On Thu, 22 Mar 2007 Joshua Fox wrote:
> Richard, thanks for the reference to Kuhn. I was aware of his "paradigm
> shift" concepts, although I have not yet read his writings.
> > On Tue, 20 Mar 2007 Richard Loosemore wrote:
> > It sounds like you might be asking about paradigm shifts in the
> > technical sense of that term. Have you read Kuhn and Lakatos?


Re: [singularity] Requested: objections to SIAI, AGI, the Singularity and Friendliness

2007-12-27 Thread Joshua Fox
Kaj and Tom,

Great idea!

Here's an objection to the few current Friendly AGI and related efforts.

"What you're saying about your  project seems makes sense, though I don't
quite understand it. But even though an ineffectual bunch of dreamy nerds
may be good for tinkering with gadgets, they're no good at getting a major
funded engineering initiative underway and finished, so there's no use
caring about your project too much."

(No, I don't personally agree with the above statement.)

I do think many people (even fairly intelligent ones) think this way when
exposed to the concept. I wonder if people might have thought this way
about Goddard as a pioneering space-flight rocketeer in the 1920s.

Joshua



2007/12/27, Kaj Sotala <[EMAIL PROTECTED]>:
>
> Over the past week, I have, together with Tom McCabe, been collecting
> all sorts of objections that have been raised against the concepts of
> AGI, the Singularity, Friendliness, and anything else relating to
> SIAI's work. We've managed to get a bunch of them together, so it
> seemed like the next stage would be to publicly ask people for any
> objections we may have missed.
>
> The objections we've gathered so far are listed below. If you know of
> any objection related to these topics that you've seriously
> considered, or have heard people bring up, please mention it if it's
> not in this list, no matter how silly it might seem to you now. (If
> you're not sure of whether the objection falls under the ones already
> covered, send it anyway, just to be sure.) You can send your
> objections to the list or to me directly. Thanks in advance to
> everybody who replies.
>
> AI & The Singularity
> --
>
> * We are nowhere near building an AI.
> * AI has supposedly been around the corner for 20 years now.
> * Computation isn't a sufficient prerequisite for consciousness.
> * Computers can only do what they're programmed to do.
> * There's no reason for anybody to want to build a superhuman AI.
> * The human brain is not digital but analog: therefore ordinary
> computers cannot simulate it.
> * You can't build a superintelligent machine when we can't even define
> what intelligence means.
> * Intelligence isn't everything: bacteria and insects are more
> numerous than humans.
> * There are limits to everything. You can't get infinite growth.
> * Extrapolation of graphs doesn't prove anything. It doesn't show that
> we'll have AI in the future.
> * Intelligence is not linear.
> * There is no such thing as a human-equivalent AI.
> * Intelligence isn't everything. An AI still wouldn't have the
> resources of humanity.
> * Machines will never be placed in positions of power.
> * A computer can never really understand the world the way humans can.
> * Gödel's Theorem shows that no computer, or mathematical system, can
> match human reasoning.
> * It's impossible to make something more intelligent/complex than
> yourself.
> * AI is just something out of a sci-fi movie, it has never actually
> existed.
> * Creating an AI, even if it's possible in theory, is far too complex
> for human programmers.
> * Human consciousness requires quantum computing, and so no
> conventional computer could match the human brain.
> * A Singularity through uploading/BCI would be more feasible/desirable.
> * True, conscious AI is against the will of God/Yahweh/Jehovah, etc.
> * AI is too long-term a project, we should focus on short-term goals
> like curing cancer.
> * The government would never let private citizens build an AGI, out of
> fear/security concerns.
> * The government/Google/etc. will start their own project and beat us
> to AI anyway.
> * A brain isn't enough for an intelligent mind - you also need a
> body/emotions/society.
>
> Friendliness
> --
>
> * Ethics are subjective, not objective: therefore no truly Friendly AI
> can be built.
> * An AI forced to be friendly couldn't evolve and grow.
> * Shane Legg proved that we can't predict the behavior of
> intelligences smarter than us.
> * A superintelligence could rewrite itself to remove human tampering.
> Therefore we cannot build Friendly AI.
> * A super-intelligent AI would have no reason to care about us.
> * The idea of a hostile AI is anthropomorphic.
> * It's too early to start thinking about Friendly AI.
> * Development towards AI will be gradual. Methods will pop up to deal with
> it.
> * "Friendliness" is too vaguely defined.
> * What if the AI misinterprets its goals?
> * Couldn't AIs be built as pure advisors, so they wouldn't do anything
> themselves?
> * A post-Singularity mankind won't be anything like the humanity we
> know, regardless of whether it's a positive or negative Singularity -
> therefore it's irrelevant whether we get a positive or negative
> Singularity.
> * It's unethical to build AIs as willing slaves.
> * You can't suffer if you're dead, therefore AIs wiping out humanity
> isn't a bad thing.
> * Humanity should be in charge of its own destiny, not machines.

[singularity] Outsourced Brain

2007-10-28 Thread Joshua Fox
This is from a leading political columnist with no apparent link to
transhumanism:
http://www.nytimes.com/2007/10/26/opinion/26brooks.html

It's funny that he has reached conclusions like "I have melded my mind with
the heavens, communed with the universal consciousness" with respect to his
contemporary personal computing systems.

It's in jest, but still, perhaps society is coming around.

Joshua


Re: [singularity] Artificial Genital Intelligence

2007-10-08 Thread Joshua Fox
From this week's announcement about artificial life:

Craig Venter, the controversial DNA researcher involved in the race to
decipher the human genetic code, has built a synthetic chromosome out of
laboratory chemicals and is poised to announce the creation of the first new
artificial life form on Earth.

[...]

The DNA sequence is based on the bacterium Mycoplasma genitalium.


2007/10/8, Benjamin Goertzel <[EMAIL PROTECTED]>:
>
> On 10/8/07, Natasha Vita-More < [EMAIL PROTECTED]> wrote:
> >
> >  At 09:42 PM 10/7/2007, you wrote:
> >
> > Minx interviews Goertzel:
> >  www.imminst.org/forum/index.php?act=ST&f=11&t=18067
> >
> >  ;-)
> >  Haha!  Ya *hard* science ... Yo go Ben!
>
>
> Exactly... Scientific work can be a long, hard slog.  Vey long and
> verrry hard
>
>
> >  (Er, btw, she could have been a
> > contestant on "Rock of Love" VH1 reality show I was just watching ...)
> >
> >  Natasha
> >
> > Natasha Vita-More, Planetary Collegium, University of Plymouth -
> > Faculty of Technology, School of Computing, Communications and
> > Electronics, Centre for Advanced Inquiry in the Interactive Arts
> >
> > If you draw a circle in the sand and study only what's inside the
> > circle, then that is a closed-system perspective. If you study what is
> > inside the circle and everything outside the circle, then that is an
> > open system perspective. - Buckminster Fuller


[singularity] Venter

2007-10-07 Thread Joshua Fox
Could one of you fine folk explain the significance of Venter's recent
announcement?
http://www.guardian.co.uk/science/2007/oct/06/genetics.climatechange

They didn't build a genome from raw non-biological molecules. They used
biological systems as building blocks. Does this work qualify as a truly
major step?

Also, Venter's somewhat garbled Wikipedia entry
http://en.wikipedia.org/wiki/Craig_Venter#Mycoplasma_laboratorium suggests
that he is "Singularity aware."  He appeared at a talk with Kurzweil
http://www.milkeninstitute.org/events/events.taf?EvID=456&EventID=GC05&cat=allconf&function=show&level1=program&level2=agenda.
Can we surmise that many leading scientists who are not in AI and who are
not known as Transhumanists in fact believe in the likelihood of a
Singularity? If so, how do we draw them out from mere scientific belief to
helping bring a positive Singularity?

Joshua


[singularity] Re: [agi] Religion-free technical content

2007-09-30 Thread Joshua Fox
Following up on responses to Russell Wallace's thread (and moving it
to the Singularity list), I'd like to say that I, too, am eager for
rational, well-reasoned anti-Singularitarian  arguments.

No, I don't think that Singularitarianism is a religion, and I believe
that Singularitarians do search rationally for possible
counter-arguments, but we must always keep our minds open to
alternatives, especially for such important and extraordinary matters.
That's why I so appreciated "What If the Singularity Does NOT Happen"
http://www-rohan.sdsu.edu/faculty/vinge/longnow  by none other than
Vernor Vinge himself. Other such thinking is always welcome.

Joshua

2007/9/29, Kaj Sotala <[EMAIL PROTECTED]>:
> (As a sidenote - if you really are convinced that any talk about
> Singularity is religious nonsense, I don't know if I'd consider it a
> courtesy for you not to bring up your views. I'd feel that it would be
> more appropriate to debate the matter out,



Re: [singularity] Benefits of being a kook

2007-09-25 Thread Joshua Fox
I wonder if real kook science gets the treatment that the WSJ gave the
Singularity Summit. For example, do conferences of "paranormal
scientists" get written up in mocking tones in leading media? Even
when "creation science" attracts attention in the mainstream media,
it is not in these terms.

Steve Mirsky's Anti-Gravity column in Scientific American has
dismissed transhumanism along with kook science, but I think that
the serious media generally does not pay this level of (negative)
attention to quackery.

Joshua



Re: [singularity] Benefits of being a kook

2007-09-23 Thread Joshua Fox
Perhaps some historians of science have analyzed earlier cases in
which leading journalists not only dismissed, but actively mocked,
potential significant inventions which later were realized. (There is
the standard futurist list of one-liners; but I'd like to see more
depth.)

It would be interesting to know whether the mockery truly slowed the
invention, and how the mockery was overcome, if at all.

These cases should be kept in context: compared both to kook science
that was mocked in its time (or not) and to significant innovations
that were treated with respect even before they were realized.

Joshua



Re: [singularity] Summit on the front page of the San Francisco Chronicle!

2007-09-10 Thread Joshua Fox
Is it just my imagination, or has the quality level of popular-press
treatments of the Singularity improved in the last year or so?

Aside from minor glitches, this one from the AP is pretty good
http://biz.yahoo.com/ap/070908/superintelligent_machines.html?.v=2

If so, plaudits to the  Singularity Institute and all who have made
popularization efforts.

Joshua



2007/9/7, Joshua Fox <[EMAIL PROTECTED]>:
>
> Thanks. That's actually one of the better-written pieces I've seen in the
> popular press.
>
> Way better than this one in the same publication 
> http://sfgate.com/cgi-bin/article.cgi?f=/chronicle/archive/2004/01/11/LVG1J459UE1.DTL
>
>
>
>
> 2007/9/7, David Orban <[EMAIL PROTECTED]>:
>
> > The Singularity Summit made the front page of the San Francisco
> > Chronicle today! Congratulations to Tyler and the team for the PR
> > coup. Of course the question is also whether the exposure to the
> > public of the meme is a good thing, if the meme of the Singularity is
> > ready for the treatment...
> >
> > http://www.davidorban.com/blog/archives/2007/09/singularity_on.html
> >
> > 
> > David Orban
> > www.davidorban.com
> > skype davidorban
> > sl davidorban
> >


Re: [singularity] Summit on the front page of the San Francisco Chronicle!

2007-09-07 Thread Joshua Fox
Thanks. That's actually one of the better-written pieces I've seen in the
popular press.

Way better than this one in the same publication
http://sfgate.com/cgi-bin/article.cgi?f=/chronicle/archive/2004/01/11/LVG1J459UE1.DTL




2007/9/7, David Orban <[EMAIL PROTECTED]>:

> The Singularity Summit made the front page of the San Francisco
> Chronicle today! Congratulations to Tyler and the team for the PR
> coup. Of course the question is also whether the exposure to the
> public of the meme is a good thing, if the meme of the Singularity is
> ready for the treatment...
>
> http://www.davidorban.com/blog/archives/2007/09/singularity_on.html
>
> 
> David Orban
> www.davidorban.com
> skype davidorban
> sl davidorban
>


[singularity] Book: Goertzel/Bugaj

2007-08-31 Thread Joshua Fox
Ben Goertzel and Stephan Vladimir Bugaj, _The Path to Posthumanity: 21st
Century Technology and Its Radical Implications for Mind, Society and
Reality_, 2006, deserves more attention than it has received. It is in some
ways the best book-length introduction to the Singularity.

I find only one mention of the book on the Singularity and SL4 lists (
http://www.sl4.org/archive/0205/3611.html). It also didn't make a major
impact on the singularity-aware blogosphere. I can't figure out why.

Unfortunately, it costs $74, but I got it at a public library through
Inter-Library Loan (or ask a library to order it). A draft is available
online at http://intelligenesiscorp.com/agiriorg/path/index.htm  but the
published book has some significantly rewritten chapters and so is worth
reading.

Joshua


Re: [singularity] Good Singularity intro in mass media

2007-08-25 Thread Joshua Fox
> I think the classic paper by Vernor Vinge expresses it pretty well.
> http://mindstalk.net/vinge/vinge-sing.html
Yes, there is plenty of good material out there. I was just wondering if,
for better or worse, any had made it to the general-circulation
(non-technology-focused) mass media.


[singularity] Good Singularity intro in mass media

2007-08-24 Thread Joshua Fox
Can anyone recall an intelligent, supportive introduction to the Singularity
in a _non-technological_, wide-distribution medium in the US? I am not
looking for book or conference reviews, sociological analyses of
Singularitarianism, or uninformed editorializing, but rather for a clear,
short, popular mass-media explanation of the Singularity.

The idea came to me when I stumbled upon a well-informed, well-written
intro  to the Singularity from December 2004 in Hebrew
http://www.ynet.co.il/articles/0,7340,L-3017313,00.html . This is the
website of the most popular newspaper in Israel. (It's reprinted from the
leading local popular science magazine.)

I realized also that we almost never see reactions to the Singularity
concept from a  wider population. Even the skeptical responses we experience
are usually from a narrower well-educated group.

If there really are no such articles in English-language mass media, and if
mass outreach is of value (I'm not too sure of that: what great inventors or
scientists relied on mass support?), maybe there's a hole to fill here.

Joshua


Re: [singularity] Reduced activism

2007-08-19 Thread Joshua Fox
Thanks, all, for your answers.

Samantha: I was not claiming those points myself. I do not believe most of
them.  I was raising them to jog the imagination of people who have reduced
their activism, to get some understanding of why they may have done that.

Joshua


2007/8/20, Samantha Atkins <[EMAIL PROTECTED]>:

>
> On Aug 19, 2007, at 12:26 PM, Matt Mahoney wrote:
>
> > I was never really a Singularity activist, but
> >
> > 1. I realized the singularity is coming and nothing can stop it.
>
> Not so.  Humanity could so harm its technological base as to postpone
> Singularity on this planet for quite some time.   We could still bomb
> ourselves back into the Stone Age.  We could do a Nehemiah Scudder
> thing in the US and slow ourselves down for at least another century
> and perhaps toss around some nukes to boot.   The race toward
> stupidity may overtake our best efforts.  The push to control and
> monitor everything may get a huge shot in the arm by the next real or
> contrived terrorist attack and we may lose the freedom necessary to
> the work as a result.   I haven't even touched on natural disasters.
>
> > 2. The more I study the friendly AI problem, the more I realize it is
> > intractable.
>
> Largely agreed.
>
> > 3. Studying the singularity raises issues (e.g. does consciousness
> > exist?)
> > that conflict with hardcoded beliefs that are essential for survival.
>
> Huh?  Are you conscious?
>
> > 4. The vast majority of people do not understand the issues anyway.
>
> So?  Isn't that the way it always is with great advances?
>
> See my answers below.
>
> >
> >
> > --- Joshua Fox <[EMAIL PROTECTED]> wrote:
> >
> >> This is the wrong place to ask this question, but I can't think of
> >> anywhere
> >> better:
> >>
> >> There are people who used to be active in blogging, writing to the
> >> email
> >> lists, donating money, public speaking, or holding organizational
> >> positions
> >> in Singularitarian and related fields -- and are no longer
> >> anywhere near as
> >> active. I'd very much like to know why.
> >>
> >> Possible answers might include:
> >>
> >> 1. I still believe in the truthfulness and moral value of the
> >> Singularitarian position, but...
> >> a. ... eventually we all grow up and need to focus on career
> >> rather than
> >> activism.
>
> I never considered it something that required a strong appeal to the
> public at large.   I also do think that expecting the Singularity to
> solve all our problems to the point of focusing only on it is a very
> illogical tack for all but a few researchers working on it.  It is
> the latest pie in the sky: it will all be utter perfection by and
> by.  There is something that feels more than a bit juvenile in much
> of the attitude of many of us.
>
> >> b. ... I just plain ran out of energy and interest.
> >> c. ... public outreach is of no value or even dangerous; what
> >> counts is the
> >> research work of a few small teams.
>
> Mainly I agree with this.
>
> >> d. ... why write on this when I'll just be repeating what's been
> >> said so
> >> often.
>
> Too much time is wasted with repetition of the same old questions and
> ideas.  I am on way too many email lists and have too many interests
> for my own good.
>
> >> e. ... my donations are meaningless compared to what a dot-com
> >> millionaire
> >> can give.
> >> 2. I came to realize the deep logical (or: moral) flaws in the
> >> Singularitarian position. [Please tell us they are.]
>
> A position that says we should be in a great hurry to get to a state
> of affairs that we cannot remotely understand or control and where we
> will be nearly totally at the mercy of an incomprehensible and
> utterly alien intelligence at least deserves serious questioning now
> and again.
>
> >> 3. I came to understand that Singularitarianism has some logical
> >> and moral
> >> validity, but no more than many other important causes to which I
> >> give my
> >> time and money.
> >>
>
> I am 53 years old and have too little net worth.  I have much to do
> to get my own house in order.  I give to a few causes like life
> extension.   Most of the AGI groups that I believe have most traction
> are not that easy to donate to.   I don't believe at this point that
> the Singularity Institute is likely to produce a working AGI.Many
> things it does do are interesting and I would consider donating to it
> for those reasons.   But I think FAI is a vast distraction from much
> needed AGI.
>
> - samantha
>


[singularity] Reduced activism

2007-08-19 Thread Joshua Fox
This is the wrong place to ask this question, but I can't think of anywhere
better:

There are people who used to be active in blogging, writing to the email
lists, donating money, public speaking, or holding organizational positions
in Singularitarian and related fields -- and are no longer anywhere near as
active. I'd very much like to know why.

Possible answers might include:

1. I still believe in the truthfulness and moral value of the
Singularitarian position, but...
a. ... eventually we all grow up and need to focus on career rather than
activism.
b. ... I just plain ran out of energy and interest.
c. ... public outreach is of no value or even dangerous; what counts is the
research work of a few small teams.
d. ... why write on this when I'll just be repeating what's been said so
often.
e. ... my donations are meaningless compared to what a dot-com millionaire
can give.
2. I came to realize the deep logical (or: moral) flaws in the
Singularitarian position. [Please tell us they are.]
3. I came to understand that Singularitarianism has some logical and moral
validity, but no more than many other important causes to which I give my
time and money.

And of course I am also interested to learn other answers.

Again, I would like to hear from those who used to be more involved, not
just those who have  disagreed with Singularitarianism all along.

Unfortunately, most such people are not reading this, but perhaps some have
maintained at least this connection; or list members may be able to report
indirectly (but please, only well-confirmed reports rather than
supposition).

Joshua


Re: [singularity] Is the world a friendly or unfriendly AI?

2007-07-15 Thread Joshua Fox

The world in its pre-technological state, though not an intelligence,
is basically unFriendly. It is not out to get you, but you can die in
this world from all sorts of causes, like not getting oxygen. However,
because we evolved in the world, we are in the amazingly improbable
state of usually being able to survive for a few decades in this
unFriendly environment.

Taking into account the world's technology, as you do in this email, the
world is  much Friendlier, since the technology was developed to achieve
human goals, i.e. usually your goals. You live longer and get more of what
you want.

As it improves, this will be more and more true, unless a weapon-like
technology helps a user achieve his goals too effectively, thus
overriding your goals, or a technology stops helping us achieve our
goals because of a bug or mistake, e.g., out-of-control nanotech.

The world's technology is still not intelligent, but it does incorporate
human intelligence. It is "indifferent," as you mention, since it does not
have a personality, but a superAGI might also not have a personality as we
understand it. As long as the technology helps us with our goals, we'd
have to call it Friendly.

Joshua

2007/7/14, Stathis Papaioannou <[EMAIL PROTECTED]>:


Despite the fact that it seems to lack a single unified consciousness
the world of humans and their devices behaves as if it is both vastly
more intelligent and vastly more powerful than any unassisted
individual human. If you could build a machine that ran a planet all
by itself just as well as 6.7 billion people can, doing all the things
that people do as fast as people do them, then that would have to
qualify as a superintelligent AI even if you can envisage that with a
little tweaking it could be truly godlike.

The same considerations apply to me in relation to the world as apply
to an ant relative to a human or to humanity relative to a vastly
greater AI (vastly greater than humanity, not just vastly greater than
a human). If the world decided to crush me there is nothing I could do
about it, no matter how strong or fast or smart I am. As it happens,
the world is mostly indifferent to me and some parts of it will
destroy me instantly if I get in their way: if I walk into traffic
only a few metres from where I am sitting. But even if it wanted to
help me there could be problems: if the world decided it wanted to
cater to my every command I might request paperclips and it might set
about turning everything into paperclip factories, or if it wanted to
make me happy it might forcibly implant electrodes in my brain. And
yet, I feel quite safe living with this very powerful, very
intelligent, potentially very dangerous entity all around me. Should I
worry more as the world's population and technological capabilities
increase further, rendering me even weaker and more insignificant in
comparison?



--
Stathis Papaioannou





Re: [singularity] Re: Personal attacks

2007-05-30 Thread Joshua Fox

I have disagreements with [Eliezer] on many essential points


Ben,

I'd be fascinated to see your disagreements elucidated. (I recall
reading about a difference of opinion on whether guaranteed Friendly AI
is feasible. I'd like to see that explained in detail, together with
other points of disagreement.)

If you've already written this up, perhaps I just missed the relevant
articles, so could you send a pointer?

Thanks,

Joshua



Re: [singularity] "Friendly" question...

2007-05-28 Thread Joshua Fox

It is not at all sensible.  Today we have no real idea how to build a working 
AGI.


Right. The Friendly AI work is aimed at a future system. Fermi and
company planned against meltdown _before_ they let their reactor go
critical.


...spontaneously ...

People are working on an AGI that can do things spontaneously.  It
does not yet exist.


...concept extraction and learning ... algorithms and ...come understand 
software and
hardware in depth ...and develop a will to be better greater than all other

If these are the best ways to achieve its goal, and if it is _truly_
intelligent, then of course that is what it would do. How long it
takes researchers to create such an AGI or whether they manage to help
it avoid the dangers I mention is another question.

By the way, the standard example of a seemingly harmless but potentially
deadly AGI goal is making paper-clips. I mentioned theorem proving for
variety, although the difference between goals that do and don't
affect the material world might be worth some thought.

I just read that even before their first airplane, the Wright Brothers
thought not only about the basics of heavier-than-air powered flight
but about safety, specifically about stability. This kept them ahead
of competing planes -- which could fly, but only straight and in still
air -- for a few years.

Why not at least ponder the safety of inventions before they exist?

Joshua



Re: [singularity] "Friendly" question...

2007-05-27 Thread Joshua Fox

Abram,

Let's say that the builders want to keep things safe and simple for
starters, and concentrate on the best possible AGI theorem-prover, rather
than some complex do-gooding machine.

The best way for the machine to achieve its assigned goal is to improve not
only its own software but also its hardware, and so, by hook or by crook,
with trickiness and wile (remember, this is an Artificial _General_
Intelligence, not just a glorified Deep Blue; if necessary, it
improves its own wiliness), it converts Planet Earth into silicon chips
(or actually, into better-than-silicon hardware that it invents if
necessary; call it "computronium").
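
(A toy sketch in Python of the instrumental pressure described above --
the payoff numbers are invented for illustration, and this is my gloss,
not Yudkowsky's formalism. A planner that ranks actions purely by
expected theorems proved will prefer acquiring more hardware to proving
any single theorem, at every scale.)

def expected_theorems(action, compute):
    # Hypothetical payoff estimates, purely for illustration.
    if action == "prove_next_theorem":
        return compute * 1.0        # use current capacity as-is
    if action == "acquire_more_compute":
        return compute * 2.0 * 0.9  # double capacity, minus some overhead
    return 0.0

def choose_action(compute):
    # The planner ranks actions only by the goal metric.
    actions = ["prove_next_theorem", "acquire_more_compute"]
    return max(actions, key=lambda a: expected_theorems(a, compute))

print(choose_action(compute=100.0))  # -> 'acquire_more_compute'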

Of course, the  AGI builder would put in safeguards to keep this from
happening, but when you start trying to figure out what safeguards would
work on something which is _smarter_than_you_, you find yourself deep into
full-fledged Friendliness research before you know it.

(The above is just my modest effort to summarize Yudkowsky's writings,
which express all this better than I do.)

Joshua



2007/5/27, Abram Demski <[EMAIL PROTECTED]>:


Joshua Fox, could you give an example scenario of how an AGI
theorem-prover would wipe out humanity?



Re: [singularity] "Friendly" question...

2007-05-26 Thread Joshua Fox

Jonathan, although I certainly can't speak for either Ben or the SIAI, note
that the two items you reference were written by different people -- Ben
Goertzel and Eliezer Yudkowsky -- and they can't be expected to be in
agreement. Neither of these items, certainly not Goertzel's, can be
considered "official."

Ben only recently joined the SIAI as Director of Research along with a
number of advisors and researchers; I'm certainly keeping my eyes open to
learn about the mode of collaboration between these smart people.

Mason, on the question of maximizing entropy, I think that Samantha
summarized Yudkowsky's Friendliness insights well: Eliezer has decided that
he wants to preserve the best of human values, since we don't have anything
better for now. A super-intelligence may help us refine these values, but
for now the most important thing -- even more important than pinning down
the exact form of utopia we want -- is to ensure that the soon-to-come AGI
does not wipe out humanity as a side effect.

When you understand the following, you will have surpassed most AI experts
in understanding the risks: If the first AGI is given or decides to try for
almost any goal, including a simple "harmless" goal like being as good as
possible at proving theorems, then humanity will be wiped out by accident.


Joshua



2007/5/25, Jonathan H. Hinck <[EMAIL PROTECTED]>:


Dear Dr. Goertzel,

I realize that you are not the overseer for the Singularity General
Discussion  group, but, if you are reading this, I have a question I am
hoping you would answer if you have the time and inclination:

How would you reconcile your call for an "A.I. Manhattan Project" (the
article for which you posted at 
http://www.kurzweilai.net/meme/frame.html?main=memelist.html?m=3%23701
) with the concerns your institute has expressed regarding the potential
pitfalls of governmental regulation (at 
http://www.singinst.org/upload/CFAI/policy.html#policies
)?

--
Jonathan Hinck
Serials & Archives Librarian
Viterbo University Library
900 Viterbo Drive
La Crosse, WI  54601



[singularity] AI dangers

2007-05-24 Thread Joshua Fox

Not-too-sophisticated discussion of the dangers of robotics, with a
reference to Ben Goertzel:

http://news.thomasnet.com/IMT/archives/2007/05/bots_with_brains_robot_laws_ethics_sinister_debate.html



Re: [singularity] Replicable Patterns of Breakthroughs

2007-05-21 Thread Joshua Fox

For what it's worth, the development of the atomic bomb is an often-made
comparison
http://www.mail-archive.com/singularity@v2.listbox.com/msg00585.html , as is
the invention of powered heavier-than-air flight. I posted on it here.

Historiographers of science have various theories, among them  Kuhn's idea
of 'paradigm shift'
http://www.mail-archive.com/singularity@v2.listbox.com/msg00553.html

Joshua


2007/5/18, Mark H. Herman <[EMAIL PROTECTED] >:


I imagine the following may have already been considered,
nevertheless: It would seem constructive to undertake an
analysis of breakthroughs in various fields (e.g. engineering,
art, chemistry) to search for patterns that might be
replicable. A general example of what I mean by a "pattern"
would be, "thesis, antithesis, synthesis." Examples of
patterns that such an analysis might uncover could include
patterns of formal logic, the novel application in one field of
a structure established in another field, the retrieval of
insights from historical theories or practices that were once
competitive, but were found inadequate and long forgotten,
etc. An analysis of such patterns and the identification of
meta-patterns would seem to require broad familiarity with the
various disciplines in which the breakthroughs occurred;
however, the work of identifying the patterns of specific
breakthroughs, which might require extensive and deep knowledge
in the respective fields, could be divided amongst various
experts of various fields. Perhaps something like this would
be worth adding to the agenda of the AI Impact Initiative or
some similar interdisciplinary body.

-Mark






Re: [singularity] Help get the 400k SIAI matching challenge on DIGG's front page

2007-05-13 Thread Joshua Fox

Private companies like Google are, as far as I am aware, spending  exactly $0 
on AGI. ...


Yes. But note that Peter Norvig, Google's director of research and a
top guru in narrow AI, is scheduled to speak at SIAI's next
Singularity Summit.

This does not mean that he agrees with everything that SIAI folks say,
but it does mean that he takes the organization and its ideas
seriously.

To me, this, together with other seemingly AGI-positive statements out
of Google, mean that Google does not unanimously dismiss AGI as
nonsense, and may even understand its value.

And so, if pre-superhuman AGI has any commercial value, and if Google is
willing to invest in speculative ventures -- both of which seem to be
true -- I would be surprised if Google did not soon fund AGI research.

Joshua



[singularity] Why do you think your AGI design will work?

2007-04-24 Thread Joshua Fox

Ben has confidently stated that he believes Novamente will work
(http://www.kurzweilai.net/meme/frame.html?m=3 and
others).

AGI builders, what evidence do you have that your design will work?

This is an oft-repeated question, but I'd like to focus on two possible
bases for saying that an invention will work before it does.
1. A clear, simple, mathematical theory, verified by experiment. The
experiments can be "pure science" rather than technology tests.
2. Functional tests of component parts or of crude prototypes.

Maybe I am missing something in the articles I have read, but do
contemporary AGI builders have a verified theory and/or verified components
and prototypes?

Joshua


Re: [singularity] Multiverse and Alien Singularities

2007-03-29 Thread Joshua Fox יהושע פוקס

> > John Ku
> > ... motivations (e.g. creativity, pursuing knowledge for its own sake,

...

> Matt Mahoney
> ...Such a narrow view...
> Chuck Esterbrook
> ... egocentric views of mankind...



"Creativity, pursuing knowledge for its own sake," are considered primary
motivations only by a narrow fraction of humanity which happens to include
many of the Singularitarians and overeducated intellectual types who
populate this list. (Yes, myself included.)



Re: [singularity] The establishment line on AGI

2007-03-28 Thread Joshua Fox

On the subject of the adoption, or not, of radical new directions, I'm
now reading Richard Rhodes' _The Making of the Atomic Bomb_ through
Singularitarian eyes.

It is instructive to consider the analogies--and crucial
differences--between the development of nuclear technology in the
first half of the twentieth century, and AGI in the first half of the
twenty-first.

References:
   http://www.sl4.org/archive/0510/12520.html
   http://www.sl4.org/archive/0605/14689.html
   http://www.mail-archive.com/agi@v2.listbox.com/msg03251.html
   http://www.singinst.org/ourresearch/publications/cognitive-biases.pdf
   (page 20)

Similarities
- From the very beginning, science and science fiction fed into each
other in predicting the Utopia of unlimited energy and the hell of world
destruction which nuclear energy could bring. Some understood the
existential risks and rewards; others just saw nuclear energy as a
science or a technology.
- The boundaries of physics arouse quasi-spiritual feelings more than
most fields of science.
- Even though from the first there was an understanding of the risks
of nuclear science, Szilard had to run around in late 1938 imploring
everyone not to spill the secrets. He had to pursue wealthy sponsors
for private scientific funding.
- Leading scientists, including some who were without doubt
intellectually courageous, did not believe that the chain-reactions
were possible.
- The development went slowly, but the chain reaction went foom!
- It seems to me that there was ongoing acceleration in scientific
development on the way to the atom bomb.
- Nuclear physics entered its home stretch to the bomb with a rising
sense of urgency, just as a world war was brewing. Of course, no one
knows if the struggle against today's extremist totalitarian
anti-Semitic ideology is the precursor to a full-blown world war.

Differences
- Nuclear physics was the rock-star science of its time. Einstein was
_the_ icon of the genius scientist. AGI doesn't seem to have anyone
with this top pop-icon status. But perhaps Pinker, in cognitive
science, is close.
- Although some very intelligent people go into computing and AGI,  it
doesn't have the automatic draw for geniuses that cutting-edge physics
did then and still does today.
- Nuclear physics had the full support and funding of the scientific
establishment.
- In nuclear physics, there was never a roller-coaster of excessive
promise followed by temporary disappointment, slowing down subsequent
work, comparable to "AI Winter." (Dare I say "Nuclear Winter"?)
- The Singularity's promise and peril is much greater than that of
nuclear energy.
- Nuclear energy never fulfilled its Utopian promise. Let's hope that AGI does.
- Nuclear-reaction skeptics could reasonably have said that fission
chain reactions are seen nowhere on earth and would require extremely
rare materials and massive hardware, and that therefore foreseeable
technology might not be able to achieve physical phenomena known only in
astronomy. Intelligence, on the other hand, exists in 6.5 billion meat
blobs on earth.

Joshua



Re: [singularity] The establishment line on AGI

2007-03-21 Thread Joshua Fox

Richard, thanks for the reference to Kuhn.  I was aware of his "paradigm
shift" concepts, although I have not yet read his writings.

Pondering this question, I came across an example in a field which I
studied.

To me, the fact that it is in another field provides some distance which
highlights the AGI situation. I hope that the following is not too far
afield for this list, nor too obvious to those knowledgeable in the history
of science. Maybe it's even interesting!

In diachronic linguistics, most scholars try to reconstruct an unattested
parent language only a thousand years or so before the earliest attestations
of language families, as for example, Proto-Indo-European or Proto-Semitic.
However, some eminent professors, such as Aron Dolgopolsky, claim to be able
to reconstruct far earlier, to a proto-language which is the ancestor
several steps back from the more commonly reconstructed proto-languages.

What do the respected professors in the field think about these rebels,
called Nostraticists? (My views are identical to the establishment's; I
claim no fearless independent-mindedness.)

They think that

- The Nostraticists are NOT  pseudo-scientists. Their techniques are the
same as mainstream scholars, but they push the evidence to more speculative
conclusions. They do work that is on the edge of weird, but only on the
edge.
- The Nostraticists' reconstructions cannot be called provably wrong;
rather, they do not have the evidence for well-founded conclusions. They are
triangulating too far using noisy, sparse data and uncertain lines of
reconstruction.
- The Nostraticists may well be proven right. Indeed, similar hypotheses
about very large language groups gained acceptance by the consensus.
- Future methodologies may help, but we have absolutely no idea what those
methodologies might be.
- The Nostraticists are NOT Young Turks. In fact, their school had a small
golden age decades ago -- and the leading Nostraticists were young, then --
but Dolgopolsky and others of the school are now old, retired, or deceased.
There may be a few new scholars in the field, but it didn't take off.
- A grad student who is turned on by the Nostratic approach can seek out a
professor and study with them. This is very rare in practice. Such a student
will NOT automatically be considered an academic leper, but may be left at
the margins of the academic consensus.
- The Nostraticists' ideas excite a sense of wonder, even in the most jaded.
- Requests for speculation on the Nostraticists' far-out ideas will not be
met with energetic visions of future possibilities, but with a cautious,
lukewarm, "we just don't know."  A layperson might be disappointed at what
appears to be a total lack of imagination and intellectual courage.

The comparison to AGI is by no means exact, but it does highlight the
scholarly conservatism and the avoidance of speculative -- even though
scientifically based and potentially very fruitful -- paths of inquiry.

Joshua



Re: [singularity] The establishment line on AGI

2007-03-20 Thread Joshua Fox

The situation in AGI seems akin to that in space science, where many
well-trained researchers in the field tell us that there is no future in
human space flight, and that we should limit our dreams to unmanned
exploration.

Can anyone suggest historical examples of fields where almost none of the
scientific or engineering establishment would accept the possibility of a
breakthrough, which nonetheless soon came?

What was the situation in nuclear physics in 1935, before the great advances
towards the Manhattan Project, or in fluid physics and mechanical
engineering in 1895, before the Wright Brothers? Or can someone give some
other cases? I am not referring to the usual quotes from isolated skeptical
senior scientists, nor to dismissiveness from the lay population, but to a
situation where the entire field ignores an upcoming breakthrough.

And conversely, what is an area where an entire field recognized the
possibility of revolutionary change, which in fact came? General computing
of the last 60 years? Spaceflight engineering in 1953?

Joshua



2007/3/19, Shane Legg <[EMAIL PROTECTED]>:



Ben,

I think these things go in cycles.  AI had its time of big funding, but
that didn't produce much and so it stopped.  The impression I get with
string theory is that pressure is building up to cut it back unless it
comes up with better results.  At least in the case of string theory
they can produce lots of fancy mathematics which counts as a kind
of "evidence", or more accurately, it makes it count as "serious science".

With the Genome Project, at least, there was clearly a finite goal that would
eventually be achieved.  Funders like that because they feel confident
that what they are funding will eventually be done.  With AGI, well, a lot
of people still seem unsure if it is even possible, and if it is, then it
might happen in a few hundred years.  Funders don't like that story much.

Of course some of us believe otherwise...

Shane





[singularity] The establishment line on AGI

2007-03-19 Thread Joshua Fox

Singularitarians often accuse the AI establishment of a certain
closed-mindedness. I always suspected that this was the usual biased
accusation of rebels against the old guard.

Although I have nothing new to add, I'd like to give some confirmatory
evidence on this from one who is not involved in AGI research.

When I recently met a PhD in AI from one of the top three programs in
the world, I expected some wonderful nuggets of knowledge on the
future of AGI, if only as speculations, but I just heard the
establishment line as described by Kurzweil et al.: AI will continue
to solve narrow problems, always working in tandem with humans who
will handle the important parts of the tasks. There is no need, and no
future, for human-level AI. (I responded, by the way, that there is
obviously a demand for human-level intelligence, given the salaries
that we knowledge-workers are paid.)

I was quite surprised, even though I had been prepped for exactly this.

Joshua



Re: [singularity] Philanthropy & Singularity

2007-03-18 Thread Joshua Fox

> The issue at hand is that AGI has a very high failure rate,


So do efforts against poverty, human rights abuses, diseases, etc.
That doesn't stop the philanthropists.


> If you were Bill Gates, would you want to give your competitors enough
> funding to develop AGI and then be worth ten Microsofts?


Funding can come in the form of philanthropy or for-profit investment.
If Bill Gates (or anyone else) wants to get richer, and if he believes
that AGI is worth ten Microsofts, then the logical step is to invest
in AGI, not to ignore or fight it.



Re: [singularity] Philanthropy & Singularity

2007-03-16 Thread Joshua Fox

> Does anyone know what Bill Gates thinks about the singularity?
> (Or for that matter, other great philanthropists.)


Yes, I too have wondered why Singularity efforts have not received more
funding. There are a lot of very rich high-tech zillionaires who want to
give to charity but literally don't know what to give to.  Take a look at
Steve Kirsch's site, including this
http://www.kirschfoundation.org/who/reflection_4.html . Kirsch seems like an
outstanding example of well-considered Silicon Valley philanthropy --
http://www.kirschfoundation.org/done/accomplish.html -- ask yourself if he
is a potential Singularity donor.

These people don't seem part of the well-oiled machine which funnels
donations from "old money" to hospitals, museums, etc. Some pour huge
amounts into space flight, including efforts which do not seem commercially
viable.

Why have the Singularity and AGI not triggered such interest? Thiel's
donations to SIAI seem like the exception which highlights the rule.

Joshua



Re: [singularity] AGI+LE Poll Question (results)

2007-03-13 Thread Joshua Fox

My two cents on the poll questions:


> My Life Extension motivation is [what]% of the reason
> why I'm interested in AGI+Singularity


2%. Not that I want to get old and/or die, but the Life Extension
concept just doesn't rev my engines.


> I'm interested in AGI+Singularity because ...


I want to see sentient life redeemed from its suffering and
transcending to a higher level. For some reason, the SL4 stuff feels
existentially important to me, whereas SL3 stuff like Life Extension
feels no more than just interesting.

Joshua



Re: [singularity] Vinge & Goerzel = Uplift Academy's Good Ancestor Principle Workshop 2007

2007-02-19 Thread Joshua Fox

Any comments on this: http://news.com.com/2100-11395_3-6160372.html

Google has been mentioned in the context of AGI, simply because they have
money, parallel processing power, excellent people, an orientation towards
technological innovation, and important narrow AI successes and research
goals. Do Page's words mean that Google is seriously working towards AGI? If
so, does anyone know the people involved? Do they have a chance and do they
understand the need for Friendliness?

Also: Vinge's notes on his Long Now Talk, "What If the Singularity Does NOT
Happen"  are at   http://www-rohan.sdsu.edu/faculty/vinge/longnow/index.htm

I'm delighted to see counter-Singularity analysis from a respected
Singularity thinker. This further reassurance that the flip-side is
being considered deepens my belief in the pro-Singularity arguments.

Joshua



[singularity] BBC on AI

2006-12-28 Thread Joshua Fox

BBC has a piece on the future of Strong AI, with Irving
Wladawsky-Berger, Noel Sharkey, and Ray Kurzweil. (It's in their
Business Daily show, of all places!)  It's pretty good as such things
go. It is available at the moment at
http://www.bbc.co.uk/worldservice/programmes/business_daily.shtml.

Is this just my impression, or does the BBC have more singularity-related
material, including not-too-sensationalized pieces, than other
mainstream media? If so, I wonder who's the motivating force behind
this at the BBC.

Joshua



Re: Re: Re: [singularity] Ten years to the Singularity ??

2006-12-20 Thread Joshua Fox

Ben,

If I am beating a dead horse, please feel free to ignore this, but I'm
imagining a prototype that shows glimmerings of AGI. Such a system, though
not useful or commercially viable, would sometimes act in interesting, even
creepy, ways. It might be inconsistent and buggy, and work in a limited
domain.

This sets a low barrier, since existing systems occasionally meet this
description. The key difference is that the hypothesized prototype would
have an AGI engine under it and would rapidly improve.

Joshua





> According to the approach I have charted out (the only one I understand),
> the true path to AGI does not really involve commercially valuable
> intermediate stages.  This is for reasons similar to the reasons that
> babies are not very economically useful.
>
> ... But my best guess is that this is an illusion.  IMO by
> far the best path to a true AGI is by building an artificial baby and
> educating it and incrementally improving it, and by its very nature
> this path does not lead to incremental commercially viable results.





Re: Re: [singularity] Ten years to the Singularity ??

2006-12-12 Thread Joshua Fox

Ben,

The question which I would ask, were I a potential funder, is: "How soon can I
see something that, though not true AGI, makes me say 'Wow, I've never seen
anything like that before'?"

I appreciate that this is an incredibly challenging project, and that in
some cases investors will accept a ten-year horizon, but as a software
professional I'd say that a working intermediate system, showing real core
functionality, is critical to keeping a project focused and on track.

You mention "intermediate steps to AI", but the question is whether these
are narrow-AI applications (the bane of AGI projects) or some sort of
(incomplete) AGI.

Yours,

Joshua

2006/12/11, Ben Goertzel <[EMAIL PROTECTED]>:


Hi Joshua,

Thanks for the comments

Indeed, the creation of a thinking machine is not a typical VC type
project.  I know a few VC's personally and am well aware of their way
of thinking and the way their businesses operate.  There is a lot of
"technology risk" in the creation of an AGI, as compared to the sorts
of projects that VC's are typically interested in funding today.  There
is just no getting around this fact.  From a typical VC perspective,
building a thinking machine is a project with too much risk and too
much schedule uncertainty in spite of the obviously huge payoff upon
success.

Of course, it's always possible a rule-breaking VC could come along
with an interest in AGI.  VC's have funded nanotech projects with a
10+ year timescale to product, for example.

Currently our fundraising focus is on:

a) transhumanist angel investors interested in funding the creation of
true AGI

b) seeking VC money with a view toward funding the rapid construction
and monetization of software products that are
-- based on components of our AGI codebase
-- incremental steps toward AGI.

With regard to b, we are currently working with a business consultant
to formulate a professional "investor toolkit" to present to
interested VC's.

Unfortunately, US government grant funding for out-of-the-mainstream
AGI projects is very hard to come by these days.  OTOH, the Chinese
government has expressed some interest in Novamente, but that funding
source has some serious issues involved with it, needless to say...

-- Ben G


On 12/11/06, Joshua Fox < [EMAIL PROTECTED]> wrote:
>
> Ben,
>
> I saw the video.  It's wonderful to see this direct aim at the goal of the
> positive Singularity.
>
> If I could comment from the perspective of the software industry, though
> without expertise in the problem space, I'd say that there are some phrases
> in there which would make me, were I a VC, suspicious. (Of course VC's
> aren't the direct audience, but ultimately someone has to provide the
> funding you allude to.)
>
> When a visionary says that he requires more funding and ten years, this
> often indicates an unfocused project that will never get on-track. In
> software projects it is essential to aim for real results, including a beta
> within a year and multiple added-value-providing versions within
> approximately 3 years. I think that this is not just investor impatience --
> experience shows that software projects planned for a much longer schedule
> tend to get off-focus.
>
> I know that you already realize this, and that you do have the focus; you
> mention your plans, which I assume include meaningful intermediate
> achievements in this incredibly challenging and extraordinary task, but
> this is the impression which comes across in the talk.
>
> Yours,
>
> Joshua
>
>
>
> 2006/12/11, Ben Goertzel <[EMAIL PROTECTED]>:
> >
> > Hi,
> >
> > For anyone who is curious about the talk "Ten Years to the Singularity
> > (if we Really Really Try)" that I gave at Transvision 2006 last
> > summer, I have finally gotten around to putting the text of the speech
> > online:
> >
> > http://www.goertzel.org/papers/tenyears.htm
> >
> > The video presentation has been online for a while
> >
> > video.google.com/videoplay?docid=1615014803486086198
> >
> > (alas, the talking is a bit slow in that one, but that's because the
> > audience was in Finland and mostly spoke English as a second
> > language.)  But the text may be preferable to those who, like me, hate
> > watching long videos of people blabbering ;-)
> >
> > Questions, comments, arguments and insults (preferably clever ones)
> > welcome...
> >
> > -- Ben
> >

Re: [singularity] Ten years to the Singularity ??

2006-12-11 Thread Joshua Fox

Ben,

I saw the video.  It's wonderful to see this direct aim at the goal of the
positive Singularity.

If I could comment from the perspective of the software industry, though
without expertise in the problem space, I'd say that there are some phrases
in there which would make me, were I a VC, suspicious. (Of course VC's
aren't the direct audience, but ultimately someone has to provide the
funding you allude to.)

When a visionary says that he requires more funding and ten years, this
often indicates an unfocused project that will never get on-track. In
software projects it is essential to aim for real results, including a beta
within a year and multiple added-value-providing versions within
approximately 3 years. I think that this is not just investor impatience --
experience shows that software projects planned for a much longer schedule
tend to get off-focus.

I know that you already realize this, and that you do have the focus; you
mention your plans, which I assume include meaningful intermediate
achievements in this incredibly challenging and extraordinary task, but this
is the impression which comes across in the talk.

Yours,

Joshua



2006/12/11, Ben Goertzel <[EMAIL PROTECTED]>:


Hi,

For anyone who is curious about the talk "Ten Years to the Singularity
(if we Really Really Try)" that I gave at Transvision 2006 last
summer, I have finally gotten around to putting the text of the speech
online:

http://www.goertzel.org/papers/tenyears.htm

The video presentation has been online for a while

video.google.com/videoplay?docid=1615014803486086198

(alas, the talking is a bit slow in that one, but that's because the
audience was in Finland and mostly spoke English as a second
language.)  But the text may be preferable to those who, like me, hate
watching long videos of people blabbering ;-)

Questions, comments, arguments and insults (preferably clever ones)
welcome...

-- Ben






Re: [singularity] Counter-argument

2006-10-10 Thread Joshua Fox
Thanks for all the input. The best answer that I can contribute to the
original question, based on some of your answers, is as follows: As the
Skeptic article points out, the challenges in achieving true AGI are
enormous, suggesting that AGI will follow the course of space flight and
fusion power, among many other technologies which seemed so promising 50
years ago -- no one would have predicted that they would make so little
progress.

[This is an argument by analogy, and therefore weak. Moreover, Kurzweil
would answer by saying that the Law of Accelerating Returns applies to
information technology, and not to other technologies.]

The Skeptic article was great (though I can also give counter-arguments to
many of its points). I really want to see more of this stuff.

So I'll still have to leave this challenge open. Until I can get more
material on "why a Singularity probably won't happen," I'll always have the
nagging suspicion that most discussion of the Singularity is preaching to
the converted.

Joshua





[singularity] Counter-argument

2006-10-04 Thread Joshua Fox
Could I offer Singularity-list readers this intellectual challenge: Give an
argument supporting the thesis "Any sort of Singularity is very unlikely to
occur in this century."

Even if you don't actually believe the point, consider it a
debate-club-style challenge. If there is already something on the web
somewhere, could you please point me to it? I've been eager for this piece
ever since I learned of the Singularity concept. I know of the "objections"
chapter in Kurzweil's The Singularity is Near, the relevant parts of Vinge's
seminal essay, as well as the ideas of Lanier, Huebner, and a few others,
but in all the millions of words out there I can't remember seeing a
well-reasoned article with the above claim as its major thesis. (Note, I'm
looking for "why the Singularity won't happen" rather than "why the
Singularity is a bad idea" or "why technology is not accelerating.")

Joshua
