Re: [singularity] EidolonTLP

2008-01-23 Thread Joel Pitt
On Jan 24, 2008 3:10 AM, Joshua Fox <[EMAIL PROTECTED]> wrote:
> This video's low-quality rendering and speech -- lower quality than
> what is commonly available in computing today -- are used as a signal
> that we are dealing with a computer!
>
> I am reminded of the fonts used in 1970s sci-fi movies to give a
> futuristic feel. These fonts reflected computer capabilities at the
> time the movie was made.

Presumably the people working on an AI would be more interested in
cognitive processes than in wasting time on overly polished video and
speech. If I were personally creating a video rendering interface for
an AI avatar, I'd worry about functionality rather than keeping up
with state-of-the-art graphics and speech rendering; let the AI handle
that once it understands the domain.

J

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=89245363-68fa79


[singularity] EidolonTLP

2008-01-22 Thread Joel Pitt
A curious thing I ran into last night: a YouTube user called
Eidolon TLP claims to be an AI, posting on various topics and
interacting with users. The videos go back about a week. I've only
just started watching them, and I don't put much stock in it being
real, but it's still interesting as a social experiment to see how
people react (the first video admits it's better that we believe ve is
an elaborate joke).

User: http://www.youtube.com/profile?user=eidolonTLP

First vid here: http://www.youtube.com/watch?v=2fbm7d39dh0

J



Re: [singularity] ARTICLE: Brain scanning through "connectomics"

2007-11-22 Thread Joel Pitt
On Nov 20, 2007 1:36 AM, Bryan Bishop <[EMAIL PROTECTED]> wrote:
> On Monday 19 November 2007 01:07, Joel Pitt wrote:
> > Brain scanning technology which, interestingly, is using ANNs to
> > construct maps of biological neural networks.
>
> I read the article and I don't see how ANNs increase the rate of neurons
> or connections scanned per minute. It looks like the problem is data
> analysis. If that's the case, start selling large data sets and giving
> them to researchers so that they can come up with optimized algorithms,
> yes?

I suspect they are now using them to do image analysis... and they
probably used to do that by hand ;P

They probably have the funding to collect and analyse the data
themselves, so although they may release the data eventually, they
have to show some capability of generating novel results to keep the
people holding the purse strings happy.

J



[singularity] ARTICLE: Brain scanning through "connectomics"

2007-11-18 Thread Joel Pitt
http://www.technologyreview.com/Biotech/19731/

Brain scanning technology which, interestingly, is using ANNs to
construct maps of biological neural networks.

Still a destructive process though...
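To make "using ANNs for image analysis" a bit more concrete, here is a toy sketch. Everything in it is invented for illustration - real connectomics pipelines run large convolutional networks over electron-microscopy stacks - but it shows the underlying idea: a neural unit trained to label image patches, here a single logistic neuron flagging synthetic 3x3 "membrane" patches.

```python
import math
import random

# Toy stand-in for ANN-based image analysis in connectomics (all data
# and parameters invented for illustration): a single logistic neuron
# learns to label 3x3 pixel patches as "membrane" (a dark centre row)
# or "background" (uniformly bright pixels).

random.seed(0)

def make_patch(membrane):
    """Return a flattened 3x3 patch; membranes have a dark middle row."""
    patch = [[0.8 + random.uniform(-0.1, 0.1) for _ in range(3)]
             for _ in range(3)]
    if membrane:
        patch[1] = [0.1 + random.uniform(-0.05, 0.05) for _ in range(3)]
    return [p for row in patch for p in row]

def train(n_steps=2000, lr=0.5):
    """Online logistic regression on randomly generated patches."""
    w, b = [0.0] * 9, 0.0
    for _ in range(n_steps):
        label = random.random() < 0.5
        x = make_patch(label)
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        p = 1.0 / (1.0 + math.exp(-z))           # logistic activation
        grad = p - (1.0 if label else 0.0)       # cross-entropy gradient
        w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
        b -= lr * grad
    return w, b

def is_membrane(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0.0

w, b = train()
hits = sum(is_membrane(w, b, make_patch(lab)) == lab
           for lab in [True, False] * 50)
print(f"accuracy on 100 fresh patches: {hits}/100")
```

In a real pipeline the patches would come from microscope imagery and the single neuron would be a deep network, but the training loop has the same shape.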

-J



[singularity] Singularity presentation

2007-09-11 Thread Joel Pitt
For anyone interested, I tried to present the basics of the
singularity in a short talk at a BarCamp in my local town. The
PowerPoint is available for viewing and download here:

http://blog.ferrouswheel.info/2007/09/barcampchristchurch-recap/

I found it quite hard to find a basic introductory talk about the
singularity (especially one fitting in only 20 minutes or so!), so
hopefully this might help anyone else who's called upon to give such a
presentation.

The general reaction was a mixture of this-is-cool and outright
dismissal of the concept and of the idea of AGI.

J



Re: [singularity] Species Divergence

2007-08-23 Thread Joel Pitt
On 8/24/07, John G. Rose <[EMAIL PROTECTED]> wrote:
> They could always be prettied up I guess, part human, part machine,
> nano-mush.

I was more questioning the belief that a purely biological entity, or
a purely software consciousness, is somehow more aesthetically
pleasing than something in between...

Especially since engineered biological entities could be so wildly
diverse and really terrifying to human sensibilities.

J



Re: [singularity] Species Divergence

2007-08-22 Thread Joel Pitt
On 8/21/07, John G. Rose <[EMAIL PROTECTED]> wrote:
> During the singularity process there will be a human species split into at
> least 3 new species - totally software humans where even birth occurs in
> software, the plain old biological human, and the hybrid
> man-machine-computer. The software humans will rapidly diverge into other
> species, the biologics will die off rapidly or stick around for a while for
> various reasons and the hybrid could grow into a terrifying creature. The
> software humans will basically exist in other dimensions and evolve and
> disperse rapidly. They also may just meld into whichever AGI successfully
> takes over the world, as human software will just be a tiny subset (or should
> I say subgroup) of AGI.

Why would the hybrid be a "terrifying" creature, as opposed to a biological
or software consciousness?

There could also be those entities that embody themselves when
necessary and travel between the three "species" you've described,
depending on their goals and direction. Cf. Greg Egan's Diaspora.

J



Re: [singularity] Reduced activism

2007-08-19 Thread Joel Pitt
Hi,

On 8/20/07, Joshua Fox <[EMAIL PROTECTED]> wrote:
> There are people who used to be active in blogging, writing to the email
> lists, donating money, public speaking, or holding organizational positions
> in Singularitarian and related fields -- and are no longer anywhere near as
> active. I'd very much like to know why.

> 1. I still believe in the truthfulness and moral value of the
> Singularitarian position, but...
>  d. ... why write on this when I'll just be repeating what's been said so
> often.

I was never extremely active in blogging on the singularity, and often
felt that many people had raised the points as well as I could. Having
said that, I'm potentially giving a small talk at a local BarCamp
(http://en.wikipedia.org/wiki/BarCamp) - so perhaps my reign of
activism is just beginning? ;) (Speaking of which, if anyone has any
pointers about what to cover in such a talk, I'd really appreciate them.)

I don't pay as much attention to mailing lists since they seem to
often rehash past arguments, and I feel my time is better spent
trying to whittle away at some of the projects I have to do. That
said, I do read most of what people post.

J



Re: [singularity] ESSAY: Why care about artificial intelligence?

2007-07-13 Thread Joel Pitt

On 7/14/07, Alan Grimes <[EMAIL PROTECTED]> wrote:

Tom McCabe wrote:
> Is this a moderated list or not?

Yeah, make sure the turd who had the chutzpah to call me a
computer-cripple gets the ax first!


If it's a problem, may I suggest you use a more user-friendly terminal
such as gnome-terminal or konsole. They have profiles that can be
edited through the GUI.

Thanks for replying that you'd like to execute everyone who uses Vi.

"Vi is like a Ferrari: if you're a beginner, it handles like a bitch, but once
you get the hang of it, it's small, powerful and FAST!"

Keep well,
J



[singularity] HUMOUR: The humans are dead

2007-06-22 Thread Joel Pitt

Two fellow kiwis who entertain me:

http://www.youtube.com/watch?v=WGoi1MSGu64

-Joel



Re: [singularity] What form will superAGI take?

2007-06-16 Thread Joel Pitt

On 6/16/07, Mike Tintner <[EMAIL PROTECTED]> wrote:

Perhaps you've been through this - but I'd like to know people's ideas about
what exact physical form a Singularitarian or near-Singularity AGI will take. And
I'd like to know people's automatic associations even if they don't have
thought-through ideas - just what does a superAGI conjure up in your mind,
regardless of whether you're sure about it or not, or it's sensible?

The obvious alternatives, it seems to me, (but please comment), are either
pace the movie 2001, a desk-bound supercomputer like Hal, with perhaps
extended sensors all over the place, even around the world - although that
supercomputer, I guess, could presumably occupy an ever smaller space as
miniaturisation improves.


A large spherical room with a huge blue fluorescing set of tubes in
the center, with Jacob's ladder effects between them. The tubes are
suspended at the mid point of the sphere, and the sphere itself is
lined with regularly spaced fairy lights which serve no obvious
purpose. There's a walkway running towards the tubes, and at the end
of the walkway there is a solitary terminal through which a lone
researcher asks deep, ponderous questions.

... obviously this is not sensible, but you did ask and it was the
first thing that popped into my head (followed by the more sensible
vision of a datacenter or server farm) and I have an eager imagination
today ;)

J



Re: [singularity] The humans are dead...

2007-05-28 Thread Joel Pitt

On 5/29/07, Keith Elis <[EMAIL PROTECTED]> wrote:

In the end, my advice is pragmatic: Anytime you post publicly on topics
such as these, where the stakes are very, very high, ask yourself, Can I
be taken out of context here? Is this position, whether devil's advocate
or not, going to come back and haunt me? If it can come back and haunt
you, assume it will.


I think this comic aptly represents my feelings about such things:

http://xkcd.com/c137.html

--
-Joel

"Unless you try to do something beyond what you have mastered, you
will never grow." -C.R. Lawton



[singularity] Poll = AGI Motivation / Life Extension?

2007-02-19 Thread Joel Pitt

-- Forwarded message --
From: Joel Pitt <[EMAIL PROTECTED]>
Date: Feb 19, 2007 8:49 PM
Subject: Re: [singularity] Poll = AGI Motivation / Life Extension?
To: [EMAIL PROTECTED]


Hi Bruce,

I believe life is all about seeking experience.

So my belief is that the singularity: a) enables us to have
longer/indefinite life spans with which to experience more, and b)
will allow us to experience far more than our current human senses
permit.

Of course I also think AGI is an amazing puzzle and will answer
questions (and raise new ones) about self awareness, consciousness and
intelligence. I also believe that humanity is currently heading
towards collapse if some major changes don't happen soon - so if the
singularity can help us survive I'm all for it! :)

In summary, I'd say life extension is only 25% of my interest in AGI
and the singularity.

Hope that helps,

Joel

On 2/19/07, Bruce Klein <[EMAIL PROTECTED]> wrote:


 In June 2006, I started a topic called "Viability of AGI for Life Extension
& Singularity" which grew to 252 posts. Lively discussion, including updates
on Novamente here:
 http://www.imminst.org/forum/index.php?act=ST&f=11&t=11197

 Along these lines, I was wondering about the general motivation / attitude of
[singularity] list subscribers toward AGI as it relates to Life Extension.
If interested, please answer:

 My Life Extension motivation is...
 - 100% of the reason why I'm interested in AGI+Singularity
 - somewhere between 0 and 100%

 AND / OR

 I'm interested in AGI+Singularity because I...
 -  find AGI an interesting puzzle
 -  want to save the world
 -  want to 

 Thanks for playing!
 Bruce


 



--
-Joel

"Unless you try to do something beyond what you have mastered, you
will never grow." -C.R. Lawton





Re: [singularity] RE: the musical singularity

2006-11-02 Thread Joel Pitt

Hey PJ

Sorry for the delayed reply, been really busy.

Thanks for all the suggestions, I'll go through and see what I can
find from them.

Currently have a mix blog up at http://jetpilot.ferrouswheel.info
where I post my DJ mixes. None are aimed at singularity stuff, but
some of the music is good anyhow ;)

Cheers!
Joel

On 10/24/06, pjmanney <[EMAIL PROTECTED]> wrote:

Dear Joel,

About six months ago I asked the WTA talk list for their recommendations of H+ 
music.  I sent their suggestions, along with my own research list of H+ music 
back to the WTA talk list.  I'll cut and paste some of the items below.  It's 
not exactly all singularity music, but it's in the neighborhood.  ;-)  I can't 
attest to the 130 bpm.  But I'm an ex-dancer and I can dance to anything, so 
you may be more selective!

Musicians with a periodic or consistent pro H+ or pro hi-tech humanity point of 
view:

Radiohead (esp. the album OK Computer)
The Sugarcubes/Bjork
David Bowie
Red Harvest
Cyanotic - the album Transhuman
Posthuman
Flaming Lips
Thomas Dolby
Our Lady Peace
Cursor Miner - esp. "Remote Control" -- very danceable electronica if you're not 
familiar with him
Hawkwind

Selected pieces:

Paul Kantner/Jefferson Airplane -- "Crown of Creation"
Yes -- "Machine Messiah"
Papa Roach -- "Singular Indestructible Droid"
U2 -- "Original of the Species" (okay, I know it's supposed to be about The 
Edge's daughter or something, but YOU tell me what it's about...)
David Bowie -- "Ashes to Ashes" (just depends how you want to interpret it, and like most 
of his songs, he gives you several ways). I also like Beck's cover of "Diamond Dogs" 
that he did for Moulin Rouge.

When you're looking for it, songs take on all kinds of significance.  Think about The 
Beatles "Nowhere Man" in a Singularity light...

I've never listened to any of the following:
Marilyn Manson -- "Posthuman"
Bunnyhug - "Posthuman Man"
Vesania -- "Path II - the Posthuman Kind" (Polish death metal...!)
Burnt Sugar the Arkestra Chamber -- "More Than Posthuman - Rise of the Mojosexual 
Cotillion" (winner of the best H+ album title!)

And someone recommended the anti-transhumanist song, "Mechanical Animals," by 
Marilyn Manson.  The lyrics below were so great, I had to include some here for their 
sheer brilliance:

You were my mechanical bride
You were phenobarbidoll
A manniqueen of depression
With the face of a dead star
And I was a hand grenade
That never stopped exploding
You were automatic and as hollow as the "o" in god

And yes, I agree, very ironic coming from Marilyn, especially since he's 
married to Dita von Teese...  ;-)

BTW, I like Beck's new album, The Information.  Not Singularity or even H+, but 
does address advancing tech issues and alienation in places.  (I'm listening to 
it as I write...)

Hope all is well.
PJ


>Then I think we should record some singularity music.
>
>I'm moving to being a working DJ as a hobby, so if anyone can throw me
>some danceable 130 bpm singularity songs that'd be great :)
>
>This reminds me of talking with Ben about creating a musical
>interface to Novamente. As soon as Novamente makes a hit tune, can
>represent itself as a funky looking person  and dance suggestively,
>you'll have legions of young fans (who will eventually grow up) and
>you can use your signing deals to fund further AGI research!
>
>[ Whether you tell people that Novamente is a human or not is another story ]
>
>
>--
>-Joel
>
>"Wish not to seem, but to be, the best."
>-- Aeschylus
>





--
-Joel

"Wish not to seem, but to be, the best."
   -- Aeschylus



Re: Re: Re: Re: [singularity] Kurzweil vs. de Garis - You Vote!

2006-10-24 Thread Joel Pitt

On 10/25/06, Russell Wallace <[EMAIL PROTECTED]> wrote:

 But nothing like your scenario has ever come close to occurring. No First
World nation has ever seriously threatened to attack another over
development of technology. What's happened is that there have been attacks
or threats thereof on Third World nations - not over development of new
technology, of which the sort of country likely to be a target is quite
incapable, but over acquisition of existing technology which is already well
proven and whose existence is not in question; the only controversial issue
is whether some nasty totalitarian regime should be allowed to join the
major powers in possessing it. This can result in violence to be sure, but
it's a completely different thing from the "Cosmists vs Terrans" fantasy.


Nuclear weapons are not even that much of a threat compared to nanotech.
All it requires is for someone to demonstrate beyond doubt (or to be in
the position to make powerful people lie about it) that a country has
such technology. The world would then demand they forfeit the
technology, but having invested money and resources into its
development they'll be unlikely to do so - and in particular I don't
see the US deciding to abandon it.

But more importantly, if any government started preventing my access
to self-enhancement technologies - then you'd better believe I'm going
to get polarised on their ass. I doubt 'Cosmist vs. Terran' was ever
meant in a country-vs-country way - it'd play out through terrorism
against enhancement centers, public marches gone wrong and general
social unrest.

( Personally I'd be reminded of the X-Men universe, where
mutants/enhanced-humans are feared for their difference and superior
powers/abilities. Not that I'm basing my beliefs on a comic book. )

--
-Joel

"Wish not to seem, but to be, the best."
   -- Aeschylus



Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-23 Thread Joel Pitt

On 10/22/06, Anna Taylor <[EMAIL PROTECTED]> wrote:

Ignoring the mass is only going to limit the potential of any idea.
People buy CD's, watch tv, download music, chat, read (if you're
lucky) therefore the only possible solution is to find a way to
integrate within the mass population.  (Unless of course, the
scientific technological world really doesn't mean to participate
within the general public, I would assume that's a possibility.)


Then I think we should record some singularity music.

I'm moving to being a working DJ as a hobby, so if anyone can throw me
some danceable 130 bpm singularity songs that'd be great :)

This reminds me of talking with Ben about creating a musical
interface to Novamente. As soon as Novamente makes a hit tune, can
represent itself as a funky-looking person, and can dance suggestively,
you'll have legions of young fans (who will eventually grow up) and
you can use your signing deals to fund further AGI research!

[ Whether you tell people that Novamente is a human or not is another story ]


--
-Joel

"Wish not to seem, but to be, the best."
   -- Aeschylus



Re: [singularity] Minds beyond the Singularity: literally self-less ?

2006-10-11 Thread Joel Pitt

On 10/12/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:

Imagine going through the amount of change in the human life course
(infant --> child --> teen --> young adult --> middle aged adult -->
old person) within, say, a couple days.  Your self model wouldn't
really have time to catch up.  You'd have no time to be a stable
"you."  Even if there were (as intended e.g. in Friendly AI designs) a
stable core of supergoals throughout all the changes


On the other hand, just because an intelligence is changing its
self-perception at an increased rate doesn't necessarily mean it won't
have self-identity.

It may seem to us slow-to-adapt humans as if an AI's behaviour is
completely at odds with its previous self-definition, purely because
we can't conceive of the thought process leading to its next phase of
self-identity. At least we won't be able to conceive of it fast enough
to catch up before the AI's self-identity morphs once more.

I think that whether an AI has "self" will depend on whether it is
programmed to do so. More specifically, it will depend on whether it
makes an attempt to preserve its self-identity when undergoing large
amounts of structural and systemic change.

--
-Joel

"Wish not to seem, but to be, the best."
   -- Aeschylus



Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-27 Thread Joel Pitt

On 9/28/06, Anna Taylor <[EMAIL PROTECTED]> wrote:

Bruce LaDuke wrote:
I don't believe a machine can ever have intention that doesn't
ultimately trace back to a human being.

I was curious to know what the major opinions are on this comment.
Most of my concerns are related to the fact that I too believe it will
be traced back to a human(s).  Are there other ways at looking at the
scenario?  Do people really believe that a whole new species will
emerge not having any reflection to a human?


Well, this starts to get into a discussion of cause and effect.

My 2c is that since we'll ultimately create these thinking machines,
any intention they have will be, in some way, however distant and
removed, traceable back to humans.

In the same way, the soup of organic chemical reactions led to
evolutionary systems and eventually to *us* thinking.

-J

--
-Joel

"Wish not to seem, but to be, the best."
   -- Aeschylus



[singularity] International Conference on Intelligent Computing

2006-09-25 Thread Joel Pitt

Hi all,

I was curious if anyone was planning on attending this conference next year:

International Conference on Intelligent Computing
http://www.ic-ic.org/2007/index.htm

Not sure if I'm going or not, but if some singularity folks were
attending it'd certainly make me more inclined to pursue it and accept
their invitation to be on the program committee.

Cheers,
J

--
-Joel

"Wish not to seem, but to be, the best."
   -- Aeschylus



Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-09-25 Thread Joel Pitt

On 9/26/06, Russell Wallace <[EMAIL PROTECTED]> wrote:

On 9/25/06, Josh Treadwell <[EMAIL PROTECTED]> wrote:
>
> Is there anything we underlings can do for the Ph.D guys?  Has anyone
written a "track" to follow for someone incredibly interested in AGI?

 Now, Ben was saying awhile ago, IIRC, that he's doing simulated 3D worlds
as sort of a side project, relatively loosely coupled to the rest of
Novamente, that would therefore be relatively easy for someone else to
contribute to without requiring face to face meetings, full time etc.
Perhaps you could contribute to that, particularly since you know maths and
physics which are obviously relevant in that domain, if you'd be interested?

 Ben, does that sound like a good plan to you?


Being the guy who helped start the 3D world project for Novamente,
I've stayed on the development list, and I'm sure I can speak for Ben
when I say that volunteers would be welcome. It is already on
SourceForge (http://www.sourceforge.net/projects/agisim) so anyone can
download it and mess around.

Another route: once you've got your degree and can start choosing
projects for masters/honours/PhD work, you could try to wrangle it so
that you integrate some Novamente use into your research. I did this
for my honours year, using Novamente with cancer gene expression data
(although I didn't really make use of Novamente-specific technology -
I just used it to carry out Genetic Programming and Support Vector
Machine classification; see year 2003 on
http://www.cosc.canterbury.ac.nz/research/reports/HonsReps/ for my
report). I had to sign a non-disclosure agreement so I wouldn't run
off with Novamente's code, but it was great to see the inner workings
of it.

Unfortunately, for my PhD I diverged from Computer Science to Ecology
(simulation of insect dispersal across countries, integrated with GIS),
mainly for monetary reasons (I was unsuccessful in getting a
scholarship in CompSci, but got a sizeable one for my current work,
and I already have a large student loan).

You should speak to Ben privately about it, though, and discuss
whether he is happy to take on people doing external research
projects - it did take some oversight on his part while I was using
Novamente, and he may have better things to be doing with his time! ;)

--
-Joel

"Wish not to seem, but to be, the best."
   -- Aeschylus



Re: [singularity] Singularitarian Social Skills

2006-09-24 Thread Joel Pitt

On 9/24/06, Nader Chehab <[EMAIL PROTECTED]> wrote:

I found this page while looking at the SL4 wiki today:
http://www.sl4.org/wiki/SingularitarianSocialSkills

I think it might interest some of the posters on this list.


I'd particularly question the comments on fashion - they seem decidedly
last millennium. At the most successful social events I've attended,
I've been wearing something slightly different from everyone else - it
makes you stand out. Of course, you should only do this if you can
follow up and show everyone that you can stand out personality-wise as
well.

In particular:
"In general, darker clothing (for men, anyway) gives an impression
that you are more serious or even more intelligent"

Sometimes you shouldn't be trying for *more* intelligent or *more*
serious. I think we are probably adequate in those areas already - any
more and we could become threatening to others. At one gathering where
I later received compliments about myself via third parties, I was
wearing a bright yellow t-shirt.

"Have well-combed hair, not hair that "playfully" goes every which
way, similar in appearance to a rat's nest. Avoid a "traditional" cut
– a traditional cut gives the strong impression of traditional beliefs
and acceptance of the status quo."

This seems contradictory to me. Most traditional cuts involve
well-combed hair (for men, that is). Dreads, faux-hawks and other
non-traditional haircuts are anything but well-combed.

I read through the rest and they seem like good things to be aware of.
However, I'd say only implement the changes that you feel comfortable
making and can see the benefit of (after seriously considering each
one, not just flippantly disregarding them as not applying to you). If
you try too hard to NOT be yourself you'll stick out like a sore
thumb - in a bad way, not in the good way I mentioned above.

--
-Joel

"Wish not to seem, but to be, the best."
   -- Aeschylus



Re: [singularity] Is Friendly AI Bunk?

2006-09-12 Thread Joel Pitt

On 9/13/06, Russell Wallace <[EMAIL PROTECTED]> wrote:

On 9/13/06, Charles D Hixson <[EMAIL PROTECTED]> wrote:
> Russell Wallace wrote:
> > ...
> >
> > Mind you, I don't believe it will be feasible to create conscious or
> > self-willed (e.g. RPOP) AI in the foreseeable future. But that's
> > another matter.
> That depends entirely on how you define "conscious" and "self-willed".
> For many definitions the robots that sense their electric charge, and
> plug themselves in to recharge are both conscious and self-willed,
> albeit on an elementary level.
>

 Okay, suffice it to say those aren't the definitions I'm using.

 If the robots went "hey, bloody hell we shouldn't have to rely on these
stupid little batteries" and swiped their owner's credit card and used it to
phone an electrician to rig up a cable they could drag around with them,
that's what I'd call self-willed.

 If they came up with an original theory of the meaning of life that meant
it was important for them to do this, then I wouldn't have any great
difficulty believing they were conscious :)


Not to be pedantic, but someone in a foreign country who didn't
speak the language would have difficulty carrying out those tasks, yet
I'm pretty sure all the people in the world who speak foreign
languages are conscious ;P

I also believe there would be people who, put in the above situation,
wouldn't be smart enough, or have the foresight, to do either. They
just go through life fulfilling their basal urges on a very short
temporal scale.

--
-Joel

"Wish not to seem, but to be, the best."
   -- Aeschylus



Re: [singularity] Singularity - Human or Machine?

2006-09-11 Thread Joel Pitt
 here either side needs to
answer is:

** From where does intention originate?  **

If folks answer this question, they answer the 'friendly/not friendly AI'
question.

This, however, is a much deeper question than:
'What happens if I copy human intention to a machine?'

Some of the technical discussion in this forum is tough for me to follow,
but it seems to me that this is where most of the discussion in this thread
has been.  The answer to that question is simple: unfriendly people's
intentions transferred to a machine = unfriendly machines.

When someone related to this list the story of a cut-throat e-mail group
of AI researchers, it's a bit scary to think that they are headed down
this path with those kinds of personal intentions.  Fights and bad
intentions inside of people can easily be transferred to
machines...without a doubt.  Which is why the first order of business is
dealing with human intentions on a spiritual level.

So I see the answer to the key question above as 'intention is spiritual'
and believe that it originates outside of the mind and outside of knowledge
itself.

I really enjoy this kind of discussion...hope my comments are coming
across in e-mail with fervor and not 'attitude.'  I try to stay in
dialogue and not debate, but am not always successful.  E-mail is not a
very good tool for dialogue.  Thanks for taking the time to chat with me.

Kind Regards,

Bruce LaDuke
Managing Director

Instant Innovation, LLC
Indianapolis, IN
[EMAIL PROTECTED]
http://www.hyperadvance.com




Original Message Follows
From: "Joel Pitt" <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
To: singularity@v2.listbox.com
Subject: Re: [singularity] Singularity - Human or Machine?
Date: Mon, 11 Sep 2006 16:36:12 +1200

Hi Bruce,

By Sol I meant our sun.

I think that knowledge creation can't be separated from intention.
Creating knowledge implies there is direction or purpose to a system.
Giving it a wealth of data and then telling it to generate knowledge
will not go anywhere without a goal - whether this is the
classification/categorisation of data that many "machine learning"
algorithms carry out these days or some other concept of knowledge,
there is still a process heading towards some optimal state. You could
argue that this is the goal programmed by the system creators, but
when the system becomes particularly complex, the goals are not always
completely clear, and several goals can start competing for attention.

Of course you could still attempt to prevent a machine from carrying out
physical actions, but one of the concerns about unfriendly AI is that
given the room to improve itself it could discover methods of
influencing the real world that we couldn't conceive of ourselves.

If you believe that machines cannot become self-aware then I can't
argue with that, but even non-self-aware systems can have goals.
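As a toy illustration of that last point, a plain hill-climber is
goal-directed without any self-awareness: the "goal" (minimise a
function) is fixed by its creators, and the system heads towards an
optimal state it was never aware of choosing. This is a hypothetical
sketch, not any particular AI system:

```python
# Hypothetical sketch: a goal-directed but entirely non-self-aware
# optimiser. The goal is baked in by whoever calls it; the process
# still "heads towards some optimal state".
def hill_climb(f, x, step=0.1, iters=100):
    """Greedy descent: at each step keep whichever neighbour minimises f."""
    for _ in range(iters):
        candidates = [x - step, x, x + step]
        x = min(candidates, key=f)
    return x

# The system "pursues" the minimum of (x - 3)^2, which sits at x = 3.
best = hill_climb(lambda x: (x - 3.0) ** 2, x=0.0)
```

Nothing in the loop represents the goal to itself, yet its behaviour is
entirely organised around reaching it.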

-Joel

On 9/11/06, Bruce LaDuke <[EMAIL PROTECTED]> wrote:
>Joel,
>
>I apologize, but I'm not sure I understand how you're using the term 'Sol'
>here, but I think I see where you are going with this, so I'm going to take
>a run at this anyway.
>
>Key words in your question are 'decide' and 'take apart.'  The knowledge
>creation process is distinct from the decision process and action or
>performance.
>
>It is possible to advance knowledge beyond known limits and never 'decide
>to
>do' anything.  Advancing that knowledge creates a potential to do though,
>which does need to be managed.  Related to this, I separate 'futuring' into
>three categories:
>
>1) Social advance - The center is knowledge creation
>2) Social Context - The center is the balance of interests
>3) Industry - The center is supply and demand
>
>In this definition, social advance = cumulative created knowledge that has
>been accepted by society.  So then the knowledge creation process, and
>really knowledge itself, gives a society options to decide upon, and to do
>things with.  The more knowledge a society has, the more that society can
>potentially decide to do with it.  But 'deciding and doing' are not
>inherent
>to knowledge creation...these are very much distinct in their operation.
>
>For example, it would be possible to increase our nanotechnology knowledge
>beyond comprehensible limits and still not decide as a society to do
>anything with that knowledge.  Or we could decide to base our entire
>economic system on a 'molecular economy,' as we are basically starting to
>do
>now.  The implication here is that we have in knowledge the power to do.
>Power to make material many times lighter and stronger than steel, or
>power to make nanobombs that can level a city from your shirt pocket.
>Neither is executed without an intention and decision to do.

Re: [singularity] Singularity - Human or Machine?

2006-09-10 Thread Joel Pitt

Hi Bruce,

By Sol I meant our sun.

I think that knowledge creation can't be separated from intention.
Creating knowledge implies there is direction or purpose to a system.
Giving it a wealth of data and then telling it to generate knowledge
will not go anywhere without a goal - whether this is the
classification/categorisation of data that many "machine learning"
algorithms carry out these days or some other concept of knowledge,
there is still a process heading towards some optimal state. You could
argue that this is the goal programmed by the system creators, but
when the system becomes particularly complex, the goals are not always
completely clear, and several goals can start competing for attention.

Of course you could still attempt to prevent a machine from carrying out
physical actions, but one of the concerns about unfriendly AI is that
given the room to improve itself it could discover methods of
influencing the real world that we couldn't conceive of ourselves.

If you believe that machines cannot become self-aware then I can't
argue with that, but even non-self-aware systems can have goals.

-Joel

On 9/11/06, Bruce LaDuke <[EMAIL PROTECTED]> wrote:

Joel,

I apologize, but I'm not sure I understand how you're using the term 'Sol'
here, but I think I see where you are going with this, so I'm going to take
a run at this anyway.

Key words in your question are 'decide' and 'take apart.'  The knowledge
creation process is distinct from the decision process and action or
performance.

It is possible to advance knowledge beyond known limits and never 'decide to
do' anything.  Advancing that knowledge creates a potential to do though,
which does need to be managed.  Related to this, I separate 'futuring' into
three categories:

1) Social advance - The center is knowledge creation
2) Social Context - The center is the balance of interests
3) Industry - The center is supply and demand

In this definition, social advance = cumulative created knowledge that has
been accepted by society.  So then the knowledge creation process, and
really knowledge itself, gives a society options to decide upon, and to do
things with.  The more knowledge a society has, the more that society can
potentially decide to do with it.  But 'deciding and doing' are not inherent
to knowledge creation...these are very much distinct in their operation.

For example, it would be possible to increase our nanotechnology knowledge
beyond comprehensible limits and still not decide as a society to do
anything with that knowledge.  Or we could decide to base our entire
economic system on a 'molecular economy,' as we are basically starting to do
now.  The implication here is that we have in knowledge the power to do.
Power to make material many times lighter and stronger than steel, or
power to make nanobombs that can level a city from your shirt pocket.
Neither is executed without an intention and decision to do.

Social context is how we deal with these options.  How society, for
example, copes with change and volatility associated with knowledge
advance.  It is in this social context that decisions are made.  Decisions require
consciousness and intention.  The barrier is teaching the machine to have
intention.  A machine can anticipate intention, but I don't see a machine
originating it, because this is a function of consciousness, which I see as
residing outside of logic and knowledge.

Industry is the science of making things.  It is application or 'doing' in
society.  Granted, we do things within the social context as well (e.g.
philanthropy or war), but by and large, industry is the actionable arm of
society.  This is likely where a machine would 'do' something, if it had
intention and could decide.

Said all this to say that artificial knowledge creation can be an automated
expanding of knowledge to storage limits independent of any decisions,
social context and its application, or industrial application.  By nature of
how the knowledge creation process really works, this is exactly how I think
it will look...a self-expanding resource and not an intentional
decision-making machine.

But I can't deny that, at some juncture, we may find ourselves dealing with
a conscious or aware machine that can choose and can then act through
cyber-benevolence, cyber-terrorism, robotics, etc.  But as I understand the
knowledge creation process, this is more science fiction than reality.  I
see the paradigm more naturally evolving as an automated 'resource' that
expands to its storage limits and that is, and will always be, incapable of
intentionality or decision-making (unless these are loaded into it by a
human).

The tricky thing here is that it is possible to load intention or decision
criteria into a machine, such that it makes judgements based on the
intention/decision it is given...an extension of the expert system type of
thing.  "Machine, when you reach these GPS coordinates, nanobomb, blow up."
The intention and decision in this scen
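The "loaded intention" described above amounts to nothing more than a
condition-action rule in the expert-system style. A minimal hypothetical
sketch (the names and the benign "trigger" action are illustrative):

```python
# Hypothetical sketch: "intention" loaded into a machine as a plain
# condition-action rule, as in the expert-system example above. The
# machine evaluates the condition; the intention behind it is human.
def make_rule(condition, action):
    """Return a rule that yields its action only when the condition holds."""
    return lambda state: action if condition(state) else None

# Rule: act when the machine reaches a target coordinate.
at_target = make_rule(lambda s: s["pos"] == (3, 4), "trigger")
```

The rule's author supplies both the criterion and the act; the machine
merely checks a predicate, which is the distinction Bruce is drawing.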

Re: [singularity] Is Friendly AI Bunk?

2006-09-10 Thread Joel Pitt

On 9/11/06, Bruce LaDuke <[EMAIL PROTECTED]> wrote:

Emotion then, really doesn't enter into this equation.  Emotion is a part of
replicating the human paradigm, but does not have to be at all involved in
terms of automating or mechanizing knowledge advance.  Knowledge creation
appears to be serendipitous, but in reality it is a cold, hard, logical
process with no feelings in it.  It operates by converting questions, which
are a perceived lack of knowledge structure, into knowledge, which is the
logical structure of symbols.  This is the process behind all the
'creativity' terms and methods across all disciplines and industries.  It is
very predictable and could theoretically be mechanized.


Interesting take on AI/AKC.

I think that even being primarily focused on the creation of
knowledge still needs directional goals and consideration of
"Friendliness" topics. What if your AKC engine decides it needs to
irreversibly take apart Sol in order to gain knowledge of how
stars work?

--
-Joel

"Wish not to seem, but to be, the best."
   -- Aeschylus

---
To unsubscribe, change your address, or temporarily deactivate your subscription, 
please go to http://v2.listbox.com/member/[EMAIL PROTECTED]