[agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
To all,

I am considering putting up a web site to "filter the crazies" as follows,
and would appreciate all comments, suggestions, etc.

Everyone visiting the site would get different questions, in different
orders, etc. Many questions would have more than one correct answer, and in
many cases, some combinations of otherwise reasonable individual answers
would fail. There would be optional tutorials for people who are not
confident with the material. After successfully navigating the site, an
applicant would submit their picture and signature, and we would then
provide a license number. The applicant could then provide their name and
number to 3rd parties to verify that the applicant is at least capable of
rational thought. This information would look much like a driver's license,
and could be printed out as needed by anyone who possessed a correct name
and number.
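
A minimal sketch of this flow in code (the question bank, the helper names,
and the hash-based license scheme are illustrative assumptions, not a
finished design):

import hashlib
import random

# One bank of questions. "ok" holds the acceptable answer indices;
# more than one answer may be correct.
QUESTIONS = {
    "rraa-1":  {"text": "...", "ok": {0, 2}},
    "creed-1": {"text": "...", "ok": {1}},
}

# Combinations of individually reasonable answers that fail together.
BAD_COMBOS = [{("rraa-1", 2), ("creed-1", 1)}]

SITE_SECRET = "replace-me"  # known only to the licensing site

def quiz_order(applicant_seed):
    """Different questions, in different orders, for each applicant."""
    rng = random.Random(applicant_seed)
    return rng.sample(list(QUESTIONS), k=len(QUESTIONS))

def passes(answers):
    """answers maps question id -> chosen answer index."""
    if any(c not in QUESTIONS[q]["ok"] for q, c in answers.items()):
        return False
    chosen = set(answers.items())
    return not any(combo <= chosen for combo in BAD_COMBOS)

def license_number(name):
    """Number derived from name + site secret, so it can be re-derived later."""
    return hashlib.sha256((name + SITE_SECRET).encode()).hexdigest()[:12]

def verify(name, number):
    """What a 3rd party holding only a name and number would ask the site."""
    return license_number(name) == number

Anyone possessing a correct name and number can have the site re-derive and
confirm it, which is all the driver's-license-style check above requires.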

The site would ask a variety of logical questions, most especially probing
into:
1.  Their understanding of Reverse Reductio ad Absurdum methods of resolving
otherwise intractable disputes.
2.  Whether they belong to or believe in any religion that supports various
violent acts (with quotes from various religious texts). This would exclude
pretty much every religion, as nearly all religions condone useless violence
of various sorts, or the toleration or exposure of violence toward others.
Even Buddhists resist MAD (Mutually Assured Destruction) while being unable
to propose any potentially workable alternative to nuclear war. Jesus
attacked the money changers with no hope of benefit for anyone. Mohammad
killed the Jewish men of Medina and sold their women and children into
slavery, etc., etc.
3.  A statement in their own words that they hereby disavow allegiance
to any non-human god or alien entity, and that they will NOT follow the
directives of any government led by people who would obviously fail this
test. This statement would be included on the license.

This should force many people off the fence, as they would have to choose
between sanity and Heaven (or Hell).

Then, Ben, the CIA, diplomats, etc., could verify that they are dealing with
people who don't have any of the common forms of societal insanity. Perhaps
the site should be multi-lingual?

Any and all thoughts are GREATLY appreciated.

Thanks

Steve Richfield





Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread martin biehl
Hi Steve

I am not an expert, so correct me if I am wrong. As I see it, everyday
logical arguments (and rationality?) are based on standard classical logic
(or something very similar). Yet I am (sadly) not aware of a convincing
argument that this logic is the one to accept as the right choice. You might
know that, e.g., intuitionistic logic limits the power of reductio ad absurdum
to negative statements (I don't know what reverse reductio ad absurdum is,
so it may not be a precise counterexample, but I think you get my point).
Would this not make you hesitate? If not, why?
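
In sequent form the asymmetry is roughly this (a schematic, not tied to any
particular proof system):

\[
\frac{\Gamma,\; A \vdash \bot}{\Gamma \vdash \lnot A}
\ \text{(intuitionistically valid)}
\qquad
\frac{\Gamma,\; \lnot A \vdash \bot}{\Gamma \vdash A}
\ \text{(requires } \lnot\lnot A \to A \text{, i.e. classical logic)}
\]

Refuting A by deriving absurdity from it proves the negative statement
\(\lnot A\); deriving absurdity from \(\lnot A\) only yields \(\lnot\lnot A\)
unless double-negation elimination is assumed.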

Cheers,

Martin Biehl



RE: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread John G. Rose
 From: Trent Waddington [mailto:[EMAIL PROTECTED]
 
 On Tue, Nov 18, 2008 at 7:44 AM, Matt Mahoney [EMAIL PROTECTED]
 wrote:
  I mean that people are free to decide if others feel pain. For
 example, a scientist may decide that a mouse does not feel pain when it
 is stuck in the eye with a needle (the standard way to draw blood) even
 though it squirms just like a human would. It is surprisingly easy to
 modify one's ethics to feel this way, as proven by the Milgram
 experiments and Nazi war crime trials.
 
 I'm sure you're not meaning to suggest that scientists commonly
 rationalize in this way, nor that they are all Nazi war criminals for
 experimenting on animals.
 
 I feel the need to remind people that animal rights is a fringe
 movement that does not represent the views of the majority.  We
 experiment on animals because the benefits, to humans, are considered
 worthwhile.
 

I like animals. And I like the idea of coming up with cures to diseases and
testing them on animals first. In college my biologist roommate protested
the torture of fruit flies. My son has started playing video games where
you shoot, zap and chemically immolate the opponent, so I need to explain
to him that those bad guys are not conscious...yet.

I don't know if there are guidelines. Humans, being the rulers of the planet,
appear as godlike beings to other conscious inhabitants. That brings
responsibility. So when we start coming up with AI stuff in the lab that
attains certain levels of consciousness, we have to know what consciousness
is in order to govern our behavior.

And naturally, if some superintelligent space alien or rogue interstellar AI
encounters us and decides that we are a culinary delicacy and wants to grow
us en masse economically, we hope that some respect is given, eh?

Reminds me of hearing that some farms are experimenting with growing
chickens w/o heads. Animal rights may be more than just a fringe movement.
Kind of like Mike - http://en.wikipedia.org/wiki/Mike_the_Headless_Chicken

John







Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Bob Mottram
2008/11/18 Steve Richfield [EMAIL PROTECTED]:
 I am considering putting up a web site to "filter the crazies" as follows,
 and would appreciate all comments, suggestions, etc.


This all sounds peachy in principle, but I expect it would exclude
virtually everyone except perhaps a few of the most diehard
philosophers.  I think most people have at least a few beliefs which
cannot be strictly justified rationally, and that would include many
AI researchers.  Irrational or inconsistent beliefs originate from
being an entity with finite resources - finite experience and finite
processing power and time with which to analyze the data.  Many people
use quick lookups handed to them by individuals considered to be of
higher social status, principally because they don't have time or
inclination to investigate the issues directly themselves.

"In religion and politics people's beliefs and convictions are in
almost every case gotten at second-hand, and without examination, from
authorities who have not themselves examined the questions at issue
but have taken them at second-hand from other non-examiners, whose
opinions about them were not worth a brass farthing." - Mark Twain




Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Trent Waddington
On Tue, Nov 18, 2008 at 8:38 PM, Bob Mottram [EMAIL PROTECTED] wrote:
 I think most people have at least a few beliefs which cannot be strictly 
 justified rationally

You would think that.  :)

Trent




Re: [agi] Now hear this: Human qualia are generated in the human cranial CNS and no place else

2008-11-18 Thread Mike Tintner
Colin,

May I suggest, if you want clarity, that you dispense with eccentric philosophical 
terms like p-consciousness (phenomenal consciousness?). The phantom limb case 
you bring up is interesting, but first I have to understand what you're talking 
about. Would you mind sticking to simple, basic (and scientific) words like 
sensation/emotion/consciousness and restating your position?

Colin/Mike Tintner wrote: 
Colin: YES. Brains don't have their own sensors or self-represent with a 
perceptual field. So what? That's got nothing whatever to do with the matter at 
hand. CUT cortex and you can kill off "what it is like" percepts out there in 
the body (although in confusing ways). Touch appropriate exposed cortex with a 
non-invasive probe and you can create percepts apparently, but not actually, 
elsewhere in the body.

    Cut off your sensors and your body - remove the body from the brain - and 
you also don't have any form of consciousness or sensation - contrary to the 
brain-in-a-vat delusion. However if you remove the brain entirely - from an 
evolutionary perspective - you still have consciousness. Living organisms 
clearly had and have intelligence *before* the brain was evolved - *before* 
intelligence was centralised in one area of the body. Intelligence was clearly 
at first *distributed* through a proto-nervous system throughout the body. 
Watch a sea anemone wait, then grab, and then devour a fish that approaches 
it and you will be convinced of that. The anemone does not have a brain, only a 
nervous system. 

You are trying to locate consciousness in one area of the body rather than 
in the brain-body as a whole. It's clearly wrong. You - your self - and your 
consciousness - are a whole body affair. Understanding this is vital not only 
for understanding consciousness but also general intelligence and creativity, 
as I have dealt with elsewhere

  I'm talking about human P-consciousness[1] specifically. I'm not talking 
about its role in intelligence, or about P-consciousness or otherwise in any 
other context, like an invertebrate. I just want to make sure everyone's on the 
same physiological page for human-level AGI. 

  Yes, the normal circumstances are that P-consciousness arises in brain-body 
as a whole. But pathological circumstances are very telling. Phantom limb is 
where you could have, say, a perceptual arm 'out there' in space in a 
really agonising contorted way, but there's no actual arm. Male-to-female sex 
changes can produce phantom penises, etc. This is P-consciousness of body 
without body part.

  The fact that P-consciousness occurs in any particular embodiment 
circumstance or intellectual capacity does not alter the empirical  fact of the 
localisation of the origin of the sensations in humans to the cranial CNS of 
the human. Very specific localised cranial (not spinal) central nervous system 
(CNS) neurons go to a great deal of trouble to construct the P-conscious 
scenes. The peripheral nervous system (PNS) and the spinal CNS are 100% 
sensationless, including all cranial peripherals. That's the main outcome. 

  If there is anyone out there who thinks that merely hooking up a sensor to a 
computer intrinsically creates a percept or a sensation - that is fundamentally 
erroneous. That includes a camera chip or any other peripheral. I have actually 
met a senior AI worker with that delusion installed in his mental kit. He 
didn't like being told about the atomic-level reality. 

  I'd like to dispel all such delusion in this place so that neurally inspired 
AGI gets discussed accurately, even if your intent is to explain 
P-consciousness away... know exactly what you are explaining away and exactly 
where it is.

  cheers,
  Colin Hales
  [1] Block, N. (1995), 'On a Confusion About a Function of Consciousness', 
Behavioral and Brain Sciences 18(2), pp. 227-247.








Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread Mark Waser

I mean that people are free to decide if others feel pain.


Wow!  You are one sick puppy, dude.  Personally, you have just hit my "Do 
not bother debating with" list.


You can decide anything you like -- but that doesn't make it true.

- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Monday, November 17, 2008 4:44 PM
Subject: RE: FW: [agi] A paper that actually does solve the problem of 
consciousness--correction




--- On Mon, 11/17/08, Ed Porter [EMAIL PROTECTED] wrote:

First, it is not clear people
are free to decide what makes pain real, at least
subjectively real.


I mean that people are free to decide if others feel pain. For example, a 
scientist may decide that a mouse does not feel pain when it is stuck in 
the eye with a needle (the standard way to draw blood) even though it 
squirms just like a human would. It is surprisingly easy to modify one's 
ethics to feel this way, as proven by the Milgram experiments and Nazi war 
crime trials.


If we have anything close to the advances in brain scanning and brain 
science

that Kurzweil predicts [1], we should come to understand the correlates of
consciousness quite well


No. I cited autobliss ( http://www.mattmahoney.net/autobliss.txt ) and the 
roundworm C. elegans as examples of simple systems whose functions are 
completely understood, yet the question of whether such systems experience 
pain remains a philosophical question that cannot be answered by experiment.
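
For anyone who doesn't follow the link: autobliss is a small program that
trains a 2-input logic function by reward and punishment alone. A minimal
sketch in that spirit (a reconstruction for illustration, not the actual
autobliss.txt code):

import random

weights = [0.0] * 4  # one learned value per input pair: 00, 01, 10, 11

def respond(a, b):
    """Output 1 if the learned value is positive; explore 10% of the time."""
    if random.random() < 0.1:
        return random.randint(0, 1)
    return 1 if weights[2 * a + b] > 0 else 0

def reinforce(a, b, out, reward):
    """Positive reward strengthens the action just taken; negative weakens it."""
    weights[2 * a + b] += reward if out == 1 else -reward

# Train toward AND by rewarding correct outputs and punishing wrong ones.
for _ in range(1000):
    a, b = random.randint(0, 1), random.randint(0, 1)
    out = respond(a, b)
    reinforce(a, b, out, 1.0 if out == (a & b) else -1.0)

Every number in this learner is inspectable, yet no experiment on it settles
whether the punishment signal "hurts" it - which is the point being made above.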


-- Matt Mahoney, [EMAIL PROTECTED]










[agi] AGI Light Humor - first words

2008-11-18 Thread Stan Nilsen

First words to come from the brand new AGI?

Hello World

or
Gotta paper clip?
What's the meaning of life?
Am I really conscious?
Where am I?
I come from a dysfunctional family.


Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Ben Goertzel
 3.  A statement in their own words that they hereby disavow allegiance
 to any non-human god or alien entity, and that they will NOT follow the
 directives of any government led by people who would obviously fail this
 test. This statement would be included on the license.



Hmmm... don't I fail this test every time I follow the speed limit? ;-)

As another aside, it seems wrong to accuse Buddhists of condoning violence
because they don't like MAD (which involves stockpiling nukes) ... you could
accuse them of foolishness perhaps (though I don't necessarily agree) but
not of condoning violence

My feeling is that with such a group of intelligent and individualistic
folks as transhumanists and AI researchers are, any  litmus test for
cognitive sanity you come up with is gonna be quickly revealed to be full
of loopholes that lead to endless philosophical discussions... so that in
the end, such a test could only be used as a general guide, with the
ultimate cognitive-sanity-test to be made on a qualitative basis

In a small project like Novamente, we can evaluate each participant
individually to assess their thought process and background.  In a larger
project like OpenCog, there is not much control over who gets involved, but
making people sign a form promising to be rational and cognitively sane
wouldn't seem to help much, as obviously there is nothing forcing people to
be honest...

ben g





Re: [agi] Now hear this: Human qualia are generated in the human cranial CNS and no place else

2008-11-18 Thread Richard Loosemore

Colin Hales wrote:

Mike Tintner wrote:
Colin: Qualia generation has been highly localised into specific 
regions in *cranial* brain material already. Qualia are not in the 
periphery. Qualia are not in the spinal CNS. Qualia are not in the 
cranial periphery, e.g. eyes or lips.
 
Colin,
 
This is to a great extent nonsense. Which sensation/emotion - (qualia 
is a word strictly for philosophers, not scientists, I suggest) - is 
not located in the body? When you are angry, you never frown or bite 
or tense your lips? The brain helps to generate the emotion - (and 
note "helps"). But emotions are bodily events - and *felt* bodily.
 
This whole discussion ignores the primary paradox about consciousness 
(which is first and foremost sentience): *the brain doesn't feel a 
thing* - sentience/feeling is located in the body outside the brain. 
When a surgeon cuts your brain, you feel nothing. You feel and are 
conscious of your emotions in and with your whole body.
I am talking about the known, real, actual origins of *all* phenomenal 
fields. This has been anatomical/physiological fact for 150 years. You don't 
see with your eyes. You don't feel with your skin. Vision is in the 
occipital cortex. The eyes provide data. Skin provides data; the CNS 
somatosensory field delivers the experience of touch and projects it to 
the skin region. ALL perceptions, BAR NONE, including all emotions, 
imagination, everything - ALL of it is actually generated in the cranial 
CNS. Perceptual fields are projected from the CNS to appear AS IF they 
originate in the periphery. The sensory measurements themselves convey 
no sensations at all. 

I could give you libraries of data. Ask all doctors. They specifically 
call NOCICEPTION the peripheral sensor and PAIN the CNS 
(basal...inferior colliculus or was it cingulate...can't remember 
exactly) percept. Pain in your back? NOPE. Pain is in the CNS and 
projected (badly) to the location of your back, like a periscope-view. 
Pain in your gut? NOPE. You have nociceptors in the myenteric/submucosal 
plexuses that convey data to the CNS, which generates PAIN and projects 
it at the gut. Feel sad? Your laterally offset amygdalae create an 
omnidirectional percept centered on your medial cranium region. etc etc 
etc etc


YES. Brains don't have their own sensors or self-represent with a 
perceptual field. So what? That's got nothing whatever to do with the 
matter at hand. CUT cortex and you can kill off "what it is like" 
percepts out there in the body (although in confusing ways). Touch 
appropriate exposed cortex with a non-invasive probe and you can create 
percepts apparently, but not actually, elsewhere in the body.


The entire neural correlates of consciousness (NCC) paradigm is 
dedicated to exploring CNS neurons for correlates of qualia. NOT 
peripheral neurons. Nobody anywhere else in the world thinks that 
sensation is generated in the periphery.


The *CNS* paints your world with qualia-paint in a projected picture 
constructed in the CNS using sensationless data from the periphery. 
Please internalise this brute fact. I didn't invent it or simply choose 
to believe it because it was convenient. I read the literature. It told 
me. It's there to be learned. Lots of people have been doing conclusive, 
real physiology for a very long time. Be empirically informed: believe 
them. Or, if you are still convinced it's nonsense, then tell them, not 
me. They'd love to hear your evidence, and you'll get a Nobel Prize for 
an amazing about-turn in medical knowledge. :-)


This has been known, apparently by everybody but computer 
scientists, for 150 years. Can I consider this a general broadcast once 
and for all? I don't ever want to have to pump this out again. Life is 
too short.


Yes, although it might be more accurate to say that this is the last 
known place where you can catch the sensory percepts as single, 
identifiable things. I don't think it would really be fair to say 
that this place is the origin of them.


So, for example:

 - If you cover a sheet of red paper you happen to be looking at, the 
red qualia disappear.


 - If instead you knock out the cones that pick up red light in the 
eye, then the red qualia disappear.


 - If you take out the ganglion cells attached to the red cones in the 
retina, the red qualia disappear.


 - If you keep doing this at any point between there and area 17 (the 
visual cortex), you can get the red qualia to disappear.


But after that, there is no single place you can cut off the percept 
with one single piece of intervention.




Richard Loosemore










Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Richard Loosemore

Steve Richfield wrote:

I am considering putting up a web site to "filter the crazies" as 
follows, and would appreciate all comments, suggestions, etc. [...]


I see how this would work:  crazy people never tell lies, so you'd be 
able to nail 'em when they gave the wrong answers.



8-|



Richard Loosemore




Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread BillK
On Tue, Nov 18, 2008 at 1:22 PM, Richard Loosemore wrote:

 I see how this would work:  crazy people never tell lies, so you'd be able
 to nail 'em when they gave the wrong answers.



Yup. That's how they pass lie detector tests as well.

They sincerely believe the garbage they spread around.


BillK




Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Harry Chesley
Richard Loosemore wrote:
 Harry Chesley wrote:
 Richard Loosemore wrote:
 I completed the first draft of a technical paper on consciousness
 the other day.   It is intended for the AGI-09 conference, and it
 can be found at:

 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf


 One other point: Although this is a possible explanation for our
 subjective experience of qualia like "red" or "soft", I don't see
 it explaining "pain" or "happy" quite so easily. You can
 hypothesize a sort of mechanism-level explanation of those by
 relegating them to the older or lower parts of the brain (i.e.,
 they're atomic at the conscious level, but have more effects at the
 physiological level, like releasing chemicals into the system),
 but that doesn't satisfactorily cover the subjective side for me.

 I do have a quick answer to that one.

 Remember that the core of the model is the *scope* of the analysis
 mechanism.  If there is a sharp boundary (as well there might be),
 then this defines the point where the qualia kick in.  Pain receptors
 are fairly easy:  they are primitive signal lines.  Emotions are, I
 believe, caused by clusters of lower brain structures, so the
 interface between lower brain and foreground is the place where
 the foreground sees a limit to the analysis mechanisms.

 More generally, the significance of the foreground is that it sets
 a boundary on how far the analysis mechanisms can reach.

 I am not sure why that would seem less satisfactory as an explanation
 of the subjectivity.  It is a "raw feel", and that is the key idea,
 no?

My problem is: if qualia are atomic, with no differentiable details, why
do some feel different than others -- shouldn't they all be separate
but equal? "Red" is relatively neutral, while "searing hot" is not. Part
of that is certainly lower brain function, below the level of
consciousness, but that doesn't explain to me why it feels
qualitatively different. If it were just something like increased
activity (franticness) in response to "searing hot", then fine, that
could just be something like adrenaline being pumped into the system,
but there is a subjective feeling that goes beyond that.





Re: [agi] Ethics of computer-based cognitive experimentation

2008-11-18 Thread Harry Chesley
Trent Waddington wrote:
 As I believe the "is that consciousness?" debate could go on forever,
 I think I should make an effort here to save this thread.

 Setting aside the objections of vegetarians and animal lovers, many
 hard-nosed scientists decided long ago that jamming things into the
 brains of monkeys and the like is justifiable treatment of creatures
 suspected by many to have similar experiences to humans.

 If you're in agreement with these practices then I think you should
 be in agreement with any and all experimentation on simulated
 networks of complexity up to and including these organisms.

Yes, my intent on starting this thread was not to define consciousness,
but rather to ask how do we make ethical choices with regard to AGI
before we are able to define it?

I agree with your points above. However, I am not entirely sanguine
about animal experiments. I accept that they're sometimes OK, or at
least the lesser of two evils, but I would prefer to avoid even that
level of compromise when experimenting on AGIs. And, given that we have
the ability to design the AGI experimental subject -- as opposed to
being stuck with a pre-designed animal -- it /should/ be possible.





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Mark Waser

My problem is: if qualia are atomic, with no differentiable details, why
do some feel different than others -- shouldn't they all be separate
but equal? "Red" is relatively neutral, while "searing hot" is not. Part
of that is certainly lower brain function, below the level of
consciousness, but that doesn't explain to me why it feels
qualitatively different. If it were just something like increased
activity (franticness) in response to "searing hot", then fine, that
could just be something like adrenaline being pumped into the system,
but there is a subjective feeling that goes beyond that.


Maybe I missed it, but why do you assume that because qualia are atomic 
they have no differentiable details?  Evolution is, quite correctly, going 
to give pain qualia higher priority and less ability to be shut down than 
"red" qualia.  In a good representation system, that means that "searing hot" 
is going to be *very* whatever and very tough to ignore.






Re: [agi] Now hear this: Human qualia are generated in the human cranial CNS and no place else

2008-11-18 Thread Colin Hales

Trent Waddington wrote:

On Tue, Nov 18, 2008 at 4:07 PM, Colin Hales
[EMAIL PROTECTED] wrote:
  

I'd like to dispel all such delusion in this place so that neurally inspired
AGI gets discussed accurately, even if your intent is to explain
P-consciousness away... know exactly what you are explaining away and
exactly where it is.



Could you be any more arrogant?  Could you try for me, 'cause I think
you're almost there, and with a little training, you could get some
kind of award.

Trent

  

It's a gift. :-) However I think I might have max'ed out.

Some people would call it saying it the way it is. As I get 
older/grumpier I find I have less time for treading preciously around 
in the garden of the mental darlings to get at the weeds. I also like to 
be told bluntly, like you did. Time is short. You'll be free of my 
swathe for a while...work is piling up again.


cheers

colin





[agi] Neurogenesis critical to mammalian learning and memory?

2008-11-18 Thread Ben Goertzel
.. interesting if true ..

http://www.medindia.net/news/Key-to-Learning-and-Memory-Continuous-Brain-Cell-Generation-41297-1.htm


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Harry Chesley
Mark Waser wrote:
 Maybe I missed it, but why do you assume that because qualia are
 atomic they have no differentiable details?  Evolution is, quite
 correctly, going to give pain qualia higher priority and less ability
 to be shut down than "red" qualia.  In a good representation system,
 that means that "searing hot" is going to be *very* whatever and very
 tough to ignore.

I thought that was the meaning of "atomic" as used in the paper. Maybe I
got it wrong.





Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction

2008-11-18 Thread Matt Mahoney
--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:

  I mean that people are free to decide if others feel pain.
 
 Wow!  You are one sick puppy, dude.  Personally, you have
 just hit my "Do not bother debating with" list.
 
 You can decide anything you like -- but that
 doesn't make it true.

Aren't you the one who decided that autobliss feels pain? Or did you decide 
that it doesn't?


-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Richard Loosemore

Harry Chesley wrote:

My problem is: if qualia are atomic, with no differentiable details, why
do some feel different than others -- shouldn't they all be separate
but equal? "Red" is relatively neutral, while "searing hot" is not. Part
of that is certainly lower brain function, below the level of
consciousness, but that doesn't explain to me why it feels
qualitatively different. If it were just something like increased
activity (franticness) in response to "searing hot", then fine, that
could just be something like adrenaline being pumped into the system,
but there is a subjective feeling that goes beyond that.


There is more than one question wrapped up inside this question, I think.

First:  all qualia feel different, of course.  You seem to be pointing 
to a sense in which pain is "more different" than most?  But is 
that really a valid idea?


Does pain have differentiable details?  Well, there are different 
types of pain, but that is to be expected, like different colors. 
But that is a relatively trivial point.  Within one single pain there can 
be several *effects* of that pain, including some strange ones that do 
not have counterparts in the vision-color case.


For example, suppose that a searing hot pain caused a simultaneous 
triggering of the motivational system, forcing you to suddenly want to 
do something (like pulling your body part away from the pain).  The 
feeling of wanting (wanting to pull away) is a quale of its own, in a 
sense, so it would not be impossible for one quale (searing hot) to 
always be associated with another (wanting to pull away).  If those 
always occurred together, it might seem that there was structure to the 
pain experience, where in fact there is a pair of things happening.


It is probably more than a pair of things, but perhaps you get my drift.

Remember that having associations to a pain is not part of what we 
consider to be the essence of the subjective experience;  the bit that 
is most mysterious and needs to be explained.


Another thing we have to keep in mind here is that the exact details of 
how each subjective experience feels are certainly going to seem 
different, and some can seem like each other and not like others: 
colors are like other colors, but not like pains.


That is to be expected:  we can say that colors happen in a certain 
place in our sensorium (vision) while pains are associated with the body 
(usually), but these differences are not inconsistent with the account I 
have given.  If concept-atoms encoding [red] always attach to all the 
other concept-atoms involving visual experiences, that would make them 
very different from pains like [searing hot], but all of this could be 
true at the same time that [red] would do what it does to the analysis 
mechanism (when we try to think the thought "What is the essence of 
redness?").  So the problem with the analysis mechanism would happen 
with both pains and colors, even though the two different atom types 
played games with different sets of other concept-atoms.




Richard Loosemore







Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Martin,

On 11/18/08, martin biehl [EMAIL PROTECTED] wrote:

 I don't know what reverse reductio ad absurdum is, so it may not be a
 precise counterexample, but I think you get my point.


HERE is the crux of my argument, as other forms of logic fall short of being
adequate to run a world with. Reverse Reductio ad Absurdum is the first
logical tool with the promise to resolve most intractable disputes, ranging
from the abortion debate to the Middle East problem.

Some people get it easily, and some require long discussions, so I'll post
the Cliff Notes version here, and if you want it in smaller doses, just
send me an off-line email and we can talk on the phone.

Reductio ad absurdum has worked unerringly for centuries to test bad
assumptions. This constitutes a proof by lack of counterexample that the
ONLY way to reach an absurd result is by a bad assumption, as otherwise,
reductio ad absurdum would sometimes fail.

Hence, when two intelligent people reach conflicting conclusions, but
neither can see any errors in the other's logic, it would seem that they
absolutely MUST have at least one bad assumption. Starting from the
absurdity and searching for the assumption is where the "reverse" in reverse
reductio ad absurdum comes in.

If their false assumptions were different, then one or both parties would
quickly discover them in discussion. However, when the argument stays on the
surface, the ONLY place remaining to hide an invalid assumption is one they
absolutely MUST share: the SAME invalid assumption.
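
Schematically (one possible formalization of the claim):

\[
S \cup P_1 \vdash C
\qquad\text{and}\qquad
S \cup P_2 \vdash \lnot C
\]

where \(S\) is the set of assumptions the two parties share and \(P_1, P_2\)
are their private ones. If both derivations are valid, at least one assumption
in \(S \cup P_1 \cup P_2\) is false; and if surface discussion would already
have exposed a false private assumption, the only remaining hiding place is
\(S\).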

Of course if our superintelligent AGI approaches them and points out their
shared invalid assumption, then they would probably BOTH attack the AGI, as
their invalid assumption may be their only point of connection. It appears
that breaking this deadlock absolutely must involve first teaching both
parties what reverse reductio ad absurdum is all about, as I am doing here.

For example, take the abortion debate. It is obviously crazy to be making
and killing babies, and it is a proven social disaster to make this illegal
- an obvious reverse reductio ad absurdum situation.

OK, so let's look at societies where abortion is no issue at all, e.g. Muslim
societies, where it is freely available, but no one gets them. There,
children are treated as assets, whereas in all respects we treat them as
liabilities: Mothers are stuck with unwanted children. Fathers must pay
child support. They can't be bought or sold. There is no expectation that
they will look after their parents in their old age, etc.

In short, BOTH parties believe that children should be treated as
liabilities, but when you point this out, they dispute the claim. Why should
mothers be stuck with unwanted children? Why not allow sales to parties who
really want them? There are no answers to these and other similar questions
because the underlying assumption is clearly wrong.

The Middle East situation is more complex, but constructed on similar invalid
assumptions.

Are we on the same track now?

Steve Richfield
 


Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Ben Goertzel
This sounds an awful lot like the Hegelian dialectical method...

ben g


Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Bob,

On 11/18/08, Bob Mottram [EMAIL PROTECTED] wrote:

 2008/11/18 Steve Richfield [EMAIL PROTECTED]:
  I am considering putting up a web site to "filter the crazies" as
 follows,
  and would appreciate all comments, suggestions, etc.


 This all sounds peachy in principle, but I expect it would exclude
 virtually everyone except perhaps a few of the most diehard
 philosophers.


My goal is to identify those people who:
1.  Are capable of rational thought, whether or not they chose to use that
ability. I plan to test this with some simple problem solving.
2.  Are not SO connected with some shitforbrains religious group/belief that
they would predictably use dangerous technology to harm others. I plan to
test this by simply demanding a declaration, which would send most such
believers straight to Hell.
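
To make the answer-combination idea from my proposal concrete, here is a
minimal sketch, in Python, of the sort of scorer I have in mind. The question
IDs, answers, and joint-failure rules below are all invented for
illustration; nothing here is the real test.

    # Toy scorer: each answer may be defensible on its own, yet certain
    # combinations of answers are flagged as jointly irrational.
    ACCEPTABLE = {
        "q1": {"a", "b"},          # two individually correct answers
        "q2": {"x", "y"},
        "q3": {"yes", "no"},
    }
    # Sets of (question, answer) pairs that are fine alone but fail together.
    FORBIDDEN_COMBOS = [
        {("q1", "a"), ("q2", "y")},
        {("q2", "x"), ("q3", "no")},
    ]

    def passes(answers):
        # Every individual answer must be acceptable...
        if any(ans not in ACCEPTABLE.get(q, set())
               for q, ans in answers.items()):
            return False
        # ...and no forbidden combination may be jointly held.
        held = set(answers.items())
        return not any(combo <= held for combo in FORBIDDEN_COMBOS)

    print(passes({"q1": "a", "q2": "x", "q3": "yes"}))  # True
    print(passes({"q1": "a", "q2": "y", "q3": "yes"}))  # False: jointly bad

The joint checks, not the individual questions, would do most of the
filtering.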

Beyond that, I agree that it starts to get pretty hopeless.

I think most people have at least a few beliefs which
 cannot be strictly justified rationally, and that would include many
 AI researchers.


... and probably include both of us as well.

Irrational or inconsistent beliefs originate from
 being an entity with finite resources - finite experience and finite
 processing power and time with which to analyze the data.  Many people
 use quick lookups handed to them by individuals considered to be of
 higher social status, principally because they don't have time or
 inclination to investigate the issues directly themselves.


However, when someone (like me) points out carefully selected passages that
are REALLY crazy, do they re-evaluate, or do they continue to accept
everything they see in the book?

In religion and politics people's beliefs and convictions are in
 almost every case gotten at second-hand, and without examination, from
 authorities who have not themselves examined the questions at issue
 but have taken them at second-hand from other non-examiners, whose
 opinions about them were not worth a brass farthing. - Mark Twain


I completely agree. The question here is whether these people are capable of
questioning and re-evaluation. If so, then they get their license.

Steve Richfield





Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Ben,

On 11/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:


  3.  A statement in their own words that they hereby disavow allegiance
 to any non-human god or alien entity, and that they will NOT follow the
 directives of any government led by people who would obviously fail this
 test. This statement would be included on the license.



 Hmmm... don't I fail this test every time I follow the speed limit ?   ;-)


I don't think I stated this well, and perhaps you might be able to say it
better.

If your government wants you to go out and kill people, or help others to go
out and kill people, and you don't see some glimmer of understanding from
the leaders that this is really stupid, then perhaps you shouldn't
contribute to such insanity.

Then, just over this fence to help define the boundary...

Look at the Star Wars anti-missile defense system. It can't possibly ever
work well, as countermeasures are SO simple to implement. However, it was
quite effective in bankrupting the Soviet Union, while people like me were
going around lecturing about what a horrible waste of public resources it was.

In short, I think that re-evaluation is necessary at about the point where
blood starts flowing. What are your thoughts?

 As another aside, it seems wrong to accuse Buddhists of condoning violence
 because they don't like MAD (which involves stockpiling nukes) ... you could
 accuse them of foolishness perhaps (though I don't necessarily agree) but
 not of condoning violence


I have hours of discussion with Buddhists invested in this. I have no
problem at all with them getting themselves killed, but I have a BIG problem
with their asserting their beliefs to get OTHERS killed. If we had a
Buddhist President who kept MAD from being implemented, there is a pretty
good chance that we would not be here to have this discussion.

As an aside, when you look CAREFULLY at the events that were unfolding as
MAD was implemented, there really isn't anything at all against Buddhist
beliefs in it - just a declaration that if you attack me, I will attack in
return, though without restraint against civilian targets.

 My feeling is that with such a group of intelligent and individualistic
 folks as transhumanists and AI researchers are, any  litmus test for
 cognitive sanity you come up with is gonna be quickly revealed to be full
 of loopholes that lead to endless philosophical discussions... so that in
 the end, such a test could only be used as a general guide, with the
 ultimate cognitive-sanity-test to be made on a qualitative basis


I guess that this is really what I was looking for - just what is that
basis? For example, if someone can lie and answer questions in a logical
manner just to get their license, then they have proven that they can be
logical, whether or not they choose to be. I think that is about as good as
is possible.

 In a small project like Novamente, we can evaluate each participant
 individually to assess their thought process and background.  In a larger
 project like OpenCog, there is not much control over who gets involved, but
 making people sign a form promising to be rational and cognitively sane
 wouldn't seem to help much, as obviously there is nothing forcing people to
 be honest...


... other than their sure knowledge that they will go directly to Hell for
even listening and considering such as we are discussing here.

The Fiqh is a body of work outside the Koran that is part of Islam, which
includes stories of Mohamed's life, etc. Therein the boundary is precisely
described.

Islam demands that anyone who converts from Islam be killed.

One poor fellow watched both of his parents refuse to renounce Islam, and
then be killed by invaders. When his turn came, he quickly renounced to save
his life. Later, when he was being considered for execution as an apostate,
the ruling from Mohamed was: "If they ask you again, then renounce again,"
and he was released.

BTW, it would be really stupid of me to try to enforce a different standard
than you and other potential users of such a site would embrace, so my goal
here is not only to discuss potential construction of such a site, but also
to discuss just what that standard is. Hence, take my words as open for
editing.

Steve Richfield





Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Matt Mahoney
--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:

 Autobliss has no grounding, no internal feedback, and no
 volition.  By what definitions does it feel pain?

Now you are making up new rules to decide that autobliss doesn't feel pain. My 
definition of pain is negative reinforcement in a system that learns. There is 
no other requirement.
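
For example, a learner this small already qualifies under that definition.
Here is a sketch in the spirit of autobliss (written in Python for brevity;
it is not the actual autobliss program):

    import random

    # Toy learner: maps an input bit to an output bit, adjusting its
    # propensities from a scalar reinforcement signal.
    weights = {0: [0.5, 0.5], 1: [0.5, 0.5]}  # output propensities per input

    def act(x):
        p = weights[x]
        return 0 if random.random() < p[0] / (p[0] + p[1]) else 1

    def reinforce(x, y, r):
        # r > 0 rewards choosing y on input x; r < 0 punishes it.
        # The floor keeps propensities positive so the ratio stays defined.
        weights[x][y] = max(0.01, weights[x][y] + r)

    # Train it to output NOT x by punishing mistakes.
    for _ in range(1000):
        x = random.randint(0, 1)
        y = act(x)
        reinforce(x, y, 0.1 if y != x else -0.1)

    print(act(0), act(1))  # usually prints 1 0 after training

Under the stated definition, the -0.1 signals in that loop are pain for this
system.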

You stated that machines can feel pain, and you stated that we don't get to 
decide which ones. So can you precisely define grounding, internal feedback and 
volition (as properties of Turing machines) and prove that these criteria are 
valid?

And just to avoid confusion, my question has nothing to do with ethics.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Ben Goertzel
On Tue, Nov 18, 2008 at 6:26 PM, Matt Mahoney [EMAIL PROTECTED] wrote:

 --- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:

  Autobliss has no grounding, no internal feedback, and no
  volition.  By what definitions does it feel pain?

 Now you are making up new rules to decide that autobliss doesn't feel pain.
 My definition of pain is negative reinforcement in a system that learns.
 There is no other requirement.

 You stated that machines can feel pain, and you stated that we don't get to
 decide which ones. So can you precisely define grounding, internal feedback
 and volition (as properties of Turing machines)


Clearly, this can be done, and has largely been done already ... though
cutting and pasting or summarizing the relevant literature in emails would
not be a productive use of time


 and prove that these criteria are valid?


That is a different issue, as it depends on the criteria of validity, of
course...

I think one can argue that these properties are necessary for a
finite-resources AI system to display intense systemic patterns correlated
with its goal-achieving behavior in the context of diverse goals and
situations.  So, one can argue that these properties are necessary for **the
sort of consciousness associated with general intelligence** ... but that's
a bit weaker than saying they are necessary for consciousness (and I don't
think they are)

ben





Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Trent Waddington
On Wed, Nov 19, 2008 at 9:29 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Clearly, this can be done, and has largely been done already ... though
 cutting and pasting or summarizing the relevant literature in emails would
 not be a productive use of time

Apparently, it was Einstein who said that if you can't explain it to
your grandmother then you don't understand it.

Of course, he never had to argue on the Internet.

Trent




Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Richard and Bill,

On 11/18/08, BillK [EMAIL PROTECTED] wrote:

 On Tue, Nov 18, 2008 at 1:22 PM, Richard Loosemore wrote:
  I see how this would work:  crazy people never tell lies, so you'd be able
  to nail 'em when they gave the wrong answers.

Yup. That's how they pass lie detector tests as well.

 They sincerely believe the garbage they spread around.


In 1994 I was literally sold into servitude in Saudi Arabia as a sort of
slave programmer (in COBOL on HP-3000 computers) to the Royal Saudi Air
Force. I managed to escape that situation with the help of the same
Wahhabist Sunni Muslims that are now causing so many problems. With that
background, I think I understand them better than most people.

As in all other societies, they are not given the whole truth, e.g. most
have never heard of the slaughter at Medina, and believe that Mohamed never
hurt anyone at all.

My hope and expectation is that, by allowing people to research various
issues as they work on their test, a LOT of people who might otherwise
fail the test will instead reevaluate their beliefs, at least enough to come
up with the right answers, whether or not they truly believe them. At least
that level of understanding assures that they can carry on a reasoned
conversation. This is a MAJOR problem now. Even here on this forum, many
people still don't get *reverse* reductio ad absurdum.

BTW, I place most of the blame for the middle east impasse on the West
rather than on the East. The Koran says that most of the evil in the world
is done by people who think they are doing good, which brings with it a good
social mandate to publicly reconsider and defend any actions that others
claim to be evil. The next step is to proclaim evil doers as unwitting
agents of Satan. If there is still no good defense, then they drop the
unwitting. Of course, us stupid uncivilized Westerners have fallen into
this, and so 19 brave men sacrificed their lives just to get our attention,
but even that failed to work as planned. Just what DOES it take to get our
attention - a nuke in NYC? What the West has failed to realize is that they
are playing a losing hand, but nonetheless, they just keep increasing the
bet on the expectation that the other side will fold. They won't. I was as
much intending my test for the sort of stupidity that nearly all Americans
harbor as that carried by Al Qaeda. Neither side seems to be playing with a
full deck.

Steve Richfield





RE: [agi] Neurogenesis critical to mammalian learning and memory?

2008-11-18 Thread Ed Porter
I attended a two-day seminar on brain science at MIT about six years ago in
which one of the papers was about neurogenesis in the hippocampus.  The
speaker said he thought neurogenesis was necessary in the hippocampus because
hippocampus cells tend to die much more rapidly than most cells, and thus
need to be replaced.

 

-Original Message-
From: Ben Goertzel [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, November 18, 2008 3:58 PM
To: agi@v2.listbox.com
Subject: [agi] Neurogenesis critical to mammalian learning and memory?

 


... interesting if true ...

http://www.medindia.net/news/Key-to-Learning-and-Memory-Continuous-Brain-Cell-Generation-41297-1.htm


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

A human being should be able to change a diaper, plan an invasion, butcher
a hog, conn a ship, design a building, write a sonnet, balance accounts,
build a wall, set a bone, comfort the dying, take orders, give orders,
cooperate, act alone, solve equations, analyze a new problem, pitch manure,
program a computer, cook a tasty meal, fight efficiently, die gallantly.
Specialization is for insects.  -- Robert Heinlein





RE: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Benjamin Johnston
 

Could we please stick to discussion of AGI?

 

-Ben

 



Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Matt Mahoney
Steve, what is the purpose of your political litmus test? If you are trying to 
assemble a team of seed-AI programmers with the correct ethics, forget it. 
Seed AI is a myth.
http://www.mattmahoney.net/agi2.html (section 2).

-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] A paper that actually does solve the problem of consciousness

2008-11-18 Thread Ben Goertzel
Richard,

I re-read your paper and I'm afraid I really don't grok why you think it
solves Chalmers' hard problem of consciousness...

It really seems to me like what you're suggesting is a cognitive correlate
of consciousness, to morph the common phrase neural correlate of
consciousness ...

You seem to be stating that when X is an unanalyzable, pure atomic sensation
from the perspective of cognitive system C, then C will perceive X as a raw
quale ... unanalyzable and not explicable by ordinary methods of
explication, yet, still subjectively real...
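
(To check that I'm reading you right, here is a toy sketch, in Python, of
the claim as I understand it; all the names and structures below are mine,
not from your paper.)

    # Toy model of "unanalyzable from the system's own perspective":
    # composite concepts decompose into parts, while primitive sensory
    # atoms return nothing, so the system can use them but never explain them.
    CONCEPTS = {
        "tomato": ["red", "round", "edible"],
        "round": ["curve", "closed"],
    }
    PRIMITIVES = {"red", "curve", "closed", "edible"}  # the analysis boundary

    def analyze(concept):
        # Returns sub-concepts, or None when analysis bottoms out.
        if concept in PRIMITIVES:
            return None    # a "raw quale": real to the system, unanalyzable
        return CONCEPTS.get(concept)

    print(analyze("tomato"))  # ['red', 'round', 'edible']
    print(analyze("red"))     # None: no parts are visible to the system

On this reading, "red" is real and usable to the system, but analyze() can
say nothing further about it.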

But, I don't see how the hypothesis

Conscious experience is **identified with** unanalyzable mind-atoms

could be distinguished empirically from

Conscious experience is **correlated with** unanalyzable mind-atoms

I think finding cognitive correlates of consciousness is interesting, but I
don't think it constitutes solving the hard problem in Chalmers' sense...

I grok that you're saying consciousness feels inexplicable because it has
to do with atoms that the system can't explain, due to their role as its
primitive atoms ... and this is a good idea, but, I don't see how it
bridges the gap btw subjective experience and empirical data ...

What it does is explain why, even if there *were* no hard problem, cognitive
systems might feel like there is one, in regard to their unanalyzable atoms

Another worry I have is: I feel like I can be conscious of my son, even
though he is not an unanalyzable atom.  I feel like I can be conscious of
the unique impression he makes ... in the same way that I'm conscious of
redness ... and, yeah, I feel like I can't fully explain the conscious
impression he makes on me, even though I can explain a lot of things about
him...

So I'm not convinced that atomic sensor input is the only source of raw,
unanalyzable consciousness...

-- Ben G

On Tue, Nov 18, 2008 at 5:14 PM, Richard Loosemore [EMAIL PROTECTED]wrote:

 Harry Chesley wrote:

 Richard Loosemore wrote:

 Harry Chesley wrote:

 Richard Loosemore wrote:

 I completed the first draft of a technical paper on consciousness
 the other day.   It is intended for the AGI-09 conference, and it
 can be found at:


 http://susaro.com/wp-content/uploads/2008/11/draft_consciousness_rpwl.pdf

  One other point: Although this is a possible explanation for our
 subjective experience of qualia like red or soft, I don't see
 it explaining pain or happy quite so easily. You can
 hypothesize a sort of mechanism-level explanation of those by
 relegating them to the older or lower parts of the brain (i.e.,
 they're atomic at the conscious level, but have more effects at the
 physiological level (like releasing chemicals into the system)),
 but that doesn't satisfactorily cover the subjective side for me.

 I do have a quick answer to that one.

 Remember that the core of the model is the *scope* of the analysis
 mechanism.  If there is a sharp boundary (as well there might be),
 then this defines the point where the qualia kick in.  Pain receptors
 are fairly easy:  they are primitive signal lines.  Emotions are, I
 believe, caused by clusters of lower brain structures, so the
 interface between lower brain and foreground is the place where
 the foreground sees a limit to the analysis mechanisms.

 More generally, the significance of the foreground is that it sets
 a boundary on how far the analysis mechanisms can reach.

 I am not sure why that would seem less satisfactory as an explanation
 of the subjectivity.  It is a raw feel, and that is the key idea,
 no?


 My problem is if qualia are atomic, with no differentiable details, why
 do some feel different than others -- shouldn't they all be separate
 but equal? Red is relatively neutral, while searing hot is not. Part
 of that is certainly lower brain function, below the level of
 consciousness, but that doesn't explain to me why it feels
 qualitatively different. If it was just something like increased
 activity (franticness) in response to searing hot, then fine, that
 could just be something like adrenaline being pumped into the system,
 but there is a subjective feeling that goes beyond that.


 There is more than one question wrapped up inside this question, I think.

 First:  all qualia feel different, of course.  You seem to be pointing to
 a sense in which pain is more different than most?  But is that
 really a valid idea?

 Does pain have differentiable details?  Well, there are different types
 of pain, but that is to be expected, like different colors. But that is
 a relatively trivial point.  Within one single pain there can be several
 *effects* of that pain, including some strange ones that do not have
 counterparts in the vision-color case.

 For example, suppose that a searing hot pain caused a simultaneous
 triggering of the motivational system, forcing you to suddenly want to do
 something (like pulling your body part away from the pain).  The feeling of
 wanting (wanting to pull away) is a quale of its own, 

Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Mark Waser
Now you are making up new rules to decide that autobliss doesn't feel 
pain. My definition of pain is negative reinforcement in a system that 
learns. There is no other requirement.


I made up no rules.  I merely asked a question.  You are the one who makes a 
definition like this and then says that it is up to people to decide whether 
other humans feel pain or not.  That is hypocritical to an extreme.


I also believe that your definition is a total crock that was developed for 
no purpose other than to support your BS.


You stated that machines can feel pain, and you stated that we don't get 
to decide which ones. So can you precisely define grounding, internal 
feedback and volition (as properties of Turing machines) and prove that 
these criteria are valid?


I stated that *SOME* future machines will be able to feel pain.  I can 
define grounding, internal feedback and volition but feel no need to do so 
as properties of a Turing machine and decline to attempt to prove anything 
to you since you're so full of it that your mother couldn't prove to you 
that you were born.



- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Tuesday, November 18, 2008 6:26 PM
Subject: Definition of pain (was Re: FW: [agi] A paper that actually does 
solve the problem of consciousness--correction)




--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:


Autobliss has no grounding, no internal feedback, and no
volition.  By what definitions does it feel pain?


Now you are making up new rules to decide that autobliss doesn't feel 
pain. My definition of pain is negative reinforcement in a system that 
learns. There is no other requirement.


You stated that machines can feel pain, and you stated that we don't get 
to decide which ones. So can you precisely define grounding, internal 
feedback and volition (as properties of Turing machines) and prove that 
these criteria are valid?


And just to avoid confusion, my question has nothing to do with ethics.

-- Matt Mahoney, [EMAIL PROTECTED]











Re: Definition of pain (was Re: FW: [agi] A paper that actually does solve the problem of consciousness--correction)

2008-11-18 Thread Mark Waser
 I am just trying to point out the contradictions in Mark's sweeping 
 generalizations about the treatment of intelligent machines

Huh?  That's what you're trying to do?  Normally people do that by pointing to 
two different statements and arguing that they contradict each other.  Not by 
creating new, really silly definitions and then trying to posit a universe 
where blue equals red so everybody is confused.

 But to be fair, such criticism is unwarranted. 

So exactly why are you persisting?

 Ethical beliefs are emotional, not rational,

Ethical beliefs are subconscious and deliberately obscured from the conscious 
mind so that defections can be explained away without triggering other 
primate's lie-detecting senses.  However, contrary to your antiquated beliefs, 
they are *purely* a survival trait with a very solid grounding.

 Ethical beliefs are also algorithmically complex

Absolutely not.  Ethical beliefs are actually pretty darn simple as far as the 
subconscious is concerned.  It's only when the conscious rational mind gets 
involved that ethics are twisted beyond recognition (just like all your 
arguments).

 so this argument could only result in increasingly complex 
 rules to fit his model

Again, absolutely not.  You have no clue as to what my argument is yet you 
fantasize that you can predict its results.  BAH!

 For the record, I do have ethical beliefs like most other people

Yet you persist in arguing otherwise.  *Most* people would call that dishonest, 
deceitful, and time-wasting. 

 The question is not how should we interact with machines, but how will we? 

No, it isn't.  Study the results on ethical behavior when people are convinced 
that they don't have free will.

= = = = = 

BAH!  I should have quit answering you long ago.  No more.


  - Original Message - 
  From: Matt Mahoney 
  To: agi@v2.listbox.com 
  Sent: Tuesday, November 18, 2008 7:58 PM
  Subject: Re: Definition of pain (was Re: FW: [agi] A paper that actually does 
solve the problem of consciousness--correction)


    Just to clarify, I'm not really interested in whether machines feel 
pain. I am just trying to point out the contradictions in Mark's sweeping 
generalizations about the treatment of intelligent machines. But to be fair, 
such criticism is unwarranted. Mark is arguing about ethics. Everyone has 
ethical beliefs. Ethical beliefs are emotional, not rational, although we often 
forget this. Ethical beliefs are also algorithmically complex, so this 
argument could only result in increasingly complex rules to fit his model. 
It would be unfair to bore the rest of this list with such a discussion.

For the record, I do have ethical beliefs like most other people, but 
they are irrelevant to the design of AGI. The question is not how should we 
interact with machines, but how will we? For example, when we develop the 
technology to simulate human minds in general, or to simulate specific humans 
who have died, common ethical models among humans will probably result in the 
granting of legal and property rights to these simulations. Since these 
simulations could reproduce, evolve, and acquire computing resources much 
faster than humans, the likely result will be human extinction, or viewed 
another way, our evolution into a non-DNA based life form. I won't offer an 
opinion on whether this is desirable or not, because my opinion would be based 
on my ethical beliefs.

-- Matt Mahoney, [EMAIL PROTECTED]

--- On Tue, 11/18/08, Ben Goertzel [EMAIL PROTECTED] wrote:

  From: Ben Goertzel [EMAIL PROTECTED]
  Subject: Re: Definition of pain (was Re: FW: [agi] A paper that 
actually does solve the problem of consciousness--correction)
  To: agi@v2.listbox.com
  Date: Tuesday, November 18, 2008, 6:29 PM





  On Tue, Nov 18, 2008 at 6:26 PM, Matt Mahoney [EMAIL PROTECTED] 
wrote:

--- On Tue, 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:

 Autobliss has no grounding, no internal feedback, and no
 volition.  By what definitions does it feel pain?

Now you are making up new rules to decide that autobliss doesn't 
feel pain. My definition of pain is negative reinforcement in a system that 
learns. There is no other requirement.

You stated that machines can feel pain, and you stated that we 
don't get to decide which ones. So can you precisely define grounding, internal 
feedback and volition (as properties of Turing machines) 

  Clearly, this can be done, and has largely been done already ... 
though cutting and pasting or summarizing the relevant literature in emails 
would not be a productive use of time
   
and prove that these criteria are valid?


  That is a different issue, as it depends on the criteria of validity, 
of course...

  I think one can argue that these properties are necessary for a 

Re: **SPAM** Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Mark Waser
 Seed AI is a myth.

Ah.  Now I get it.  You are on this list solely to try to slow down progress as 
much as possible . . . . (sorry that I've been so slow to realize this)

add-rule kill-file Matt Mahoney


Re: **SPAM** Re: [agi] My prospective plan to neutralize AGI and other dangerous technologies...

2008-11-18 Thread Steve Richfield
Matt and Mark,

I think you both missed my point, though in different ways: there is a LOT
of traffic here on this forum over a problem that appears easy to resolve
once and for all time, and further, the solution may work for much more
important worldwide social problems.

Continuing with responses to specific points...

On 11/18/08, Mark Waser [EMAIL PROTECTED] wrote:

   Seed AI is a myth.
 Ah.  Now I get it.  You are on this list solely to try to slow down
 progress as much as possible . . . . (sorry that I've been so slow to
 realize this)


No. Like you, we are all trying to put this OT issue out of our misery. I do
appreciate Matt's efforts, misguided though they may be.

Continuing with Matt's comments...

  *From:* Matt Mahoney [EMAIL PROTECTED]
 *To:* agi@v2.listbox.com
 *Sent:* Tuesday, November 18, 2008 8:23 PM
 *Subject:* **SPAM** Re: [agi] My prospective plan to neutralize AGI and
 other dangerous technologies...


   Steve, what is the purpose of your political litmus test?



 I had no intention at all of imposing any sort of political test, beyond
simply looking for some assurance that they weren't about to use the
technology to kill anyone who wasn't in desperate need of being killed.

   If you are trying to assemble a team of seed-AI programmers with the
 correct ethics, forget it. Seed AI is a myth.



 I agree, though my reasoning may be a bit different than yours. Why would
any thinking machine ever want to produce a better thinking machine?
Besides, I can take bright but long-term low-temp people like Loosemore, who
appears to be an absolutely perfect candidate, and make them super-human
intelligent by simply removing the impairment that they have learned to live
with. In Loosemore's case, this is probably the equivalent of several
alcoholic drinks, yet he is pretty bright even with that impairment. I would
ask you to imagine what he would be without that impairment, but it may
well be beyond anyone here's ability to imagine, and well on the way to a
seed, though I suspect that, with much more intelligence than he already
has, he would question that goal.

Thanks everyone for your comments.

Steve Richfield