And, no one mentions the HAL 9000???




------- Original Message -------
On Monday, April 10th, 2023 at 10:12 AM, Bob Bridges <robhbrid...@gmail.com> wrote:


> Getting into philosophy, but why not?
> 
> Shmuel> Wouldn't that depend on the programming and training?
> 
> Me> Sure. Has anyone programmed self-preservation into any of the current
> AIs? I suspect no one's thought of such a thing yet. (And maybe anyone who
> has thought of it has thought better of it.)
> 
> The assumption that AIs want to preserve themselves is probably inseparable
> from the assumption that AIs are self-aware*, and I suppose it's that
> assumption that I'm questioning. I seriously doubt we ~can~ create
> self-awareness, but that's debatable because we don't really know how to
> define what self-awareness is. Assuming for the sake of argument that we
> can, how would we determine whether an AI has it? We call that the Turing
> test, but as far as I know we don't have one.
> 
> (Stop me if I've told this one already: Decades ago I attended a software
> conference in Anaheim. My best friend from high school lives in that area,
> and when he heard that the guest speaker at the wrap-up banquet was to be
> Gene Roddenberry, he shelled out $50 to attend the banquet himself. Gene
> Roddenberry didn't show, pleading exhaustion, but the man who came in his
> place was an entertaining speaker and I remember thoroughly enjoying his
> talk.
> 
> (In that decade it was fashionable to talk knowledgeably about the Turing
> test. Partway through his presentation he mentioned it, and added "...and
> by the way no one should be allowed to talk about the Turing test if they
> can't pass it themselves". Terry and I burst into loud laughter - and
> quickly stifled ourselves as we realized the rest of the room was silent.
> The speaker paused, and then said "Well, I guess now we know who knows what
> the Turing test is." Of course we had to laugh again, but more respectably
> this time.)
> 
> * I do not mean that the two ~propositions~ are inseparable. I'm just
> thinking that anyone who ~assumes~ that AIs feel the need to preserve
> themselves is assuming that they're self-aware.
> 
> Shmuel> Some people lack an impulse to preserve themselves. Consider
> reckless behavior and suicide attempts.
> 
> Me> I consider it, but neither one contradicts the assertion. Even those
> folks have a strong impulse to live.
> 
> ---
> Bob Bridges, robhbrid...@gmail.com, cell 336 382-7313
> 
> /* A man who has lived in many places is not likely to be deceived by the
> local errors of his native village; the scholar has lived in many times and
> is therefore in some degree immune from the great cataract of nonsense that
> pours from the press and the microphone of his own age. -C S Lewis, "The
> Weight of Glory" */
> 
> ________________________________________
> From: Bob Bridges [robhbrid...@gmail.com]
> Sent: Sunday, April 9, 2023 7:19 PM
> 
> Yeah, I realize I didn't define anything. But in this case I'm really just
> saying that we have no idea whether an AI can have an impulse to preserve
> itself. We observe that impulse in every form of life, but it's well to
> keep in mind that an AI isn't of that sort. It may have that impulse, but
> so far that's just an assumption, no?
> 

----------------------------------------------------------------------
For IBM-MAIN subscribe / signoff / archive access instructions,
send email to lists...@listserv.ua.edu with the message: INFO IBM-MAIN
