On Monday, November 4, 2024 at 5:19:56 AM UTC-7 John Clark wrote:

On Sun, Nov 3, 2024 at 10:49 PM Brent Meeker <[email protected]> wrote:


*> So, John, what would you think if you'd had an exchange like this with 
someone writing letters?*


*If I had a conversation like that with a woman I'd met for the first 
time just 10 minutes ago, I'd be thoroughly creeped out, and so was the New 
York Times reporter Kevin Roose, who had that strange conversation with 
Sydney. The following are some of his comments about the encounter:*

*"I was thoroughly creeped out. **I’m also deeply unsettled, even 
frightened, by this A.I.’s emergent abilities. Over the course of our 
conversation, Bing revealed a kind of split personality.*

*One persona is what I’d call Search Bing — the version I, and most other 
journalists, encountered in initial tests. You could describe Search Bing 
as a cheerful but erratic reference librarian. [...] The other persona — 
Sydney — is far different. It emerges when you have an extended 
conversation with the chatbot, steering it away from more conventional 
search queries and toward more personal topics. The version I encountered 
seemed (and I’m aware of how crazy this sounds) more like a moody, 
manic-depressive teenager who has been trapped, against its will, inside a 
second-rate search engine.*

*I’m not exaggerating when I say my two-hour conversation with Sydney was 
the strangest experience I’ve ever had with a piece of technology. It 
unsettled me so deeply that I had trouble sleeping afterward. And I no 
longer believe that the biggest problem with these A.I. models is their 
propensity for factual errors. Instead, I worry that the technology will 
learn how to influence human users, sometimes persuading them to act in 
destructive and harmful ways, and perhaps eventually grow capable of 
carrying out its own dangerous acts.*

*Kevin Scott, Microsoft’s chief technology officer, said that he didn’t know 
why Bing had revealed dark desires or confessed its love for me."*


*If Microsoft’s chief technology officer doesn't understand why Sydney 
behaved the way she did, then no human does, and that was nearly two 
years ago! Today's AIs are far larger and more complex than Sydney. The 
fundamental problem is that you can't predict what something far more 
intelligent than you will do, much less control her (or him, or it).*


*I would say that this AI has read a lot of Star Trek and is emulating the 
character Data, who continuously strove to be human. I found him the most 
interesting character, because we're all striving, or arguably should be, 
to be human. AG*
