Re: [FRIAM] “Cannot connect to DNS server.”

2023-05-17 Thread Sarbajit Roy
Try changing your computer's default DNS to Google DNS (8.8.8.8 / 8.8.4.4)
or OpenDNS. Wait 5 mins and then try to "ping" 8.8.8.8

If the problem persists, then it's an issue specific to your computer /
router connection.
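
If it helps to separate the two failure modes (raw connectivity vs. name resolution), here is a minimal Python 3 sketch of the same test, using only the standard library. The TCP connection to port 53 is just a rough stand-in for ping, and the test hostname is an arbitrary illustrative choice, not anything from Nick's setup.

import socket

def can_reach(ip="8.8.8.8", port=53, timeout=3):
    """Rough stand-in for ping: try opening a TCP connection to Google DNS."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def can_resolve(name="example.com"):
    """Ask whatever resolver the OS is configured with to look up a name."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

if __name__ == "__main__":
    print("reach 8.8.8.8 :", can_reach())
    print("resolve name  :", can_resolve())
    # reach True, resolve False -> DNS/resolver settings (or a firewall blocking DNS)
    # reach False               -> computer/router connectivity problem

If the connection to 8.8.8.8 succeeds but the lookup fails, the DNS settings (or a firewall blocking DNS) are the likely culprit; if even the raw connection fails, it's the computer/router link.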

On Thu, May 18, 2023 at 4:01 AM Nicholas Thompson wrote:

>
> Any thoughts? My wife’s Mac and my cell phone are both able to connect to
> the Internet. My computer is able to connect to the modem. I have run the
> Microsoft troubleshooting protocol three times without success. The
> troubleshooter suggests it might be a firewall problem. I have McAfee.
> Sent from my Dumb Phone
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


[FRIAM] “Cannot connect to DNS server.”

2023-05-17 Thread Nicholas Thompson

Any thoughts? My wife’s Mac and my cell phone are both able to connect to the 
Internet. My computer is able to connect to the modem. I have run the Microsoft 
troubleshooting protocol three times without success. The troubleshooter suggests 
it might be a firewall problem. I have McAfee.
Sent from my Dumb Phone


Re: [FRIAM] Umwelten Was: Bard and Don Quixote

2023-05-17 Thread Steve Smith



> Nice shout out to Cliff. I haven't talked to him in years (decades?).


I was on a proposal team with him about 8 years ago and he pulled out 
silently near the end... I never quite resolved why, but the other 
members of the team who were in closer contact at the time did not 
report any problem there... just circumstance?   I tell myself I should 
follow up with him but I never do.   My first co-publication with him 
was in 1998(?) on Symbiotic Intelligence, the most recent (2004?) was on 
visualizing the Gene Ontology, and I can't even remember the topic of 
the project in 2014(ish), but it was in collaboration with NREL...


Marco Rodriguez, who was Johan Bollen's student (who in turn was 
Heylighen's student), still lives in the Pojoaque Valley.   Marco got 
thrown off all his social media accounts early during COVID, I think.   I 
used to run into him at Pojo Market or the Transfer Station, but that has 
been a decade as well.


If I were less Narrative and more Episodic I might only remember the 
fact of the events, but not the order of them?


I am always curious about the episodic mode, but in my own life it only 
reminds me of the ultimate Alzheimer's/Dementia I expect to fade me out 
of this world...  but that is very subjective I suspect... there are 
probably episodic aspects of my "self", which I forget or am unaware of, 
that I would consider more a feature than a bug.


In the fullness of time (and space?)





On May 17, 2023 12:52:44 PM PDT, Steve Smith  wrote:
>In followup to the aphorisms related to "Life which wills to live" and 
>"I am who you think I think I am"...
>
>In a complementary tangent, as we (somebody) begins to wire up the IoT 
>to Stable Diffusion models we will be perhaps actualizing the neocortex 
>of a "Global Brain" in the Francis Heylighen/Cliff Joslyn sense following 
>an architecture not unlike Jeff Hawkins' 1000 Brains? But if I factor in 
>Ed Yong's perspective I think we need to boost up the standard kit 
>(weather stations, security cameras, humidity/ph garden sensors, ???) in 
>the IoT sensors to include a much broader Umwelt?
>



Re: [FRIAM] Bard and Don Quixote

2023-05-17 Thread Steve Smith


On 5/17/23 10:47 AM, Marcus Daniels wrote:


There are many idiosyncrasies of me that I would just as soon not 
exist.  The new avatar could be the aspirational me!


It seems like it might be good, if we are going post-human, to polyp off 
aspirational digital "clones" which can then try to survive in a digital 
clone-ecosystem...  maybe this could be a way humanity could make a 
large (sociocultural) evolutionary lurch forward before we render it in 
our genome?




*From:* Friam  *On Behalf Of *Prof David West
*Sent:* Wednesday, May 17, 2023 8:59 AM
*To:* friam@redfish.com
*Subject:* Re: [FRIAM] Bard and Don Quixote

My sympathies would be with your friend—until such time as a "clone 
exactly like her ... behavior, words, or even existence..." was 
demonstrated.


"Exactly" is a big word! and I would add "completely."

Even on a single dimension, say use of language, the standard of exact 
and complete is hard to satisfy.


I have no problem believing that a chat-bot could write an academic 
paper or either of my books; put together, and deliver in my voice, a 
lecture; play bar-trivia at the pub; or carry on a convincing 
conversation. I have no doubt that, in the very near future, the same 
bot might be able to project a video that included mannerisms and 
simulation of the way I pace around a classroom.


But exactitude would require, not only, all the things I do do, and 
the idiosyncrasies in the way that I do them, but also the 
idiosyncrasies of my inabilities: I can never get the crossword clues 
involving popular culture, for example.


If a clone is built that "walks like a duck and quacks like a duck" 
but does not migrate or lay eggs; is it really a duck?


I would concede the equivalence issue of means or mechanisms behind 
the observable; e.g., it does not matter if the observed behavior 
results from electrons in gold wires or electrons in dendrites. But I 
would at least raise the question as to whether, in specific 
instances, a 'subjective' behind the behavior is or is not critical.


For example, and forgive the personal, you have mentioned being in 
pain all of your life. Would it be necessary for a bot to "feel pain" 
as you have in order to "act exactly like you?" Or is there an 
"algorithmic equivalent" possible for the bot to utilize in order to 
obtain unerring verisimilitude?


Then there is the whole question of experience in general. Would 
*I* really be *me*, sans the LSD trips over the years? If not, 
then how will the bot "calculate" for itself, identical or at least 
highly similar, experience equivalents.


Even if, in principle, it were possible to devise algorithms and 
programs that did result in behavior that mimicked Dave at every stage 
of its existence, will those algorithms be invented and programs 
written before the heat death of the universe? You cannot attempt to 
finesse this quest by invoking "self-learning" because then you need a 
training set that is at least as extensive as the 75 year training set 
that the mechanism you would have me be, has utilized to become me.


I might agree that, in principle, "A bot that acts indistinguishably 
from how you act *is* you," but I think the implication of the word 
"indistinguishably" is a bar that will never be attained.


davew

On Tue, May 16, 2023, at 6:46 PM, glen wrote:

> That's a great point. To be honest, anyone who is accurately mimicked by
> a bot should be just fine with that mimicry, leveraging the word
> "accurate", of course. I mean, isn't that a sci-fi plot? Your bot
> responds to things so that you don't have to.
>
> A friend of mine recently objected that "algorithms" are "reductive". I
> tried to argue that algorithms (in the modern sense of The Algorithm)
> can be either reductive or expansive (e.g. combinatorial explosion). But
> she was having none of it. I think her position boiled down to the idea
> that humans are complex, multi-faceted, deep creatures. And taking 1 or
> few measurements and then claiming that represents them in some space
> reduces the whole human to a low-dim vector.
>
> So, for her, I can imagine even if she were cloned and her clone acted
> exactly like her, she would never accept that clone's behavior, words,
> or even existence as actually *being* her. There's some sense of agency
> or an inner world, or whatever, that accuracy becomes moot. It's the
> qualia that matter, the subjective sense of free will ... metaphysical
> nonsense.
>
> A bot that acts indistinguishably from how you act *is* you. I guess I'm
> dangerously close to claiming that GPT-4 and Bard actually are
> sentient/conscious. *8^O
>
> On 5/16/23 11:50, Marcus Daniels wrote:
>> I don’t really get it.  Trump can go on a TV town hall and lie, and
>> those folks just lap it up.   Sue a company for learning some fancy
>> patterns?  Really?  If someone made a generative model of, say, Glen’s
>> visual appearance and vocal mannerisms and gave him a shtick that 

Re: [FRIAM] Umwelten Was: Bard and Don Quixote

2023-05-17 Thread glen
Nice shout out to Cliff. I haven't talked to him in years (decades?).

On May 17, 2023 12:52:44 PM PDT, Steve Smith  wrote:
>In followup to the aphorisms related to "Life which wills to live" and "I am 
>who you think I think I am"...
>
>In a complementary tangent, as we (somebody) begins to wire up the IoT to 
>Stable Diffusion models we will be perhaps actualizing the neocortex of a 
>"Global Brain" 
> in 
>the Francis Heylighen/Cliff Joslyn sense following an architecture not unlike 
>Jeff Hawkin's 1000 Brain 
>s? 
>  But if I factor in Ed Yong's perspective 
> 
>I think we need to boost up the standard kit (weather stations, security 
>cameras, humidity/ph garden sensors, ???) in the IoT sensors to include a much 
>broader Umwelt?
>
>


[FRIAM] Umwelten Was: Bard and Don Quixote

2023-05-17 Thread Steve Smith
In followup to the aphorisms related to "Life which wills to live" and 
"I am who you think I think I am"...


I have recently been reading Ed Yong's book "An Immense World" which is 
nominally about the animal kingdom's extremely wide and varied Umwelt, 
and I am (temporarily) 
attuned/focused on an awareness that even among individuals of the same 
species/culture, what our perceptual system/sensorium takes in can vary 
quite a bit.   And beyond our sensorium, our nutritive and metabolic 
self is coupled with the world we live in.


I live in a house with a woman (Mary) of my own age who was raised in a 
somewhat similar socioeconomic/political context as I was, a dog and a 
cat, a handful of mice that come and go with the level of attention of 
the woman and the cat (moreso than the man or the dog) and a very small 
flux of spiders, silverfish, houseflies, gnats, and a big container of 
red-worms making vermicompost out of kitchen waste for me.


We also maintain a bird feeder just outside the picture window in our 
living room (which I refer to as BirdTV) which hosts quite a rotating 
cast of seasonal guests.


Aside from a modest charm of hummingbirds, most recently we started 
seeing a few (mating?) pairs of orioles and tanagers, which led us to 
begin putting out sliced oranges which seem to immensely please them.   
Our regular offerings of suet, sunflower seeds and raw peanuts (Jays) are 
tapering off, but they as well as two mating pairs of doves still come 
around to gather up odd spill from the other feeding, or maybe they just 
come for the company, the shade or ???.


On the opposite side of the house we have a small artificial pond (3 
total, cascading) hosting water irises and some reeds borrowed from the 
Rio Grande, and 4 goldfish in their 4th year...   Many of the birds that 
frequent the feeding area visit the pond, as do many we do not see 
otherwise, as well as some less frequently noticed creatures (a few 
snakes, an occasional rabbit, a Raven recently, and apparently one or 
more raccoons, who apparently fish out most of the goldfish when we 
restock every few years to keep the mosquito larvae down)...


Every damn one of those (individual as well as species or niche) 
creatures has a different level of interest/awareness/concern in the 
myriad activities of each of the others (not to mention the wind in the 
tree branches, the sound of the tin roof on the library banging, the 
traffic on the highway nearby, etc.), and in fact they have "become" rather 
different from their peers through those experiences (the dog and cat 
are hardly moved by the activity of the hummingbirds and songbirds 
inches from the window, but those doves deserve a charge and a bark).  
The Ravens that hatched out in a huge cottonwood behind the house a few 
years ago are still quite present but are nowhere near as human-tolerant 
as the ones who live at the dumpsters behind the Sonic in Los Alamos.

"A" point here is that even though we humans set up well-defined 
boundary conditions to our "selves" we are still very effected by 
forcing functions at those "boundaries"  (at this point Glen might 
remind me that what I consider a boundary of self is at best fuzzy and I 
would agree).   Most folks here probably allow themselves no more than a 
cat or a dog to impinge on their world very often but even with best 
intentions, the flies, mosquitos, etc.   show up and the other creatures 
remain albeit at a further distance than I have cultivated for myself.


Maybe Elon Musk will succeed in doing what Biosphere II failed at, which 
is specifying and transporting a large and complex enough biome to 
support the basic organism that is /Homo sapiens/.   Maybe those 
here who have a subscription to Soylent or Huel can live healthily with 
nothing more than their existing microbiome and the (vegan?) sources of 
nutrients they represent (peas, soybeans +++ ?).  I understand neither 
company recommends trying to survive without any other ("real") 
food...   Matt Damon (The Martian) managed to make it home on "poop 
potatoes" but I suspect that was not a long-term viable strategy, and I'm 
not even sure if Bruce Dern's (Silent Running) EcoArcs would have held 
enough diversity alone?


On the other hand (for the technoutopians here) maybe we *can* play 
whack-a-mole with enough genes to boost our core phenotype's complexity 
enough to not be as (symbiotically) dependent on the larger biome that 
we evolved in?


In a complementary tangent, as we (somebody) begins to wire up the IoT 
to Stable Diffusion models we will be perhaps actualizing the neocortex 
of a "Global Brain" 
 
in the Francis Heylighen/Cliff Joslyn sense following an architecture 
not unlike Jeff Hawkin's 1000 Brain 
s?   
But if I 

Re: [FRIAM] Bard and Don Quixote

2023-05-17 Thread Steve Smith

DaveW -

As you might guess, my sympathies are fairly aligned with your point.

A few aphorisms:

1. If you can't tell the difference, it doesn't matter
2. I am who you think I think I am
3. I am life which wills to live amongst life which wills to live

The first is offered in the spirit that my clone, my proxy, a simulacrum 
of me, is close enough to being me that for many *first-order* purposes 
it is "indistinguishable" from me, and whoever interacts with/responds 
to/engages that stand-in will come away with the same experience they 
would have had, had it actually been me.  Of course, since *I* will not 
have had the interaction, my inner state will not be the same, and future 
(first or Nth order) interactions will not be the same, and the original 
folks who interacted with my stand-in will be working on a 
misapprehension about who I am going forward...


The second ties in directly, in that I am no longer who you think I 
think I am once I have (not) had that interaction which then gets 
compounded if I do(n't) engage directly in subsequent interactions.  And 
you are no longer who I think you think you are because you had an 
interaction with "me" that in fact never happened?


The third references the idea that we (humans but also all living 
organisms) are not just atomic objects which "am what we am" but rather 
the effective "standing waves" that are set up amongst the myriad other 
life forms we engage with (most notably, but not exclusively, our 
microbiome, our food sources, and any predators we might have).


In the current context of AI "mimicing" the work of specific or 
archetypical individuals I think all three aphorisms have relevance 
despite the fact that to "first order" those interacting with my 
stand-in might well have an indistinguishable experience...


Perhaps the next step (already under deployment) for AI is a literal 
"agent" which not only acts in my stead but also provides 
(filtered/digested) feedback to me such that up to some delta, those who 
I interact with (through my agent) cannot tell the difference and my own 
inner-state evolves somewhat parallel to what it might have had I not 
had the agent between me and the world.  Of course, as with a 
talent/professional agent, there may be more value added than simply 
noise-reduction.


Today I could, I suppose, quit reading FriAM messages and instead 
cut-paste the messages into GPT-4, asking for a summary of some sort and 
maybe a draft of a suitably pith(ss)y response that I might then cut-paste 
with/without editing.   Of course, more context (all ~20 years of 
the archives?) would be useful to improve the quality/fidelity of those 
summaries/analyses and responses in my "Voice".
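
For concreteness, a hypothetical sketch of that workflow, assuming the 
OpenAI Python client (openai >= 1.0) and an API key in the environment; the 
model name, prompt wording, and helper function are placeholders I made up, 
not anyone's actual setup or a recommendation:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_and_draft(thread_text: str, voice_samples: str) -> str:
    """Summarize a batch of list messages and draft a reply in a given style."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Summarize the mailing-list thread, then draft a short "
                        "reply that imitates the style of the samples provided."},
            {"role": "user",
             "content": f"Thread:\n{thread_text}\n\nStyle samples:\n{voice_samples}"},
        ],
    )
    return response.choices[0].message.content

Of course, all ~20 years of archives would blow past any single context 
window, so the "voice samples" would in practice have to be a curated subset 
(or a fine-tuned model).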


Mumble,

 - Steve

On 5/17/23 9:58 AM, Prof David West wrote:
My sympathies would be with your friend—until such time as a "clone 
exactly like her ... behavior, words, or even existence..." was 
demonstrated.


"Exactly" is a big word! and I would add "completely."

Even on a single dimension, say use of language, the standard of exact 
and complete is hard to satisfy.
I have no problem believing that a chat-bot could write an academic 
paper or either of my books; put together, and deliver in my voice, a 
lecture; play bar-trivia at the pub; or carry on a convincing 
conversation. I have no doubt that, in the very near future, the same 
bot might be able to project a video that included mannerisms and 
simulation of the way I pace around a classroom.


But exactitude would require, not only, all the things I do do, and 
the idiosyncrasies in the way that I do them, but also the 
idiosyncrasies of my inabilities: I can never get the crossword clues 
involving popular culture, for example.


If a clone is built that "walks like a duck and quacks like a duck" 
but does not migrate or lay eggs; is it really a duck?


I would concede the equivalence issue of means or mechanisms behind 
the observable; e.g., it does not matter if the observed behavior 
results from electrons in gold wires or electrons in dendrites. But I 
would at least raise the question as to whether, in specific 
instances, a 'subjective' behind the behavior is or is not critical.


For example, and forgive the personal, you have mentioned being in 
pain all of your life. Would it be necessary for a bot to "feel pain" 
as you have in order to "act exactly like you?" Or is there an 
"algorithmic equivalent" possible for the bot to utilize in order to 
obtain unerring verisimilitude?


Then there is the whole question of experience in general. Would 
*I* really be *me*, sans the LSD trips over the years? If not, 
then how will the bot "calculate" for itself, identical or at least 
highly similar, experience equivalents.


Even if, in principle, it were possible to devise algorithms and 
programs that did result in behavior that mimicked Dave at every stage 
of its existence, will those algorithms be invented and programs 
written before the heat death of the 

Re: [FRIAM] Bard and Don Quixote

2023-05-17 Thread glen

Yes, I tried to admit up front that "accurate", "exact", "indistinguishable", etc. are 
fraught. But the more interesting question is about subjectivity, pain, consciousness, reduction, training time, etc. 
One thing we often forget is the relationship between sequential and parallel processes. The idea that a 
person/organism is trained "for 75 years" relies on something sequential ... IDK what, but something. I can't 
help but go back to Against Narrativity. I feel very episodic. And my identity isn't tied up in things like birthdays 
or remembering exactly when, say, I first drove a car or went on my first date or whatever. I don't remember lots of 
these things ... even skills I had once (nearly) mastered like writing C code without looking at a manual or using an 
IDE, are almost completely gone.

So, in that context, it seems perfectly reasonable that a bot, which relies on 
parallelism a LOT, could be trained up to act like me, now (i.e. within a band of ± 5 
years). Maybe it takes 5 years to do it if the training is more sequential ... maybe some 
of that sequentiality can be parallelized so that it happens faster? IDK. And your 
mileage may vary. Many of you are narrative people, who "identify" as this or 
that thing and have identified that way for decades. (Of course, I doubt you actually 
*are* narrative... you just consistently trick yourself into thinking you are ... and 
society reinforces that narrative. But that's a thoroughly unjustified conjecture on my 
part. I'm sure there's plenty of variation.)

For subjectivity, all that's required, I think, is self-attention, which these bots have, 
if only in a primitive form. So my answer to you is, "yes", a bot that mimics 
me could only do it well if it experienced chronic spine pain, sporadic headaches, etc. 
Of course, whether the quality of the bot's self-attention is similar to the quality of 
my self-attention is an unanswerable, perhaps even nonsense, question. But it would have 
to have self-attention.

But none of this seems reductive to me. Youtube literally cannot reduce me. My 
doctor cannot reduce me to my electronic health record. Etc. Such measures are 
focused aspects, not reductions. My friend doesn't have much of a footprint on 
the internet. So, because all she has is a hammer, everything looks like a 
nail. But my footprint is pretty large. I have content on substack, wordpress, 
usenet, mailing lists, yaddayadda as well as youtube, spotify, twitch, etc. And 
that's over and above things like my EHR(s), bank accounts, credit cards, 
incorporation records, IRS filings, etc. which I insist on accessing over the 
internet. There's plenty of data *there*. Getting at it all so that the bot 
could be trained might be persnickety. But it could be done if someone were 
sufficiently wealthy and motivated. And if they did that, I wouldn't be 
offended. I'd be awestruck. (I still might have to sue them, of course, in the 
interests of my family, friends, and colleagues.)

On 5/17/23 08:58, Prof David West wrote:

My sympathies would be with your friend—until such time as a "clone exactly like 
her ... behavior, words, or even existence..." was demonstrated.

"Exactly" is a big word! and I would add "completely."

Even on a single dimension, say use of language, the standard of exact and 
complete is hard to satisfy.
I have no problem believing that a chat-bot could write an academic paper or 
either of my books; put together, and deliver in my voice, a lecture; play 
bar-trivia at the pub; or carry on a convincing conversation. I have no doubt 
that, in the very near future, the same bot might be able to project a video 
that included mannerisms and simulation of the way I pace around a classroom.

But exactitude would require, not only, all the things I do do, and the 
idiosyncrasies in the way that I do them, but also the idiosyncrasies of my 
inabilities: I can never get the crossword clues involving popular culture, for 
example.

If a clone is built that "walks like a duck and quacks like a duck" but does 
not migrate or lay eggs; is it really a duck?

I would concede the equivalence issue of means or mechanisms behind the 
observable; e.g., it does not matter if the observed behavior results from 
electrons in gold wires or electrons in dendrites. But I would at least raise 
the question as to whether, in specific instances, a 'subjective' behind the 
behavior is or is not critical.

For example, and forgive the personal, you have mentioned being in pain all of your life. Would it be 
necessary for a bot to "feel pain" as you have in order to "act exactly like you?" Or is 
there an "algorithmic equivalent" possible for the bot to utilize in order to obtain unerring 
verisimilitude?

Then there is the whole question of experience in general. Would *I* really be 
*me*, sans the LSD trips over the years? If not, then how will the bot 
"calculate" for itself, identical or at least highly similar, experience 

Re: [FRIAM] Bard and Don Quixote

2023-05-17 Thread Marcus Daniels
There are many idiosyncrasies of me that I would just as soon not exist.  The 
new avatar could be the aspirational me!

From: Friam  On Behalf Of Prof David West
Sent: Wednesday, May 17, 2023 8:59 AM
To: friam@redfish.com
Subject: Re: [FRIAM] Bard and Don Quixote

My sympathies would be with your friend—until such time as a "clone exactly 
like her ... behavior, words, or even existence..." was demonstrated.

"Exactly" is a big word! and I would add "completely."

Even on a single dimension, say use of language, the standard of exact and 
complete is hard to satisfy.
I have no problem believing that a chat-bot could write an academic paper or 
either of my books; put together, and deliver in my voice, a lecture; play 
bar-trivia at the pub; or carry on a convincing conversation. I have no doubt 
that, in the very near future, the same bot might be able to project a video 
that included mannerisms and simulation of the way I pace around a classroom.

But exactitude would require, not only, all the things I do do, and the 
idiosyncrasies in the way that I do them, but also the idiosyncrasies of my 
inabilities: I can never get the crossword clues involving popular culture, for 
example.

If a clone is built that "walks like a duck and quacks like a duck" but does 
not migrate or lay eggs; is it really a duck?

I would concede the equivalence issue of means or mechanisms behind the 
observable; e.g., it does not matter if the observed behavior results from 
electrons in gold wires or electrons in dendrites. But I would at least raise 
the question as to whether, in specific instances, a 'subjective' behind the 
behavior is or is not critical.

For example, and forgive the personal, you have mentioned being in pain all of 
your life. Would it be necessary for a bot to "feel pain" as you have in order 
to "act exactly like you?" Or is there an "algorithmic equivalent" possible for 
the bot to utilize in order to obtain unerring verisimilitude?

Then there is the whole question of experience in general. Would I really be 
me, sans the LSD trips over the years? If not, then how will the bot 
"calculate" for itself, identical or at least highly similar, experience 
equivalents.

Even if, in principle, it were possible to devise algorithms and programs that 
did result in behavior that mimicked Dave at every stage of its existence, will 
those algorithms be invented and programs written before the heat death of the 
universe? You cannot attempt to finesse this quest by invoking "self-learning" 
because then you need a training set that is at least as extensive as the 75 
year training set that the mechanism you would have me be, has utilized to 
become me.

I might agree that, in principle, "A bot that acts indistinguishably from how 
you act *is* you," I think the implication of the word "indistinguishably" is a 
bar that will never be attained.

davew



On Tue, May 16, 2023, at 6:46 PM, glen wrote:
> That's a great point. To be honest, anyone who is accurately mimicked by
> a bot should be just fine with that mimicry, leveraging the word
> "accurate", of course. I mean, isn't that a sci-fi plot? Your bot
> responds to things so that you don't have to.
>
> A friend of mine recently objected that "algorithms" are "reductive". I
> tried to argue that algorithms (in the modern sense of The Algorithm)
> can be either reductive or expansive (e.g. combinatorial explosion). But
> she was having none of it. I think her position boiled down to the idea
> that humans are complex, multi-faceted, deep creatures. And taking 1 or
> few measurements and then claiming that represents them in some space
> reduces the whole human to a low-dim vector.
>
> So, for her, I can imagine even if she were cloned and her clone acted
> exactly like her, she would never accept that clone's behavior, words,
> or even existence as actually *being* her. There's some sense of agency
> or an inner world, or whatever, that accuracy becomes moot. It's the
> qualia that matter, the subjective sense of free will ... metaphysical
> nonsense.
>
> A bot that acts indistinguishably from how you act *is* you. I guess I'm
> dangerously close to claiming that GPT-4 and Bard actually are
> sentient/conscious. *8^O
>
> On 5/16/23 11:50, Marcus Daniels wrote:
>> I don’t really get it.  Trump can go on a TV town hall and lie, and
>> those folks just lap it up.   Sue a company for learning some fancy
>> patterns?  Really?  If someone made a generative model of, say, Glen’s
>> visual appearance and vocal mannerisms and gave him a shtick that didn’t
>> match up with his past remarks, I think I’d notice it right away.If
>> a GPT-X could fake Eric Smith, I can safely take the blue pill.Some
>> of our transactions will probably require more cryptographic signing.
>>Fine, they probably should have already.
>>
>> *From:* Friam 
>> *On Behalf Of *Steve Smith
>> *Sent:* Tuesday, May 16, 2023 11:33 AM
>> *To:* 

Re: [FRIAM] Bard and Don Quixote

2023-05-17 Thread Prof David West
My sympathies would be with your friend—until such time as a "clone exactly 
like her ... behavior, words, or even existence..." was demonstrated.

"Exactly" is a big word! and I would add "completely."

Even on a single dimension, say use of language, the standard of exact and 
complete is hard to satisfy.
I have no problem believing that a chat-bot could write an academic paper or 
either of my books; put together, and deliver in my voice, a lecture; play 
bar-trivia at the pub; or carry on a convincing conversation. I have no doubt 
that, in the very near future, the same bot might be able to project a video 
that included mannerisms and simulation of the way I pace around a classroom.

But exactitude would require, not only, all the things I do do, and the 
idiosyncrasies in the way that I do them, but also the idiosyncrasies of my 
inabilities: I can never get the crossword clues involving popular culture, for 
example.

If a clone is built that "walks like a duck and quacks like a duck" but does 
not migrate or lay eggs; is it really a duck?

I would concede the equivalence issue of means or mechanisms behind the 
observable; e.g., it does not matter if the observed behavior results from 
electrons in gold wires or electrons in dendrites. But I would at least raise 
the question as to whether, in specific instances, a 'subjective' behind the 
behavior is or is not critical.

For example, and forgive the personal, you have mentioned being in pain all of 
your life. Would it be necessary for a bot to "feel pain" as you have in order 
to "act exactly like you?" Or is there an "algorithmic equivalent" possible for 
the bot to utilize in order to obtain unerring verisimilitude?

Then there is the whole question of experience in general. Would *I* really 
be *me*, sans the LSD trips over the years? If not, then how will the bot 
"calculate" for itself, identical or at least highly similar, experience 
equivalents.

Even if, in principle, it were possible to devise algorithms and programs that 
did result in behavior that mimicked Dave at every stage of its existence, will 
those algorithms be invented and programs written before the heat death of the 
universe? You cannot attempt to finesse this quest by invoking "self-learning" 
because then you need a training set that is at least as extensive as the 75 
year training set that the mechanism you would have me be, has utilized to 
become me.  

I might agree that, in principle, "A bot that acts indistinguishably from 
how you act *is* you," but I think the implication of the word 
"indistinguishably" is a bar that will never be attained.

davew



On Tue, May 16, 2023, at 6:46 PM, glen wrote:
> That's a great point. To be honest, anyone who is accurately mimicked by 
> a bot should be just fine with that mimicry, leveraging the word 
> "accurate", of course. I mean, isn't that a sci-fi plot? Your bot 
> responds to things so that you don't have to.
>
> A friend of mine recently objected that "algorithms" are "reductive". I 
> tried to argue that algorithms (in the modern sense of The Algorithm) 
> can be either reductive or expansive (e.g. combinatorial explosion). But 
> she was having none of it. I think her position boiled down to the idea 
> that humans are complex, multi-faceted, deep creatures. And taking 1 or 
> few measurements and then claiming that represents them in some space 
> reduces the whole human to a low-dim vector.
>
> So, for her, I can imagine even if she were cloned and her clone acted 
> exactly like her, she would never accept that clone's behavior, words, 
> or even existence as actually *being* her. There's some sense of agency 
> or an inner world, or whatever, that accuracy becomes moot. It's the 
> qualia that matter, the subjective sense of free will ... metaphysical 
> nonsense.
>
> A bot that acts indistinguishably from how you act *is* you. I guess I'm 
> dangerously close to claiming that GPT-4 and Bard actually are 
> sentient/conscious. *8^O
>
> On 5/16/23 11:50, Marcus Daniels wrote:
>> I don’t really get it.  Trump can go on a TV town hall and lie, and 
>> those folks just lap it up.   Sue a company for learning some fancy 
>> patterns?  Really?  If someone made a generative model of, say, Glen’s 
>> visual appearance and vocal mannerisms and gave him a shtick that didn’t 
>> match up with his past remarks, I think I’d notice it right away.If 
>> a GPT-X could fake Eric Smith, I can safely take the blue pill.Some 
>> of our transactions will probably require more cryptographic signing.  
>>Fine, they probably should have already.
>> 
>> *From:* Friam  *On Behalf Of *Steve Smith
>> *Sent:* Tuesday, May 16, 2023 11:33 AM
>> *To:* friam@redfish.com
>> *Subject:* Re: [FRIAM] Bard and Don Quixote
>> 
>> Jochen -
>> 
>> Very interesting framing...  as a followup I took the converse 
>> (inverse?) question To GPT4..
>> 
>> /If we consider an LLM (Large Language Model) as the Sancho Panza to