Re: [FRIAM] more bullsh¡t

2023-01-07 Thread Steve Smith



AI, Teaching, and "Our Willingness to Give Bullshit a Pass"
https://dailynous.com/2023/01/05/ai-teaching-and-our-willingness-to-give-bullshit-a-pass/ 



The first time I heard this argument was from these guys:

https://www.audible.com/pd/Pill-Pod-104-AI-the-New-Crisis-of-Humanities-Education-Podcast/B0BPQ77Z8P 



My phrasing of the idea being that tools like ChatGPT are analogous to 
calculators, allowing the computer to do what it's good at and freeing 
humans up to do what we're good at. Why require students to learn 
bullshit rhetorical styling when we can teach them to think about the 
*substance* ... a lesson many of us learned from Knuth's TeX a long 
time ago. The trick is that tools like ChatGPT are built around the 
bullshit-generation use case. What we need are tools built around the 
bullshit-detection use case.


And does the introduction of GPTZero (and its ilk) represent closing the 
loop in an "antagonistic" pair that hones the BS generator even faster?
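
(For concreteness, a toy sketch of that antagonistic loop, where each 
bullshit call becomes training signal for a slicker generator. The 
generator, the detector score, and the "tells" below are all hypothetical 
placeholders, not how GPTZero actually works.)

```python
# Toy "antagonistic pair": a generator tunes itself against a detector,
# GAN-style, so every bullshit call hones the next round of bullshit.
import random

def generate(temperature: float) -> str:
    """Stand-in for an LLM; higher temperature = more florid filler."""
    fillers = ["clearly", "it is well known that", "studies show", "in essence"]
    n = max(1, int(temperature * len(fillers)))
    return " ".join(random.sample(fillers, n)) + " the conclusion follows."

def detector_score(text: str) -> float:
    """Stand-in for a GPTZero-ish detector: 1.0 = 'certainly bullshit'."""
    tells = ["clearly", "studies show", "in essence"]
    return sum(tell in text for tell in tells) / len(tells)

temperature = 1.0
for step in range(10):
    text = generate(temperature)
    score = detector_score(text)
    if score > 0.5:           # the detector calls bullshit...
        temperature *= 0.8    # ...so the generator tones down its tells
    print(f"step {step}: score={score:.2f}  {text!r}")
```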


With branch prediction, we could implant a little device just under 
the eardrum that listened to someone's speech acts for a tiny 
fraction, predict where it was going, and call bullshit or "pay 
attention" for some interval. The bullshitters' rhetoric would never 
even reach your audio perception devices. ... like trigger warnings 
for all of us sensitive snowflakes who can't bear to look on images of 
Mohammed 
<https://whyevolutionistrue.com/2023/01/05/hamline-university-assailed-for-firing-professor-who-showed-images-of-muhammads-face/>.


With a nod to our resident trans/post-humanist(s), our perceptual 
circuits *already* select for what they have been trained to 
see/hear/smell/taste/feel, registering some things 
more/better/easier/differently than others.  So jacking that up with 
pass-through AR technology is totally "obvious"... and therefore a 
"good idea"?




Those of us who've kissed the Blarney Stone, unfortunately, would 
spend our lives talking to brick walls.


But with things developing as they are, the brick walls would be capable 
of a much more interesting dialogue, perhaps, than people (though given 
many will be post/trans-humans, WTF?).   A wall of (otherwise) bricked 
smart-devices programmed just to be contrarian with your syntax and 
logic constructions?


I think this is probably where we are headed:

https://en.wikipedia.org/wiki/With_Folded_Hands

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] more bullsh¡t

2023-01-06 Thread Pieter Steenekamp
In my current perspective, the integration of automation and artificial
intelligence *in the coming decade* will significantly impact employment
opportunities. However, I also believe that in this future, there will be
an abundance of material resources, allowing individuals to access the
products and services they desire within the limitations of physical laws.
It is impossible to predict the future with certainty, but these are my
current speculations based on my observations.

In terms of education, I advocate for a system that allows for self-guided
learning rather than mandatory teaching. By creating an environment that
supports learning and inviting guest instructors for specialized knowledge,
students can pursue their interests and passions. Even if some students
attempt to abuse the system, the benefits of fostering a love for learning
outweigh any potential harm.

Full disclosure: I haven't kissed the Blarney Stone. I wrote a paragraph,
but it did not sound good, so I asked ChatGPT to rephrase it, and upon
reading it I went: yes, this is exactly my point, and it almost sounds as
if I have kissed the bloody stone. So the above is my exact message, but
rephrased eloquently by ChatGPT.

On Fri, 6 Jan 2023 at 12:19, David Eric Smith  wrote:

> Your use of ChatGPT, Pieter, is to my mind a very interesting thread.
>
> There was a columnist for the New York Times many years ago, named William
> Safire.  I don’t even remember now what he wrote about, but he was known,
> and was significant to me, for being an example of “a good writer”.
>
> Safire wrote something (column?  book?  article?) with the theme that, if
> one would write creatively, one should first do several years galley-rowing
> as an editor somewhere.  People who have an impulse to write “creatively”
> imagine all kinds of innovation in language that will be just dramatic and
> wonderful.  Editors, who have had to deal with those imaginations in the
> writings of others, know that most such ideas are awful and need to be
> beaten out of the writer if he or she is ever to become good.  So Safire’s
> thesis was that you really need to do this, for a number of years and a
> large number of other people’s pieces, to squeeze the nonsense out of you
> and develop a solid understanding of your language.  Kind of like, in books
> on French cooking, the author says “why did we have to spend the first year
> cooking mixed vegetables in mayonnaise over and over again; I don’t even
> like mixed vegetables in mayonnaise.  To which the answer, of course, is
> that one develops what the French term “method”: experience with uniform
> sizing of each ingredient, correct relative sizing across ingredients, time
> of introduction to heat, and on and on, so that one gets control and has
> everything cooked to the intended degree reliably.  Only then has one
> gained the tools to create.
>
> I have run Safire’s thesis by some writers I know to see what happened; my
> notable memories are the ones who hate it and think it is completely wrong.
>
> But to Glen’s point that we should think of literary AI the way we think
> of pocket calculators (another thing I was not allowed to have in school;
> my parents thought it would make me stupid).  ChatGPT can be sort of the
> William Safire level of basic method in language, not intending or intended
> to create anything, but somehow, as you say, to find a kind of solid and
> central expression for things.  One might even think of the appropriate
> training schedule for a tool meant to do just that, which could be a bit
> different from the ad hoc training that is probably first-gen of these
> tools.
>
> Eric
>
> On Jan 6, 2023, at 12:59 AM, Pieter Steenekamp 
> wrote:
>
> As a native of South Africa, I have personally witnessed the shortcomings
> of both our public primary and secondary education systems and the
> financial barriers that prevent many from accessing private schools. In
> response, I have dedicated the past year to establishing a private
> institution that is not only affordable, but also committed to providing a
> high-quality education. In reflecting on what constitutes a truly valuable
> education, I have come to the conclusion that the most essential outcome is
> not the acquisition of academic skills, but rather the development of
> strong relationships - both with oneself and with the outside world. While
> it is not possible to directly teach children how to cultivate such
> relationships, it is possible to create an environment in which they can
> learn and grow through unsupervised interactions with their peers.
>
> Full disclosure: I have not kissed the Blarney Stone and my ability to write
> (or speak for that matter) eloquently is just awful. I've written a
> paragraph and then I asked ChatGPT, who has kissed the Blarney Stone, to
> rephrase it more eloquently. The above paragraph reflects exactly what I
> wish to say, but is just expressed so much better.
>
> On Fri, 6 Jan 2023 at 00:39, glen  wrote:
>

Re: [FRIAM] more bullsh¡t

2023-01-06 Thread David Eric Smith
Your use of ChatGPT, Pieter, is to my mind a very interesting thread.

There was a columnist for the New York Times many years ago, named William 
Safire.  I don’t even remember now what he wrote about, but he was known, and 
was significant to me, for being an example of “a good writer”.

Safire wrote something (column?  book?  article?) with the theme that, if one 
would write creatively, one should first do several years galley-rowing as an 
editor somewhere.  People who have an impulse to write “creatively” imagine all 
kinds of innovation in language that will be just dramatic and wonderful.  
Editors, who have had to deal with those imaginations in the writings of 
others, know that most such ideas are awful and need to be beaten out of the 
writer if he or she is ever to become good.  So Safire’s thesis was that you 
really need to do this, for a number of years and a large number of other 
people’s pieces, to squeeze the nonsense out of you and develop a solid 
understanding of your language.  Kind of like, in books on French cooking, the 
author says “why did we have to spend the first year cooking mixed vegetables 
in mayonnaise over and over again; I don’t even like mixed vegetables in 
mayonnaise.”  To which the answer, of course, is that one develops what the 
French term “method”: experience with uniform sizing of each ingredient, 
correct relative sizing across ingredients, time of introduction to heat, and 
on and on, so that one gets control and has everything cooked to the intended 
degree reliably.  Only then has one gained the tools to create.

I have run Safire’s thesis by some writers I know to see what happened; my 
notable memories are the ones who hate it and think it is completely wrong.

But to Glen’s point that we should think of literary AI the way we think of 
pocket calculators (another thing I was not allowed to have in school; my 
parents thought it would make me stupid).  ChatGPT can be sort of the William 
Safire level of basic method in language, not intending or intended to create 
anything, but somehow, as you say, to find a kind of solid and central 
expression for things.  One might even think of the appropriate training 
schedule for a tool meant to do just that, which could be a bit different from 
the ad hoc training that is probably first-gen of these tools.

Eric

> On Jan 6, 2023, at 12:59 AM, Pieter Steenekamp  
> wrote:
> 
> As a native of South Africa, I have personally witnessed the shortcomings of 
> both our public primary and secondary education systems and the financial 
> barriers that prevent many from accessing private schools. In response, I 
> have dedicated the past year to establishing a private institution that is 
> not only affordable, but also committed to providing a high-quality 
> education. In reflecting on what constitutes a truly valuable education, I 
> have come to the conclusion that the most essential outcome is not the 
> acquisition of academic skills, but rather the development of strong 
> relationships - both with oneself and with the outside world. While it is not 
> possible to directly teach children how to cultivate such relationships, it 
> is possible to create an environment in which they can learn and grow through 
> unsupervised interactions with their peers.
> 
> Full disclosure: I have not kissed the Blarney Stone and my ability to write (or 
> speak for that matter) eloquently is just awful. I've written a paragraph and 
> then I asked ChatGPT, who has kissed the Blarney Stone, to rephrase it more 
> eloquently. The above paragraph reflects exactly what I wish to say, but is 
> just expressed so much better.
> 
> On Fri, 6 Jan 2023 at 00:39, glen wrote:
> AI, Teaching, and "Our Willingness to Give Bullshit a Pass"
> https://dailynous.com/2023/01/05/ai-teaching-and-our-willingness-to-give-bullshit-a-pass/
>  
> 
> 
> The first time I heard this argument was from these guys:
> 
> https://www.audible.com/pd/Pill-Pod-104-AI-the-New-Crisis-of-Humanities-Education-Podcast/B0BPQ77Z8P
>  
> 
> 
> My phrasing of the idea being that tools like ChatGPT are analogous to 
> calculators, allowing the computer to do what it's good at and freeing humans 
> up to do what we're good at. Why require students to learn bullshit 
> rhetorical styling when we can teach them to think about the *substance* ... 
> a lesson many of us learned from Knuth's TeX a long time ago. The trick is 
> that tools like ChatGPT are built around the bullshit-generation use case. 
> What we need are tools built around the bullshit-detection use case.
> 
> With branch 

Re: [FRIAM] more bullsh¡t

2023-01-05 Thread Pieter Steenekamp
As a native of South Africa, I have personally witnessed the shortcomings
of both our public primary and secondary education systems and the
financial barriers that prevent many from accessing private schools. In
response, I have dedicated the past year to establishing a private
institution that is not only affordable, but also committed to providing a
high-quality education. In reflecting on what constitutes a truly valuable
education, I have come to the conclusion that the most essential outcome is
not the acquisition of academic skills, but rather the development of
strong relationships - both with oneself and with the outside world. While
it is not possible to directly teach children how to cultivate such
relationships, it is possible to create an environment in which they can
learn and grow through unsupervised interactions with their peers.

Full disclosure: I have not kissed the Blarney Stone and my ability to write
(or speak for that matter) eloquently is just awful. I've written a
paragraph and then I asked ChatGPT, who has kissed the Blarney Stone, to
rephrase it more eloquently. The above paragraph reflects exactly what I
wish to say, but is just expressed so much better.

On Fri, 6 Jan 2023 at 00:39, glen  wrote:

> AI, Teaching, and "Our Willingness to Give Bullshit a Pass"
>
> https://dailynous.com/2023/01/05/ai-teaching-and-our-willingness-to-give-bullshit-a-pass/
>
> The first time I heard this argument was from these guys:
>
>
> https://www.audible.com/pd/Pill-Pod-104-AI-the-New-Crisis-of-Humanities-Education-Podcast/B0BPQ77Z8P
>
> My phrasing of the idea being that tools like ChatGPT are analogous to
> calculators, allowing the computer to do what it's good at and freeing
> humans up to do what we're good at. Why require students to learn bullshit
> rhetorical styling when we can teach them to think about the *substance*
> ... a lesson many of us learned from Knuth's TeX a long time ago. The trick
> is that tools like ChatGPT are built around the bullshit-generation use
> case. What we need are tools built around the bullshit-detection use case.
>
> With branch prediction, we could implant a little device just under the
> eardrum that listened to someone's speech acts for a tiny fraction, predict
> where it was going, and call bullshit or "pay attention" for some interval.
> The bullshitters' rhetoric would never even reach your audio perception
> devices. ... like trigger warnings for all of us sensitive snowflakes who
> can't bear to look on images of Mohammed <
> https://whyevolutionistrue.com/2023/01/05/hamline-university-assailed-for-firing-professor-who-showed-images-of-muhammads-face/
> >.
>
> Those of us who've kissed the Blarney Stone, unfortunately, would spend
> our lives talking to brick walls.
>
> --
> ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
>
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


[FRIAM] more bullsh¡t

2023-01-05 Thread glen

AI, Teaching, and "Our Willingness to Give Bullshit a Pass"
https://dailynous.com/2023/01/05/ai-teaching-and-our-willingness-to-give-bullshit-a-pass/

The first time I heard this argument was from these guys:

https://www.audible.com/pd/Pill-Pod-104-AI-the-New-Crisis-of-Humanities-Education-Podcast/B0BPQ77Z8P

My phrasing of the idea being that tools like ChatGPT are analogous to 
calculators, allowing the computer to do what it's good at and freeing humans 
up to do what we're good at. Why require students to learn bullshit rhetorical 
styling when we can teach them to think about the *substance* ... a lesson many 
of us learned from Knuth's TeX a long time ago. The trick is that tools like 
ChatGPT are built around the bullshit-generation use case. What we need are 
tools built around the bullshit-detection use case.
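
For concreteness, a minimal sketch of one detection heuristic, roughly the 
perplexity signal GPTZero-style tools lean on: score a passage under a small 
language model and treat suspiciously low perplexity as weak evidence of 
machine-generated filler. The model choice, threshold, and sample text below 
are illustrative only, not a validated detector.

```python
# Sketch: flag text whose perplexity under a small LM is suspiciously low.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return float(torch.exp(loss))

if __name__ == "__main__":
    sample = "Our willingness to give bullshit a pass is the real problem."
    ppl = perplexity(sample)
    verdict = "suspiciously fluent" if ppl < 40 else "probably human enough"
    print(f"perplexity={ppl:.1f} ({verdict})")
```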

With branch prediction, we could implant a little device just under the eardrum that listened 
to someone's speech acts for a tiny fraction, predict where it was going, and call bullshit or 
"pay attention" for some interval. The bullshitters' rhetoric would never even reach 
your audio perception devices. ... like trigger warnings for all of us sensitive snowflakes who 
can't bear to look on images of Mohammed 
<https://whyevolutionistrue.com/2023/01/05/hamline-university-assailed-for-firing-professor-who-showed-images-of-muhammads-face/>.
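
A toy sketch of that gate: buffer a short prefix, predict, and either mute 
for an interval or let the speech through. predict_bullshit() here is a 
hypothetical placeholder, not a real model or implantable device.

```python
# Toy "branch prediction" gate: listen to a short prefix, predict whether
# the rest is bullshit, and mute the channel for an interval if so.

def predict_bullshit(prefix):
    """Placeholder predictor: flag prefixes padded with empty intensifiers."""
    tells = {"clearly", "frankly", "literally", "essentially"}
    return sum(w.lower().strip(",.") in tells for w in prefix) >= 2

def gate(words, prefix_len=5, mute_for=20):
    buffer, muted = [], 0
    for w in words:
        if muted:
            muted -= 1          # suppressed: never reaches your perception
            continue
        buffer.append(w)
        if len(buffer) < prefix_len:
            continue            # still listening to the tiny fraction
        if predict_bullshit(buffer):
            muted = mute_for    # call bullshit for an interval
        else:
            yield from buffer   # "pay attention": release the speech
        buffer = []
    yield from buffer           # flush any trailing fragment

for line in [
    "Frankly, clearly, literally nothing here survives any scrutiny at all",
    "The meeting moved to Thursday at nine in the usual room",
]:
    print(repr(" ".join(gate(line.split()))))
```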

Those of us who've kissed the Blarney Stone, unfortunately, would spend our 
lives talking to brick walls.

--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/