Re: [FRIAM] NickC channels DaveW

2023-01-19 Thread Prof David West
>>> I for one, embrace the optimism of Sam Altman; just for completeness I repeat his quote and give the reference again.
>>> "Intelligence and energy have been the fundamental limiters towards most 
>>> things we want. A future where these are not the limiting reagents will be 
>>> radically different, and can be amazingly better."
>>> Taken from 
>>> https://economictimes.indiatimes.com/tech/startups/intelligence-energy-sam-altmans-technology-predictions-for-2020s/articleshow/86088731.cms :
>>>
>>> In conclusion, yes, I agree with Glen that there are sadly hidden elements 
>>> to all the techno-optimism, but this does not dampen my enthusiasm for the 
>>> future triggered by abundant intelligence and energy.
>>>
>>> On Wed, 18 Jan 2023 at 21:08, glen <geprope...@gmail.com> wrote:
>>>
>>> Sadly, there are some hidden elements to all that techno-optimism. E.g.
>>>
>>> https://nitter.cz/billyperrigo/status/1615682180201447425#m
>>>
>>> On 1/18/23 00:40, Pieter Steenekamp wrote:
>>> > I totally agree that realizable behavior is what matters.
>>> >
>>> > The elephant in the room is whether AI (and robotics of course) will 
>>> (not to replace but to) be able to do better than humans in all respects, 
>>> including come up with creative solutions to not only the world's most 
>>> pressing problems but also small creative things like writing poems, and 
>>> then to do the mental and physical tasks required to provide goods and 
>>> services to all in the world,
>>> >
>>> > Sam Altman said there are two things that will shape our future; 
>>> intelligence and energy. If we have real abundant intelligence and energy, 
>>> the world will be very different indeed.
>>> >
>>> > To quote Sam Altman at 
>>> https://economictimes.indiatimes.com/tech/startups/intelligence-energy-sam-altmans-technology-predictions-for-2020s/articleshow/86088731.cms :
>>> >
>>> > "intelligence and energy have been the fundamental limiters towards 
>>> most things we want. A future where these are not the limiting reagents 
>>> will be radically different, and can be amazingly better."
>>> >
>>> >
>>> >
>>> > On Wed, 18 Jan 2023 at 03:06, Marcus Daniels <mar...@snoutfarm.com> wrote:
>>> >
>>> >     Definitions are all fine and good, but realizable behavior is 
>>> what matters.   Analog computers will have imperfect behavior, and there 
>>> will be leakage between components.   A large network of transistors or 
>>> neurons are sufficiently similar for my purposes.   The unrolling would be 
>>> inside a skull, so somewhat isolated from interference.
>>> >
>>> >     -Original Message-
>>> >     From: Friam <friam-boun...@redfish.com> On Behalf Of glen
>>> >     Sent: Tuesday, January 17, 2023 2:11 PM
>>> >     To: friam@redfish.com
>>> >     Subject: Re: [FRIAM] NickC channels DaveW
>>> >
>>> >     I don't quite grok that. A crisp definition of recursion implies 
>>> no interaction with the outside world, right? If you can tolerate the 
>>> ambiguity in that statement, the artifacts laying about from an unrolled 
>>> recursion might be seen and used by outsiders. That's not to say a 
>>> trespasser can't have some sophisticated intrusion technique. But unrolled 
>>> seems more "open" to family, friends, and the occasional acquaintance.
>>> >
>>> >     On 1/17/23 13:37, Marcus Daniels wrote:
>>> >      > I probably didn't pay enough attention to the thread some time 
>>> ago on serialization, but to me recursion is hard to distinguish from an 
>>> unrolling of recursion.
>>> >
>

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] NickC channels DaveW

2023-01-19 Thread glen
Sadly, there are some hidden elements to all that techno-optimism. E.g.

https://nitter.cz/billyperrigo/status/1615682180201447425#m

On 1/18/23 00:40, Pieter Steenekamp wrote:
> I totally agree that realizable behavior is what matters.
>
> The elephant in the room is whether AI (and robotics of course) will (not 
to replace but to) be able to do better than humans in all respects, including 
come up with creative solutions to not only the world's most pressing problems but 
also small creative things like writing poems, and then to do the mental and 
physical tasks required to provide goods and services to all in the world,
>
> Sam Altman said there are two things that will shape our future; 
intelligence and energy. If we have real abundant intelligence and energy, the 
world will be very different indeed.
>
> To quote Sam Altman at 
https://economictimes.indiatimes.com/tech/startups/intelligence-energy-sam-altmans-technology-predictions-for-2020s/articleshow/86088731.cms :
>
> "intelligence and energy have been the fundamental limiters towards most 
things we want. A future where these are not the limiting reagents will be radically 
different, and can be amazingly better."
>
>
>
> On Wed, 18 Jan 2023 at 03:06, Marcus Daniels <mar...@snoutfarm.com> wrote:
>
>     Definitions are all fine and good, but realizable behavior is what 
matters.   Analog computers will have imperfect behavior, and there will be 
leakage between components.   A large network of transistors or neurons are 
sufficiently similar for my purposes.   The unrolling would be inside a skull, so 
somewhat isolated from interference.
>
>     -Original Message-
>     From: Friam <friam-boun...@redfish.com> On Behalf Of glen
>     Sent: Tuesday, January 17, 2023 2:11 PM
>     To: friam@redfish.com
>     Subject: Re: [FRIAM] NickC channels DaveW
>
>     I don't quite grok that. A crisp definition of recursion implies no interaction 
with the outside world, right? If you can tolerate the ambiguity in that statement, the 
artifacts laying about from an unrolled recursion might be seen and used by outsiders. 
That's not to say a trespasser can't have some sophisticated intrusion technique. But 
unrolled seems more "open" to family, friends, and the occasional acquaintance.
>
>     On 1/17/23 13:37, Marcus Daniels wrote:
>      > I probably didn't pay enough attention to the thread some time ago 
on serialization, but to me recursion is hard to distinguish from an unrolling of 
recursion.
>


--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



Re: [FRIAM] NickC channels DaveW

2023-01-19 Thread Prof David West
>> > To quote Sam Altman at 
>> > https://economictimes.indiatimes.com/tech/startups/intelligence-energy-sam-altmans-technology-predictions-for-2020s/articleshow/86088731.cms :
>> > 
>> > "intelligence and energy have been the fundamental limiters towards most 
>> > things we want. A future where these are not the limiting reagents will be 
>> > radically different, and can be amazingly better."
>> > 
>> > 
>> > 
>> > On Wed, 18 Jan 2023 at 03:06, Marcus Daniels <mar...@snoutfarm.com> wrote:
>> > 
>> > Definitions are all fine and good, but realizable behavior is what 
>> > matters.   Analog computers will have imperfect behavior, and there will 
>> > be leakage between components.   A large network of transistors or neurons 
>> > are sufficiently similar for my purposes.   The unrolling would be inside 
>> > a skull, so somewhat isolated from interference.
>> > 
>> > -Original Message-
>> > From: Friam <friam-boun...@redfish.com> On Behalf Of glen
>> > Sent: Tuesday, January 17, 2023 2:11 PM
>> > To: friam@redfish.com
>> > Subject: Re: [FRIAM] NickC channels DaveW
>> > 
>> > I don't quite grok that. A crisp definition of recursion implies no 
>> > interaction with the outside world, right? If you can tolerate the 
>> > ambiguity in that statement, the artifacts laying about from an unrolled 
>> > recursion might be seen and used by outsiders. That's not to say a 
>> > trespasser can't have some sophisticated intrusion technique. But unrolled 
>> > seems more "open" to family, friends, and the occasional acquaintance.
>> > 
>> > On 1/17/23 13:37, Marcus Daniels wrote:
>> >  > I probably didn't pay enough attention to the thread some time ago 
>> > on serialization, but to me recursion is hard to distinguish from an 
>> > unrolling of recursion.
>> > 
>> 
>> 


Re: [FRIAM] NickC channels DaveW

2023-01-19 Thread glen
On Wed, 18 Jan 2023 at 03:06, Marcus Daniels <mar...@snoutfarm.com> wrote:
 >
 >     Definitions are all fine and good, but realizable behavior is what 
matters.   Analog computers will have imperfect behavior, and there will be 
leakage between components.   A large network of transistors or neurons are 
sufficiently similar for my purposes.   The unrolling would be inside a skull, so 
somewhat isolated from interference.
 >
 >     -Original Message-
 >     From: Friam <friam-boun...@redfish.com> On Behalf Of glen
 >     Sent: Tuesday, January 17, 2023 2:11 PM
 >     To: friam@redfish.com
 >     Subject: Re: [FRIAM] NickC channels DaveW
 >
 >     I don't quite grok that. A crisp definition of recursion implies no 
interaction with the outside world, right? If you can tolerate the ambiguity in that 
statement, the artifacts laying about from an unrolled recursion might be seen and used by 
outsiders. That's not to say a trespasser can't have some sophisticated intrusion technique. 
But unrolled seems more "open" to family, friends, and the occasional acquaintance.
 >
 >     On 1/17/23 13:37, Marcus Daniels wrote:
 >      > I probably didn't pay enough attention to the thread some time 
ago on serialization, but to me recursion is hard to distinguish from an unrolling of 
recursion.



--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



Re: [FRIAM] NickC channels DaveW

2023-01-19 Thread Pieter Steenekamp
*Sadly, there are some hidden elements to all that techno-optimism.*

Yes, sadly the world is unequal and those at the bottom of the economic
ladder just don't get a good deal.

On the positive side, looking back at the history of humankind, there is
evidence that life is now better than it has ever been for the large
majority of people. That remains true even though things are very far from
perfect; human suffering is a reality, and Glen's comment is sad but true.

The question, of course, is whether things will continue to get better.

It's just impossible to know the future. One person can believe things will
get better, another that they'll get worse, each with tons of good
arguments.

I, for one, embrace the optimism of Sam Altman; just for completeness, I
repeat his quote and give the reference again.
"Intelligence and energy have been the fundamental limiters towards most
things we want. A future where these are not the limiting reagents will be
radically different, and can be amazingly better."
Taken from
https://economictimes.indiatimes.com/tech/startups/intelligence-energy-sam-altmans-technology-predictions-for-2020s/articleshow/86088731.cms :

In conclusion, yes, I agree with Glen that there are sadly hidden elements
to all the techno-optimism, but this does not dampen my enthusiasm for the
future triggered by abundant intelligence and energy.

On Wed, 18 Jan 2023 at 21:08, glen  wrote:

> Sadly, there are some hidden elements to all that techno-optimism. E.g.
>
> https://nitter.cz/billyperrigo/status/1615682180201447425#m
>
> On 1/18/23 00:40, Pieter Steenekamp wrote:
> > I totally agree that realizable behavior is what matters.
> >
> > The elephant in the room is whether AI (and robotics of course) will
> (not to replace but to) be able to do better than humans in all respects,
> including come up with creative solutions to not only the world's most
> pressing problems but also small creative things like writing poems, and
> then to do the mental and physical tasks required to provide goods and
> services to all in the world,
> >
> > Sam Altman said there are two things that will shape our future;
> intelligence and energy. If we have real abundant intelligence and energy,
> the world will be very different indeed.
> >
> > To quote Sam Altman at
> https://economictimes.indiatimes.com/tech/startups/intelligence-energy-sam-altmans-technology-predictions-for-2020s/articleshow/86088731.cms :
> >
> > "intelligence and energy have been the fundamental limiters towards most
> things we want. A future where these are not the limiting reagents will be
> radically different, and can be amazingly better."
> >
> >
> >
> > On Wed, 18 Jan 2023 at 03:06, Marcus Daniels <mar...@snoutfarm.com> wrote:
> >
> > Definitions are all fine and good, but realizable behavior is what
> matters.   Analog computers will have imperfect behavior, and there will be
> leakage between components.   A large network of transistors or neurons are
> sufficiently similar for my purposes.   The unrolling would be inside a
> skull, so somewhat isolated from interference.
> >
> > -Original Message-----
> >     From: Friam <friam-boun...@redfish.com> On Behalf Of glen
> > Sent: Tuesday, January 17, 2023 2:11 PM
> > To: friam@redfish.com
> > Subject: Re: [FRIAM] NickC channels DaveW
> >
> > I don't quite grok that. A crisp definition of recursion implies no
> interaction with the outside world, right? If you can tolerate the
> ambiguity in that statement, the artifacts laying about from an unrolled
> recursion might be seen and used by outsiders. That's not to say a
> trespasser can't have some sophisticated intrusion technique. But unrolled
> seems more "open" to family, friends, and the occasional acquaintance.
> >
> > On 1/17/23 13:37, Marcus Daniels wrote:
> >  > I probably didn't pay enough attention to the thread some time
> ago on serialization, but to me recursion is hard to distinguish from an
> unrolling of recursion.
> >
>
>

Re: [FRIAM] NickC channels DaveW

2023-01-18 Thread Steve Smith




That might qualify as a DDOS attack.
I'm suspecting  they already had my "number" when they 
responded to my attempt to sign up with "sorry, too busy, try back 
later"...



On Jan 18, 2023, at 7:03 AM, Steve Smith  wrote:

I suppose pouring all of the FriAM traffic into (even my own bloviations) a 
chatbot might be a bit usurious (the fool's errand of a fool errant)?

On 1/17/23 2:37 PM, glen wrote:

You might try using the OpenAI API directly. It takes some work, but not much.

https://openai.com/api/

Or you could sign up for this:

https://azure.microsoft.com/en-us/blog/general-availability-of-azure-openai-service-expands-access-to-large-advanced-ai-models-with-added-enterprise-benefits/

I would hook you up to my Slack bot that queries GPT3 for every channel 
message. But that might get expensive with a verbose person like you! 8^D I can 
imagine some veerrryyy long prompts.


On 1/17/23 12:57, Steve Smith wrote:

On 1/17/23 1:08 PM, Marcus Daniels wrote:

Dogs have about 500 million neurons in their cortex.  Neurons have about 7,000 
synaptic connections, so I think my dog is a lot smarter than a billion 
parameter LLM.  :-)

And I bet (s)he channels *at least* one FriAM member's affect pretty well also!

My 9 month old golden-doodle does as good of a job at that (I won't name names) 
as my (now deceased 11 year old Akita and my 9 year old chocolate dobie mix bot 
did) but nobody here really demonstrates the basic nature of either my 9 month 
old tabby or her 20 year old black-mouser predecessor.There is very little 
overlap.

The jays and the woodpeckers and the finches and towhees and sparrows and 
nuthatches and robins and the mating pair of doves and the several ravens and 
the (courting?) pair of owls (that I only hear hooting to one another in the 
night) and the lone (that I see) hawk and the lone blue heron (very more 
occasionally) and the flock(lets) of geese migrating down the rio-grande 
flyway... their aggregate neural complexity is only multiplicative (order 
100-1000x) that of any given beast... but somehow their interactions (this is 
without the half-dozen species of rodentia and leporidae and racoons and 
insects and worms and ) would seem to have a more combinatorial network of 
relations?

I tried signing up to try chatGPT for myself (thanks to Glen's Nick Cave blog-link) and 
was denied because "too busy, try back later" and realized that it had become a 
locus for (first world) humans to express and combine their greatest hopes and worse 
fears in a single place.

This seems like a higher-order training set?  Not just the intersection of all things "worth 
saying" but somehow filtered/diffracted through "the things (some) people are interested 
in in particular"...





Re: [FRIAM] NickC channels DaveW

2023-01-18 Thread Marcus Daniels
Isn’t this expected from Effective Altruism?   There will be people sacrificed..

From: Friam  On Behalf Of Steve Smith
Sent: Wednesday, January 18, 2023 2:09 PM
To: friam@redfish.com
Subject: Re: [FRIAM] NickC channels DaveW

Sadly, there are some hidden elements to all that techno-optimism. E.g.
https://nitter.cz/billyperrigo/status/1615682180201447425#m


[https://nitter.cz/pic/media%2FFmwNndiWIAIYYtf.png%3Fname%3Dsmall]

sounds like the "woke mob" is interfering with patriotic bestial pedophiles who 
are just exercising their first, second, maybe fifth and just in case, the 
ninth amendment rights? ...



Every time I respond to a Captcha challenge, I feel as if I'm being conscripted 
to help train an image recognition ML model.   And do we know how (not if) 
OpenAI, et alii are using *our questions* to train a new (subset of) model?



On 1/18/23 00:40, Pieter Steenekamp wrote:

I totally agree that realizable behavior is what matters.

The elephant in the room is whether AI (and robotics of course) will (not to 
replace but to) be able to do better than humans in all respects, including 
come up with creative solutions to not only the world's most pressing problems 
but also small creative things like writing poems, and then to do the mental 
and physical tasks required to provide goods and services to all in the world,

Sam Altman said there are two things that will shape our future; intelligence 
and energy. If we have real abundant intelligence and energy, the world will be 
very different indeed.

To quote Sam Altman at 
https://economictimes.indiatimes.com/tech/startups/intelligence-energy-sam-altmans-technology-predictions-for-2020s/articleshow/86088731.cms :

"intelligence and energy have been the fundamental limiters towards most things 
we want. A future where these are not the limiting reagents will be radically 
different, and can be amazingly better."



On Wed, 18 Jan 2023 at 03:06, Marcus Daniels <mar...@snoutfarm.com> wrote:

Definitions are all fine and good, but realizable behavior is what matters. 
  Analog computers will have imperfect behavior, and there will be leakage 
between components.   A large network of transistors or neurons are 
sufficiently similar for my purposes.   The unrolling would be inside a skull, 
so somewhat isolated from interference.

-Original Message-
From: Friam <friam-boun...@redfish.com> On Behalf Of glen
Sent: Tuesday, January 17, 2023 2:11 PM
To: friam@redfish.com
Subject: Re: [FRIAM] NickC channels DaveW

I don't quite grok that. A crisp definition of recursion implies no 
interaction with the outside world, right? If you can tolerate the ambiguity in 
that statement, the artifacts laying about from an unrolled recursion might be 
seen and used by outsiders. That's not to say a trespasser can't have some 
sophisticated intrusion technique. But unrolled seems more "open" to family, 
friends, and the occasional acquaintance.

On 1/17/23 13:37, Marcus Daniels wrote:
 > I probably didn't pay enough attention to the thread some time ago on 
serialization, but to me recursion is hard to distinguish from an unrolling of 
recursion.



Re: [FRIAM] NickC channels DaveW

2023-01-18 Thread Steve Smith

Sadly, there are some hidden elements to all that techno-optimism. E.g.
https://nitter.cz/billyperrigo/status/1615682180201447425#m




   sounds like the "woke mob" is interfering with patriotic bestial
   pedophiles who are just exercising their first, second, maybe fifth
   and just in case, the ninth amendment rights? ...



Every time I respond to a Captcha challenge, I feel as if I'm being 
conscripted to help train an image recognition ML model. And do we know 
how (not if) OpenAI, et alii are using *our questions* to train a new 
(subset of) model?





On 1/18/23 00:40, Pieter Steenekamp wrote:

I totally agree that realizable behavior is what matters.

The elephant in the room is whether AI (and robotics of course) will 
(not to replace but to) be able to do better than humans in all 
respects, including come up with creative solutions to not only the 
world's most pressing problems but also small creative things like 
writing poems, and then to do the mental and physical tasks required 
to provide goods and services to all in the world,


Sam Altman said there are two things that will shape our future; 
intelligence and energy. If we have real abundant intelligence and 
energy, the world will be very different indeed.


To quote Sam Altman at 
https://economictimes.indiatimes.com/tech/startups/intelligence-energy-sam-altmans-technology-predictions-for-2020s/articleshow/86088731.cms :


"intelligence and energy have been the fundamental limiters towards 
most things we want. A future where these are not the limiting 
reagents will be radically different, and can be amazingly better."




On Wed, 18 Jan 2023 at 03:06, Marcus Daniels <mar...@snoutfarm.com> wrote:


    Definitions are all fine and good, but realizable behavior is 
what matters.   Analog computers will have imperfect behavior, and 
there will be leakage between components.   A large network of 
transistors or neurons are sufficiently similar for my purposes.  
 The unrolling would be inside a skull, so somewhat isolated from 
interference.


    -Original Message-
    From: Friam <friam-boun...@redfish.com> On Behalf Of glen

    Sent: Tuesday, January 17, 2023 2:11 PM
    To: friam@redfish.com
    Subject: Re: [FRIAM] NickC channels DaveW

    I don't quite grok that. A crisp definition of recursion implies 
no interaction with the outside world, right? If you can tolerate the 
ambiguity in that statement, the artifacts laying about from an 
unrolled recursion might be seen and used by outsiders. That's not to 
say a trespasser can't have some sophisticated intrusion technique. 
But unrolled seems more "open" to family, friends, and the occasional 
acquaintance.


    On 1/17/23 13:37, Marcus Daniels wrote:
 > I probably didn't pay enough attention to the thread some time 
ago on serialization, but to me recursion is hard to distinguish from 
an unrolling of recursion.






Re: [FRIAM] NickC channels DaveW

2023-01-18 Thread David Eric Smith
That might qualify as a DDOS attack.

> On Jan 18, 2023, at 7:03 AM, Steve Smith  wrote:
> 
> I suppose pouring all of the FriAM traffic into (even my own bloviations) a 
> chatbot might be a bit usurious (the fool's errand of a fool errant)?
> 
> On 1/17/23 2:37 PM, glen wrote:
>> You might try using the OpenAI API directly. It takes some work, but not 
>> much.
>> 
>> https://openai.com/api/
>> 
>> Or you could sign up for this:
>> 
>> https://azure.microsoft.com/en-us/blog/general-availability-of-azure-openai-service-expands-access-to-large-advanced-ai-models-with-added-enterprise-benefits/
>>  
>> 
>> I would hook you up to my Slack bot that queries GPT3 for every channel 
>> message. But that might get expensive with a verbose person like you! 8^D I 
>> can imagine some veerrryyy long prompts.
>> 
>> 
>> On 1/17/23 12:57, Steve Smith wrote:
>>> 
>>> On 1/17/23 1:08 PM, Marcus Daniels wrote:
 Dogs have about 500 million neurons in their cortex.  Neurons have about 
 7,000 synaptic connections, so I think my dog is a lot smarter than a 
 billion parameter LLM.  :-)
>>> And I bet (s)he channels *at least* one FriAM member's affect pretty well 
>>> also!
>>> 
>>> My 9 month old golden-doodle does as good of a job at that (I won't name 
>>> names) as my (now deceased 11 year old Akita and my 9 year old chocolate 
>>> dobie mix bot did) but nobody here really demonstrates the basic nature of 
>>> either my 9 month old tabby or her 20 year old black-mouser predecessor.
>>> There is very little overlap.
>>> 
>>> The jays and the woodpeckers and the finches and towhees and sparrows and 
>>> nuthatches and robins and the mating pair of doves and the several ravens 
>>> and the (courting?) pair of owls (that I only hear hooting to one another 
>>> in the night) and the lone (that I see) hawk and the lone blue heron (very 
>>> more occasionally) and the flock(lets) of geese migrating down the 
>>> rio-grande flyway... their aggregate neural complexity is only 
>>> multiplicative (order 100-1000x) that of any given beast... but somehow 
>>> their interactions (this is without the half-dozen species of rodentia and 
>>> leporidae and racoons and insects and worms and ) would seem to have a 
>>> more combinatorial network of relations?
>>> 
>>> I tried signing up to try chatGPT for myself (thanks to Glen's Nick Cave 
>>> blog-link) and was denied because "too busy, try back later" and realized 
>>> that it had become a locus for (first world) humans to express and combine 
>>> their greatest hopes and worse fears in a single place.
>>> 
>>> This seems like a higher-order training set?  Not just the intersection of 
>>> all things "worth saying" but somehow filtered/diffracted through "the 
>>> things (some) people are interested in in particular"...
>> 
>> 
> 


Re: [FRIAM] NickC channels DaveW

2023-01-18 Thread glen

This person made it even easier:

https://github.com/karfly/chatgpt_telegram_bot


On 1/18/23 09:51, Steve Smith wrote:

I might probably be able to install/load/try the openAI model if I wasn't 
wasting so much time careening down memory lane and trying to register what I 
see in my rear view mirrors with what I see screaming down the 
Autobahn-of-the-mind through my windscreen!




--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



Re: [FRIAM] NickC channels DaveW

2023-01-18 Thread glen

Sadly, there are some hidden elements to all that techno-optimism. E.g.

https://nitter.cz/billyperrigo/status/1615682180201447425#m

On 1/18/23 00:40, Pieter Steenekamp wrote:

I totally agree that realizable behavior is what matters.

The elephant in the room is whether AI (and robotics of course) will (not to 
replace but to) be able to do better than humans in all respects, including 
come up with creative solutions to not only the world's most pressing problems 
but also small creative things like writing poems, and then to do the mental 
and physical tasks required to provide goods and services to all in the world,

Sam Altman said there are two things that will shape our future; intelligence 
and energy. If we have real abundant intelligence and energy, the world will be 
very different indeed.

To quote Sam Altman at 
https://economictimes.indiatimes.com/tech/startups/intelligence-energy-sam-altmans-technology-predictions-for-2020s/articleshow/86088731.cms :

"intelligence and energy have been the fundamental limiters towards most things we 
want. A future where these are not the limiting reagents will be radically different, and 
can be amazingly better."



On Wed, 18 Jan 2023 at 03:06, Marcus Daniels <mar...@snoutfarm.com> wrote:

Definitions are all fine and good, but realizable behavior is what matters. 
  Analog computers will have imperfect behavior, and there will be leakage 
between components.   A large network of transistors or neurons are 
sufficiently similar for my purposes.   The unrolling would be inside a skull, 
so somewhat isolated from interference.

-Original Message-
From: Friam <friam-boun...@redfish.com> On Behalf Of glen
Sent: Tuesday, January 17, 2023 2:11 PM
To: friam@redfish.com
Subject: Re: [FRIAM] NickC channels DaveW

I don't quite grok that. A crisp definition of recursion implies no interaction with 
the outside world, right? If you can tolerate the ambiguity in that statement, the 
artifacts laying about from an unrolled recursion might be seen and used by outsiders. 
That's not to say a trespasser can't have some sophisticated intrusion technique. But 
unrolled seems more "open" to family, friends, and the occasional acquaintance.

On 1/17/23 13:37, Marcus Daniels wrote:
 > I probably didn't pay enough attention to the thread some time ago on 
serialization, but to me recursion is hard to distinguish from an unrolling of 
recursion.




--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



Re: [FRIAM] NickC channels DaveW

2023-01-18 Thread Pieter Steenekamp
I totally agree that realizable behavior is what matters.

The elephant in the room is whether AI (and robotics, of course) will be
able (not to replace us, but) to do better than humans in all respects,
including coming up with creative solutions not only to the world's most
pressing problems but also to small creative things like writing poems, and
then doing the mental and physical tasks required to provide goods and
services to all in the world.

Sam Altman said there are two things that will shape our future:
intelligence and energy. If we have truly abundant intelligence and energy,
the world will be very different indeed.

To quote Sam Altman at
https://economictimes.indiatimes.com/tech/startups/intelligence-energy-sam-altmans-technology-predictions-for-2020s/articleshow/86088731.cms :

"intelligence and energy have been the fundamental limiters towards most
things we want. A future where these are not the limiting reagents will be
radically different, and can be amazingly better."



On Wed, 18 Jan 2023 at 03:06, Marcus Daniels  wrote:

> Definitions are all fine and good, but realizable behavior is what
> matters.   Analog computers will have imperfect behavior, and there will be
> leakage between components.   A large network of transistors or neurons are
> sufficiently similar for my purposes.   The unrolling would be inside a
> skull, so somewhat isolated from interference.
>
> -Original Message-
> From: Friam  On Behalf Of glen
> Sent: Tuesday, January 17, 2023 2:11 PM
> To: friam@redfish.com
> Subject: Re: [FRIAM] NickC channels DaveW
>
> I don't quite grok that. A crisp definition of recursion implies no
> interaction with the outside world, right? If you can tolerate the
> ambiguity in that statement, the artifacts laying about from an unrolled
> recursion might be seen and used by outsiders. That's not to say a
> trespasser can't have some sophisticated intrusion technique. But unrolled
> seems more "open" to family, friends, and the occasional acquaintance.
>
> On 1/17/23 13:37, Marcus Daniels wrote:
> > I probably didn't pay enough attention to the thread some time ago on
> serialization, but to me recursion is hard to distinguish from an unrolling
> of recursion.
>


Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread Marcus Daniels
Definitions are all fine and good, but realizable behavior is what matters.   
Analog computers will have imperfect behavior, and there will be leakage 
between components.   A large network of transistors or neurons is 
sufficiently similar for my purposes.   The unrolling would be inside a skull, 
so somewhat isolated from interference.
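A toy numerical sketch of that point (Python; the gain chain and the leak term below are invented purely for illustration, not a model of neurons or of any real analog circuit): the same unrolled chain of stages, once with perfect components and once with a little leakage and noise per stage.

import random

def ideal_chain(x, gains):
    # Perfect components: each stage applies its gain exactly.
    for g in gains:
        x = g * x
    return x

def leaky_chain(x, gains, leak=0.02, seed=0):
    # Imperfect components: each stage loses a bit of signal and picks up
    # a bit of noise from its neighbors ("leakage between components").
    rng = random.Random(seed)
    for g in gains:
        x = g * x * (1.0 - leak) + rng.gauss(0.0, leak)
    return x

gains = [1.1, 0.9, 1.05, 0.95] * 5           # 20 unrolled stages
print(ideal_chain(1.0, gains))                # the crisp definition
print(leaky_chain(1.0, gains))                # the realizable behavior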

-Original Message-
From: Friam  On Behalf Of glen
Sent: Tuesday, January 17, 2023 2:11 PM
To: friam@redfish.com
Subject: Re: [FRIAM] NickC channels DaveW

I don't quite grok that. A crisp definition of recursion implies no interaction 
with the outside world, right? If you can tolerate the ambiguity in that 
statement, the artifacts laying about from an unrolled recursion might be seen 
and used by outsiders. That's not to say a trespasser can't have some 
sophisticated intrusion technique. But unrolled seems more "open" to family, 
friends, and the occasional acquaintance.

On 1/17/23 13:37, Marcus Daniels wrote:
> I probably didn't pay enough attention to the thread some time ago on 
> serialization, but to me recursion is hard to distinguish from an unrolling 
> of recursion.

-- 
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread glen

I don't quite grok that. A crisp definition of recursion implies no interaction with the 
outside world, right? If you can tolerate the ambiguity in that statement, the artifacts 
laying about from an unrolled recursion might be seen and used by outsiders. That's not 
to say a trespasser can't have some sophisticated intrusion technique. But unrolled seems 
more "open" to family, friends, and the occasional acquaintance.

On 1/17/23 13:37, Marcus Daniels wrote:

I probably didn't pay enough attention to the thread some time ago on 
serialization, but to me recursion is hard to distinguish from an unrolling of 
recursion.


--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread Steve Smith
I suppose pouring all of the FriAM traffic (even my own bloviations) into 
a chatbot might be a bit usurious (the fool's errand of a fool errant)?


On 1/17/23 2:37 PM, glen wrote:
You might try using the OpenAI API directly. It takes some work, but 
not much.


https://openai.com/api/

Or you could sign up for this:

https://azure.microsoft.com/en-us/blog/general-availability-of-azure-openai-service-expands-access-to-large-advanced-ai-models-with-added-enterprise-benefits/ 



I would hook you up to my Slack bot that queries GPT3 for every 
channel message. But that might get expensive with a verbose person 
like you! 8^D I can imagine some veerrryyy long prompts.



On 1/17/23 12:57, Steve Smith wrote:


On 1/17/23 1:08 PM, Marcus Daniels wrote:
Dogs have about 500 million neurons in their cortex.  Neurons have 
about 7,000 synaptic connections, so I think my dog is a lot smarter 
than a billion parameter LLM.  :-)
And I bet (s)he channels *at least* one FriAM member's affect pretty 
well also!


My 9 month old golden-doodle does as good a job at that (I won't 
name names) as my (now deceased 11 year old Akita and my 9 year old 
chocolate dobie mix both did), but nobody here really demonstrates the 
basic nature of either my 9 month old tabby or her 20 year old 
black-mouser predecessor.    There is very little overlap.


The jays and the woodpeckers and the finches and towhees and sparrows 
and nuthatches and robins and the mating pair of doves and the 
several ravens and the (courting?) pair of owls (that I only hear 
hooting to one another in the night) and the lone (that I see) hawk 
and the lone blue heron (very more occasionally) and the flock(lets) 
of geese migrating down the rio-grande flyway... their aggregate 
neural complexity is only multiplicative (order 100-1000x) that of 
any given beast... but somehow their interactions (this is without 
the half-dozen species of rodentia and leporidae and racoons and 
insects and worms and ) would seem to have a more combinatorial 
network of relations?


I tried signing up to try chatGPT for myself (thanks to Glen's Nick 
Cave blog-link) and was denied because "too busy, try back later" and 
realized that it had become a locus for (first world) humans to 
express and combine their greatest hopes and worst fears in a single 
place.


This seems like a higher-order training set?  Not just the 
intersection of all things "worth saying" but somehow 
filtered/diffracted through "the things (some) people are interested 
in in particular"...







Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread Marcus Daniels
I probably didn't pay enough attention to the thread some time ago on 
serialization, but to me recursion is hard to distinguish from an unrolling of 
recursion.

From: Friam  on behalf of glen 
Sent: Tuesday, January 17, 2023 2:21 PM
To: friam@redfish.com 
Subject: Re: [FRIAM] NickC channels DaveW

Being a too-literal person, who never gets the joke, I have to say that these 
simple scalings, combinatorial or not, don't capture the interconnectionist 
point being made in the pain article. The absolute numbers of elements 
(neurons, synapses, signaling molecules, etc.) flatten it all out. But 
_ganglion_, that's a different thing. What we're looking for are loops and 
"integratory" structures. I think that's where we can start to find a scaling 
for smartness.

In that context, my guess is the heart is closer to ChatGPT in its smartness 
than either of those are to the human gut. But structure-based assessments like 
these merely complement behavior-based assessments. We could quantify the 
number of *jobs* done by the thing. The heart has fewer jobs to do than the 
gut. And the gut has fewer jobs to do than the dog. Etc. Of course, the lines 
between jobs aren't all that crisp, especially as the complexity of the thing 
grows. Behaviors in complex things are composable and polymorphic. In spite of 
our imagining what ChatGPT is doing, it's really only doing 1 thing: choosing 
the most likely next token given the previous tokens. You *might* be able to 
serialize your dog and suggest she's really just choosing the most likely next 
behavior given the previous behaviors. But my guess is dog owners perceive (or 
impute) that dogs resolve contradictions that arise in parallel. (chase the 
ball? chew the bone? continue chewing the bone until you get to the ball?) 
Contradiction resolution is evidence of more than 1 task. You could gussy up 
the model by providing a single interface to an ensemble of models. Then it 
might look more like a dog, depending on the algorithm(s) used to resolve 
contradictions between models. But to get closer to dog-complexity, you'd have 
to wire the models together so that they could contradict each other but still 
feed off each other in some way. A model that changes its mind midway through 
its response would be good. I haven't had a dog in a long time. But I seem to 
remember they were easy to redirect, despite the old saying "like a dog with a 
bone".

On 1/17/23 12:51, Prof David West wrote:
> Apropos of nothing:
>
> The human heart has roughly 40,000 neurons and the human gut around 0.1 
> billion neurons (sensory neurons, neurotransmitters, ganglia, and motor 
> neurons).
>
> So the human gut is about 1/5 as smart as Marcus's dog??
>
> davew
>
>
> On Tue, Jan 17, 2023, at 1:08 PM, Marcus Daniels wrote:
>> Dogs have about 500 million neurons in their cortex.  Neurons have
>> about 7,000 synaptic connections, so I think my dog is a lot smarter
>> than a billion parameter LLM.  :-)
>>
>> Sent from my iPhone
>>
>>> On Jan 17, 2023, at 11:35 AM, glen  wrote:
>>>
>>> 
>>> 1) "I asked Chat GPT to write a song in the style of Nick Cave and this is 
>>> what it produced. What do you think?"
>>> https://www.theredhandfiles.com/chat-gpt-what-do-you-think/
>>>
>>> 2) "Is it pain if it does not hurt? On the unlikelihood of insect pain"
>>> https://www.cambridge.org/core/journals/canadian-entomologist/article/is-it-pain-if-it-does-not-hurt-on-the-unlikelihood-of-insect-pain/9A60617352A45B15E25307F85FF2E8F2#
>>>
>>> Taken separately, (1) and (2) are each interesting, if seemingly 
>>> orthogonal. But what twines them, I think, is the concept of "mutual 
>>> information". I read (2) before I read (1) because, for some bizarre 
>>> reason, my day job involves trying to understand pain mechanisms. And (2) 
>>> speaks directly (if only implicitly) to things like IIT. If you read (1) 
>>> first, it's difficult to avoid snapping quickly into NickC's canal. Despite 
>>> NickT's objection to an inner life, it seems clear that the nuance we see 
>>> on the surface, at least longitudinally, *needs* an inner life. You simply 
>>> can't get good stuff out of an entirely flat/transparent/reactive/Markovian 
>>> object.
>>>
>>> However, what NickC misses is that LLMs *have* some intertwined mutual 
>>> information within them. Similar to asking whether an insect experiences 
>>> pain, we can ask whether a X billion parameter LLM experiences something 
>>> like "suffering". My guess is the answer is "yes". It may not be a good 
>>> analog to what we call "suffering", though ... maybe &q

Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread glen

You might try using the OpenAI API directly. It takes some work, but not much.

https://openai.com/api/

Or you could sign up for this:

https://azure.microsoft.com/en-us/blog/general-availability-of-azure-openai-service-expands-access-to-large-advanced-ai-models-with-added-enterprise-benefits/

I would hook you up to my Slack bot that queries GPT3 for every channel 
message. But that might get expensive with a verbose person like you! 8^D I can 
imagine some veerrryyy long prompts.
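For the direct route, a minimal sketch (Python, using the openai package against the completions endpoint; it assumes you have run pip install openai and set an OPENAI_API_KEY environment variable, and the model name and prompt are just placeholders):

import os
import openai  # pip install openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# One completion request; model and prompt are placeholders.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Write a song in the style of Nick Cave.",
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["text"].strip())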


On 1/17/23 12:57, Steve Smith wrote:


On 1/17/23 1:08 PM, Marcus Daniels wrote:

Dogs have about 500 million neurons in their cortex.  Neurons have about 7,000 
synaptic connections, so I think my dog is a lot smarter than a billion 
parameter LLM.  :-)

And I bet (s)he channels *at least* one FriAM member's affect pretty well also!

My 9 month old golden-doodle does as good of a job at that (I won't name names) 
as my (now deceased 11 year old Akita and my 9 year old chocolate dobie mix bot 
did) but nobody here really demonstrates the basic nature of either my 9 month 
old tabby or her 20 year old black-mouser predecessor.    There is very little 
overlap.

The jays and the woodpeckers and the finches and towhees and sparrows and 
nuthatches and robins and the mating pair of doves and the several ravens and 
the (courting?) pair of owls (that I only hear hooting to one another in the 
night) and the lone (that I see) hawk and the lone blue heron (very more 
occasionally) and the flock(lets) of geese migrating down the rio-grande 
flyway... their aggregate neural complexity is only multiplicative (order 
100-1000x) that of any given beast... but somehow their interactions (this is 
without the half-dozen species of rodentia and leporidae and racoons and 
insects and worms and ) would seem to have a more combinatorial network of 
relations?

I tried signing up to try chatGPT for myself (thanks to Glen's Nick Cave blog-link) and 
was denied because "too busy, try back later" and realized that it had become a 
locus for (first world) humans to express and combine their greatest hopes and worst 
fears in a single place.

This seems like a higher-order training set?  Not just the intersection of all things "worth 
saying" but somehow filtered/diffracted through "the things (some) people are interested 
in, in particular"...



--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread glen

Being a too-literal person, who never gets the joke, I have to say that these simple 
scalings, combinatorial or not, don't capture the interconnectionist point being made in 
the pain article. The absolute numbers of elements (neurons, synapses, signaling 
molecules, etc.) flatten it all out. But _ganglion_, that's a different thing. What we're 
looking for are loops and "integratory" structures. I think that's where we can 
start to find a scaling for smartness.

In that context, my guess is the heart is closer to ChatGPT in its smartness than 
either of those is to the human gut. But structure-based assessments like these 
merely complement behavior-based assessments. We could quantify the number of 
*jobs* done by the thing. The heart has fewer jobs to do than the gut. And the gut 
has fewer jobs to do than the dog. Etc. Of course, the lines between jobs aren't 
all that crisp, especially as the complexity of the thing grows. Behaviors in 
complex things are composable and polymorphic.

In spite of our imagining what ChatGPT is doing, it's really only doing 1 thing: 
choosing the most likely next token given the previous tokens. You *might* be able 
to serialize your dog and suggest she's really just choosing the most likely next 
behavior given the previous behaviors. But my guess is dog owners perceive (or 
impute) that dogs resolve contradictions that arise in parallel. (chase the ball? 
chew the bone? continue chewing the bone until you get to the ball?) Contradiction 
resolution is evidence of more than 1 task.

You could gussy up the model by providing a single interface to an ensemble of 
models. Then it might look more like a dog, depending on the algorithm(s) used to 
resolve contradictions between models. But to get closer to dog-complexity, you'd 
have to wire the models together so that they could contradict each other but 
still feed off each other in some way. A model that changes its mind midway 
through its response would be good. I haven't had a dog in a long time. But I seem 
to remember they were easy to redirect, despite the old saying "like a dog with a 
bone".

On 1/17/23 12:51, Prof David West wrote:

Apropos of nothing:

The human heart has roughly 40,000 neurons and the human gut around 0.1 billion 
neurons (sensory neurons, neurotransmitters, ganglia, and motor neurons).

So the human gut is about 1/5 as smart as Marcus's dog??

davew


On Tue, Jan 17, 2023, at 1:08 PM, Marcus Daniels wrote:

Dogs have about 500 million neurons in their cortex.  Neurons have
about 7,000 synaptic connections, so I think my dog is a lot smarter
than a billion parameter LLM.  :-)

Sent from my iPhone


On Jan 17, 2023, at 11:35 AM, glen  wrote:


1) "I asked Chat GPT to write a song in the style of Nick Cave and this is what it 
produced. What do you think?"
https://www.theredhandfiles.com/chat-gpt-what-do-you-think/

2) "Is it pain if it does not hurt? On the unlikelihood of insect pain"
https://www.cambridge.org/core/journals/canadian-entomologist/article/is-it-pain-if-it-does-not-hurt-on-the-unlikelihood-of-insect-pain/9A60617352A45B15E25307F85FF2E8F2#

Taken separately, (1) and (2) are each interesting, if seemingly orthogonal. But what 
twines them, I think, is the concept of "mutual information". I read (2) before 
I read (1) because, for some bizarre reason, my day job involves trying to understand 
pain mechanisms. And (2) speaks directly (if only implicitly) to things like IIT. If you 
read (1) first, it's difficult to avoid snapping quickly into NickC's canal. Despite 
NickT's objection to an inner life, it seems clear that the nuance we see on the surface, 
at least longitudinally, *needs* an inner life. You simply can't get good stuff out of an 
entirely flat/transparent/reactive/Markovian object.

However, what NickC misses is that LLMs *have* some intertwined mutual information within them. Similar to asking whether an 
insect experiences pain, we can ask whether a X billion parameter LLM experiences something like "suffering". My guess 
is the answer is "yes". It may not be a good analog to what we call "suffering", though ... maybe 
"friction"? ... maybe "release"? My sense is that when you engage a LLM (embedded in a larger construct that 
handles the prompts and live learning, of course) in such a way that it assembles a response that nobody else has evoked, it 
might get something akin to a tingle ... or like the relief you feel when scratching an itch ... of course it would be primordial 
because the self-attention in such a system is hopelessly disabled compared to the rich self-attention loops we have in our meaty 
bodies. But it just *might* be there in some primitive sense.

As always, agnosticism is the only rational stance. And I won't trust the songs 
written by LLMs until I see a few of them commit suicide, overdose, or punch a 
TMZ cameraman in the face.


--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ

Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread Steve Smith


On 1/17/23 1:08 PM, Marcus Daniels wrote:

Dogs have about 500 million neurons in their cortex.  Neurons have about 7,000 
synaptic connections, so I think my dog is a lot smarter than a billion 
parameter LLM.  :-)
And I bet (s)he channels *at least* one FriAM member's affect pretty 
well also!


My 9 month old golden-doodle does as good of a job at that (I won't name 
names) as my (now deceased 11 year old Akita and my 9 year old chocolate 
dobie mix both did) but nobody here really demonstrates the basic nature 
of either my 9 month old tabby or her 20 year old black-mouser 
predecessor.    There is very little overlap.


The jays and the woodpeckers and the finches and towhees and sparrows 
and nuthatches and robins and the mating pair of doves and the several 
ravens and the (courting?) pair of owls (that I only hear hooting to one 
another in the night) and the lone (that I see) hawk and the lone blue 
heron (very more occasionally) and the flock(lets) of geese migrating 
down the rio-grande flyway... their aggregate neural complexity is only 
multiplicative (order 100-1000x) that of any given beast... but somehow 
their interactions (this is without the half-dozen species of rodentia 
and leporidae and racoons and insects and worms and ) would seem to 
have a more combinatorial network of relations?


I tried signing up to try chatGPT for myself (thanks to Glen's Nick Cave 
blog-link) and was denied because "too busy, try back later" and 
realized that it had become a locus for (first world) humans to express 
and combine their greatest hopes and worst fears in a single place.


This seems like a higher-order training set?  Not just the intersection 
of all things "worth saying" but somehow filtered/diffracted through 
"the things (some) people are interested in in particular"...



-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread Prof David West
Apropos of nothing:

The human heart has roughly 40,000 neurons and the human gut around 0.1 billion 
neurons (sensory neurons, neurotransmitters, ganglia, and motor neurons).

So the human gut is about 1/5 as smart as Marcus's dog??

davew


On Tue, Jan 17, 2023, at 1:08 PM, Marcus Daniels wrote:
> Dogs have about 500 million neurons in their cortex.  Neurons have 
> about 7,000 synaptic connections, so I think my dog is a lot smarter 
> than a billion parameter LLM.  :-)
>
> Sent from my iPhone
>
>> On Jan 17, 2023, at 11:35 AM, glen  wrote:
>> 
>> 
>> 1) "I asked Chat GPT to write a song in the style of Nick Cave and this is 
>> what it produced. What do you think?"
>> https://www.theredhandfiles.com/chat-gpt-what-do-you-think/
>> 
>> 2) "Is it pain if it does not hurt? On the unlikelihood of insect pain"
>> https://www.cambridge.org/core/journals/canadian-entomologist/article/is-it-pain-if-it-does-not-hurt-on-the-unlikelihood-of-insect-pain/9A60617352A45B15E25307F85FF2E8F2#
>> 
>> Taken separately, (1) and (2) are each interesting, if seemingly orthogonal. 
>> But what twines them, I think, is the concept of "mutual information". I 
>> read (2) before I read (1) because, for some bizarre reason, my day job 
>> involves trying to understand pain mechanisms. And (2) speaks directly (if 
>> only implicitly) to things like IIT. If you read (1) first, it's difficult 
>> to avoid snapping quickly into NickC's canal. Despite NickT's objection to 
>> an inner life, it seems clear that the nuance we see on the surface, at 
>> least longitudinally, *needs* an inner life. You simply can't get good stuff 
>> out of an entirely flat/transparent/reactive/Markovian object.
>> 
>> However, what NickC misses is that LLMs *have* some intertwined mutual 
>> information within them. Similar to asking whether an insect experiences 
>> pain, we can ask whether a X billion parameter LLM experiences something 
>> like "suffering". My guess is the answer is "yes". It may not be a good 
>> analog to what we call "suffering", though ... maybe "friction"? ... maybe 
>> "release"? My sense is that when you engage a LLM (embedded in a larger 
>> construct that handles the prompts and live learning, of course) in such a 
>> way that it assembles a response that nobody else has evoked, it might get 
>> something akin to a tingle ... or like the relief you feel when scratching 
>> an itch ... of course it would be primordial because the self-attention in 
>> such a system is hopelessly disabled compared to the rich self-attention 
>> loops we have in our meaty bodies. But it just *might* be there in some 
>> primitive sense.
>> 
>> As always, agnosticism is the only rational stance. And I won't trust the 
>> songs written by LLMs until I see a few of them commit suicide, overdose, or 
>> punch a TMZ cameraman in the face.
>> 
>> -- 
>> ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
>> 

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] NickC channels DaveW

2023-01-17 Thread Marcus Daniels
Dogs have about 500 million neurons in their cortex.  Neurons have about 7,000 
synaptic connections, so I think my dog is a lot smarter than a billion 
parameter LLM.  :-)

Sent from my iPhone
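
The back-of-envelope arithmetic behind that quip, using only the round numbers 
quoted above (a sketch of the comparison, not a claim about how to measure smarts):

    # Rough comparison: cortical synapses in a dog vs. parameters in a
    # 1-billion-parameter LLM, using the round numbers quoted above.
    dog_neurons = 500_000_000        # ~500 million cortical neurons
    synapses_per_neuron = 7_000      # ~7,000 connections per neuron
    llm_parameters = 1_000_000_000   # a "billion parameter" LLM

    dog_synapses = dog_neurons * synapses_per_neuron      # 3.5e12
    print(f"dog synapses ~ {dog_synapses:.1e}")
    print(f"ratio to LLM parameters ~ {dog_synapses / llm_parameters:,.0f}x")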

> On Jan 17, 2023, at 11:35 AM, glen  wrote:
> 
> 
> 1) "I asked Chat GPT to write a song in the style of Nick Cave and this is 
> what it produced. What do you think?"
> https://www.theredhandfiles.com/chat-gpt-what-do-you-think/
> 
> 2) "Is it pain if it does not hurt? On the unlikelihood of insect pain"
> https://www.cambridge.org/core/journals/canadian-entomologist/article/is-it-pain-if-it-does-not-hurt-on-the-unlikelihood-of-insect-pain/9A60617352A45B15E25307F85FF2E8F2#
> 
> Taken separately, (1) and (2) are each interesting, if seemingly orthogonal. 
> But what twines them, I think, is the concept of "mutual information". I read 
> (2) before I read (1) because, for some bizarre reason, my day job involves 
> trying to understand pain mechanisms. And (2) speaks directly (if only 
> implicitly) to things like IIT. If you read (1) first, it's difficult to 
> avoid snapping quickly into NickC's canal. Despite NickT's objection to an 
> inner life, it seems clear that the nuance we see on the surface, at least 
> longitudinally, *needs* an inner life. You simply can't get good stuff out of 
> an entirely flat/transparent/reactive/Markovian object.
> 
> However, what NickC misses is that LLMs *have* some intertwined mutual 
> information within them. Similar to asking whether an insect experiences 
> pain, we can ask whether a X billion parameter LLM experiences something like 
> "suffering". My guess is the answer is "yes". It may not be a good analog to 
> what we call "suffering", though ... maybe "friction"? ... maybe "release"? 
> My sense is that when you engage a LLM (embedded in a larger construct that 
> handles the prompts and live learning, of course) in such a way that it 
> assembles a response that nobody else has evoked, it might get something akin 
> to a tingle ... or like the relief you feel when scratching an itch ... of 
> course it would be primordial because the self-attention in such a system is 
> hopelessly disabled compared to the rich self-attention loops we have in our 
> meaty bodies. But it just *might* be there in some primitive sense.
> 
> As always, agnosticism is the only rational stance. And I won't trust the 
> songs written by LLMs until I see a few of them commit suicide, overdose, or 
> punch a TMZ cameraman in the face.
> 
> -- 
> ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
> 
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


[FRIAM] NickC channels DaveW

2023-01-17 Thread glen


1) "I asked Chat GPT to write a song in the style of Nick Cave and this is what it 
produced. What do you think?"
https://www.theredhandfiles.com/chat-gpt-what-do-you-think/

2) "Is it pain if it does not hurt? On the unlikelihood of insect pain"
https://www.cambridge.org/core/journals/canadian-entomologist/article/is-it-pain-if-it-does-not-hurt-on-the-unlikelihood-of-insect-pain/9A60617352A45B15E25307F85FF2E8F2#

Taken separately, (1) and (2) are each interesting, if seemingly orthogonal. But what 
twines them, I think, is the concept of "mutual information". I read (2) before 
I read (1) because, for some bizarre reason, my day job involves trying to understand 
pain mechanisms. And (2) speaks directly (if only implicitly) to things like IIT. If you 
read (1) first, it's difficult to avoid snapping quickly into NickC's canal. Despite 
NickT's objection to an inner life, it seems clear that the nuance we see on the surface, 
at least longitudinally, *needs* an inner life. You simply can't get good stuff out of an 
entirely flat/transparent/reactive/Markovian object.

However, what NickC misses is that LLMs *have* some intertwined mutual information within them. Similar to asking whether an 
insect experiences pain, we can ask whether a X billion parameter LLM experiences something like "suffering". My guess 
is the answer is "yes". It may not be a good analog to what we call "suffering", though ... maybe 
"friction"? ... maybe "release"? My sense is that when you engage a LLM (embedded in a larger construct that 
handles the prompts and live learning, of course) in such a way that it assembles a response that nobody else has evoked, it 
might get something akin to a tingle ... or like the relief you feel when scratching an itch ... of course it would be primordial 
because the self-attention in such a system is hopelessly disabled compared to the rich self-attention loops we have in our meaty 
bodies. But it just *might* be there in some primitive sense.

As always, agnosticism is the only rational stance. And I won't trust the songs 
written by LLMs until I see a few of them commit suicide, overdose, or punch a 
TMZ cameraman in the face.

--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/