Re: [FRIAM] Deep learning training material

2023-01-08 Thread Marcus Daniels
The main defects of both R and Python are the lack of a type system and of 
high-performance compilation.  I find R still follows (is used by) the statistics 
research community more than Python.  Common Lisp was always better than 
either.

Sent from my iPhone

On Jan 8, 2023, at 11:03 AM, Russ Abbott  wrote:


As indicated in my original reply, my interest in this project grows from my 
relative ignorance of Deep Learning. My career has focussed exclusively on 
symbolic computing. I've worked with and taught (a) functional programming, 
logic programming, and related issues in advanced Python; (b) complex systems, 
agent-based modeling, genetic algorithms, and related evolutionary processes, 
(c) a bit of constraint programming, especially in MiniZinc, and (d) 
reinforcement learning as Q-learning, which is reinforcement learning without 
neural nets. I've always avoided neural nets--and more generally numerical 
programming of any sort.

Deep learning has produced so many impressive results that I've decided to 
devote much of my retirement life to learning about it. I retired at the end of 
Spring 2022 and (after a break) am now devoting much of my time to learning 
more about Deep Neural Nets. So far, I've dipped my brain into it at various 
points. I think I've learned a fair amount. For example,

  *   I now know how to build a neural net (NN) that adds two numbers using a 
single layer with a single neuron. It's really quite simple and is, I think, a 
beautiful example of how NNs work. If I were to teach an intro to NNs I'd start 
with this.
  *   I've gone through the Kaggle Deep Learning sequence mentioned earlier.
  *   I found a paper that shows how you can approximate any differentiable 
function to any degree of accuracy with a single-layer NN. (This is a very nice 
result, although I believe it's not used explicitly in building serious Deep NN 
systems.)
  *   From what I've seen so far, most serious DNNs are built using Keras 
rather than PyTorch.
  *   I've looked at Jeremy Howard's fast.ai material. I was 
going to go through the course but stopped when I found that it uses PyTorch. 
Also, it seems to be built on fast.ai libraries that do a lot 
of the work for you without explanation.  And it seems to focus almost 
exclusively on Convolutional NNs.
  *   My impression of DNNs is that to a great extent they are ad hoc. There is 
no good way to determine the best architecture to use for a given problem. By 
architecture, I mean the number of layers, the number of neurons in each layer, 
the types of layers, the activation functions to use, etc.
  *   All DNNs that I've seen use Python as code glue rather than R or some 
other language. I like Python--so I'm pleased with that.
  *   To build serious NNs one should learn the Python libraries Numpy (array 
manipulation) and Pandas (data processing). Numpy especially seems to be used 
for virtually all DNNs that I've seen.
  *   Keras and probably PyTorch include a number of special-purpose neurons 
and layers that can be included in one's DNN. These include: a Dropout layer, 
LSTM (long short-term memory) neurons, convolutional layers, recurrent neural 
net layers (RNN), and more recently transformers, which get credit for ChatGPT 
and related programs. My impression is that these special-purpose layers are ad 
hoc in the same sense that functions or libraries that one finds useful in a 
programming language are ad hoc. They have been very important for the success 
of DNNs, but they came into existence because people invented them in the same 
way that people invented useful functions and libraries.
  *   NN libraries also include a menagerie of activation functions. An 
activation function acts as the final control on the output of a layer. 
Different activation functions are used for different purposes. To be 
successful in building a DNN, one must understand what those activation 
functions do for you and which ones to use.
  *   I'm especially interested in DNNs that use reinforcement learning. That's 
because the first DNN work that impressed me was DeepMind's DNNs that learned 
to play Atari games--and then Go, etc. An important advantage of Reinforcement 
Learning (RL) is that it doesn't depend on mountains of labeled data.
  *   I find RL systems more interesting than image recognition systems. One of 
the striking features of many image recognition systems is that they can be 
thrown off by changing a small number of pixels in an image. The changed image 
would look to a human observer just like the original, but it might fool a 
trained NN into labeling the image as a banana rather than, say, an automobile, 
which is what it really is. To address this problem people have developed 
Generative Adversarial Networks (GANs) which attempt to find such weaknesses in 
a neural net during training and then to train the NN not to have those 
weaknesses. This is a fascinating result, but as far as I can tell, it mainly 
shows how fragile some NNs are and doesn't add much conceptual depth to one's 
understanding of how NNs work.

Re: [FRIAM] new thermal tech

2023-01-08 Thread Roger Critchlow
I learned most everything I know about thermoacoustic heat engines while
trying to read those papers, then I went back to the day job hacking code.

-- rec --


On Sun, Jan 8, 2023 at 6:34 AM David Eric Smith  wrote:

> The thermoacoustic one is interesting, and surprises me a bit.
>
> I worked on these systems a bit in the mid-1990s, when in a kind of
> purgatory in a navy research lab that mostly did acoustics.
>
> Broadly, there are two limiting cases for a thermoacoustic engine.  One
> uses a standing wave and is simple and robust to design and run.  The other
> uses a traveling wave and is much harder to tune and keep tuned.
>
> A difference is that the SW version, which we might say runs on a
> “thermoacoustic cycle”, makes intrinsic use of the phase lag for diffusion
> of heat through a boundary layer.  As such, it has no nontrivial reversible
> limit, and has severe limits on the efficiency (or coefficient of
> performance, if you are running it as a refrigerator).  So hearing that
> they get COPs comparable to existing mechanical systems would make me
> suspicious that they were using SW.
>
> The TW version runs on, effectively, the Stirling cycle, and in principle
> it does have a reversible, Carnot-efficient limit.  However, it has
> parasitic losses from viscous boundary layers.  The engineering limit you
> need to approach ideal thermal transfer efficiency is one that chokes off
> the flow of the working fluid, and makes the viscous drag explode.  Using
> an ideal gas like He reduces the viscosity, though also the heat capacity
> and diffusion rate through the fluid.
>
> On their website, they have a little advertising graphic of a sound wave,
> which shows a traveling wave (or a mixed wave with large TW component).  It
> would be reasonable, if they are scientists or engineers, for them to make
> their public graphics true representations of at least qualitatively what
> their system does.
>
> In view of the fact that there is very little conceptual work to do with a
> thermoacoustic engine, and it is all materials science and tweaking
> engineering details, I really wonder what would have taken 27 years to
> figure out, or to get around to doing.
>
>
> For geeks who like this stuff, there is a fun continuum:
>
> 1. When I was a little kid, I got an ultra-simple Stirling engine from a
> mail advertisement (back when those weren’t all scams), and was delighted
> by it.
>
> 2. In reading more about Stirling cycles etc., I learned about
> “free-piston” Stirling engines, which have the same compartments and
> barriers, but use the compression-bounce of the gas to move the displacer
> piston rather than a mechanical linkage.
>
> 3. The TW thermoacoustic engine is just a free-piston Stirling without
> the piston: the shuttle of gas becomes the displacer.
>
> 4. Some years later, having been thrown out of String Theory for being too
> stupid to understand it, I was interested in the way adiabatic
> transformations look like mere coordinate deformations in state spaces,
> which means that one should be able to make Carnot-efficient reversible
> movement identical to equilibrium by use of a conformal field (the String
> Theorist’s universal symmetry transformation, back in those days).  So we
> can do thermoacoustic engines using String Theory (Hooray!):
> https://journals.aps.org/pre/abstract/10.1103/PhysRevE.58.2818
> http://www.santafe.edu/~desmith/PDF_pubs/Carnot_1.pdf
> and then
> https://journals.aps.org/pre/abstract/10.1103/PhysRevE.60.3633
> http://www.santafe.edu/~desmith/PDF_pubs/Carnot_2.pdf
> Papers I know no-one has ever had any interest in, and very possibly
> no-one has ever read.
>
> I thought it was very fun to be able to derive Carnot’s theorem directly
> from a symmetry transformation, so entropy flux behaves like any other
> conserved quantity, rather than having to make arguments about limits to
> thermodynamic efficiency by daisy-chain proofs-by-contradiction (If you
> could do such-and-such, then by running an exemplar Carnot engine in
> reverse, you could make a perpetual-motion machine of type-XYZ).  But I
> never did anything with it that yielded a new calculation, as opposed to
> just a restatement of common knowledge.
>
> Anyway…
>
> Eric
>
>
>
>
>
> On Jan 6, 2023, at 8:27 AM, Roger Critchlow  wrote:
>
> I was amused to see an announcement of a thermoacoustic heat pump  the
> other day:
>
>
> https://www.pv-magazine.com/2023/01/02/residential-thermo-acoustic-heat-pump-produces-water-up-to-80-c/
> 
>
> then an ionocaloric refrigerator announcement turns up this morning
>
>   https://newscenter.lbl.gov/2023/01/03/cool-new-method-of-refrigeration/
> 

Re: [FRIAM] Deep learning training material

2023-01-08 Thread Russ Abbott
As indicated in my original reply, my interest in this project grows from
my relative ignorance of Deep Learning. My career has focussed exclusively
on symbolic computing. I've worked with and taught (a) functional
programming, logic programming, and related issues in advanced Python; (b)
complex systems, agent-based modeling, genetic algorithms, and related
evolutionary processes, (c) a bit of constraint programming, especially in
MiniZinc, and (d) reinforcement learning as Q-learning, which is
reinforcement learning without neural nets. I've always avoided neural
nets--and more generally numerical programming of any sort.
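
Since I mention Q-learning, here is a minimal sketch of the tabular update rule
I have in mind (purely illustrative; the toy environment and the learning-rate
and discount values are made up, not taken from anything I've taught):

    # Tabular Q-learning: no neural net, just a table of state-action values.
    import random
    random.seed(0)

    n_states, n_actions = 5, 2
    Q = [[0.0] * n_actions for _ in range(n_states)]   # the Q-table
    alpha, gamma, epsilon = 0.1, 0.9, 0.2              # step size, discount, exploration

    def step(state, action):
        # Toy dynamics: action 1 moves right, action 0 stays; reward 1 on reaching the end.
        nxt = min(state + action, n_states - 1)
        return nxt, (1.0 if nxt == n_states - 1 else 0.0)

    for episode in range(500):
        s = 0
        while s != n_states - 1:
            if random.random() < epsilon:
                a = random.randrange(n_actions)                       # explore
            else:
                a = max(range(n_actions), key=lambda act: Q[s][act])  # exploit
            s_next, r = step(s, a)
            # The core update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a')
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next

    print(Q)   # "move right" ends up with the larger value in every state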

Deep learning has produced so many impressive results that I've decided to
devote much of my retirement life to learning about it. I retired at the
end of Spring 2022 and (after a break) am now devoting much of my time to
learning more about Deep Neural Nets. So far, I've dipped my brain into it
at various points. I think I've learned a fair amount. For example,

   - I now know how to build a neural net (NN) that adds two numbers using
   a single layer with a single neuron. It's really quite simple and is, I
   think, a beautiful example of how NNs work. If I were to teach an intro to
   NNs I'd start with this. (A minimal sketch in Keras appears just after this
   list.)
   - I've gone through the Kaggle Deep Learning sequence mentioned earlier.
   - I found a paper that shows how you can approximate any differentiable
   function to any degree of accuracy with a single-layer NN. (This is a very
   nice result, although I believe it's not used explicitly in building
   serious Deep NN systems.)
   - From what I've seen so far, most serious DNNs are built using Keras
   rather than PyTorch.
   - I've looked at Jeremy Howard's fast.ai material. I was going to go
   through the course but stopped when I found that it uses PyTorch. Also, it
   seems to be built on fast.ai libraries that do a lot of the work for you
   without explanation.  And it seems to focus almost exclusively on
   Convolutional NNs.
   - My impression of DNNs is that to a great extent they are *ad hoc*.
   There is no good way to determine the best architecture to use for a given
   problem. By architecture, I mean the number of layers, the number of
   neurons in each layer, the types of layers, the activation functions to
   use, etc.
   - All DNNs that I've seen use Python as code glue rather than R or some
   other language. I like Python--so I'm pleased with that.
   - To build serious NNs one should learn the Python libraries Numpy
   (array manipulation) and Pandas (data processing). Numpy especially seems
   to be used for virtually all DNNs that I've seen.
   - Keras and probably PyTorch include a number of special-purpose
   neurons and layers that can be included in one's DNN. These include: a
   Dropout layer, LSTM (long short-term memory) neurons, convolutional layers,
   recurrent neural net layers (RNN), and more recently transformers, which
   get credit for ChatGPT and related programs. My impression is that these
   special-purpose layers are *ad hoc* in the same sense that functions or
   libraries that one finds useful in a programming language are *ad hoc*.
   They have been very important for the success of DNNs, but they came into
   existence because people invented them in the same way that people invented
   useful functions and libraries.
   - NN libraries also include a menagerie of activation functions. An
   activation function acts as the final control on the output of a layer.
   Different activation functions are used for different purposes. To be
   successful in building a DNN, one must understand what those activation
   functions do for you and which ones to use.
   - I'm especially interested in DNNs that use reinforcement learning.
   That's because the first DNN work that impressed me was DeepMind's DNNs
   that learned to play Atari games--and then Go, etc. An important advantage
   of Reinforcement Learning (RL) is that it doesn't depend on mountains of
   labeled data.
   - I find RL systems more interesting than image recognition systems. One
   of the striking features of many image recognition systems is that they can
   be thrown off by changing a small number of pixels in an image. The changed
   image would look to a human observer just like the original, but it might
   fool a trained NN into labeling the image as a banana rather than, say, an
   automobile, which is what it really is. To address this problem people have
   developed Generative Adversarial Networks (GANs) which attempt to find such
   weaknesses in a neural net during training and then to train the NN not to
   have those weaknesses. This is a fascinating result, but as far as I can
   tell, it mainly shows how fragile some NNs are and doesn't add much
   conceptual depth to one's understanding of how NNs work.
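
Re the single-neuron adder in the first bullet, here is a minimal sketch of
what I mean, in Keras (illustrative only; the optimizer, epoch count, and data
ranges are my own assumptions, not taken from any course):

    # One Dense layer with one neuron learns y = x1 + x2.
    import numpy as np
    from tensorflow import keras

    rng = np.random.default_rng(0)
    X = rng.uniform(-10, 10, size=(10_000, 2))   # pairs of numbers
    y = X.sum(axis=1)                            # their sums

    model = keras.Sequential([
        keras.Input(shape=(2,)),
        keras.layers.Dense(1)                    # one neuron, default (linear) activation
    ])
    model.compile(optimizer="adam", loss="mse")  # regression, so mean-squared error
    model.fit(X, y, epochs=50, verbose=0)

    print(model.get_weights())                     # weights head toward [[1], [1]], bias toward 0
    print(model.predict(np.array([[3.0, 4.0]])))   # roughly 7

The linear (identity) activation matters here: a sum is a regression-style
output, so there is nothing to squash, which is exactly the kind of
activation-function choice mentioned further down the list.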

I'm impressed with this list of things I sort of know. If you had asked me
before I started writing this email I wouldn't have thought I had learned

Re: [FRIAM] The WEBB seeing back to the first millennia

2023-01-08 Thread David Eric Smith
So there’s a “reply” (or whatever) that I have had an impulse to post for two 
weeks now, but had to forbid myself the frivolity of writing.

Also, having seen the recent posts, I think it is already resident in 
everything Glen takes for granted as having settled from our years of 
conversation on this list.


OTOH, I appear to be a strong believer in priming.  On 12/27/22 I wrote the 
tangent about the Edmundson critique of Rorty, and out of the many things that 
could have been triggered by reference to Peirce, Glen replied with things 
related to infinity and fixed points, which were actually what was on my mind 
too.

Then, on 12/29/22 there was the exchange in response to Gil’s questions about 
Big Bang, and infinity became the center of Glen’s and my back-and-forth, 
though more as a feature of another discussion than as the main topic.

Then there was Nick on 01/07/23 and my rejoinder about sample estimators and 
whatever central tendency they might converge to if they are unbiased.

That is my lead-in for making some things related to infinity the main point in 
this post, and not merely features of some other application.


A thing that has been a sort of nuisance to me, on which I would like to have 
an opinion, is a cloud around several of these topics.  I listen to the 
contemplatives talk about the way they actually understand “reality” and 
everyone else is benighted, and I can’t tell if they actually understand 
something or are fetishists for a certain form (this is not directed at DaveW, 
but at a different collection of people).  I don’t mean this antagonistically, 
but just as a statement that, if there is substance behind their language, I 
have no ability to tell whether there is, or what it might be.

Then there is the cluster of questions about Truth a la Peirce, and various 
questions about mathematical Platonism, constructivism, and formalist vs. 
intuitionist schools, where again I find myself having difficulty understanding 
what it is they are willing to fight to the death about, when what I can see on 
the outside is a bunch of conventional behaviors, at most, which seemingly one 
could “feel” about quite many ways.

So, to boil it down to too-few tokens, here is what I try to content myself 
with as an explanation.

1. A lot of this is about getting at the nature and characteristics of thought. 
 To say that, I do not accept being committed to either the 
“philistine-the-world-is-out-there” camp or the dreamy 
“world-is-contained-in-mind” camp.  We haven’t said enough of anything definite 
to have meant anything yet.  I am still at the level of the crudest descriptive 
empiricism, and _NO_ profundity.

2. Some things seem to be pretty tractable as literals, which we might call 
“states of knowledge”.  Finite counts of things, the numerical quantities of 
sample estimators, nouns that are only used to point at things, in the sense of 
directing attention, or whatever.

3. But we also have rules, and a lot of the rules can be applied recursively 
without limit.  We seem to need, as part of “the structure of thought” 
(whatever that should mean), to treat those things we have constructed to be 
unattainable as having been attained.  Chuck Norris has counted to infinity.  
Twice.  

4. What shall we do with point 3?  Well, we can’t attain them, so we will put 
up placeholders to stand for a kind of poetic fiction of “attaining them” — 
meant in the sense of Jerry Sussman’s aphorism that “math is poetry” — and then 
propose finite syntactic constructions to manipulate the fixed points.  
Frequently we want to define the syntax to manipulate the fixed points from 
properties of the rules whose recursion the fixed points are supposed to fix.  
But maybe we have to just invent, out of imagination, other properties we want 
the fixed points to have, which are not constructible directly from the rules 
and their recursions.

5. My claim to Nick is that these placeholders for the fixed points of rule 
recursions are clearly understandable as filling a different mental or 
cognitive role than the states of knowledge that we are aware correspond to 
only finite orders of rule use.

6. The conjecture (by me) is that what we can see of our own thought structure 
from ways of handling infinities is not a bad model, not only for “Truth” a la 
Peirce, but also for tokens like “Reality”.  I don’t generally imagine I have 
any idea what someone else thinks he means when he talks “about reality” or 
“about what is real”.  But I am willing to cast an opinion about what he is 
doing cognitively with such a term, which is treating a thing he has 
constructed as unattainable, as if it had been attained.

7. Of course, there are differences.  For sample estimators and underlying 
properties, we don’t worry about “whether both of these, or only one of them, 
exists”, since we are in a domain where the equivalent status of both as 
existing (whatever status that is) is a starting point of the framing.  Only 
our access

Re: [FRIAM] new thermal tech

2023-01-08 Thread David Eric Smith
The thermoacoustic one is interesting, and surprises me a bit.

I worked on these systems a bit in the mid-1990s, when in a kind of purgatory 
in a navy research lab that mostly did acoustics.

Broadly, there are two limiting cases for a thermoacoustic engine.  One uses a 
standing wave and is simple and robust to design and run.  The other uses a 
traveling wave and is much harder to tune and keep tuned.

A difference is that the SW version, which we might say runs on a 
“thermoacoustic cycle”, makes intrinsic use of the phase lag for diffusion of 
heat through a boundary layer.  As such, it has no nontrivial reversible limit, 
and has severe limits on the efficiency (or coefficient of performance, if you 
are running it as a refrigerator).  So hearing that they get COPs comparable to 
existing mechanical systems would make me suspicious that they were using SW.

The TW version runs on, effectively, the Stirling cycle, and in principle it 
does have a reversible, Carnot-efficient limit.  However, it has parasitic 
losses from viscous boundary layers.  The engineering limit you need to 
approach ideal thermal transfer efficiency is one that chokes off the flow of 
the working fluid, and makes the viscous drag explode.  Using an ideal gas like 
He reduces the viscosity, though also the heat capacity and diffusion rate 
through the fluid.
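
For a sense of scale, the Carnot limit for a heat pump is a one-line
computation (the temperatures below are illustrative assumptions, not numbers
from the article):

    # Ideal (Carnot) heating COP for delivering 80 C water from 10 C ambient.
    T_hot  = 80 + 273.15   # K
    T_cold = 10 + 273.15   # K
    print(round(T_hot / (T_hot - T_cold), 2))   # about 5.05

Any real device, TW or SW, necessarily sits below that bound, and the SW cycle
sits well below it.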

On their website, they have a little advertising graphic of a sound wave, which 
shows a traveling wave (or a mixed wave with large TW component).  It would be 
reasonable, if they are scientists or engineers, for them to make their public 
graphics true representations of at least qualitatively what their system does. 
 

In view of the fact that there is very little conceptual work to do with a 
thermoacoustic engine, and it is all materials science and tweaking 
engineering details, I really wonder what would have taken 27 years to figure 
out, or to get around to doing.


For geeks who like this stuff, there is a fun continuum:

1. When I was a little kid, I got an ultra-simple Stirling engine from a mail 
advertisement (back when those weren’t all scams), and was delighted by it.

2. In reading more about Stirling cycles etc., I learned about “free-piston” 
Stirling engines, which have the same compartments and barriers, but use the 
compression-bounce of the gas to move the displacer piston rather than a 
mechanical linkage.

3. The TW thermoacoustic engine is just a free-piston Stirling without the 
piston: the shuttle of gas becomes the displacer.

4. Some years later, having been thrown out of String Theory for being too 
stupid to understand it, I was interested in the way adiabatic transformations 
look like mere coordinate deformations in state spaces, which means that one 
should be able to make Carnot-efficient reversible movement identical to 
equilibrium by use of a conformal field (the String Theorist’s universal 
symmetry transformation, back in those days).  So we can do thermoacoustic 
engines using String Theory (Hooray!):
https://journals.aps.org/pre/abstract/10.1103/PhysRevE.58.2818 

http://www.santafe.edu/~desmith/PDF_pubs/Carnot_1.pdf 

and then 
https://journals.aps.org/pre/abstract/10.1103/PhysRevE.60.3633 

http://www.santafe.edu/~desmith/PDF_pubs/Carnot_2.pdf 

Papers I know no-one has ever had any interest in, and very possibly no-one has 
ever read.

I thought it was very fun to be able to derive Carnot’s theorem directly from a 
symmetry transformation, so entropy flux behaves like any other conserved 
quantity, rather than having to make arguments about limits to thermodynamic 
efficiency by daisy-chain proofs-by-contradiction (If you could do 
such-and-such, then by running an exemplar Carnot engine in reverse, you could 
make a perpetual-motion machine of type-XYZ).  But I never did anything with it 
that yielded a new calculation, as opposed to just a restatement of common 
knowledge.

Anyway…

Eric





> On Jan 6, 2023, at 8:27 AM, Roger Critchlow  wrote:
> 
> I was amused to see an announcement of a thermoacoustic heat pump  the other 
> day:
> 
>   
> https://www.pv-magazine.com/2023/01/02/residential-thermo-acoustic-heat-pump-produces-water-up-to-80-c/
>  
> 
> 
> then an ionocaloric refrigerator announcement turns up this morning
> 
>   https://newscenter.lbl.gov/2023/01/03/cool-new-method-of-refrigeration/ 
> 

Re: [FRIAM] (not) leaving Twitter

2023-01-08 Thread glen

Dispositional belief (by which I mean acting as if you believe) in some thing 
requires there be a somewhat coherent thing in which to believe, whether or not 
that thing actually exists (e.g. a mathematical limit). That's necessary for 
*progression*. Admittedly, there can be a ball of uncertainty around the thing 
and around any wandering path to the thing. But there does have to be a thing 
in order for it to be fideistic. Apotheotic conceptions (e.g. long-termism) 
follow it to a T.

Of course, there are "behaviorists" out there who impute a thing onto agnostic 
or wave-riding opportunists. (People impute such upon me all the time.) And then, given 
enough data about the actor's behaviors, if their actions really *do* converge over a 
long enough time and toward a small enough ball around some thing, then whether or not 
they're merely opportunists or True Believers is a distinction without a difference. As 
long as we can effectively *treat* them as if they believe in the thing, then it's 
irrelevant whether or not they actually believe in the thing.

So, to your question of "can't one just ...?": Sure. Of course. But the coherence of 
one's actions is a practical indicator for fideism. And Musk exhibits it. The Uihleins 
exhibit it. I think Thiel does, too. Estimating what any of them might do, or why they 
might do what they do, then, depends on how coherent you can make the thing and their 
trajectory towards that thing. (It's also a good way to manipulate people by estimating 
what they *might* believe in, then dangling something that looks like it in front of 
them. E.g. Facebook's Metaverse, decentralized finance with the cryptobros, or the 
tendency to ignore the bullsh¡t of ChatGPT.)

In order for actor A to avoid imputing belief in a thing onto actor B, actor 
B's exhibited repertoire has to be incompressible enough to resist reduction to 
1 or a few things. And the controller/modeler has to have at least as much 
variation as the system being controlled/modeled. So actor A has to have enough 
of a (perhaps latent) repertoire to recognize the limits to B's repertoire. 
It's a fine line between genius and insanity. But whether that line is a 
limitation of the observer or the lack of a limitation of the observed is 
particular to the case.

On 10/31/22 10:06, Marcus Daniels wrote:

Why are those communities fideist?  Can’t one just ride waves of uncertainty as 
a curious person or as an opportunist watching for opportunity?


On Oct 31, 2022, at 9:04 AM, glen  wrote:

OK. But even if that's true, there's an overpowering thread of fideism in many 
of these communities. E.g. longevity (parabiosis), long-termism, transhumanism, 
sea-steading, cryptocurrencies (not including wider blockchain), climate 
optimism, etc. Christianity intersects much of this because it's a little bit 
apotheotic, which is one reason many offshoots like Mormonism can infer more 
weirdo beliefs from the basis set. Another example: 
https://en.wikipedia.org/wiki/World_Mission_Society_Church_of_God

That thread is, I think, easily distinguished from the Fire & Brimstone, End 
Times type. That progressive fideism combined with their technical foci couples 
Thiel and Musk fairly tightly. I suspect the Uihleins might be the opposite, more 
regressive. It's relatively easy to believe that Musk is acting in Good Faith, and 
similarly easy to believe the Uihleins are acting in Bad Faith. Thiel's more 
occult. It's inadequate to write him off as simply weirdo. That insults us proud 
weirdos.

On 10/31/22 07:07, Marcus Daniels wrote:

Thiel is a Christian weirdo.
Sent from my iPhone

On Oct 31, 2022, at 6:50 AM, glen  wrote:

Do you get this:

https://theweek.com/speedreads/972170/peter-thiels-largest-disclosed-political-donation-ever-possible-jd-vance-senate-run

Doctorow has an interesting take:

https://pluralistic.net/2022/10/26/boxed-in/
"The Uihleins are ideologues, but it's a mistake to view their authoritarianism, 
antisemitism, racism, and homophobia as the main force of their ideology. First and 
foremost is their belief that they deserve to be rich, and that the rich should be in 
charge of everyone else."

I'm not convinced. But it's plausible. What do Musk, Thiel, and the Uihleins 
have in common? They *probably* think they're better at something than the rest 
of us. What is that something they think they're better at? If you answer that, 
then maybe it'll explain why Musk bought Twitter?


On 10/31/22 06:42, Marcus Daniels wrote:
I don’t get it.  It seems undisciplined to put his successful companies at risk 
to buy this money loser, while at the same time getting all this bad press.

On Oct 31, 2022, at 5:11 AM, glen  wrote:


Yeah, I deleted all my Tweets, unfollowed everyone, and removed all my followers. 
Musk is an asshole. I know my lack of participation means nothing. But at least I 
won't be (as) complicit. There are no good billionaires 


Re: [FRIAM] Dope slaps, anyone? Text displaying correctly?

2023-01-08 Thread glen

This smacks of Feferman's claim that "implicit in the acceptance of given schemata 
is the acceptance of any meaningful substitution instances that one may come to meet, but 
which those instances are is not determined by restriction to a specific language fixed 
in advance." ... or in the language of my youth, you reap what you sow.

To Nick's credit (without any presumption that I know anything about Peirce), 
he seems to be hunting the same unicorn Feferman's hunting, something like a 
language-independent language. Or maybe something analogous to a moment (cf 
https://en.wikipedia.org/wiki/Moment_(mathematics))?

While we're on the subject, Martin Davis died recently: 
https://logicprogramming.org/2023/01/in-memoriam-martin-davis/ As terse as he was with me 
when I complained about him leaving Tarski out of "Engines of Logic", his loss 
will be felt, especially by us randos on the internet.

On 1/7/23 15:20, David Eric Smith wrote:

Nick, the text renders.

You use words in ways that I cannot parse.  Some of them seem very poetic, 
suggesting that your intended meaning is different in its whole cast from one I 
could try for.

FWIW: as I have heard these discussions over the years, to the extent that 
there is a productive analogy, I would say (unapologetically using my words, 
and not trying to quote his) that Peirce’s claimed relation between states of 
knowledge and truth (meaning, some fully-faithful representation of “what is 
the case”) is analogous to the relation of sample estimators in statistics to 
the quantity they are constructed to estimate.

We don’t have any ontological problems understanding sample estimators and the 
quantities estimated, as both have status in the ordinary world of empirical 
things.  In our ontology, they are peers in some sense, but they clearly play 
different roles and stand for different concepts.

When we come, however, to “states of knowledge” and “truth” as “what will bear out 
in the long run”, in addition to the fact that we must study the roles of these 
tokens in our thought and discourse, if we want to get at the concepts expressive 
of their nature, we also have a hideously more complicated structure to categorize 
than mere sample estimators and the corresponding “actual” values they are 
constructed to estimate.

For sample estimation, in some sense, we know that the representation for the 
estimator and the estimated is the same, and that they are both numbers in some 
number system.

If we wish to discuss states of knowledge and truth, everything is up for grabs: 
every convention for a word’s denotation and all the rules for its use in a 
language that confer parts of its meaning.  All the conventions for procedures of 
observation and guided experience.  All the formal or informal modes of discourse 
in which we organize our intersubjective experience pools and build something from 
them.  All of that is allowed to “fluctuate”, as we would say in statistics of 
sample estimators.  The representation scheme itself, and our capacities to 
perceive through it, are all things we seek to bring into some convergence toward 
a “faithful representation” of “what is the case”.
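
For the narrow statistical half of that analogy, a toy numerical picture (the
numbers are illustrative, nothing more):

    # The sample mean fluctuates around the quantity it is constructed to estimate,
    # and tightens as the sample grows.
    import random
    random.seed(1)

    true_mean = 3.0
    for n in (10, 100, 10_000):
        sample = [random.gauss(true_mean, 1.0) for _ in range(n)]
        print(n, round(sum(sample) / n, 3))   # drifts toward 3.0 as n grows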


Speaking or thinking in an orderly way about that seems to have many technical 
as well as modal aspects.

Best,

Eric



On Jan 7, 2023, at 5:05 PM, Nicholas Thompson <thompnicks...@gmail.com> wrote:

The relation between the believed in and the True is the relation between a 
limited function and its limit. [a vector, and the thing toward which the 
vector points?]  Ultimately the observations that the function models determine 
the limit, but the limit is not determined by any particular observation or 
group of observations.  Peirce believes that The World -- if, in fact, it makes 
any sense to speak of a World independent of the human experience -- is 
essentially random and, therefore, that contingencies among experiences that 
lead to valid expectations are rare.  The apparition of order that we experience 
is due to the fact that such predictive contingencies--rare as they may be--are 
extraordinarily useful to organisms and so organisms are conditioned to attend 
to them.  Random events are beyond experience.  Order is what can be 
experienced.

Re: [FRIAM] Deep learning training material

2023-01-08 Thread glen

Yes, the money/expertise bar is still pretty high. But TANSTAAFL still applies. And the overwhelming evidence is 
coming in that specific models do better than those trained up on diverse data sets, "better" meaning 
less prone to subtle bullsh¡t. What I find fascinating is tools like OpenAI *facilitate* trespassing. We have a 
wonderful bloom of non-experts claiming they understand things like "deep learning". But do they? An 
old internet meme is brought to mind: "Do you even Linear Algebra, bro?" >8^D

On 1/8/23 01:06, Jochen Fromm wrote:

I have finished a number of Coursera courses recently, including "Deep Learning & 
Neural Networks with Keras" which was ok but not great. The problems with deep learning 
are

* to achieve impressive results like ChatGPT from OpenAI or LaMDA from Google 
you need to spend millions on hardware
* only big organisations can afford to create such expensive models
* the resulting network is a black box and it is unclear why it works the way 
it does

In the end it is just the same old back propagation that has been known for decades, just 
on more computers and trained on more data. Peter Norvig calls it "The unreasonable 
effectiveness of data"
https://research.google.com/pubs/archive/35179.pdf

-J.
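
For concreteness, a minimal sketch of the single back-propagation /
gradient-descent step being referred to, for one linear neuron in plain NumPy
(toy data and a made-up learning rate, nothing from the Coursera course):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))              # 100 samples, 3 features
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w                             # targets from a known linear rule

    w = np.zeros(3)                            # the "network": one neuron, weights w
    lr = 0.1
    for _ in range(200):
        y_hat = X @ w                          # forward pass
        grad = 2 * X.T @ (y_hat - y) / len(X)  # backward pass: dLoss/dw for MSE
        w -= lr * grad                         # gradient-descent update

    print(w)                                   # recovers roughly [2, -1, 0.5]

Scaling this same loop up to billions of weights and web-scale data is, as
Jochen says, most of what has changed.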


 Original message 
From: Russ Abbott 
Date: 1/8/23 12:20 AM (GMT+01:00)
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Deep learning training material

Hi Pieter,

A few comments.

  * Much of the actual deep learning material looks like it came from the Kaggle 
"Deep Learning" sequence.
  * In my opinion, R is an ugly and *ad hoc* language. I'd stick to Python.
  * More importantly, I would put the How-to-use-Python stuff into a 
preliminary class. Assume your audience knows how to use Python and focus on 
Deep Learning. Given that, there is only a minimal amount of information about 
Deep Learning in the write-up. If I were to attend the workshop and thought I 
would be learning about Deep Learning, I would be disappointed--at least with 
what's covered in the write-up.

I say this because I've been looking for a good intro to Deep Learning. Even though I taught Computer Science for many years, and am now retired, I avoided Deep Learning because it was so non-symbolic. My focus has always been on symbolic computing. But Deep Learning has produced so many extraordinarily impressive results, I decided I should learn more about it. I haven't found any really good material. If you are interested, I'd be more than happy to work with you on developing some introductory Deep Learning material. 


-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Thu, Jan 5, 2023 at 11:31 AM Pieter Steenekamp <piet...@randcontrols.co.za> wrote:

Thanks to the kind support of OpenAI's chatGPT, I am in the process of 
gathering materials for a comprehensive and hands-on deep learning workshop. 
Although it is still a work in progress, I welcome any interested parties to 
take a look and provide their valuable input. Thank you!

You can get it from:

https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0
 




--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



Re: [FRIAM] Deep learning training material

2023-01-08 Thread Jochen Fromm
I have finished a number of Coursera courses recently, including "Deep Learning 
& Neural Networks with Keras" which was ok but not great. The problems with 
deep learning are

* to achieve impressive results like ChatGPT from OpenAI or LaMDA from Google 
you need to spend millions on hardware
* only big organisations can afford to create such expensive models
* the resulting network is a black box and it is unclear why it works the way 
it does

In the end it is just the same old back propagation that has been known for 
decades, just on more computers and trained on more data. Peter Norvig calls it 
"The unreasonable effectiveness of data"
https://research.google.com/pubs/archive/35179.pdf

-J.

 Original message 
From: Russ Abbott
Date: 1/8/23 12:20 AM (GMT+01:00)
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Deep learning training material

Hi Pieter,

A few comments.

* Much of the actual deep learning material looks like it came from the Kaggle 
"Deep Learning" sequence.
* In my opinion, R is an ugly and ad hoc language. I'd stick to Python.
* More importantly, I would put the How-to-use-Python stuff into a preliminary 
class. Assume your audience knows how to use Python and focus on Deep Learning. 
Given that, there is only a minimal amount of information about Deep Learning 
in the write-up. If I were to attend the workshop and thought I would be 
learning about Deep Learning, I would be disappointed--at least with what's 
covered in the write-up.

I say this because I've been looking for a good intro to Deep Learning. Even 
though I taught Computer Science for many years, and am now retired, I avoided 
Deep Learning because it was so non-symbolic. My focus has always been on 
symbolic computing. But Deep Learning has produced so many extraordinarily 
impressive results, I decided I should learn more about it. I haven't found any 
really good material. If you are interested, I'd be more than happy to work 
with you on developing some introductory Deep Learning material.

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Thu, Jan 5, 2023 at 11:31 AM Pieter Steenekamp wrote:

Thanks to the kind support of OpenAI's chatGPT, I am in the process of 
gathering materials for a comprehensive and hands-on deep learning workshop. 
Although it is still a work in progress, I welcome any interested parties to 
take a look and provide their valuable input. Thank you!

You can get it from:
https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0

Pieter

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/