Re: [FRIAM] Deep learning training material

2023-01-13 Thread Stephen Guerin
On Sun, Jan 8, 2023, 2:47 AM glen  wrote:

> An old internet meme is brought to mind: "Do you even Linear Algebra,
> bro?" >8^D
>

:-) hadn't heard that one.  I found an instance related to your metal
interest.

https://www.reddit.com/r/MetalMemes/comments/f9ttei/pretty_much/fiuknrn

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] Deep learning training material

2023-01-13 Thread Pieter Steenekamp
>>> So far I've done all my exploration using Google's Colab (Google's
>>> Python notebook implementation) and Kaggle's similar Python
>>> notebook implementation. (I prefer Colab to Kaggle.) Using either one, it's
>>> super nice not to have to download and install anything!
>>>
>>> I'm continuing my journey to learn more about DNNs. I'd be happy to have
>>> company and to help develop materials to teach about DNNs. (Developing
>>> teaching materials always helps me learn the subject being covered.)
>>>
>>> -- Russ Abbott
>>> Professor Emeritus, Computer Science
>>> California State University, Los Angeles
>>>
>>>
>>> On Sun, Jan 8, 2023 at 1:48 AM glen  wrote:
>>>
>>>> Yes, the money/expertise bar is still pretty high. But TANSTAAFL still
>>>> applies. And the overwhelming evidence is coming in that specific models do
>>>> better than those trained up on diverse data sets, "better" meaning less
>>>> prone to subtle bullsh¡t. What I find fascinating is tools like OpenAI
>>>> *facilitate* trespassing. We have a wonderful bloom of non-experts claiming
>>>> they understand things like "deep learning". But do they? An old internet
>>>> meme is brought to mind: "Do you even Linear Algebra, bro?" >8^D
>>>>
>>>> On 1/8/23 01:06, Jochen Fromm wrote:
>>>> > I have finished a number of Coursera courses recently, including
>>>> "Deep Learning & Neural Networks with Keras" which was ok but not great.
>>>> The problems with deep learning are
>>>> >
>>>> > * to achieve impressive results like ChatGPT from OpenAI or LaMDA
>>>> from Google you need to spend millions on hardware
>>>> > * only big organisations can afford to create such expensive models
>>>> > * the resulting network is a black box and it is unclear why it works
>>>> the way it does
>>>> >
>>>> > In the end it is just the same old back propagation that has been
>>>> known for decades, just on more computers and trained on more data. Peter
>>>> Norvig calls it "The unreasonable effectiveness of data"
>>>> > https://research.google.com/pubs/archive/35179.pdf
>>>> >
>>>> > -J.
>>>> >
>>>> >
>>>> >  Original message 
>>>> > From: Russ Abbott 
>>>> > Date: 1/8/23 12:20 AM (GMT+01:00)
>>>> > To: The Friday Morning Applied Complexity Coffee Group <
>>>> friam@redfish.com>
>>>> > Subject: Re: [FRIAM] Deep learning training material
>>>> >
>>>> > Hi Pieter,
>>>> >
>>>> > A few comments.
>>>> >
>>>> >   * Much of the actual deep learning material looks like it came from
>>>> the Kaggle "Deep Learning <
>>>> https://www.kaggle.com/learn/intro-to-deep-learning>" sequence.
>>>> >   * In my opinion, R is an ugly and /ad hoc/ language. I'd stick to
>>>> Python.
>>>> >   * More importantly, I would put the How-to-use-Python stuff into a
>>>> preliminary class. Assume your audience knows how to use Python and focus
>>>> on Deep Learning. Given that, there is only a minimal amount of information
>>>> about Deep Learning in the write-up. If I were to attend the workshop and
>>>> thought I would be learning about Deep Learning, I would be
>>>> disappointed--at least with what's covered in the write-up.
>>>> >
>>>> > I say this because I've been looking for a good intro to Deep
>>>> Learning. Even though I taught Computer Science for many years, and am now
>>>> retired, I avoided Deep Learning because it was so non-symbolic. My focus
>>>> has always been on symbolic computing. But Deep Learning has produced so
>>>> many extraordinarily impressive results, I decided I should learn more
>>>> about it. I haven't found any really good material. If you are interested,
>>>> I'd be more than happy to work with you on developing some introductory
>>>> Deep Learning material.
>>>> >
>>>> > -- Russ Abbott
>>>> > Professor Emeritus, Computer Science
>>>> > California State University, Los Angeles
>>>> >
>>>> >
>>>> > On Thu, Jan 5, 2023 at 11:31 AM Pieter Steenekamp <
>>>> piet...@randcontrols.co.za> wrote:

Re: [FRIAM] Deep learning training material

2023-01-09 Thread Steve Smith
  * To be successful in building a DNN, one must understand
    what those activation functions do for you and which
    ones to use.
  * I'm especially interested in DNNs that use reinforcement
learning. That's because the first DNN work that
impressed me was DeepMind's DNNs that learned to play
Atari games--and then Go, etc. An important advantage of
Reinforcement Learning (RL) is that it doesn't depend on
mountains of labeled data.
  * I find RL systems more interesting than image
recognition systems. One of the striking features of
many image recognition systems is that they can be
thrown off by changing a small number of pixels in an
image. The changed image would look to a human observer
just like the original, but it might fool a trained NN
into labeling the image as a banana rather than, say, an
automobile, which is what it really is. To address this
problem people have developed Generative Adversarial
Networks (GANs) which attempt to find such weaknesses in
a neural net during training and then to train the NN
not to have those weaknesses. This is a fascinating
result, but as far as I can tell, it mainly shows how
fragile some NNs are and doesn't add much conceptual
depth to one's understanding of how NNs work.
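The "few changed pixels" fragility Russ describes can be sketched on a toy linear scorer. This is a made-up illustration, not from the thread: nudging every input a tiny step in the direction sign(w) -- the construction behind FGSM-style adversarial examples -- flips the score, even though each individual change is small.

```python
# Hypothetical toy illustrating adversarial fragility: for a linear
# scorer s(x) = w . x, a small per-component step in the direction
# sign(w) shifts the score by eps * sum(|w|), enough to flip the label.

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def perturb(w, x, eps):
    # move each component eps in the direction that raises the score
    return [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.5, -0.5, 0.5, -0.5]   # made-up weights
x = [0.1, 0.3, 0.1, 0.3]     # score is negative: one class
x_adv = perturb(w, x, 0.2)   # score becomes positive: the other class
print(score(w, x), score(w, x_adv))
```

Real attacks use the gradient of a trained network rather than fixed weights, but the geometry is the same.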

I'm impressed with this list of things I sort of know. If
you had asked me before I started writing this email I
wouldn't have thought I had learned as much as I have. Even
so, I feel like I don't understand much of it beyond a
superficial level.

So far I've done all my exploration using Google's Colab
(Google's Python notebook implementation) and Kaggle's
similar Python notebook implementation. (I prefer Colab to
Kaggle.) Using either one, it's super nice not to have to
download and install anything!

I'm continuing my journey to learn more about DNNs. I'd be
happy to have company and to help develop materials to teach
about DNNs. (Developing teaching materials always helps me
learn the subject being covered.)

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Sun, Jan 8, 2023 at 1:48 AM glen 
wrote:

Yes, the money/expertise bar is still pretty high. But
TANSTAAFL still applies. And the overwhelming evidence
is coming in that specific models do better than those
trained up on diverse data sets, "better" meaning less
prone to subtle bullsh¡t. What I find fascinating is
tools like OpenAI *facilitate* trespassing. We have a
wonderful bloom of non-experts claiming they understand
things like "deep learning". But do they? An old
internet meme is brought to mind: "Do you even Linear
Algebra, bro?" >8^D

On 1/8/23 01:06, Jochen Fromm wrote:
> I have finished a number of Coursera courses recently,
including "Deep Learning & Neural Networks with Keras"
which was ok but not great. The problems with deep
learning are
>
> * to achieve impressive results like ChatGPT from
OpenAI or LaMDA from Google you need to spend millions
on hardware
> * only big organisations can afford to create such
expensive models
> * the resulting network is a black box and it is
unclear why it works the way it does
>
> In the end it is just the same old back propagation
that has been known for decades, just on more computers
and trained on more data. Peter Norvig calls it "The
unreasonable effectiveness of data"
> https://research.google.com/pubs/archive/35179.pdf
>
> -J.
>
>
>  Original message ----
        > From: Russ Abbott 
> Date: 1/8/23 12:20 AM (GMT+01:00)
> To: The Friday Morning Applied Complexity Coffee Group

> Subject: Re: [FRIAM] Deep learning training material
>
> Hi Pieter,
>
> A few comments.
>
>   * Much of the actual deep learning material looks
like it came from the Kaggle "Deep Learning
<https://www.kaggle.com/learn/intro-to-deep-learning>"
sequence.
>   * In my opinion, R is an ugly and /ad hoc/ language.
I'd stick to Python.
>   * More importantly, I would put the
How-to-use-Python stuff into a preliminary class.

Re: [FRIAM] Deep learning training material

2023-01-09 Thread Russ Abbott
>- I'm especially interested in DNNs that use reinforcement learning.
>That's because the first DNN work that impressed me was DeepMind's DNNs
>that learned to play Atari games--and then Go, etc. An important advantage
>of Reinforcement Learning (RL) is that it doesn't depend on mountains of
>labeled data.
>- I find RL systems more interesting than image recognition systems.
>One of the striking features of many image recognition systems is that they
>can be thrown off by changing a small number of pixels in an image. The
>changed image would look to a human observer just like the original, but it
>might fool a trained NN into labeling the image as a banana rather than,
>say, an automobile, which is what it really is. To address this problem
>people have developed Generative Adversarial Networks (GANs) which attempt
>to find such weaknesses in a neural net during training and then to train
>the NN not to have those weaknesses. This is a fascinating result, but as
>far as I can tell, it mainly shows how fragile some NNs are and doesn't add
>much conceptual depth to one's understanding of how NNs work.
>
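The reinforcement-learning idea behind the DeepMind Atari work Russ mentions can be shown in miniature. A hedged sketch, not from the thread: tabular Q-learning on a hypothetical 2-state world, learning from a reward signal alone, with no labeled data. A DQN replaces this lookup table with a deep network.

```python
# Hedged sketch of tabular Q-learning on a made-up 2-state, 2-action
# world. Only action 1 taken in state 1 pays off; the next state is
# simply the action taken. All constants are illustrative.
import random

random.seed(0)
n_states, n_actions = 2, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def step(state, action):
    reward = 1.0 if (state == 1 and action == 1) else 0.0
    return action, reward

s = 0
for _ in range(2000):
    if random.random() < eps:                        # explore
        a = random.randrange(n_actions)
    else:                                            # exploit
        a = max(range(n_actions), key=lambda i: Q[s][i])
    s2, r = step(s, a)
    # Q-learning update: nudge Q[s][a] toward r + gamma * max_a' Q[s2][a']
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
    s = s2

greedy = [max(range(n_actions), key=lambda i: Q[s][i]) for s in range(n_states)]
print(greedy)  # the learned greedy policy, expected to prefer action 1
```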
> I'm impressed with this list of things I sort of know. If you had asked me
> before I started writing this email I wouldn't have thought I had learned
> as much as I have. Even so, I feel like I don't understand much of it
> beyond a superficial level.
>
> So far I've done all my exploration using Google's Colab (Google's Python
> notebook implementation) and Kaggle's similar Python
> notebook implementation. (I prefer Colab to Kaggle.) Using either one, it's
> super nice not to have to download and install anything!
>
> I'm continuing my journey to learn more about DNNs. I'd be happy to have
> company and to help develop materials to teach about DNNs. (Developing
> teaching materials always helps me learn the subject being covered.)
>
> -- Russ Abbott
> Professor Emeritus, Computer Science
> California State University, Los Angeles
>
>
> On Sun, Jan 8, 2023 at 1:48 AM glen  wrote:
>
>> Yes, the money/expertise bar is still pretty high. But TANSTAAFL still
>> applies. And the overwhelming evidence is coming in that specific models do
>> better than those trained up on diverse data sets, "better" meaning less
>> prone to subtle bullsh¡t. What I find fascinating is tools like OpenAI
>> *facilitate* trespassing. We have a wonderful bloom of non-experts claiming
>> they understand things like "deep learning". But do they? An old internet
>> meme is brought to mind: "Do you even Linear Algebra, bro?" >8^D
>>
>> On 1/8/23 01:06, Jochen Fromm wrote:
>> > I have finished a number of Coursera courses recently, including "Deep
>> Learning & Neural Networks with Keras" which was ok but not great. The
>> problems with deep learning are
>> >
>> > * to achieve impressive results like ChatGPT from OpenAI or LaMDA from
>> Google you need to spend millions on hardware
>> > * only big organisations can afford to create such expensive models
>> > * the resulting network is a black box and it is unclear why it works
>> the way it does
>> >
>> > In the end it is just the same old back propagation that has been known
>> for decades, just on more computers and trained on more data. Peter Norvig
>> calls it "The unreasonable effectiveness of data"
>> > https://research.google.com/pubs/archive/35179.pdf
>> >
>> > -J.
>> >
>> >
>> >  Original message 
>> > From: Russ Abbott 
>> > Date: 1/8/23 12:20 AM (GMT+01:00)
>> > To: The Friday Morning Applied Complexity Coffee Group <
>> friam@redfish.com>
>> > Subject: Re: [FRIAM] Deep learning training material
>> >
>> > Hi Pieter,
>> >
>> > A few comments.
>> >
>> >   * Much of the actual deep learning material looks like it came from
>> the Kaggle "Deep Learning <
>> https://www.kaggle.com/learn/intro-to-deep-learning>" sequence.
>> >   * In my opinion, R is an ugly and /ad hoc/ language. I'd stick to
>> Python.
>> >   * More importantly, I would put the How-to-use-Python stuff into a
>> preliminary class. Assume your audience knows how to use Python and focus
>> on Deep Learning. Given that, there is only a minimal amount of information
>> about Deep Learning in the write-up. If I were to attend the workshop and
>> thought I would be learning about Deep Learning, I would be
>> disappointed--at least with what's covered in the write-up.
>> >
>> > I say this because I've been looking for a good intro to Deep Learning.

Re: [FRIAM] Deep learning training material

2023-01-09 Thread Pieter Steenekamp
>>> Yes, the money/expertise bar is still pretty high. But TANSTAAFL still
>>> applies. And the overwhelming evidence is coming in that specific models do
>>> better than those trained up on diverse data sets, "better" meaning less
>>> prone to subtle bullsh¡t. What I find fascinating is tools like OpenAI
>>> *facilitate* trespassing. We have a wonderful bloom of non-experts claiming
>>> they understand things like "deep learning". But do they? An old internet
>>> meme is brought to mind: "Do you even Linear Algebra, bro?" >8^D
>>>
>>> On 1/8/23 01:06, Jochen Fromm wrote:
>>> > I have finished a number of Coursera courses recently, including "Deep
>>> Learning & Neural Networks with Keras" which was ok but not great. The
>>> problems with deep learning are
>>> >
>>> > * to achieve impressive results like ChatGPT from OpenAI or LaMDA from
>>> Google you need to spend millions on hardware
>>> > * only big organisations can afford to create such expensive models
>>> > * the resulting network is a black box and it is unclear why it works
>>> the way it does
>>> >
>>> > In the end it is just the same old back propagation that has been
>>> known for decades, just on more computers and trained on more data. Peter
>>> Norvig calls it "The unreasonable effectiveness of data"
>>> > https://research.google.com/pubs/archive/35179.pdf
>>> >
>>> > -J.
>>> >
>>> >
>>> >  Original message 
>>> > From: Russ Abbott 
>>> > Date: 1/8/23 12:20 AM (GMT+01:00)
>>> > To: The Friday Morning Applied Complexity Coffee Group <
>>> friam@redfish.com>
>>> > Subject: Re: [FRIAM] Deep learning training material
>>> >
>>> > Hi Pieter,
>>> >
>>> > A few comments.
>>> >
>>> >   * Much of the actual deep learning material looks like it came from
>>> the Kaggle "Deep Learning <
>>> https://www.kaggle.com/learn/intro-to-deep-learning>" sequence.
>>> >   * In my opinion, R is an ugly and /ad hoc/ language. I'd stick to
>>> Python.
>>> >   * More importantly, I would put the How-to-use-Python stuff into a
>>> preliminary class. Assume your audience knows how to use Python and focus
>>> on Deep Learning. Given that, there is only a minimal amount of information
>>> about Deep Learning in the write-up. If I were to attend the workshop and
>>> thought I would be learning about Deep Learning, I would be
>>> disappointed--at least with what's covered in the write-up.
>>> >
>>> > I say this because I've been looking for a good intro to Deep
>>> Learning. Even though I taught Computer Science for many years, and am now
>>> retired, I avoided Deep Learning because it was so non-symbolic. My focus
>>> has always been on symbolic computing. But Deep Learning has produced so
>>> many extraordinarily impressive results, I decided I should learn more
>>> about it. I haven't found any really good material. If you are interested,
>>> I'd be more than happy to work with you on developing some introductory
>>> Deep Learning material.
>>> >
>>> > -- Russ Abbott
>>> > Professor Emeritus, Computer Science
>>> > California State University, Los Angeles
>>> >
>>> >
>>> > On Thu, Jan 5, 2023 at 11:31 AM Pieter Steenekamp <
>>> piet...@randcontrols.co.za> wrote:
>>> >
>>> > Thanks to the kind support of OpenAI's chatGPT, I am in the
>>> process of gathering materials for a comprehensive and hands-on deep
>>> learning workshop. Although it is still a work in progress, I welcome any
>>> interested parties to take a look and provide their valuable input. Thank
>>> you!
>>> >
>>> > You can get it from:
>>> >
>>> https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0
>>> >
>>>
>>> --
>>> ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
>>>

Re: [FRIAM] Deep learning training material

2023-01-09 Thread Steve Smith
> In the end it is just the same old back propagation that has been known
> for decades, just on more computers and trained on more data. Peter
> Norvig calls it "The unreasonable effectiveness of data"
> https://research.google.com/pubs/archive/35179.pdf
>
> -J.
>
>
>  Original message 
> From: Russ Abbott 
> Date: 1/8/23 12:20 AM (GMT+01:00)
    > To: The Friday Morning Applied Complexity Coffee Group

> Subject: Re: [FRIAM] Deep learning training material
>
> Hi Pieter,
>
> A few comments.
>
>   * Much of the actual deep learning material looks like it
came from the Kaggle "Deep Learning
<https://www.kaggle.com/learn/intro-to-deep-learning>" sequence.
>   * In my opinion, R is an ugly and /ad hoc/ language. I'd
stick to Python.
>   * More importantly, I would put the How-to-use-Python
stuff into a preliminary class. Assume your audience knows
how to use Python and focus on Deep Learning. Given that,
there is only a minimal amount of information about Deep
Learning in the write-up. If I were to attend the workshop
and thought I would be learning about Deep Learning, I would
be disappointed--at least with what's covered in the write-up.
>
>     I say this because I've been looking for a good intro
to Deep Learning. Even though I taught Computer Science for
many years, and am now retired, I avoided Deep Learning
because it was so non-symbolic. My focus has always been on
symbolic computing. But Deep Learning has produced so many
extraordinarily impressive results, I decided I should learn
more about it. I haven't found any really good material. If
you are interested, I'd be more than happy to work with you
on developing some introductory Deep Learning material.
>
> -- Russ Abbott
> Professor Emeritus, Computer Science
> California State University, Los Angeles
>
>
> On Thu, Jan 5, 2023 at 11:31 AM Pieter Steenekamp
<piet...@randcontrols.co.za> wrote:
>
>     Thanks to the kind support of OpenAI's chatGPT, I am in
the process of gathering materials for a comprehensive and
hands-on deep learning workshop. Although it is still a work
in progress, I welcome any interested parties to take a look
and provide their valuable input. Thank you!
>
>     You can get it from:
>

https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0

>

-- 
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ







Re: [FRIAM] Deep learning training material

2023-01-09 Thread Pieter Steenekamp
s. To be
>successful in building a DNN, one must understand what those activation
>functions do for you and which ones to use.
>- I'm especially interested in DNNs that use reinforcement learning.
>That's because the first DNN work that impressed me was DeepMind's DNNs
>that learned to play Atari games--and then Go, etc. An important advantage
>of Reinforcement Learning (RL) is that it doesn't depend on mountains of
>labeled data.
>- I find RL systems more interesting than image recognition systems.
>One of the striking features of many image recognition systems is that they
>can be thrown off by changing a small number of pixels in an image. The
>changed image would look to a human observer just like the original, but it
>might fool a trained NN into labeling the image as a banana rather than,
>say, an automobile, which is what it really is. To address this problem
>people have developed Generative Adversarial Networks (GANs) which attempt
>to find such weaknesses in a neural net during training and then to train
>the NN not to have those weaknesses. This is a fascinating result, but as
>far as I can tell, it mainly shows how fragile some NNs are and doesn't add
>much conceptual depth to one's understanding of how NNs work.
>
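For the activation-function point in the fragment above, here is a small sketch (mine, not from the thread) of the three most common choices. Which one you pick shapes how gradients propagate: sigmoid and tanh saturate at the extremes, which squashes gradients, while ReLU passes positive inputs through untouched.

```python
# Hedged sketch of common activation functions; the sample inputs are
# arbitrary. Sigmoid squashes to (0, 1), tanh to (-1, 1); ReLU clips
# negatives to zero and is linear for positive inputs.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def tanh(z):
    return math.tanh(z)

def relu(z):
    return max(0.0, z)

for f in (sigmoid, tanh, relu):
    print(f.__name__, [round(f(z), 3) for z in (-2.0, 0.0, 2.0)])
```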
> I'm impressed with this list of things I sort of know. If you had asked me
> before I started writing this email I wouldn't have thought I had learned
> as much as I have. Even so, I feel like I don't understand much of it
> beyond a superficial level.
>
> So far I've done all my exploration using Google's Colab (Google's Python
> notebook implementation) and Kaggle's similar Python
> notebook implementation. (I prefer Colab to Kaggle.) Using either one, it's
> super nice not to have to download and install anything!
>
> I'm continuing my journey to learn more about DNNs. I'd be happy to have
> company and to help develop materials to teach about DNNs. (Developing
> teaching materials always helps me learn the subject being covered.)
>
> -- Russ Abbott
> Professor Emeritus, Computer Science
> California State University, Los Angeles
>
>
> On Sun, Jan 8, 2023 at 1:48 AM glen  wrote:
>
>> Yes, the money/expertise bar is still pretty high. But TANSTAAFL still
>> applies. And the overwhelming evidence is coming in that specific models do
>> better than those trained up on diverse data sets, "better" meaning less
>> prone to subtle bullsh¡t. What I find fascinating is tools like OpenAI
>> *facilitate* trespassing. We have a wonderful bloom of non-experts claiming
>> they understand things like "deep learning". But do they? An old internet
>> meme is brought to mind: "Do you even Linear Algebra, bro?" >8^D
>>
>> On 1/8/23 01:06, Jochen Fromm wrote:
>> > I have finished a number of Coursera courses recently, including "Deep
>> Learning & Neural Networks with Keras" which was ok but not great. The
>> problems with deep learning are
>> >
>> > * to achieve impressive results like ChatGPT from OpenAI or LaMDA from
>> Google you need to spend millions on hardware
>> > * only big organisations can afford to create such expensive models
>> > * the resulting network is a black box and it is unclear why it works
>> the way it does
>> >
>> > In the end it is just the same old back propagation that has been known
>> for decades, just on more computers and trained on more data. Peter Norvig
>> calls it "The unreasonable effectiveness of data"
>> > https://research.google.com/pubs/archive/35179.pdf
>> >
>> > -J.
>> >
>> >
>> >  Original message 
>> > From: Russ Abbott 
>> > Date: 1/8/23 12:20 AM (GMT+01:00)
>> > To: The Friday Morning Applied Complexity Coffee Group <
>> friam@redfish.com>
>> > Subject: Re: [FRIAM] Deep learning training material
>> >
>> > Hi Pieter,
>> >
>> > A few comments.
>> >
>> >   * Much of the actual deep learning material looks like it came from
>> the Kaggle "Deep Learning <
>> https://www.kaggle.com/learn/intro-to-deep-learning>" sequence.
>> >   * In my opinion, R is an ugly and /ad hoc/ language. I'd stick to
>> Python.
>> >   * More importantly, I would put the How-to-use-Python stuff into a
>> preliminary class. Assume your audience knows how to use Python and focus
>> on Deep Learning. Given that, there is only a minimal amount of information
>> about Deep Learning in the write-up. If I were to attend the workshop and
>> thought I would be learning about Deep Learning, I would be
>> disappointed--at least with what's covered in the write-up.

Re: [FRIAM] Deep learning training material

2023-01-08 Thread Marcus Daniels
This is a fascinating result, but as far as I can tell, it mainly
shows how fragile some NNs are and doesn't add much conceptual depth to one's 
understanding of how NNs work.

I'm impressed with this list of things I sort of know. If you had asked me 
before I started writing this email I wouldn't have thought I had learned as 
much as I have. Even so, I feel like I don't understand much of it beyond a 
superficial level.

So far I've done all my exploration using Google's Colab (Google's Python 
notebook implementation) and Kaggle's similar Python notebook implementation. 
(I prefer Colab to Kaggle.) Using either one, it's super nice not to have to 
download and install anything!

I'm continuing my journey to learn more about DNNs. I'd be happy to have 
company and to help develop materials to teach about DNNs. (Developing teaching 
materials always helps me learn the subject being covered.)

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Sun, Jan 8, 2023 at 1:48 AM glen <geprope...@gmail.com> wrote:
Yes, the money/expertise bar is still pretty high. But TANSTAAFL still applies. 
And the overwhelming evidence is coming in that specific models do better than 
those trained up on diverse data sets, "better" meaning less prone to subtle 
bullsh¡t. What I find fascinating is tools like OpenAI *facilitate* 
trespassing. We have a wonderful bloom of non-experts claiming they understand 
things like "deep learning". But do they? An old internet meme is brought to 
mind: "Do you even Linear Algebra, bro?" >8^D
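glen's quip lands because a feed-forward layer really is just linear algebra. A minimal sketch (mine, not from the thread; the weights and inputs are arbitrary): a dense layer computes relu(W x + b), a matrix-vector product plus a pointwise nonlinearity.

```python
# Illustrative sketch: a "dense" neural-network layer is a matrix-vector
# product followed by a pointwise nonlinearity. Pure Python, no frameworks.

def matvec(W, x):
    # y_i = sum_j W[i][j] * x[j]
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

def dense(W, b, x):
    # one layer: relu(W @ x + b)
    return relu([y + b_i for y, b_i in zip(matvec(W, x), b)])

W = [[1.0, -1.0], [0.5, 0.5]]  # made-up weights
b = [0.0, -1.0]
print(dense(W, b, [2.0, 1.0]))  # [1.0, 0.5]
```

Stacking such layers, with learned W and b, is all a multilayer perceptron is.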

On 1/8/23 01:06, Jochen Fromm wrote:
> I have finished a number of Coursera courses recently, including "Deep 
> Learning & Neural Networks with Keras" which was ok but not great. The 
> problems with deep learning are
>
> * to achieve impressive results like ChatGPT from OpenAI or LaMDA from Google 
> you need to spend millions on hardware
> * only big organisations can afford to create such expensive models
> * the resulting network is a black box and it is unclear why it works the way 
> it does
>
> In the end it is just the same old back propagation that has been known for 
> decades, just on more computers and trained on more data. Peter Norvig calls 
> it "The unreasonable effectiveness of data"
> https://research.google.com/pubs/archive/35179.pdf
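Jochen's "same old back propagation" is compact enough to show in full for the smallest possible network. A hedged sketch, not from the thread: one weight, squared loss, gradient descent; the data and learning rate are made up.

```python
# Minimal sketch of backpropagation: fit y = 2x with a single weight by
# gradient descent. Loss L = (w*x - y)^2, so dL/dw = 2*(w*x - y)*x.

def train(data, w=0.0, lr=0.1, epochs=50):
    for _ in range(epochs):
        for x, y in data:
            grad = 2.0 * (w * x - y) * x  # chain rule through one multiply
            w -= lr * grad                # gradient-descent step
    return w

w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(round(w, 3))  # converges to 2.0
```

Deep nets apply the same chain rule layer by layer over millions of weights; the scale, not the algorithm, is what changed.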
>
> -J.
>
>
>  Original message 
> From: Russ Abbott <russ.abb...@gmail.com>
> Date: 1/8/23 12:20 AM (GMT+01:00)
> To: The Friday Morning Applied Complexity Coffee Group <friam@redfish.com>
> Subject: Re: [FRIAM] Deep learning training material
>
> Hi Pieter,
>
> A few comments.
>
>   * Much of the actual deep learning material looks like it came from the 
> Kaggle "Deep Learning <https://www.kaggle.com/learn/intro-to-deep-learning>" 
> sequence.
>   * In my opinion, R is an ugly and /ad hoc/ language. I'd stick to Python.
>   * More importantly, I would put the How-to-use-Python stuff into a 
> preliminary class. Assume your audience knows how to use Python and focus on 
> Deep Learning. Given that, there is only a minimal amount of information 
> about Deep Learning in the write-up. If I were to attend the workshop and 
> thought I would be learning about Deep Learning, I would be disappointed--at 
> least with what's covered in the write-up.
>
> I say this because I've been looking for a good intro to Deep Learning. 
> Even though I taught Computer Science for many years, and am now retired, I 
> avoided Deep Learning because it was so non-symbolic. My focus has always 
> been on symbolic computing. But Deep Learning has produced so many 
> extraordinarily impressive results, I decided I should learn more about it. I 
> haven't found any really good material. If you are interested, I'd be more 
> than happy to work with you on developing some introductory Deep Learning 
> material.
>
> -- Russ Abbott
> Professor Emeritus, Computer Science
> California State University, Los Angeles
>
>
> On Thu, Jan 5, 2023 at 11:31 AM Pieter Steenekamp
> <piet...@randcontrols.co.za> wrote:
>
> Thanks to the kind support of OpenAI's chatGPT, I am in the process of 
> gathering materials for a comprehensive and hands-on deep learning workshop. 
> Although it is still a work in progress, I welcome any interested parties to 
> take a look and provide their valuable input. Thank you!
>
> You can get it from:
> 
> https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0
>  
>

--

Re: [FRIAM] Deep learning training material

2023-01-08 Thread Russ Abbott
I'm impressed with this list of things I sort of know. If you had asked me
before I started writing this email I wouldn't have thought I had learned
as much as I have. Even so, I feel like I don't understand much of it
beyond a superficial level.

So far I've done all my exploration using Google's Colab (Google's Python
notebook implementation) and Kaggle's similar Python
notebook implementation. (I prefer Colab to Kaggle.) Using either one, it's
super nice not to have to download and install anything!

I'm continuing my journey to learn more about DNNs. I'd be happy to have
company and to help develop materials to teach about DNNs. (Developing
teaching materials always helps me learn the subject being covered.)

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Sun, Jan 8, 2023 at 1:48 AM glen  wrote:

> Yes, the money/expertise bar is still pretty high. But TANSTAAFL still
> applies. And the overwhelming evidence is coming in that specific models do
> better than those trained up on diverse data sets, "better" meaning less
> prone to subtle bullsh¡t. What I find fascinating is tools like OpenAI
> *facilitate* trespassing. We have a wonderful bloom of non-experts claiming
> they understand things like "deep learning". But do they? An old internet
> meme is brought to mind: "Do you even Linear Algebra, bro?" >8^D
>
> On 1/8/23 01:06, Jochen Fromm wrote:
> > I have finished a number of Coursera courses recently, including "Deep
> Learning & Neural Networks with Keras" which was ok but not great. The
> problems with deep learning are
> >
> > * to achieve impressive results like chatGPT from OpenAI or LaMDA from
> Google you need to spend millions on hardware
> > * only big organisations can afford to create such expensive models
> > * the resulting network is a black box and it is unclear why it works
> the way it does
> >
> > In the end it is just the same old back propagation that has been known
> for decades, just on more computers and trained on more data. Peter Norvig
> calls it "The unreasonable effectiveness of data"
> > https://research.google.com/pubs/archive/35179.pdf
> >
> > -J.
> >
> >
> > -------- Original message --------
> > From: Russ Abbott 
> > Date: 1/8/23 12:20 AM (GMT+01:00)
> > To: The Friday Morning Applied Complexity Coffee Group <
> friam@redfish.com>
> > Subject: Re: [FRIAM] Deep learning training material
> >
> > Hi Pieter,
> >
> > A few comments.
> >
> >   * Much of the actual deep learning material looks like it came from
> the Kaggle "Deep Learning <
> https://www.kaggle.com/learn/intro-to-deep-learning>" sequence.
> >   * In my opinion, R is an ugly and /ad hoc/ language. I'd stick to
> Python.
> >   * More importantly, I would put the How-to-use-Python stuff into a
> preliminary class. Assume your audience knows how to use Python and focus
> on Deep Learning. Given that, there is only a minimal amount of information
> about Deep Learning in the write-up. If I were to attend the workshop and
> thought I would be learning about Deep Learning, I would be
> disappointed--at least with what's covered in the write-up.
> >
> > I say this because I've been looking for a good intro to Deep
> Learning. Even though I taught Computer Science for many years, and am now
> retired, I avoided Deep Learning because it was so non-symbolic. My focus
> has always been on symbolic computing. But Deep Learning has produced so
> many extraordinarily impressive results, I decided I should learn more
> about it. I haven't found any really good material. If you are interested,
> I'd be more than happy to work with you on developing some introductory
> Deep Learning material.
> >
> > -- Russ Abbott
> > Professor Emeritus, Computer Science
> > California State University, Los Angeles
> >
> >
> > On Thu, Jan 5, 2023 at 11:31 AM Pieter Steenekamp <
> piet...@randcontrols.co.za <mailto:piet...@randcontrols.co.za>> wrote:
> >
> > Thanks to the kind support of OpenAI's chatGPT, I am in the process
> of gathering materials for a comprehensive and hands-on deep learning
> workshop. Although it is still a work in progress, I welcome any interested
> parties to take a look and provide their valuable input. Thank you!
> >
> > You can get it from:
> >
> https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0
> >
> >
>
> --
> ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
>

Re: [FRIAM] Deep learning training material

2023-01-08 Thread glen

Yes, the money/expertise bar is still pretty high. But TANSTAAFL still applies. And the overwhelming evidence is 
coming in that specific models do better than those trained up on diverse data sets, "better" meaning 
less prone to subtle bullsh¡t. What I find fascinating is tools like OpenAI *facilitate* trespassing. We have a 
wonderful bloom of non-experts claiming they understand things like "deep learning". But do they? An 
old internet meme is brought to mind: "Do you even Linear Algebra, bro?" >8^D
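[Editor's note: glen's meme, taken literally, has a point — the core of a deep net's forward pass is plain linear algebra. A minimal sketch (not from any of the posted material; layer sizes and seed are invented for illustration) in NumPy:]

```python
# A "deep" forward pass is mostly matrix products with elementwise
# nonlinearities in between. Here: a 3-layer MLP forward pass.
import numpy as np

rng = np.random.default_rng(1)
layers = [(4, 16), (16, 16), (16, 2)]  # (fan_in, fan_out) per layer
params = [(rng.normal(size=s) * 0.1, np.zeros(s[1])) for s in layers]

def relu(z):
    return np.maximum(z, 0.0)

def forward(x, params):
    for i, (W, b) in enumerate(params):
        x = x @ W + b                  # the linear-algebra core
        if i < len(params) - 1:
            x = relu(x)                # nonlinearity between layers
    return x

batch = rng.normal(size=(8, 4))        # a batch of 8 four-dimensional inputs
out = forward(batch, params)
print(out.shape)                       # (8, 2)
```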

On 1/8/23 01:06, Jochen Fromm wrote:

I have finished a number of Coursera courses recently, including "Deep Learning & 
Neural Networks with Keras" which was ok but not great. The problems with deep learning 
are

* to achieve impressive results like chatGPT from OpenAI or LaMDA from Google 
you need to spend millions on hardware
* only big organisations can afford to create such expensive models
* the resulting network is a black box and it is unclear why it works the way 
it does

In the end it is just the same old back propagation that has been known for decades, just 
on more computers and trained on more data. Peter Norvig calls it "The unreasonable 
effectiveness of data"
https://research.google.com/pubs/archive/35179.pdf

-J.
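
[Editor's note: Jochen's "same old back propagation" can be made concrete in a few lines. A hedged sketch (mine, not from the Coursera course or the workshop material): a tiny two-layer network fit to XOR with hand-written backpropagation in NumPy, no framework.]

```python
# Backpropagation by hand: forward pass, then gradients of mean squared
# error pushed backwards through a 2-8-1 sigmoid network trained on XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2)          # predictions
    # backward pass: chain rule, layer by layer
    dp = (p - y) * p * (1 - p)        # delta at the output layer
    dh = (dp @ W2.T) * h * (1 - h)    # delta at the hidden layer
    W2 -= lr * h.T @ dp; b2 -= lr * dp.sum(0)
    W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)

print(np.round(p.ravel(), 3))         # predictions; ideally near [0, 1, 1, 0]
```

The decades-old recipe Jochen refers to is exactly these two loops of matrix products; scale (more layers, more data, more hardware) is what changed.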


-------- Original message --------
From: Russ Abbott 
Date: 1/8/23 12:20 AM (GMT+01:00)
To: The Friday Morning Applied Complexity Coffee Group 
Subject: Re: [FRIAM] Deep learning training material

Hi Pieter,

A few comments.

  * Much of the actual deep learning material looks like it came from the Kaggle "Deep 
Learning <https://www.kaggle.com/learn/intro-to-deep-learning>" sequence.
  * In my opinion, R is an ugly and /ad hoc/ language. I'd stick to Python.
  * More importantly, I would put the How-to-use-Python stuff into a 
preliminary class. Assume your audience knows how to use Python and focus on 
Deep Learning. Given that, there is only a minimal amount of information about 
Deep Learning in the write-up. If I were to attend the workshop and thought I 
would be learning about Deep Learning, I would be disappointed--at least with 
what's covered in the write-up.

I say this because I've been looking for a good intro to Deep Learning. Even 
though I taught Computer Science for many years, and am now retired, I avoided 
Deep Learning because it was so non-symbolic. My focus has always been on 
symbolic computing. But Deep Learning has produced so many extraordinarily 
impressive results, I decided I should learn more about it. I haven't found 
any really good material. If you are interested, I'd be more than happy to 
work with you on developing some introductory Deep Learning material.


-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Thu, Jan 5, 2023 at 11:31 AM Pieter Steenekamp mailto:piet...@randcontrols.co.za>> wrote:

Thanks to the kind support of OpenAI's chatGPT, I am in the process of 
gathering materials for a comprehensive and hands-on deep learning workshop. 
Although it is still a work in progress, I welcome any interested parties to 
take a look and provide their valuable input. Thank you!

You can get it from:

https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0



--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ

-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
 1/2003 thru 6/2021  http://friam.383.s1.nabble.com/


Re: [FRIAM] Deep learning training material

2023-01-08 Thread Jochen Fromm
I have finished a number of Coursera courses recently, including "Deep 
Learning & Neural Networks with Keras", which was OK but not great. The 
problems with deep learning are:

* to achieve impressive results like chatGPT from OpenAI or LaMDA from Google 
you need to spend millions on hardware
* only big organisations can afford to create such expensive models
* the resulting network is a black box and it is unclear why it works the way 
it does

In the end it is just the same old backpropagation that has been known for 
decades, just on more computers and trained on more data. Peter Norvig calls 
it "The unreasonable effectiveness of data":
https://research.google.com/pubs/archive/35179.pdf

-J.

-------- Original message --------
From: Russ Abbott
Date: 1/8/23 12:20 AM (GMT+01:00)
To: The Friday Morning Applied Complexity Coffee Group
Subject: Re: [FRIAM] Deep learning training material

Hi Pieter,

A few comments.

* Much of the actual deep learning material looks like it came from the 
Kaggle "Deep Learning" sequence.
* In my opinion, R is an ugly and ad hoc language. I'd stick to Python.
* More importantly, I would put the How-to-use-Python stuff into a 
preliminary class. Assume your audience knows how to use Python and focus on 
Deep Learning. Given that, there is only a minimal amount of information 
about Deep Learning in the write-up. If I were to attend the workshop and 
thought I would be learning about Deep Learning, I would be disappointed--at 
least with what's covered in the write-up.

I say this because I've been looking for a good intro to Deep Learning. Even 
though I taught Computer Science for many years, and am now retired, I 
avoided Deep Learning because it was so non-symbolic. My focus has always 
been on symbolic computing. But Deep Learning has produced so many 
extraordinarily impressive results, I decided I should learn more about it. I 
haven't found any really good material. If you are interested, I'd be more 
than happy to work with you on developing some introductory Deep Learning 
material.

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles

On Thu, Jan 5, 2023 at 11:31 AM Pieter Steenekamp 
<piet...@randcontrols.co.za> wrote:

Thanks to the kind support of OpenAI's chatGPT, I am in the process of 
gathering materials for a comprehensive and hands-on deep learning workshop. 
Although it is still a work in progress, I welcome any interested parties to 
take a look and provide their valuable input. Thank you!

You can get it from:
https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0

Pieter



Re: [FRIAM] Deep learning training material

2023-01-07 Thread Pieter Steenekamp
Hi Russ,

Thank you so much for providing such valuable feedback. I appreciate your
thoughts and am taking them very seriously.

Regarding the choice of Python over R, I agree with your recommendation and
plan to focus solely on Python.

Your comments about the overall approach of the workshop have given me
pause to reconsider my initial plan. It is clear that the current material
does not meet the stated objective of providing a comprehensive
understanding of the principles of deep learning. Rather, it is intended to
empower participants to utilize deep learning in their daily work.

I am considering two options: either making it clear that the workshop is
not designed to cover the principles in-depth, or revising the material to
meet the stated objective. I am also open to finding a middle ground, such
as incorporating more theory while being transparent about the workshop's
goals.

I would be very interested in exploring the possibility of collaborating
with you to develop some introductory deep learning training material.
Perhaps we can continue this conversation off this platform? Please don't
hesitate to reach out to me at piet...@randcontrols.co.za.

Pieter

On Sun, 8 Jan 2023 at 01:19, Russ Abbott  wrote:

> Hi Pieter,
>
> A few comments.
>
>- Much of the actual deep learning material looks like it came from
>the Kaggle "Deep Learning
><https://www.kaggle.com/learn/intro-to-deep-learning>" sequence.
>- In my opinion, R is an ugly and *ad hoc* language. I'd stick to
>Python.
>- More importantly, I would put the How-to-use-Python stuff into a
>preliminary class. Assume your audience knows how to use Python and focus
>on Deep Learning. Given that, there is only a minimal amount of information
>about Deep Learning in the write-up. If I were to attend the workshop and
>thought I would be learning about Deep Learning, I would be
>disappointed--at least with what's covered in the write-up.
>
>I say this because I've been looking for a good intro to Deep
>Learning. Even though I taught Computer Science for many years, and am now
>retired, I avoided Deep Learning because it was so non-symbolic. My focus
>has always been on symbolic computing. But Deep Learning has produced so
>many extraordinarily impressive results, I decided I should learn more
>about it. I haven't found any really good material. If you are interested,
>I'd be more than happy to work with you on developing some introductory
>Deep Learning material.
>
> -- Russ Abbott
> Professor Emeritus, Computer Science
> California State University, Los Angeles
>
>
> On Thu, Jan 5, 2023 at 11:31 AM Pieter Steenekamp <
> piet...@randcontrols.co.za> wrote:
>
>> Thanks to the kind support of OpenAI's chatGPT, I am in the process of
>> gathering materials for a comprehensive and hands-on deep learning
>> workshop. Although it is still a work in progress, I welcome any interested
>> parties to take a look and provide their valuable input. Thank you!
>>
>> You can get it from:
>>
>> https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0
>>
>>
>> Pieter
>>
>>
>


Re: [FRIAM] Deep learning training material

2023-01-07 Thread Russ Abbott
Hi Pieter,

A few comments.

   - Much of the actual deep learning material looks like it came from the
   Kaggle "Deep Learning
   <https://www.kaggle.com/learn/intro-to-deep-learning>" sequence.
   - In my opinion, R is an ugly and *ad hoc* language. I'd stick to Python.
   - More importantly, I would put the How-to-use-Python stuff into a
   preliminary class. Assume your audience knows how to use Python and focus
   on Deep Learning. Given that, there is only a minimal amount of information
   about Deep Learning in the write-up. If I were to attend the workshop and
   thought I would be learning about Deep Learning, I would be
   disappointed--at least with what's covered in the write-up.

   I say this because I've been looking for a good intro to Deep Learning.
   Even though I taught Computer Science for many years, and am now retired, I
   avoided Deep Learning because it was so non-symbolic. My focus has always
   been on symbolic computing. But Deep Learning has produced so many
   extraordinarily impressive results, I decided I should learn more about it.
   I haven't found any really good material. If you are interested, I'd be
   more than happy to work with you on developing some introductory Deep
   Learning material.

-- Russ Abbott
Professor Emeritus, Computer Science
California State University, Los Angeles


On Thu, Jan 5, 2023 at 11:31 AM Pieter Steenekamp <
piet...@randcontrols.co.za> wrote:

> Thanks to the kind support of OpenAI's chatGPT, I am in the process of
> gathering materials for a comprehensive and hands-on deep learning
> workshop. Although it is still a work in progress, I welcome any interested
> parties to take a look and provide their valuable input. Thank you!
>
> You can get it from:
>
> https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0
>
>
> Pieter
>
>


Re: [FRIAM] Deep learning training material

2023-01-05 Thread glen

Very cool! Thanks for posting this. I'll take a look and comment if I have 
anything useful.

On 1/5/23 11:30, Pieter Steenekamp wrote:

Thanks to the kind support of OpenAI's chatGPT, I am in the process of 
gathering materials for a comprehensive and hands-on deep learning workshop. 
Although it is still a work in progress, I welcome any interested parties to 
take a look and provide their valuable input. Thank you!

You can get it from:
https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0
 



--
ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ



[FRIAM] Deep learning training material

2023-01-05 Thread Pieter Steenekamp
Thanks to the kind support of OpenAI's chatGPT, I am in the process of
gathering materials for a comprehensive and hands-on deep learning
workshop. Although it is still a work in progress, I welcome any interested
parties to take a look and provide their valuable input. Thank you!

You can get it from:
https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0


Pieter