In response to the very valuable feedback I received, including from Russ in
this group but also from others outside it, I have reworked my deep learning
training material very significantly. If anybody is interested, it's
available from
https://www.dropbox.com/s/y5fkp10ar5ze67n/deep%20learning%20training%20rev%2013012023.zip?dl=0

Just to make sure you guys are clear about the objective of the workshop,
below is a quote from the training document.

But before I give the quote, a quick answer to why I am doing this workshop:
a) There is plenty of very good free material on the internet for people
with software skills who want to get into deep learning, but none (that I
know of) for those without any software expertise who want to understand and
use deep learning.
b) AI is making a splash in the world right now and there are (I think)
many people without software skills yearning to know more about it.
c) It is really quite easy (I think) for a professional without software
skills to acquire what's necessary to apply deep learning to applications
where labeled tabular data is available for training. This workshop intends
to teach exactly that.

Now for the quote from the training material:

"This workshop aims to provide a broad overview of deep learning
principles, rather than focusing on technical details. The goal is to give
you a sense of how deep learning can be used to make predictions on
practical problems using labeled data in spreadsheet form. By the end of
the workshop, you will have the skills to apply deep learning to address
challenges in your area of expertise.

During this workshop, we will be utilizing R or Python programming
languages as a means of configuring and making predictions with labeled
data in spreadsheet form. However, it is crucial to note that the primary
focus of the workshop is not on programming itself, but rather on the
fundamental concepts and techniques of deep learning. We will provide
template programs that can be used with minimal modifications, and the main
emphasis will be on guidance and exercises to solidify participants'
understanding of the material. For those with little to no prior
programming experience, we have included an optional section to introduce
basic concepts of R or Python, but it is important to note that this
section is only intended to provide a basic understanding and will not make
you an expert in programming."
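
To give a concrete flavour of what such a template program might look like,
here is a minimal Python/Keras sketch for labeled tabular data. To be clear,
this is only an illustration: the file name, column names and network size
are made up, and the actual templates in the material may differ.

# Minimal sketch: read labeled tabular data from a CSV export of a
# spreadsheet, train a small network, and make predictions.
# Assumes all input columns are numeric; names are hypothetical.
import pandas as pd
from tensorflow import keras

data = pd.read_csv("my_data.csv")           # one row per labeled example
X = data.drop(columns=["target"]).values    # the input columns
y = data["target"].values                   # the labeled outcome to predict

model = keras.Sequential([
    keras.Input(shape=(X.shape[1],)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1)                   # one numeric prediction per row
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, verbose=0)

print(model.predict(X[:5]))                 # predictions for the first 5 rows

The idea is that participants change little more than the file name and the
column names in a template like this; the workshop time goes into the deep
learning concepts behind it.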


Pieter

On Mon, 9 Jan 2023 at 20:32, Steve Smith <sasm...@swcp.com> wrote:

> The "celebration of the hand" project (coordinated with UNM's Maxwell
> museum of anthropology) is nearly 15 years defunct now.   We made a tiny
> bit of progress on some LANL small-business-assistance time from an ML
> researcher, a small bit of seed-funding and matching time on my own part.
> The key to this was the newly available prosumer-grade laser-scanners of
> the time, as well as emerging photogrammetric techniques.
>
> The point of the project was to augment what humans were already doing...
> both pointing to similarities too subtle for a human to notice or based on
> dimensions a human expert wouldn't be able to identify directly.   I
> haven't touched base on where that field has gone in most of that
> intervening time.   My paleontological/archaeological partner in that
> endeavor went his own way (he was more interested in developing a lifestyle
> business for himself than in actually applying advanced techniques to the
> problem at "hand" (pun acknowledged)), and then a few years later he died,
> which is part of the reason for not (re)visiting it myself...
>
> The relevant issue to the linked article(s) is mostly that we were
> building modelless models (model-free learning) from the data with the hope
> (assumption?) that *some* of the features that might emerge as being good
> correlates to related/identical "hands" might be ones that humans could
> detect or make sense of themselves.   The state of the art at the time was
> definitely an "art" and was based on the expertise of contemporary flint
> knappers trying to reproduce the patterns found in non-contemporary
> artifacts.   We did not do any actual work (just speculation-guided
> background research) on pottery and textiles...  the Maxwell Museum
> researcher we worked with was also "overcome by events", simply needing to
> develop the exhibits and not having the time/focus to attend to the more
> researchy aspects...  her interest was more in textiles than ceramics or
> lithics... which seemed to be the area where we likely had the least
> advantage over human experts.
>
> My niece works at the U of Az Archaeology dept cataloging the accelerating
> number of artifacts coming in.   At some point I can imagine an automated
> classification system taking over much of the "mundane" aspects of this
> work...   like a self-driving car that at least provides collision
> avoidance and lane following...
>
>
> On 1/9/23 10:29 AM, Pieter Steenekamp wrote:
>
> Thanks for the references, I've briefly looked at them and am looking
> forward to perusing them more closely. The interpretability of ML is a big
> thing of course. The machine gives you the results, and it would sometimes
> be nice if they could be accompanied by some explanation of how it arrived
> at them.
>
> What you describe about your work sounds very interesting indeed. My gut
> feel is that, at least with the current generation of AI, human ingenuity
> and judgement would be far superior to AI in recognising similarities in
> the "hand" of the artisans involved. But AI could well be a powerful tool
> in assisting the humans?
>
> On Mon, 9 Jan 2023 at 18:50, Steve Smith <sasm...@swcp.com> wrote:
>
>> I was hoping this article would have more meat to it, but the main point
>> seems highly relevant to a practical/introductory workshop such as the one
>> you are developing:
>>
>> The Need for Interpretable Features: Taxonomy and Motivation
>> <https://dl.acm.org/doi/10.1145/3544903.3544905>
>>
>>
>> https://scitechdaily.com/mit-taxonomy-helps-build-explainability-into-the-components-of-machine-learning-models/
>>
>> My limited experience with ML is that the convo/invo-lutions that
>> developers go through to make their learning models work well tend to
>> obscure the interpretability of the results.   Some of this seems
>> unavoidable (inevitable?), particularly the highly technical reserved
>> terminology that often exists in a given domain, but this team purports to
>> help provide guidelines for minimizing the consequences...
>>
>> My main experience with this domain involved an attempt to find distance
>> measures between the flake patterns in 3D scanned archaeological artifacts,
>> starting with lithics but also aspiring to work with pottery and
>> textiles...  essentially trying to recognize similarities in the "hand" of
>> the artisans involved.
>>
>>
>> On 1/9/23 1:29 AM, Pieter Steenekamp wrote:
>>
>> @Russ,
>>
>> a) I appreciate the suggestion to include a simple neural network that
>> can make predictions based on inputs and be trained using steepest descent
>> to optimize its weights. This would be a valuable addition to my training
>> material, as it provides a foundational understanding of how neural
>> networks work.
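>>
>> Something along those lines, perhaps (a minimal sketch in plain
>> Python/numpy; the data, learning rate and number of steps below are just
>> illustrative):
>>
>> # One neuron (one weight plus a bias) trained by steepest descent on
>> # made-up data that follows y = 2x + 1.
>> import numpy as np
>>
>> x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
>> y = 2.0 * x + 1.0            # the relationship the neuron should discover
>> w, b = 0.0, 0.0              # start with arbitrary weight and bias
>> lr = 0.05                    # step size for steepest descent
>>
>> for _ in range(2000):
>>     pred = w * x + b                      # the neuron's prediction
>>     grad_w = 2 * ((pred - y) * x).mean()  # gradient of mean squared error
>>     grad_b = 2 * (pred - y).mean()
>>     w -= lr * grad_w                      # step downhill
>>     b -= lr * grad_b
>>
>> print(w, b)   # ends up close to 2 and 1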
>>
>> b) My focus is on providing practical training for professionals with
>> deep domain knowledge but limited software experience, who are looking to
>> utilize deep learning as a tool in their respective fields. In contrast, it
>> seems that your focus is on understanding the inner workings of deep
>> learning. Both approaches have their own merits, and it is important to
>> cater to the needs and goals of different learners.
>>
>> Pieter
>>
>> On Sun, 8 Jan 2023 at 21:20, Marcus Daniels <mar...@snoutfarm.com> wrote:
>>
>>> The main defects of both R and Python are the lack of a typing system and
>>> of high-performance compilation.  I find R still follows (is used by) the
>>> statistics research community more than Python.   Common Lisp was always
>>> better than either.
>>>
>>> Sent from my iPhone
>>>
>>> On Jan 8, 2023, at 11:03 AM, Russ Abbott <russ.abb...@gmail.com> wrote:
>>>
>>> 
>>> As indicated in my original reply, my interest in this project
>>> grows from my relative ignorance of Deep Learning. My career has focussed
>>> exclusively on symbolic computing. I've worked with and taught (a)
>>> functional programming, logic programming, and related issues in advanced
>>> Python; (b) complex systems, agent-based modeling, genetic algorithms, and
>>> related evolutionary processes, (c) a bit of constraint programming,
>>> especially in MiniZinc, and (d) reinforcement learning as Q-learning, which
>>> is reinforcement learning without neural nets. I've always avoided neural
>>> nets--and more generally numerical programming of any sort.
>>>
>>> Deep learning has produced so many impressive results that I've decided
>>> to devote much of my retirement life to learning about it. I retired at the
>>> end of Spring 2022 and (after a break) am now devoting much of my time to
>>> learning more about Deep Neural Nets. So far, I've dipped my brain into it
>>> at various points. I think I've learned a fair amount. For example,
>>>
>>>    - I now know how to build a neural net (NN) that adds two numbers
>>>    using a single layer with a single neuron. It's really quite simple and
>>>    is, I think, a beautiful example of how NNs work. If I were to teach an
>>>    intro to NNs I'd start with this (a minimal sketch appears after this
>>>    list).
>>>    - I've gone through the Kaggle Deep Learning sequence mentioned
>>>    earlier.
>>>    - I found a paper that shows how you can approximate
>>>    any differentiable function to any degree of accuracy with a single-layer
>>>    NN. (This is a very nice result, although I believe it's not used
>>>    explicitly in building serious Deep NN systems.)
>>>    - From what I've seen so far, most serious DNNs are built using
>>>    Keras rather than PyTorch.
>>>    - I've looked at Jeremy Howard's fast.ai material. I was going to go
>>>    through the course but stopped when I found that it uses PyTorch. Also, 
>>> it
>>>    seems to be built on fast.ai libraries that do a lot of the work for
>>>    you without explanation.  And it seems to focus almost exclusively on
>>>    Convolutional NNs.
>>>    - My impression of DNNs is that to a great extent they are *ad hoc*.
>>>    There is no good way to determine the best architecture to use for a 
>>> given
>>>    problem. By architecture, I mean the number of layers, the number of
>>>    neurons in each layer, the types of layers, the activation functions to
>>>    use, etc.
>>>    - All DNNs that I've seen use Python as code glue rather than R or
>>>    some other language. I like Python--so I'm pleased with that.
>>>    - To build serious NNs one should learn the Python libraries Numpy
>>>    (array manipulation) and Pandas (data processing). Numpy especially seems
>>>    to be used for virtually all DNNs that I've seen.
>>>    - Keras and probably PyTorch include a number of special-purpose
>>>    neurons and layers that can be included in one's DNN. These include: a
>>>    Dropout layer, LSTM (long short-term memory) neurons, convolutional
>>>    layers, recurrent neural net (RNN) layers, and more recently
>>>    transformers, which get credit for ChatGPT and related programs. My
>>>    impression is that these special-purpose layers are *ad hoc* in the
>>>    same sense that functions or libraries that one finds useful in a
>>>    programming language are *ad hoc*. They have been very important for
>>>    the success of DNNs, but they came into existence because people
>>>    invented them in the same way that people invented useful functions and
>>>    libraries.
>>>    - NN libraries also include a menagerie of activation functions. An
>>>    activation function acts as the final control on the output of a layer.
>>>    Different activation functions are used for different purposes. To be
>>>    successful in building a DNN, one must understand what those activation
>>>    functions do for you and which ones to use.
>>>    - I'm especially interested in DNNs that use reinforcement learning.
>>>    That's because the first DNN work that impressed me was DeepMind's DNNs
>>>    that learned to play Atari games--and then Go, etc. An important 
>>> advantage
>>>    of Reinforcement Learning (RL) is that it doesn't depend on mountains of
>>>    labeled data.
>>>    - I find RL systems more interesting than image recognition systems.
>>>    One of the striking features of many image recognition systems is that 
>>> they
>>>    can be thrown off by changing a small number of pixels in an image. The
>>>    changed image would look to a human observer just like the original, but 
>>> it
>>>    might fool a trained NN into labeling the image as a banana rather than,
>>>    say, an automobile, which is what it really is. To address this problem
>>>    people have developed adversarial training (and the related Generative
>>>    Adversarial Networks, GANs), which attempts to find such weaknesses in a
>>>    neural net during training and then to train the NN not to have those
>>>    weaknesses. This is a fascinating result, but as far as I can tell, it
>>>    mainly shows how fragile some NNs are and doesn't add much conceptual
>>>    depth to one's understanding of how NNs work.
>>>
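>>> Here is roughly what I mean by the adder example above (a minimal Keras
>>> sketch; the training-set size and number of epochs are just what I
>>> happened to use, nothing canonical):
>>>
>>> # One Dense layer containing a single neuron and no activation function,
>>> # trained to add two numbers.
>>> import numpy as np
>>> from tensorflow import keras
>>>
>>> X = np.random.uniform(-10, 10, size=(1000, 2))   # pairs of numbers
>>> y = X.sum(axis=1)                                # their sums
>>>
>>> model = keras.Sequential([
>>>     keras.Input(shape=(2,)),
>>>     keras.layers.Dense(1)        # the single neuron: w1*a + w2*b + bias
>>> ])
>>> model.compile(optimizer="adam", loss="mse")
>>> model.fit(X, y, epochs=500, verbose=0)
>>>
>>> print(model.get_weights())   # weights head toward [[1], [1]], bias toward 0
>>> print(model.predict(np.array([[3.0, 4.0]])))   # close to 7
>>>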
>>> I'm impressed with this list of things I sort of know. If you had asked
>>> me before I started writing this email I wouldn't have thought I had
>>> learned as much as I have. Even so, I feel like I don't understand much of
>>> it beyond a superficial level.
>>>
>>> So far I've done all my exploration using Google's Colab (Google's
>>> Python notebook implementation) and Kaggle's similar Python
>>> notebook implementation. (I prefer Colab to Kaggle.) Using either one, it's
>>> super nice not to have to download and install anything!
>>>
>>> I'm continuing my journey to learn more about DNNs. I'd be happy to have
>>> company and to help develop materials to teach about DNNs. (Developing
>>> teaching materials always helps me learn the subject being covered.)
>>>
>>> -- Russ Abbott
>>> Professor Emeritus, Computer Science
>>> California State University, Los Angeles
>>>
>>>
>>> On Sun, Jan 8, 2023 at 1:48 AM glen <geprope...@gmail.com> wrote:
>>>
>>>> Yes, the money/expertise bar is still pretty high. But TANSTAAFL still
>>>> applies. And the overwhelming evidence is coming in that specific models do
>>>> better than those trained up on diverse data sets, "better" meaning less
>>>> prone to subtle bullsh¡t. What I find fascinating is tools like OpenAI
>>>> *facilitate* trespassing. We have a wonderful bloom of non-experts claiming
>>>> they understand things like "deep learning". But do they? An old internet
>>>> meme is brought to mind: "Do you even Linear Algebra, bro?" >8^D
>>>>
>>>> On 1/8/23 01:06, Jochen Fromm wrote:
>>>> > I have finished a number of Coursera courses recently, including
>>>> "Deep Learning & Neural Networks with Keras" which was ok but not great.
>>>> The problems with deep learning are
>>>> >
>>>> > * to achieve impressive results like ChatGPT from OpenAI or LaMDA
>>>> > from Google you need to spend millions on hardware
>>>> > * only big organisations can afford to create such expensive models
>>>> > * the resulting network is a black box and it is unclear why it works
>>>> > the way it does
>>>> >
>>>> > In the end it is just the same old back propagation that has been
>>>> known for decades, just on more computers and trained on more data. Peter
>>>> Norvig calls it "The unreasonable effectiveness of data"
>>>> > https://research.google.com/pubs/archive/35179.pdf
>>>> >
>>>> > -J.
>>>> >
>>>> >
>>>> > -------- Original message --------
>>>> > From: Russ Abbott <russ.abb...@gmail.com>
>>>> > Date: 1/8/23 12:20 AM (GMT+01:00)
>>>> > To: The Friday Morning Applied Complexity Coffee Group <
>>>> friam@redfish.com>
>>>> > Subject: Re: [FRIAM] Deep learning training material
>>>> >
>>>> > Hi Pieter,
>>>> >
>>>> > A few comments.
>>>> >
>>>> >   * Much of the actual deep learning material looks like it came from
>>>> the Kaggle "Deep Learning <
>>>> https://www.kaggle.com/learn/intro-to-deep-learning>" sequence.
>>>> >   * In my opinion, R is an ugly and /ad hoc/ language. I'd stick to
>>>> Python.
>>>> >   * More importantly, I would put the How-to-use-Python stuff into a
>>>> preliminary class. Assume your audience knows how to use Python and focus
>>>> on Deep Learning. Given that, there is only a minimal amount of information
>>>> about Deep Learning in the write-up. If I were to attend the workshop and
>>>> thought I would be learning about Deep Learning, I would be
>>>> disappointed--at least with what's covered in the write-up.
>>>> >
>>>> >     I say this because I've been looking for a good intro to Deep
>>>> Learning. Even though I taught Computer Science for many years, and am now
>>>> retired, I avoided Deep Learning because it was so non-symbolic. My focus
>>>> has always been on symbolic computing. But Deep Learning has produced so
>>>> many extraordinarily impressive results, I decided I should learn more
>>>> about it. I haven't found any really good material. If you are interested,
>>>> I'd be more than happy to work with you on developing some introductory
>>>> Deep Learning material.
>>>> >
>>>> > -- Russ Abbott
>>>> > Professor Emeritus, Computer Science
>>>> > California State University, Los Angeles
>>>> >
>>>> >
>>>> > On Thu, Jan 5, 2023 at 11:31 AM Pieter Steenekamp <
>>>> piet...@randcontrols.co.za <mailto:piet...@randcontrols.co.za>> wrote:
>>>> >
>>>> >     Thanks to the kind support of OpenAI's chatGPT, I am in the
>>>> process of gathering materials for a comprehensive and hands-on deep
>>>> learning workshop. Although it is still a work in progress, I welcome any
>>>> interested parties to take a look and provide their valuable input. Thank
>>>> you!
>>>> >
>>>> >     You can get it from:
>>>> >
>>>> https://www.dropbox.com/s/eyx4iumb0439wlx/deep%20learning%20training%20rev%2005012023.zip?dl=0
>>>> >
>>>>
>>>> --
>>>> ꙮ Mɥǝu ǝlǝdɥɐuʇs ɟᴉƃɥʇ' ʇɥǝ ƃɹɐss snɟɟǝɹs˙ ꙮ
>>>>
>>>
>>
>>
>
>
-. --- - / ...- .- .-.. .. -.. / -- --- .-. ... . / -.-. --- -.. .
FRIAM Applied Complexity Group listserv
Fridays 9a-12p Friday St. Johns Cafe   /   Thursdays 9a-12p Zoom 
https://bit.ly/virtualfriam
to (un)subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives:  5/2017 thru present https://redfish.com/pipermail/friam_redfish.com/
  1/2003 thru 6/2021  http://friam.383.s1.nabble.com/
