Re: [FRIAM] Can current AI beat humans at doing science?

2021-07-21 Thread Frank Wimberly
Insight probably requires background knowledge from apparently irrelevant
areas.  If the credit software knew about the importance of gender equality
in our current culture, it wouldn't have made those mistakes.  All AI
software needs to read newspapers and some books.

Frank

On Wed, Jul 21, 2021 at 10:36 AM Barry MacKichan <
barry.mackic...@mackichan.com> wrote:

> I think one of the shortcomings of machine learning is that it can learn
> but has no insight.
>
> A recent lesson about this comes from David Heinemeier Hansson, who
> reported that Apple Card gave him a credit limit 20x that of his wife. They
> live in a community property state, file a joint tax return, and have been
> married a long time. Steve Wozniak reported the same thing with a 10x
> factor. Goldman Sachs has a problem ¯\_(ツ)_/¯. Of course they can’t really
> explain why their AI turned out so sexist. Is the only way to correct this
> to throw out the current model and restart the learning from scratch? Is
> there any other way to correct this?
>
> —Barry
>
> On 21 Jul 2021, at 12:08, Roger Critchlow wrote:
>
> The current neural network based AI does add novelty to the solution. It
> learns and gains insight from the data in ways that humans can not.
>


-- 
Frank Wimberly
140 Calle Ojo Feliz
Santa Fe, NM 87505
505 670-9918

Research:  https://www.researchgate.net/profile/Frank_Wimberly2
-  . -..-. . -. -.. -..-. .. ... -..-.  . .-. .
FRIAM Applied Complexity Group listserv
Zoom Fridays 9:30a-12p Mtn GMT-6  bit.ly/virtualfriam
un/subscribe http://redfish.com/mailman/listinfo/friam_redfish.com
FRIAM-COMIC http://friam-comic.blogspot.com/
archives: http://friam.471366.n2.nabble.com/


Re: [FRIAM] Can current AI beat humans at doing science?

2021-07-21 Thread Barry MacKichan
I think one of the shortcomings of machine learning is that it can learn 
but has no insight.


A recent lesson about this comes from David Heinemeier Hansson, who 
reported that Apple Card gave him a credit limit 20x that of his wife. 
They live in a community property state, file a joint tax return, and 
have been married a long time. Steve Wozniak reported the same thing 
with a 10x factor. Goldman Sachs has a problem ¯\_(ツ)_/¯. Of course 
they can’t really explain why their AI turned out so sexist. Is the 
only way to correct this to throw out the current model and restart the 
learning from scratch? Is there any other way to correct this?


—Barry


On 21 Jul 2021, at 12:08, Roger Critchlow wrote:

The current neural network based AI does add novelty to the solution. 
It learns and gains insight from the data in ways that humans can not.


Re: [FRIAM] Can current AI beat humans at doing science?

2021-07-21 Thread Roger Critchlow
The AlphaFold paper is pre-released at the Nature URL Pieter provided in
the original post.

Here's the PDF
https://www.nature.com/articles/s41586-021-03819-2_reference.pdf, and its
supplementary information
https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_MOESM1_ESM.pdf.
The program is not listed as an author.

https://www.blopig.com/blog/2021/07/alphafold-2-is-here-whats-behind-the-structure-prediction-miracle/
is a non-author's evaluation of the paper.

-- rec --






Re: [FRIAM] Can current AI beat humans at doing science?

2021-07-21 Thread Prof David West
Pieter,

Thank you for your thoughtful response. This subject has some wonderful 
potential for discussion and exploration; but I doubt it is possible to find 
some common ground from which to launch the journey. Our backgrounds are quite 
diverse and our perspectives are almost orthogonal — especially when it comes 
to what we might be willing to accept as "evidence" and "understanding."

For example: I think I attained an "understanding" of protein folding a number 
of years ago. First, I read a number of books on protein folding (most of which 
I barely comprehended) and then multiple books on the theory and mathematics of 
origami. [It is possible to draw fold lines with pencil and straightedge on a 
sheet of paper that will result in a particular 3-D shape. It is also possible 
to 'mentally' 'decompose' a 3-D shape into fold lines.] I then undertook a series 
of 400-mic acid trips (250 mics is the dose Hofmann took when he first 
discovered LSD, and 400 is effectively more than twice that dose because the 
dose-response curve is not linear). Over the years, I have learned how to 
"direct" these sessions to focus on a particular kind of problem.

The result: I "saw" how a particular amino acid sequence "necessarily" produced 
a specific fold. I could "see" the 3-D figure in the pattern of fold lines on 
paper (and vice versa). Moreover, I obtained a limited form of "knowledge" that 
I retained post-trip: specifically, "families" of folds and/or origami. I can 
still look at lines on paper and tell you whether the resulting figure will have 
1, 2, 3, or 4 extremities, even if I cannot visualize the exact figure, and I 
could tell you whether a particular amino acid sequence would be in a "family" 
of folds with x-number of right angles, spirals, or waves. I "saw" the sequence 
with overlays of some kind of synthetic (artificial) synesthesia.

*BTW: the actual experience involved "conversations" with folded proteins, 
origami figures, amino acid sequences, and papers-with-fold-lines as if they 
were sentient objects capable of "talking" to me and "telling" me what they 
were doing.*

As an outside observer, you are just as limited — if not more so — in your 
ability to "comprehend" what I did as you are with what AlphaFold does. And I 
am just as handicapped as AlphaFold in terms of ever getting my "insights" and 
"knowledge" published. Ain't gonna happen!

I see a fundamental and critical error being made by AI folk: the assumption 
that the human brain/mind is capable only of that which a computer is capable. 
A computer, an AI, is faster and less error-prone than a human mind/brain, and 
for this reason alone, an AI is superior to a human.

The hubris of a lot of AI people, classical and contemporary, asserting the 
superiority of their computer toys over the human mind/brain is simultaneously 
amusing and appalling. Making such claims about AI should, in my opinion, wait 
until such time as we collectively understand more than 1% of what the human 
mind does and is capable of doing.

davew




Re: [FRIAM] Sean x Carrol

2021-07-21 Thread uǝlƃ ☤ $
Reality as a Vector in Hilbert Space
Sean M. Carroll
https://arxiv.org/abs/2103.09780

On 7/20/21 7:49 AM, Barry MacKichan wrote:
> One of the (trivial, granted) ways the universe amuses me is that there are 
> two Sean Carrolls, one of whom authored ‘Endless Forms Most Beautiful’. I 
> first heard of that book on a Friday morning at St. John’s. They are both 
> prominent in their fields (physics and biology) and are very good 
> popularizers.
> 
> At times, when my critical faculties are sleeping, I can imagine that at some 
> point the universe split and re-merged with itself and that the Sean Carrolls 
> represent a glitch in the merging process. That would explain why there are 
> two of them, very similar in many ways and yet distinct.
> 
> —Barry
> 
> On 19 Jul 2021, at 21:42, David Eric Smith wrote:
> 
> Anyway, what came up today was a Sean Carroll interview with Wolfram, 
> which fronts hypergraphs as Wolfram’s base-level abstraction.  It is a couple 


-- 
☤>$ uǝlƃ



Re: [FRIAM] Can current AI beat humans at doing science?

2021-07-21 Thread Marcus Daniels
What do you mean?  It will be the great equalizer.

From: Friam  On Behalf Of Pieter Steenekamp
Sent: Tuesday, July 20, 2021 12:12 PM
To: The Friday Morning Applied Complexity Coffee Group 
Subject: [FRIAM] Can current AI beat humans at doing science?

A year or so ago, DeepMind's AlphaGo defeated the then world Go champion Lee 
Sedol at a time when leading AI researchers predicted it would be at least 10 
years before AI could reach that level. But the valid question then was: why so 
excited? It's just a game. There is an interesting documentary on youtube about 
this at https://www.youtube.com/watch?v=WXuK6gekU1Y

What's happening now is that AI makes scientific discoveries beyond human 
ability.

Is anybody worried where it will end?

I quote from https://www.nature.com/articles/s41586-021-03819-2
Highly accurate protein structure prediction with AlphaFold
Proteins are essential to life, and understanding their structure can 
facilitate a mechanistic understanding of their function. Through an enormous 
experimental effort1–4, the structures of around 100,000 unique proteins have 
been determined5, but this represents a small fraction of the billions of known 
protein sequences6,7. Structural coverage is bottlenecked by the months to 
years of painstaking effort required to determine a single protein structure. 
Accurate computational approaches are needed to address this gap and to enable 
large-scale structural bioinformatics. Predicting the 3-D structure that a 
protein will adopt based solely on its amino acid sequence, the structure 
prediction component of the ‘protein folding problem’8, has been an important 
open research problem for more than 50 years9. Despite recent progress10–14, 
existing methods fall far short of atomic accuracy, especially when no 
homologous structure is available. Here we provide the first computational 
method that can regularly predict protein structures with atomic accuracy even 
where no similar structure is known. We validated an entirely redesigned 
version of our neural network-based model, AlphaFold, in the challenging 14th 
Critical Assessment of protein Structure Prediction (CASP14)15, demonstrating 
accuracy competitive with experiment in a majority of cases and greatly 
outperforming other methods. Underpinning the latest version of AlphaFold is a 
novel machine learning approach that incorporates physical and biological 
knowledge about protein structure, leveraging multi-sequence alignments, into 
the design of the deep learning algorithm.






Re: [FRIAM] Can current AI beat humans at doing science?

2021-07-21 Thread uǝlƃ ☤ $
I tend to side with Dave and Patrick on this issue, I guess. The following 
article provides a bit of an on-ramp to my perspective:

Beware explanations from AI in health care
https://science.sciencemag.org/content/373/6552/284

Their distinction is valid and sound. So, the authors would object to the idea 
that AlphaFold holds any substantive semantics. (By "semantics", I don't mean 
the modern computer science sense of that, but the more general philosophical 
sense.) But, in the spirit of true factoids, false narrative, I disagree with 
the gist of the article.

We can't explain the behavior of animals any better than we can explain the 
behavior of AlphaFold. In fact, we're much better at explaining the behavior of 
AlphaFold because, as Feynman suggests, we can *build* AlphaFold (... or some 
of us can, anyway), whereas we're still having some trouble with synthetic 
organisms.

But the fact that we can't understand animal behavior does NOT mean much. We've 
been NOT understanding animal behavior for millennia. Yet we make progress. So, 
in the end, I side with strong AGI position *and* reject the idea that 
AlphaFold is categorically different from prior technologies.

The resolution of that apparent contradiction is that I believe *all* 
explanation is via mimic models. Interpretable algorithms are a convenient 
fiction. (I.e. all models are always wrong.)



Re: [FRIAM] Can current AI beat humans at doing science?

2021-07-21 Thread Pieter Steenekamp
Prof Dave West,

There is something different happening with the current generation of AI
compared to the previous generation. The AI generation of Allen Newell and
Herb Simon (and I also want to include Deep Blue, which beat Garry Kasparov at
chess) was just encoding human intelligence and making it much faster. The
AI did not contribute any novel concepts to the algorithm; the humans did
that.

The current neural network based AI does add novelty to the solution. It
learns and gains insight from the data in ways that humans can not.

Take AlphaFold for example: humans do not understand how to predict the
folding of a protein by analyzing the amino acid sequence; it's beyond
human understanding to do that. It's as if AlphaFold looked at the roughly
100,000 known examples of amino acid sequences and the resulting folded protein
structures and said: it's easy, you just look at this, then that, and the
resulting folded protein is this. Even comprehending what AlphaFold is
saying is beyond human understanding. It's there to look at, in the
weights and biases of the many connections of the artificial
neural network, but humans just cannot interpret it.

For a human to understand AlphaFold's reasoning to solve the protein
folding problem is like expecting a 2 year old child to understand quantum
mechanics. Or like me to understand my wife's mind.

AI does not have general intelligence, and maybe it never will. But I
think it's safe to say that in some narrow fields, like the protein
folding problem, AI is certainly more intelligent than humans. The
important issue is that there is evidence that AI does add novelty to the
solution.

Pieter



On Tue, 20 Jul 2021 at 22:46, Prof David West  wrote:

> Thirty-something years ago, Allen Newell walked into his classroom and
> announced, "over Christmas break, Herb Simon and I created an artificial
> intelligence." He was referring to the program Bacon, which, fed with the
> same dataset as the human, deduced the same set of "laws." It even deduced a
> couple of minor ones that Bacon missed (or, at least, did not publish).
>
> Simon and Newell tried to publish a paper with Bacon as author, but were
> rejected.
>
> AlphaFold (which I think is based on a program Google announced but has
> yet to publish in a "proper" journal) is, to me, akin to Bacon, in that it
> is not "doing science," but is merely a tool that resolves a very specific
> scientific problem and the use of that tool will facilitate humans who
> actually do the science.
>
> I will change my mind when the journals of record publish a paper authored
> by AlphaFold (or kin) as author and that paper at least posits a credible
> theory or partial theory that transcends "here is the fold of the xyz
> sequence" to address why that fold is 'necessary' or 'useful'.
>
> davew
>
>
> On Tue, Jul 20, 2021, at 1:12 PM, Pieter Steenekamp wrote:
>
> A year or so ago, DeepMind's AlphaGo defeated the then world Go champion
> Lee Sedol at a time when leading AI researchers predicted it would be at
> least 10 years before AI could reach that level. But the valid question then
> was: why so excited? It's just a game. There is an interesting documentary
> on youtube about this at https://www.youtube.com/watch?v=WXuK6gekU1Y
>
> What's happening now is that AI makes scientific discoveries beyond human
> ability.
>
> Is anybody worried where it will end?
>
> I quote from https://www.nature.com/articles/s41586-021-03819-2
> Highly accurate protein structure prediction with AlphaFold
> Proteins are essential to life, and understanding their structure can
> facilitate a mechanistic understanding of their function. Through an
> enormous experimental effort1–4, the structures of around 100,000 unique
> proteins have been determined5, but this represents a small fraction of the
> billions of known protein sequences6,7. Structural coverage is bottlenecked
> by the months to years of painstaking effort required to determine a single
> protein structure. Accurate computational approaches are needed to address
> this gap and to enable large-scale structural bioinformatics. Predicting
> the 3-D structure that a protein will adopt based solely on its amino acid
> sequence, the structure prediction component of the ‘protein folding
> problem’8, has been an important open research problem for more than 50
> years9. Despite recent progress10–14, existing methods fall far short of
> atomic accuracy, especially when no homologous structure is available. Here
> we provide the first computational method that can regularly predict
> protein structures with atomic accuracy even where no similar structure is
> known. We validated an entirely redesigned version of our neural
> network-based model, AlphaFold, in the challenging 14th Critical Assessment
> of protein Structure Prediction (CASP14)15, demonstrating accuracy
> competitive with experiment in a majority of cases and greatly
> outperforming other methods. Underpinning the latest version of AlphaFold is
> a novel machine learning approach that incorporates physical and biological
> knowledge about protein structure, leveraging multi-sequence alignments, into
> the design of the deep learning algorithm.

Re: [FRIAM] Collective sensemaking

2021-07-21 Thread uǝlƃ ☤ $
I'll attempt to correct you on postmodernism. But I expect you to move the 
goalposts again. Here are 2 articles that may help. I've posted them before, to 
no avail.

https://michel-foucault.com/2018/12/19/postmodernism-didnt-cause-trump-it-explains-him-2018/
https://www.vox.com/features/2019/11/11/18273141/postmodernism-donald-trump-lyotard-baudrillard

But I'll move on to the relevant question of the practical reasons the Ground 
Truth Challenge will not deliver. There are 3 referees, who exhibit very 
different *methods* for evaluating the objections. 3 is a very small sample. 
Anyone familiar with the recent achievements of induction (both [un]supervised) 
will recognize that's useless. Perhaps 30 referees would give us some practical 
progress. But 3? No.

Of course, you could cite qualitative research, case studies, etc. as valid 
types of knowledge. But even with peer-review (and its many flaws), such 
results are subject to cherry-picking and confirmation bias. E.g. you chose the 
least offensive of the valid objections to use in your post. How much a 
remdesivir regimen costs is trivial compared to misrepresenting the 
biodistribution results in order to scare the hell out of people and vie for 
clicks and youtube views.

In short, this has absolutely nothing to do with silly red herrings like the 
understanding of what is false. The failure of the Ground Truth Challenge will 
be about *method*.


On 7/20/21 11:51 AM, Pieter Steenekamp wrote:
> To somewhat reflect on an exercise like this; it obviously depends on your 
> understanding of what is "false". A postmodern point of view emphasizes the 
> importance of perspective, there is no absolute ground truth. If you view the 
> world from this perspective, then obviously an exercise like this is 
> meaningless. On the other hand, modernists argue there is a ground truth and 
> an exercise like this can help to get to the ground truth. (I admit I don't 
> really know what is postmodernism and modernism, that's my understanding and 
> I'm open to learn if someone more knowledgeable corrects me)
> 
> Personally, I don't really know where to draw the line. There are obviously 
> issues where the truth is subjective, but I do think there's enough validity 
> in an objective ground truth to experiment with an exercise like this. Take 
> the example from the very first "valid" falsification - Malone claimed a 
> treatment cost for Remdsivir is approximately $6-$8k and the submission was 
> made that this is false, it is only about $3k. Malone was wrong, there's no 
> subjectivity involved, he made an objectively measurable false claim. 
> 
> Again, I like to withhold my final verdict on the process till it's completed 
> and we can study the results, but the openness and transparency of the 
> process enthuses me. Of course, there could be practical reasons why it does 
> not deliver what's expected?

-- 
☤>$ uǝlƃ
