Re: [FRIAM] Can current AI beat humans at doing science?

2021-07-20 Thread David Eric Smith
Could I revive within me
Her symphony and song
To such a deep delight ’twould win me
That with music loud and long
I would build that pleasure dome in air
That sunny dome, those caves of ice
etc. etc.


Re: [FRIAM] Can current AI beat humans at doing science?

2021-07-20 Thread David Eric Smith
Nah.

The thing that will drive academic scientists extinct within a semester is when 
Google reveals AlphaGrant.

Our world is not the one Simon and Newell lived in.  The worth of an idea today 
is determined entirely and exclusively by what dollar value the proponent can 
fetch with it at the bazaar.  That still has a fairly strong correlation with 
the implications of the idea for fairly short-term business applications, and 
thus with whether the idea’s conclusions could be correct.  It may still have 
some lingering but much weaker correlation with whether the ideas are correct 
as a stand-alone criterion, which is sort of timeless and not necessarily 
connectable to business applications on any specific term.

Following on, the worth of academic employees is exactly the amortized sum of 
the dollar value they have been able to fetch for their “products” (yes, that 
is the term on the application forms, to one of these agencies).  It’s the only 
population process I can think of where additive fitness is a strictly correct 
model.
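
(Purely as an illustrative aside, not Eric's own formula: read literally, the "additive fitness" reading above amounts to something like

    W_i = \sum_{t} \delta^{t} \, g_{i,t}, \qquad 0 < \delta \le 1,

where W_i is employee i's worth to the institution, g_{i,t} the grant dollars fetched in year t, and \delta an amortization factor.  Every symbol here is hypothetical, chosen only to make the "amortized sum of dollar values" reading concrete.)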

When administrators learn that they can use AlphaGrant to write grant proposals 
that succeed over those written by their academic employees, the employees will 
be out on the next trash pickup day.  I even think — here slightly less snarky 
— that outcompeting people in grant review is more likely than getting 
machine-written papers published under the machine’s name.  The review process 
has been steadily more tightly choreographed, to try to curb the impulses of 
panelists to use complex modes of cognition or imagination, and to make sure they 
stay within a kind of ant-algorithm that has legal precedent.  So it is the 
kind of narrowly codified process that deep learning is good at fitting to.

We’ll see.  Shouldn’t be long now.

Eric



Re: [FRIAM] Can current AI beat humans at doing science?

2021-07-20 Thread Marcus Daniels
It is intelligent because it knows who ought to get bulldozed?   Yes, it could 
solve so many problems these days!


Re: [FRIAM] Can current AI beat humans at doing science?

2021-07-20 Thread Frank Wimberly
When I was in the Robotics Institute (now department) at CMU, Raj Reddy
used to say that a professor would be easy to replace with an AI program.
He felt that a genuinely hard problem would be to develop an intelligent
bulldozer.  That's why I have suggested to Stephen over the years that he
build a miniature bulldozer that could read a topographic map and create
that landscape on the sand table.

The few people who don't know what I'm talking about should see simtable.com

Frank

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM
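
(An illustrative aside on what the toy problem above would involve, not anything Stephen has actually built: a minimal sketch, assuming the topographic map has been digitized into a target heightmap and the sand table can report its current surface on the same grid.  Every name below is hypothetical, and the path logic is deliberately naive; a real controller would also need blade kinematics and some model of how sand actually moves.)

import numpy as np

def cut_fill_plan(target, current, cell_area=1.0):
    """Compare a target heightmap (digitized from the topo map) with the
    table's current sand surface and return (delta, path): signed volume of
    sand to add (+) or remove (-) per cell, plus a serpentine visit order."""
    target = np.asarray(target, dtype=float)
    current = np.asarray(current, dtype=float)
    if target.shape != current.shape:
        raise ValueError("target and current heightmaps must share a grid")
    delta = (target - current) * cell_area  # signed volume per cell

    rows, cols = delta.shape
    path = []
    for r in range(rows):  # boustrophedon sweep, like mowing a lawn
        cols_order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cols_order)
    return delta, path

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    topo = rng.random((4, 4))   # stand-in for a scanned topographic map
    sand = np.zeros((4, 4))     # flat sand table to start from
    delta, path = cut_fill_plan(topo, sand)
    print("net sand volume to move:", round(float(delta.sum()), 3))
    print("first few bulldozer stops:", path[:5])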


Re: [FRIAM] Can current AI beat humans at doing science?

2021-07-20 Thread Marcus Daniels
I don’t have the quote handy but I recall the folks at Allen AI talking about 
their hard problems.
Acing the SAT, easy.   Math is the hardest.



Re: [FRIAM] Can current AI beat humans at doing science?

2021-07-20 Thread Patrick Reilly
Prof. West has it right. Human intelligence requires melding intents.
Solving mathematical algorithms requires no creativity or shifting of
intentions.


-- 
Sent from Gmail Mobile


Re: [FRIAM] Can current AI beat humans at doing science?

2021-07-20 Thread Frank Wimberly
It must have been more than 30 something years ago.  In 1967 I took a
course in cognitive psychology at Carnegie Mellon in which we studied
Bacon, Logic Theorist, and GPS (General Problem Solver).  The course was
usually taught by Simon but he was on sabbatical.  The text was Computers
and Thought by Feigenbaum and Feldman.

Sorry for the cavil.

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM



Re: [FRIAM] Can current AI beat humans at doing science?

2021-07-20 Thread Prof David West
Thirty-something years ago, Allen Newell walked into his classroom and 
announced, "Over Christmas break, Herb Simon and I created an artificial 
intelligence." He was referring to the program Bacon, which, fed with the same 
dataset as the human, deduced the same set of "laws." It even deduced a couple 
of minor ones that the human Bacon missed (or, at least, did not publish).

Simon and Newell tried to publish a paper with Bacon as author, but were 
rejected.

AlphaFold (which I think is based on a program Google announced but has yet to 
publish in a "proper" journal) is, to me, akin to Bacon, in that it is not 
"doing science," but is merely a tool that resolves a very specific scientific 
problem and the use of that tool will facilitate humans who actually do the 
science.

I will change my mind when the journals of record publish a paper with 
AlphaFold (or kin) as author, and that paper at least posits a credible theory 
or partial theory that transcends "here is the fold of the xyz sequence" to 
address why that fold is 'necessary' or 'useful'.

davew




[FRIAM] Can current AI beat humans at doing science?

2021-07-20 Thread Pieter Steenekamp
A year or so ago, DeepMind's AlphaGo defeated the then world Go champion Lee
Sedol, at a time when leading AI researchers predicted it would be at least
10 years before AI could reach that level. But the valid question then was:
why so excited? It's just a game. There is an interesting documentary on
YouTube about this at https://www.youtube.com/watch?v=WXuK6gekU1Y

What's happening now is that AI makes scientific discoveries beyond human
ability.

Is anybody worried where it will end?

I quote from https://www.nature.com/articles/s41586-021-03819-2
Highly accurate protein structure prediction with AlphaFold
Proteins are essential to life, and understanding their structure can
facilitate a mechanistic understanding of their function. Through an
enormous experimental effort [1–4], the structures of around 100,000 unique
proteins have been determined [5], but this represents a small fraction of
the billions of known protein sequences [6,7]. Structural coverage is
bottlenecked by the months to years of painstaking effort required to
determine a single protein structure. Accurate computational approaches are
needed to address this gap and to enable large-scale structural
bioinformatics. Predicting the 3-D structure that a protein will adopt based
solely on its amino acid sequence, the structure prediction component of the
‘protein folding problem’ [8], has been an important open research problem
for more than 50 years [9]. Despite recent progress [10–14], existing
methods fall far short of atomic accuracy, especially when no homologous
structure is available. Here we provide the first computational method that
can regularly predict protein structures with atomic accuracy even where no
similar structure is known. We validated an entirely redesigned version of
our neural network-based model, AlphaFold, in the challenging 14th Critical
Assessment of protein Structure Prediction (CASP14) [15], demonstrating
accuracy competitive with experiment in a majority of cases and greatly
outperforming other methods. Underpinning the latest version of AlphaFold is
a novel machine learning approach that incorporates physical and biological
knowledge about protein structure, leveraging multi-sequence alignments,
into the design of the deep learning algorithm.


Re: [FRIAM] Collective sensemaking

2021-07-20 Thread Pieter Steenekamp
To reflect somewhat on an exercise like this: it obviously depends on your
understanding of what is "false". A postmodern point of view emphasizes the
importance of perspective; there is no absolute ground truth. If you view
the world from this perspective, then obviously an exercise like this is
meaningless. On the other hand, modernists argue there is a ground truth,
and an exercise like this can help to get to it. (I admit I don't really
know what postmodernism and modernism are; that's my understanding and I'm
open to learning if someone more knowledgeable corrects me.)

Personally, I don't really know where to draw the line. There are obviously
issues where the truth is subjective, but I do think there's enough
validity in an objective ground truth to experiment with an exercise like
this. Take the example from the very first "valid" falsification: Malone
claimed the treatment cost for Remdesivir is approximately $6-$8k, and the
submission was made that this is false, that it is only about $3k. Malone
was wrong; there's no subjectivity involved. He made an objectively
measurable false claim.

Again, I'd like to withhold my final verdict on the process till it's
completed and we can study the results, but the openness and transparency
of the process enthuses me. Of course, there could be practical reasons why
it does not deliver what's expected.



Re: [FRIAM] Collective sensemaking

2021-07-20 Thread uǝlƃ ☤ $
But, for what it's worth, reading the current state of the spreadsheet, 
Jocelynn clearly thinks more like I do. Paul, in my opinion, doesn't understand 
how to evaluate the claims. Max is in the middle, perhaps taking too much of a 
myopic, literalist perspective on falsification.

https://docs.google.com/spreadsheets/d/1nEsf6l_dEv_NQqLvoHYP-eRZ0UmZGqo16uSHub53h9Q/edit#gid=31863150

As expected, this Ground Truth Challenge is mostly a waste of time. And I'm 
sure Bret *loves* the attention. But to the extent it gets people talking at 
least somewhat dispassionately about composition and narrative, maybe it'll be 
slightly helpful.

So far, 5 of the counterclaims are scored as "valid". I.e. One of Weinstein, 
Kory, or Malone made a blatantly false statement. But the devil is always in 
the details.



-- 
☤>$ uǝlƃ



Re: [FRIAM] Collective sensemaking

2021-07-20 Thread thompnickson2
Bats in a cave = Commuters in a New York subway car at rush hour = carousers 
in a trendy bar at 11:30 pm.  Definitely cheek-by-jowl.  Or cheek by cheek, for 
that matter.

 

Nick Thompson

thompnicks...@gmail.com  

https://wordpress.clarku.edu/nthompson/

 



Re: [FRIAM] Collective sensemaking

2021-07-20 Thread Barry MacKichan
I did a fair amount of spelunking in my undergraduate days. While I 
occasionally encountered a solitary bat, most bats that I’m familiar 
with hang cheek to jowl in vast crowds. In caves, which you could fairly 
describe as “inside”. To answer BW: No, not suspicious at all.


*The Sixth Extinction* by Elizabeth Kolbert has a chapter about an 
infection racing through bat populations in the US.


—Barry



On 19 Jul 2021, at 22:08, David Eric Smith wrote:

I remember the following two assertions from them.  (Paraphrased, but 
should be close): 


BW: (about whether the virus was in some way manmade) “Isn’t it 
suspicious that most people have infected each other inside, yet bats 
live outside.”  


Re: [FRIAM] Collective sensemaking

2021-07-20 Thread uǝlƃ ☤ $
No. Trust is a bug, not a feature in this context. Now, *if* the referees come 
back with a nuanced evaluation of any of the objections, then I would be 
impressed. One of the reasons most philosophers and scientists don't respond 
well to falsificationism is because it can be myopically taken out of context 
(which I think this Ground Truth effort does as well). Theories are never 
actually falsified, per se. It's a mix of testing and iteration, mixing and 
matching from old theories and tiny incremental progress.

The same would be true of the evaluations from the referees. It's not a matter 
of trust, argument from authority. It's a matter of good faith mechanistic 
explanation ... something Weinstein fails at continually. Irony is broken, 
here. Weinstein wants us to see him as democratizing, anti-censorship, 
blahblah. But he never seems to deliver the contextual nuance required for it. 
His appeals to emotion, anecdote, special pleading, and a variety of other 
fallacies obstruct democracy.

This is where, despite my misgivings, someone like Joe Rogan is WAY more 
informative and defensible. Another fundamental pillar of Popperianism is 
*openness*, that untested hypotheses can enter the testing pipeline from 
anywhere. Rogan is open minded to a fault. (If your mind is too open, your 
brains will fall out.) Weinstein is *motivated* and pre-filters hypotheses, 
especially anything appearing "woke" or "mainstream". And that's just stupid.


On 7/19/21 8:58 PM, Pieter Steenekamp wrote:
> Am I correct in asserting that the gist of what you guys say about this 
> ground truth exercise is that if you don't trust the referees you can't trust 
> the result? If yes, I'll agree with you on that point. 

-- 
☤>$ uǝlƃ



Re: [FRIAM] Sean x Carrol

2021-07-20 Thread Frank Wimberly
Barry,

My middle name is Carroll.  My great-great-grandfather Michael Carroll was
born in Ireland but spent most of his life in New Orleans, I believe.  I
had a third cousin named Barbara Carroll Volz (she died) who developed an
extensive family tree.  I will see if there is any information about
whether the two Seans are related.  Note that the physicist is Sean Michael
Carroll.

Frank

---
Frank C. Wimberly
140 Calle Ojo Feliz,
Santa Fe, NM 87505

505 670-9918
Santa Fe, NM



Re: [FRIAM] Collective sensemaking

2021-07-20 Thread uǝlƃ ☤ $
The bats infect outside ⨂ humans infect inside is an excellent example of true 
components composing into a false narrative. And Gil is our man for Star Trek 
refs. It's nice of you to fill in for him! 8^D

Thanks for the Liang et al adjuvants paper.

The Guillain-Barre issue is interesting. I've argued that AZ, J&J, and Sputnik 
V might be more *trusted* because they're "traditional". It hadn't really 
crossed my mind that the tightly targeted Pfizer and Moderna present us with 
more of a controlled experiment. The research nurse talked quite a bit about my 
risk of Guillain-Barre during cancer treatment ... it was right up there with 
encephalopathy from infection. I did catch the flu while on the drug 
(obinutuzumab) way after the chemo had ended. But they dosed me with Tamiflu 
off the bat. I'd intended to look into G-B but never got around to it. Any 
clues would be welcome.


On 7/19/21 7:08 PM, David Eric Smith wrote:
> It is generous (and good), to try to reduce this to something as clean as 
> logical fallacies.
> 
> Your earlier email was really to the point, though, about motives.
> 
> Neither here nor there, an anecdote from my own experience.
> 
> I had not heard of any of these people, as I normally don’t, until Bill Maher 
> had BW and HH on his show.  It is a pity that Bill badly enough needs the 
> persona of the cynical skeptic that some subset of his commitments are 
> contrarian just, it seems, for its own sake.
> 
> I remember the following two assertions from them.  (Paraphrased, but should 
> be close): 
> BW: (about whether the virus was in some way manmade) “Isn’t it suspicious 
> that most people have infected each other inside, yet bats live outside.”  
> I immediately brought to mind Spock’s line to Kirk in one of the 1960s Star 
> Trek episodes (the one about Nomad) “A dazzling display of logic, captain.”
> 
> A poor fact-checker would be stuck on that one: Bats, after all, _do_ live 
> outside, and people _do_ mostly infect each other with COVID inside.  Hmm.  
> Now what?
> 
> Then on why they wouldn’t take vaccines:
> BW and HH jointly: Our ancestors didn’t evolve with vaccines, so we should 
> expect them to be dangerous in unknown ways. 
> 
> It is interesting that the only biological component of the mRNA vaccines — 
> mRNA in the medium or injected into cells — is the one thing we _have_ been 
> living with since we were bacteria.  That’s even before the original Stone Age. 
>  The parts of the vaccines that are new are the chemical parts: the delivery 
> vehicle and the adjuvants.  If there were to be real surprises, I would 
> expect those to come from those.  But of course a one-time chemical exposure 
> is limited in its effect by dose and whatever the chemical does.  I continue 
> to be interested in what the adjuvants are in these vaccines, and what is 
> known of their history, but haven’t taken time to read.  A source is here:
> https://www.frontiersin.org/articles/10.3389/fimmu.2020.589833/full 
> 
> 
> That all becomes interesting now, in light of the fact that the mRNA vaccines 
> are the _simplest_ RNA-carrying vaccines we have ever had; much simpler than 
> viral vector vaccines.  I wondered if there might be some advantage from 
> having so little uncontrolled diversity and complexity.  Right now, it 
> appears that both of the adenovirus vaccines (AZ and J&J) may have an 
> identifiable incidence of Guillain-Barre at about the 10e-5 level, which would 
> put it at about 4x the annual flu vaccine’s correlation.  That is not settled 
> yet, but the experts think there might be one.  Yet, with many more doses in 
> the US, EU, Japan, and I guess elsewhere, of the Pfizer and Moderna formulae, 
> I am not yet seeing any reports of G-B upticks that seem to correlate with 
> them.  And it is the same data sets that would be a source for all these.  So 
> I am eager to see if there is a real difference, and whether we can find out 
> where it comes from.  It could well come back to the way our familiarity with
> viruses, possibly in combination with adjuvants, tunes immune responses.

-- 
☤>$ uǝlƃ


[FRIAM] Sean x Carrol

2021-07-20 Thread Barry MacKichan
One of the (trivial, granted) ways the universe amuses me is that there 
are two Sean Carrolls, one of whom authored ‘Endless Forms Most 
Beautiful’. I first heard of that book on a Friday morning at St. 
John’s. They are both prominent in their fields (physics and biology) 
and are very good popularizers.


At times, when my critical faculties are sleeping, I can imagine that at 
some point the universe split and re-merged with itself and that the 
Sean Carrols represent a glitch in the merging process. That would 
explain that there are two of them, very similar in many ways and yet 
distinct.


—Barry

On 19 Jul 2021, at 21:42, David Eric Smith wrote:

Anyway, what came up today was a Sean Carroll interview with Wolfram, 
which fronts hypergraphs as Wolfram’s base-level abstraction.  It 
is a couple 