Re: Fwd: The Machine Intelligence Research Institute Blog

2014-09-05 Thread 'Chris de Morsella' via Everything List
Does not look like there is a nice formatting option. They do have an enable 
conversations setting, but I do not think that provides formatting and 
indentation. If I have some free time -- which I have very little of, 
unfortunately -- I will look.

 From: Terren Suydam 
To: everything-list@googlegroups.com 
Sent: Friday, September 5, 2014 4:02 PM
Subject: Re: Fwd: The Machine Intelligence Research Institute Blog
 


I left Yahoo mail five years ago because they do such a terrible job of 
engineering. I have embraced the Google. Thanks for whatever you can do. 
Usually email clients offer a couple of modes for how to include the original 
email... Is there a different mode you can try?
Terren

Re: Fwd: The Machine Intelligence Research Institute Blog

2014-09-05 Thread Terren Suydam
I left Yahoo mail five years ago because they do such a terrible job of
engineering. I have embraced the Google. Thanks for whatever you can do.
Usually email clients offer a couple of modes for how to include the
original email... Is there a different mode you can try?

Terren
On Sep 5, 2014 5:50 PM, "'Chris de Morsella' via Everything List" <
everything-list@googlegroups.com> wrote:

> Terren - You should forward your concerns to the folks who code the yahoo
> webmail client... when I am at work I use its webmail client, which does a
> poor job of threading a conversation. Will try to remember that and put in
> manual '>>' marks to show what I am replying to.

Re: Fwd: The Machine Intelligence Research Institute Blog

2014-09-05 Thread 'Chris de Morsella' via Everything List
Terren - You should forward your concerns to the folks who code the Yahoo 
webmail client... when I am at work I use its webmail client, which does a poor 
job of threading a conversation. I will try to remember that and put in manual 
'>>' marks to show what I am replying to.



 From: Terren Suydam 
To: everything-list@googlegroups.com 
Sent: Friday, September 5, 2014 12:47 PM
Subject: Re: Fwd: The Machine Intelligence Research Institute Blog
 


Chris, is there a way you can improve your email client? Sometimes your 
responses are very hard to detect because they're at the same indentation and 
font as the one you are replying to, as below. Someone new to the conversation 
would have no way of knowing that Brent did not write that entire thing, as you 
didn't sign your name.

Thanks, Terren






Re: Fwd: The Machine Intelligence Research Institute Blog

2014-09-05 Thread Terren Suydam
Chris, is there a way you can improve your email client? Sometimes your
responses are very hard to detect because they're at the same indentation
and font as the one you are replying to, as below. Someone new to the
conversation would have no way of knowing that Brent did not write that
entire thing, as you didn't sign your name.

Thanks, Terren


On Fri, Sep 5, 2014 at 2:15 PM, 'Chris de Morsella' via Everything List <
everything-list@googlegroups.com> wrote:

>
>
>   --
>  *From:* meekerdb 
> *To:* EveryThing 
> *Sent:* Friday, September 5, 2014 9:47 AM
> *Subject:* Fwd: The Machine Intelligence Research Institute Blog

Re: Fwd: The Machine Intelligence Research Institute Blog

2014-09-05 Thread 'Chris de Morsella' via Everything List





 From: meekerdb 
To: EveryThing  
Sent: Friday, September 5, 2014 9:47 AM
Subject: Fwd: The Machine Intelligence Research Institute Blog
 


For you who are worried about the threat of artificial intelligence, MIRI seems 
to make it their main concern.  Look up their website and subscribe.  On my 
list of existential threats it comes well below natural stupidity.

On mine as well... judging by how far the Google car still has to go before it 
can avoid driving straight into that pothole, or stop requiring that its every 
route be carefully mapped down to the level of each single driveway. Real-world 
AI is still mired in the stubbornly dumb-as-sand nature of our silicon-based 
deterministic logic-gate architecture.
Much higher chance that we will blow ourselves up in some existentially 
desperate final energy war, or so poison our earth's biosphere that systemic 
collapse is triggered and the deep oceans flip into an anoxic state favoring 
the hydrogen-sulfide-producing microorganisms that are poisoned by oxygen, 
resulting in another great belch of hydrogen sulfide -- poisonous to animals 
and plants -- into the planet's atmosphere, as occurred during the great 
Permian extinction.
Speaking of which, has anyone read the recent study that concludes the current 
Anthropocene boundary-layer extinction rate is more than one thousand times the 
average extinction rate that prevailed from the last great extinction 
(Jurassic) until now? See: Extinctions during human era one thousand times more 
than before

Brent

 


 Original Message  
Subject: The Machine Intelligence Research Institute Blog 
Date: Fri, 05 Sep 2014 12:07:00 + 
From: Machine Intelligence Research Institute » Blog  
To: meeke...@verizon.net 

The Machine Intelligence Research Institute Blog  
 

 
John Fox on AI safety 
Posted: 04 Sep 2014 12:00 PM PDT
 John Fox is an interdisciplinary scientist with theoretical interests in AI 
and computer science, and an applied focus in medicine and medical software 
engineering. After training in experimental psychology at Durham and Cambridge 
Universities and post-doctoral fellowships at CMU and Cornell in the USA and UK 
(MRC) he joined the Imperial Cancer Research Fund (now Cancer Research UK) in 
1981 as a researcher in medical AI. The group’s research was explicitly 
multidisciplinary and it subsequently made significant contributions in basic 
computer science, AI and medical informatics, and developed a number of 
successful technologies which have been commercialised.
In 1996 he and his team were awarded the 20th Anniversary Gold Medal of the 
European Federation of Medical Informatics for the development of PROforma, 
arguably the first formal computer language for modeling clinical decisions and 
processes. Fox has published widely in computer science, cognitive science and 
biomedical engineering, and was the founding editor of the Knowledge 
Engineering Review (Cambridge University Press). Recent publications include a 
research monograph, Safe and Sound: Artificial Intelligence in Hazardous 
Applications (MIT Press, 2000), which deals with the use of AI in 
safety-critical fields such as medicine.
Luke Muehlhauser: You’ve spent many years studying AI safety issues, in 
particular in medical contexts, e.g. in your 2000 book with Subrata Das, Safe 
and Sound: Artificial Intelligence in Hazardous Applications. What kinds of AI 
safety challenges have you focused on in the past decade or so?

 
John Fox: From my first research job, as a post-doc with AI founders Allen 
Newell and Herb Simon at CMU, I have been interested in computational theories 
of high level cognition. As a cognitive scientist I have been interested in 
theories that subsume a range of cognitive functions, from perception and 
reasoning to the uses of knowledge in autonomous decision-making. After I came 
back to the UK in 1975 I began to combine my theoretical interests with the 
practical goals of designing and deploying AI systems in medicine.
Since our book was published in 2000 I have been committed to testing the ideas 
in it by designing and deploying many kinds of clinical systems, and 
demonstrating that AI techniques can significantly improve quality and safety 
of clinical decision-making and process management. Patient safety is 
fundamental to clinical practice so, alongside the goals of building systems 
that can improve on human performance, safety and ethics have always been near 
the top of my research agenda.

 
Luke Muehlhauser: Was it straightforward to address issues like safety and 
ethics in practice?

 
John Fox: While our concepts and technologies have proved to be clinically 
successful we have not achieved everything we hoped for. Our attempts to 
ensure, for example, that pract