OK, ok.  There's some hyperbole in this article, and we are the type of
people bent on citations and support. But this isn't a research
publication, and Caroline admits at the beginning that she's going to
take on a bit of a lecturing tone.

But honestly, I liked the article.  It makes a good point and pushes a
sentiment that I share.  Hearing about killer robots turning on humanity is
a bit like hearing someone say they're worried about global warming on
Mars for future civilizations there, when we ought to be more alarmed
about and focused on the coastal cities on Earth right now.  We have so
many pressing issues with AIs that are affecting people right now that the
future-focused alarm is, well, a bit alarmist!  Honestly, I think that's
the side of AI that lay people understand, while the nuanced issues present
in the AIs alive today are poorly understood and desperately in need of
regulation.

I don't think that the people who ought to worry about AI's current problems
are "plumbers".  They are you.  They are me.  They are Elon Musk.
Identifying and dealing with the structural inequalities that AIs create
today is state-of-the-art work.  If we knew how to do it, we'd be done
already.  If you disagree, please show me where I can go get a trade school
degree that will tell me what to do and negate the need for my research
agenda.

-Aaron

On Mon, Aug 28, 2017 at 1:58 AM, Robert West <w...@cs.stanford.edu> wrote:

> Hi Caroline,
>
> The premise of this article seems to be that everyone needs to solve either
> the immediate or the distant problems. No one (and certainly not Elon Musk)
> would argue that there are no immediate problems with AI, but why should
> that keep us from thinking ahead?
>
> In a company, too, you have plumbers who fix the bathrooms today and
> strategists who plan business 20 years ahead. We need both. If the plumbers
> didn't worry about the immediate problems, the strategists couldn't do
> their jobs. If the strategists didn't worry about the distant problems, the
> plumbers might not have jobs down the road.
>
> Also, your argument stands on sandy ground from paragraph one, where you
> claim that AI will never threaten humanity without giving an inkling of
> an argument.
>
> Bob
>
> On Fri, Aug 25, 2017 at 6:50 PM, Caroline Sinders <csind...@wikimedia.org>
> wrote:
>
> > hi all,
> > i just started a column with fast co and wrote an article about elon
> > musk's AI panic.
> >
> > https://www.fastcodesign.com/90137818/dear-elon-forget-killer-robots-heres-what-you-should-really-worry-about
> >
> > would love some feedback :)
> >
> > best,
> > caroline
> > _______________________________________________
> > Wiki-research-l mailing list
> > Wiki-research-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wiki-research-l
> >