Re: [Wiki-research-l] feedback appreciated

2017-08-30 Thread Joe Corneli
On Tue, Aug 29, 2017 at 12:09 AM, Caroline Sinders wrote:

> What I am doing *right now* at the Wikimedia Foundation is the
> fantastically weird but unsexy job of designing tools and UI to mitigate
> online harassment while studying on-wiki harassment. It's not just research
> but a design schedule of rolling out tools quickly for the community to
> mitigate the onslaught of a lot of very real problems that are happening as
> we speak.


Hey, this sounds very interesting, Caroline!  I realise the data and
application are potentially quite sensitive, but to the extent that there
are things you can share, it would be super interesting for my students in
"Data Science for Design" at the University of Edinburgh to follow along
with some of what you are doing.

In Week 6 of the Autumn term we're running a Data Fair and I've invited our
University's Wikimedian in Residence, Ewan McAndrew, to come and present
some real-world problems that Master's-level students in design can help
out with.  Again, given the sensitive nature of the problem you're tackling,
I have to wonder if there is any room for outside helpers on this
particular problem -- but it's a fascinating one nonetheless.

Also in Week 6 there's a lecture on "data ethics". Your question, "how do
you design, and utilize design thinking, to make *something right now*, and
how do you do that without recreating a surveillance tool?" is very much the
kind of call to action (and to reflection) that I was asking about earlier
on.  Thanks for the intro to your project!

If there are ways to get involved without getting in the way, or other
related resources that I can share with the students, please follow up with
me here or off list.

-Joe
___
Wiki-research-l mailing list
Wiki-research-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l


Re: [Wiki-research-l] feedback appreciated

2017-08-28 Thread Caroline Sinders
Hi all!

Sorry for the delay! I had a super jam-packed weekend, and the upcoming week
is just as packed.

A few points. One, thank you for the feedback! In general, I love feedback
and criticism, and I definitely got it :) Two, I didn't realize this was a
*wiki only* research channel, so I'll try to bear that in mind in the future
when sharing things I am writing or have written.

Three, and lastly, this is not an academic article. This is an article
published in a design magazine about research related to ethics within
product design, specifically products utilizing machine learning and
artificial intelligence. That said, I would love to write an academic paper
on the ethics of design utilizing machine learning *in* product design. If
that sounds interesting to any of you, please get at me. I love to
collaborate.

So: the tone of voice is *quite* snarky, but I stand by it, again because
this was written for Fast Company. I have much more academic writing, if
you are interested in reading that, but it is on online harassment and
automation. This article is designed to be a primer of information for
product designers who may have heard Elon focusing on the dangers of AI.
There are plenty of things to worry about in the future of AI, like the
integration of artificial intelligence into the military or into drones,
for example. But publicly, there are no cases of that. There are, publicly,
a variety of investigations done by ProPublica, which I link to in my
article, about predictive policing and its racial bias. The article itself
is designed to be *approachable* for all readers, *especially non-technical
readers*. And this piece, in its tone, which I stand by, was designed to
jokingly respond to Musk's hyperbolic freak-out.

This is, instead, an article designed for lay people, and everyday
designers, to think about the current issues with AI, see examples of
implicit bias in machine learning products right now, and find other
articles to read and videos to watch. What this is, really, is a class
syllabus wrapped in a layer of very genial tone, so everyday designers have
something to chew on and some real information to grasp.

There aren't a lot of resources out there for everyday designers: for
startups, product managers, designers, front-end developers, etc., on what
exists in this new and emerging field of artificial intelligence and how it
currently shows up in products already out in the world. Truth be told,
this is an article I wrote for my old coworkers at IBM Watson Design, on
why we need a real conversation about how to design ethically, how to build
products using machine learning ethically, and what questions you should
ask about what you are building and why. I saw and had *very few* of those
conversations. I am writing for *those plumbers* who are out there making
things right now, who have bad leadership and bad guidance but are
generally excited about product design and the future of AI, and who also
have to ship their products now. Because I am, also, a plumber. What I am
doing *right now* at the Wikimedia Foundation is the fantastically weird
but unsexy job of designing tools and UI to mitigate online harassment
while studying on-wiki harassment. It's not just research but a design
schedule of rolling out tools quickly for the community to mitigate the
onslaught of a lot of very real problems that are happening as we speak. I
love it, and I love the research that I'm doing, because it's about the
present and the future. Plumbing is important: it's how we all avoid
cholera. Future city planning is important: it's how larger society
functions together. Both are important.

I think we're really lucky to work where we all work and to be a part of
this community. We get to question openly and transparently, we get to
solicit feedback, and we get to work on very meaningful software. Not every
technologist or researcher is as lucky as we are. And those are the
technologists I am most keen to talk to: what does it mean to fold in a
technology that you don't understand very well? How do you design, and
utilize design thinking, to make *something right now*, and how do you do
that without recreating a surveillance tool? It's really hard if you don't
understand how to think about the threat model of your product, of what you
intend to make and how it can be used to harm. So few primers exist for
designers on thinking about products from an ethical standpoint and from a
standpoint of implicit bias, all of which are such important things to talk
about when you are building products that use algorithms and data, because
the algorithm plus the data really will determine what your product does,
more so than the design intends.

But you all know this already; it's lots of other people who don't :)

Best,
Caroline

P.S. The briefest, tiniest of FYIs: in online harassment and security,
"plumbers" has a *hyper-specific* connotation.

Re: [Wiki-research-l] feedback appreciated

2017-08-28 Thread Aaron Halfaker
OK, OK.  There's some hyperbole in this article, and we are the type of
people bent on citations and support. But this isn't a research publication,
and Caroline admits at the beginning that she's going to take on a bit of a
lecturing tone.

But honestly, I liked the article.  It makes a good point and pushes a
sentiment that I share.  Hearing about killer robots turning on humanity is
sort of like hearing someone tell you that they are worried about global
warming on Mars for future civilizations there, when we ought to be more
alarmed about and focused on the coastal cities on Earth right now.  We have
so many pressing issues with AIs that are affecting people right now that
the future-focused alarm is, well, a bit alarmist!  Honestly, I think that's
the side of AI that lay people understand, while the nuanced issues present
in the AIs alive today are poorly understood and desperately in need of
regulation.

I don't think that the people who ought to worry about AI's current problems
are "plumbers".  They are you.  They are me.  They are Elon Musk.
Identifying and dealing with the structural inequalities that AIs create
today is state-of-the-art work.  If we knew how to do it, we'd be done
already.  If you disagree, please show me where I can go get a trade-school
degree that will tell me what to do and negate the need for my research
agenda.

-Aaron

On Mon, Aug 28, 2017 at 1:58 AM, Robert West wrote:

> Hi Caroline,
>
> The premise of this article seems to be that everyone needs to solve either
> the immediate or the distant problems. No one (and certainly not Elon Musk)
> would argue that there are no immediate problems with AI, but why should
> that keep us from thinking ahead?
>
> In a company, too, you have plumbers who fix the bathrooms today and
> strategists who plan business 20 years ahead. We need both. If the plumbers
> didn't worry about the immediate problems, the strategists couldn't do
> their jobs. If the strategists didn't worry about the distant problems, the
> plumbers might not have jobs down the road.
>
> Also, your argument stands on sandy ground from paragraph one, where you
> claim that AI will never threaten humanity, without giving so much as an
> inkling of an argument.
>
> Bob
>
> On Fri, Aug 25, 2017 at 6:50 PM, Caroline Sinders wrote:
>
> > hi all,
> > i just started a column with fast co and wrote an article about elon
> musk's
> > AI panic.
> >
> > https://www.fastcodesign.com/90137818/dear-elon-forget-killer-robots-heres-what-you-should-really-worry-about
> >
> > would love some feedback :)
> >
> > best,
> > caroline
___
Wiki-research-l mailing list
Wiki-research-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l


Re: [Wiki-research-l] feedback appreciated

2017-08-28 Thread Robert West
Hi Caroline,

The premise of this article seems to be that everyone needs to solve either
the immediate or the distant problems. No one (and certainly not Elon Musk)
would argue that there are no immediate problems with AI, but why should
that keep us from thinking ahead?

In a company, too, you have plumbers who fix the bathrooms today and
strategists who plan business 20 years ahead. We need both. If the plumbers
didn't worry about the immediate problems, the strategists couldn't do
their jobs. If the strategists didn't worry about the distant problems, the
plumbers might not have jobs down the road.

Also, your argument stands on sandy ground from paragraph one, where you
claim that AI will never threaten humanity, without giving so much as an
inkling of an argument.

Bob

On Fri, Aug 25, 2017 at 6:50 PM, Caroline Sinders wrote:

> hi all,
> i just started a column with fast co and wrote an article about elon musk's
> AI panic.
>
> https://www.fastcodesign.com/90137818/dear-elon-forget-killer-robots-heres-what-you-should-really-worry-about
>
> would love some feedback :)
>
> best,
> caroline
___
Wiki-research-l mailing list
Wiki-research-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l


Re: [Wiki-research-l] feedback appreciated

2017-08-27 Thread James Salsman
>... what does this post have to do with wikis?

FRSbot is a very prominent bot on Wikipedia, crucial to obtaining neutral
feedback for less-prominent RFCs, but it doesn't work the way people think
it does, or the way its authors have implied it does, or the way it should
if it were going to be neutral.

Take a look at its code and see how it distributes requests. They aren't
automated, just automatically prepared for a completely obscured step
requiring manual intervention, which, in my opinion, gives the person doing
that manual step a whole lot more power over the controversies in the
encyclopedia than any other role.
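
To make that concrete, here is a minimal sketch of the "automatically
prepared, manually delivered" pattern I mean. To be clear: this is purely
illustrative Python and is NOT FRSbot's actual code; every name, field, and
limit below is an assumption made up for the example.

    import random

    # Illustrative stand-in for the Feedback Request Service subscriber
    # list: each subscriber has a username and a monthly message limit.
    SUBSCRIBERS = [
        {"user": "ExampleUserA", "limit": 5, "sent_this_month": 2},
        {"user": "ExampleUserB", "limit": 10, "sent_this_month": 10},
        {"user": "ExampleUserC", "limit": 3, "sent_this_month": 0},
    ]

    def prepare_notifications(rfc_title, n=2, seed=None):
        """Automated step: randomly select eligible subscribers and
        draft their talk-page messages, but send nothing."""
        rng = random.Random(seed)
        eligible = [s for s in SUBSCRIBERS
                    if s["sent_this_month"] < s["limit"]]
        chosen = rng.sample(eligible, min(n, len(eligible)))
        return [(s["user"], "Please comment on the RFC: " + rfc_title)
                for s in chosen]

    def deliver(prepared_batch):
        """Manual step: a human reviews the prepared batch and triggers
        delivery. Whoever runs this decides which batches go out."""
        for user, message in prepared_batch:
            print("To " + user + ": " + message)

    batch = prepare_notifications("Example RFC about article X", seed=42)
    deliver(batch)

Even if the random selection and rate limiting are perfectly fair, the
neutrality of the whole service still hinges on that last manual trigger,
which brings me to my question: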

Who actually does that manual distribution step? Legotkm or James Hare?
___
Wiki-research-l mailing list
Wiki-research-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l


Re: [Wiki-research-l] feedback appreciated

2017-08-26 Thread James Salsman
Dr. Heather Ford wrote:

>... You may want to read Angele Christin's paper that just came
> out in Big Data and Society that complicates the notion of judges
> accepting algorithmic reasoning wholesale in making decisions.
>
> http://journals.sagepub.com/eprint/SPgDYyisV8mAJn4fm7Xi/full

I am in Australia right now, working today to save its would-be
immigrants from stupid robot AI pronunciation assessments:

https://www.theguardian.com/australia-news/2017/aug/08/computer-says-no-irish-vet-fails-oral-english-test-needed-to-stay-in-australia

http://www.smh.com.au/technology/technology-news/australian-exnews-reader-with-english-degree-fails-robots-english-test-20170809-gxsjv2.html

I clearly remember the day in 1996 when the guy who has since
written the Pearson Spoken English test rejected my attempts
at accent adaptation.

The fight isn't against robots; it's against their lazy creators.

Best regards,
James

___
Wiki-research-l mailing list
Wiki-research-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l


Re: [Wiki-research-l] feedback appreciated

2017-08-26 Thread Joe Corneli
On Sat, Aug 26 2017, Leila Zia wrote:

> ** I personally would skip the whole conversation style in this kind
> of article. For some of your audience, including me, it creates a first
> reaction of "yes, we taught him a lesson."

There's no accounting for taste, but I found the style offputting.

I'll comment on one claim in the article:

 « Robots are never going to “think” like humans »

That is, in general, a non sequitur.  Airplanes don't fly the same way
that birds fly.  AlphaGo apparently doesn't play Go the way humans do.
Wikipedia isn't written the same way that Britannica is -- and no one
really seems to mind ;-)

... and, yes, what does this post have to do with wikis?

Could we use some theorising about AI systems to understand and correct
some of the infelicities of good old-fashioned Web 2.0 systems?

What would the article look like if it were a broader call to action?

___
Wiki-research-l mailing list
Wiki-research-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l


Re: [Wiki-research-l] feedback appreciated

2017-08-25 Thread Leila Zia
[The views below are my views and not necessarily the views of my
employer, the Wikimedia Foundation.]

Hi Caroline,

Here are a few feedback points on my end:

* I'm not sure what the ultimate goal of the piece is: to raise
awareness about the issues around machine learning and artificial
intelligence, or to say that Elon Musk doesn't know the real
challenges of AI, or to have a (friendly?) conversation with him, or
something else. I would focus on one goal, and I would personally go
with the first goal, as it's an important one (and I know you know it :).

* Assuming that the goal is the first one:

** There are a few instances where you claim that the future in which
machines can replace humans will never arrive. You are basically claiming
that artificial general intelligence (AGI) research will not achieve its
aim. This is a big claim: if you make it, you should prove it. :) I
personally recommend staying away from this line of argument, because it's
hard to prove; in fact, there may be such a future.

** In some dimensions, the future that Elon Musk is concerned about is
very near (in some it's potentially very far, and it's good to plan for it
now: see the next point): self-driving cars are one example. It is safe to
say that they are here (it's a matter of when, not if). It is not hard to
imagine all the traffic of a U.S. state such as California being replaced
by self-driving cars within some years, and these combinations of machines
can cause serious harm. This can be due to ethical gaps, privacy and
security gaps, etc. Once you enter the military world, there are even more
real examples that, again, are either being used already or can be used
relatively soon. The concerns around a distributed system of machines
making decisions about what the next target is and how to react to it are
very real and along the lines of what Elon Musk may be concerned about.

** Regulations have been forming in a reactive way in the past decades,
and this is a problem on its own, imo. It is reasonable to say: now that
we have time and control over where we are heading, let's make sure we
regulate things at a pace such that we don't end up getting surprised and
over-regulating, for example.

** I personally would skip the whole conversation style in this kind
of article. For some of your audience, including me, it creates a first
reaction of "yes, we taught him a lesson," which is (hopefully)
quickly followed by: "but wait a minute, this person has so many great
achievements, and the media may have been exaggerating his views based
on isolated comments." I cannot believe that Elon Musk doesn't see many
of the issues that not-fully-informed machine learning implementations
can cause, issues you have listed. If the point of your piece is not to
tell him he's wrong, then I would reconsider the style.

I hope this helps. :)

Best,
Leila

--
Leila Zia
Senior Research Scientist
Wikimedia Foundation


On Fri, Aug 25, 2017 at 9:50 AM, Caroline Sinders wrote:
> hi all,
> i just started a column with fast co and wrote an article about elon musk's
> AI panic.
>
> https://www.fastcodesign.com/90137818/dear-elon-forget-killer-robots-heres-what-you-should-really-worry-about
>
> would love some feedback :)
>
> best,
> caroline

___
Wiki-research-l mailing list
Wiki-research-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l


Re: [Wiki-research-l] feedback appreciated

2017-08-25 Thread Heather Ford
This is excellent, Caroline. What a powerful piece.

You may want to read Angele Christin's paper that just came out in Big Data
and Society that complicates the notion of judges accepting algorithmic
reasoning wholesale in making decisions.

http://journals.sagepub.com/eprint/SPgDYyisV8mAJn4fm7Xi/full

Best,
Heather.

Dr Heather Ford
University Academic Fellow
School of Media and Communications, The University of Leeds
w: hblog.org / EthnographyMatters.net / t: @hfordsa


On 26 August 2017 at 02:50, Caroline Sinders wrote:

> hi all,
> i just started a column with fast co and wrote an article about elon musk's
> AI panic.
>
> https://www.fastcodesign.com/90137818/dear-elon-forget-killer-robots-heres-what-you-should-really-worry-about
>
> would love some feedback :)
>
> best,
> caroline
___
Wiki-research-l mailing list
Wiki-research-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l