Re: [Wiki-research-l] [Analytics] Wikipedia Detox: Scaling up our understanding of harassment on Wikipedia

2017-06-26 Thread Pine W
On Sat, Jun 24, 2017 at 2:49 AM, Kerry Raymond 
wrote:

> No right to be offended? To say to someone "you don't have the right to be
> offended" seems pretty offensive in itself. It seems to imply that their
> cultural norms are somehow inferior or unacceptable.
>

I'm not sure that I worded my comment as clearly as I would like. I would like
to reduce the intensity and frequency of toxic behavior, but there's some
difficulty in defining what is toxic or unacceptable. If person X says
something that person Y finds offensive, that in and of itself doesn't mean
that person X was being intentionally malicious. Cultural norms and
personal sensitivities vary widely, and there is a danger that attempts to
reduce conflict will be done in such a way that freedom of expression is
suppressed. As an example, there are statements in British English that I
am told are highly offensive, but that seem mild to me when I hear them
through an American cultural lens. Having an AI, or humans, attempt to
police the degree to which a statement is offensive seems like a minefield.
Perhaps a better way to approach the situation is to try to look at intent,
which I think is similar to your next point:


>
> With the global reach of Wikipedia, there are obviously many points of
> view on what is or isn't offensive in what circumstances. Offence may not
> be intended at first, but, if after a person is told their behaviour is
> offensive and they persist with that behaviour, I think it is reasonable to
> assume that they intend to offend. Which is why the data showing there is a
> group of experienced users involved in numerous personal attacks demands
> some human investigation of their behaviour.
>

I think that looking at intent, rather than solely at the content of what
was said, sounds like a good idea. However, I'm not sure that I'd always
agree that if person X is told that statement A is offensive to person Y,
then person X should necessarily stop, because what person X is saying may
seem reasonable to person X (for example "It's OK to eat meat") but be
highly offensive to person Y. I think a more nuanced approach would be to
look at what person X's intent is in saying "It's OK to eat meat": is the
person expressing or arguing for their views in good faith, or are they
acting in bad faith and intentionally trying to provoke person Y?
Fortunately, in my experience, the cases where people are being malicious
are usually clearer, such that admins and others are not usually called on
to evaluate whether a statement was OK. Name-calling in any language seems
not to go over very well, and I think that most of us who have a tool to
create blocks would be willing to use that tool if a conversation
degenerated to that point. Unfortunately, like you, my perception in the
past was that there were some experienced users on English Wikipedia (and
perhaps on other language editions as well) whose needlessly provocative
behavior was tolerated; I would like to think that the standards for
civility are being raised.

I'm aware of WMF's research into the frequency of personal attacks; I
wonder whether there are charts of how the frequency is changing over time.
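
(If the labelled data from that research were available in a simple tabular
form, a chart like that would not be hard to produce. Below is a minimal
sketch in Python; the file name, the column names, and the 0.5 cutoff are
assumptions of mine for illustration, not anything WMF has published.)

# Minimal sketch: chart the share of talk-page comments flagged as attacks, by month.
# "scored_comments.csv", its columns, and the 0.5 cutoff are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt

comments = pd.read_csv("scored_comments.csv", parse_dates=["timestamp"])
comments["is_attack"] = comments["attack_score"] > 0.5  # illustrative cutoff

monthly = comments.set_index("timestamp").resample("M")["is_attack"].mean()
ax = monthly.plot(title="Share of comments flagged as personal attacks, by month (sketch)")
ax.set_ylabel("fraction of comments flagged")
plt.show()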


> Similarly for a person offended, if there is a genuinely innocent
> interpretation to something they found offensive and that is explained to
> them (perhaps by third parties), I think they need to be accepting that no
> offence was intended on that occasion. Obviously we need a bit of give and
> take. But I think there have to be limits on the repeated behaviour (either
> in giving the offence or taking the offence).
>

In general, I agree.

There are some actions for which I could support "one strike and you're
out"; I once kicked someone out of an IRC channel for uncivil behavior with
little (perhaps no) warning because the situation seemed so clear to me,
and no one complained about my decision. I think that in many cases it's
clear whether someone is making a personal attack, but some cases are not
so clear, and I want to be careful about the degree to which WMF encourages
administrators to rely on an AI to make decisions. Even if an AI is trained
extensively with native speakers of a language, there can be significant
differences in how a statement is interpreted.

Pine


>
> Kerry


Re: [Wiki-research-l] [Analytics] Wikipedia Detox: Scaling up our understanding of harassment on Wikipedia

2017-06-24 Thread Jonathan Cardy
I would be interested to see how much of the offence and how many of the
attacks are in Wikipedia's known and usually obvious stress areas.

Wikipedia tries to neutrally cover every topic that would be considered
controversial in real life, and it also brings together people from diverse
parts of the globe who may not previously have encountered people who hold
each other's views. It also has whole areas of contention of its own, in
particular the deletion process.

Many organisations that aim for a civil discourse discourage or ban discussion 
of contentious topics such as politics and religion. If anything we do the 
reverse. I'm not suggesting that we amend that, but it would be good to know 
whether the tactic of avoiding contentious topics is an effective way of 
avoiding toxic behaviours.

There's also the issue of collateral damage - snarkiness between editors might
be based on previous encounters on a more contentious topic, or even on
perceptions of one editor based on their interactions with others with whom
they have clashed in a contentious area. If so, we'd expect relatively few
incidents where regulars are toxic to newbies who haven't stumbled into a
heated discussion about abortion, alternative medicine, the Armenian genocide,
etc.

It is truly difficult to comment on this study without being able to see the
attacks that they found. But one area I can speak to: Wikipedia is big,
especially behind the scenes. Most user pages have a very low audience, and an
isolated attack on an individual editor in their user space might not be
noticed or acted on by anyone. Tools that help find and manage such attacks
would be useful. I have in the past trawled user space and deleted swathes of
attack pages. Some of it is venting by editors who have just had their article
deleted, and it is unlikely that anyone but themselves actually reads what they
write on their own talk pages - I very much doubt the tagger who dropped a
deletion template on their talk page will go back and read their response.



Regards

WereSpielChequers


> On 24 Jun 2017, at 10:49, Kerry Raymond  wrote:
> 
> No right to be offended? To say to someone "you don't have the right to be 
> offended" seems pretty offensive in itself. It seems to imply that their 
> cultural norms are somehow inferior or unacceptable. 
> 
> With the global reach of Wikipedia, there are obviously many points of view 
> on what is or isn't offensive in what circumstances. Offence may not be 
> intended at first, but, if after a person is told their behaviour is 
> offensive and they persist with that behaviour, I think it is reasonable to 
> assume that they intend to offend. Which is why the data showing there is a 
> group of experienced users involved in numerous personal attacks demands some 
> human investigation of their behaviour.
> 
> Similarly for a person offended, if there is a genuinely innocent 
> interpretation to something they found offensive and that is explained to 
> them (perhaps by third parties), I think they need to be accepting that no 
> offence was intended on that occasion. Obviously we need a bit of give and 
> take. But I think there have to be limits on the repeated behaviour (either 
> in giving the offence or taking the offence).
> 
> Kerry


Re: [Wiki-research-l] [Analytics] Wikipedia Detox: Scaling up our understanding of harassment on Wikipedia

2017-06-24 Thread Kerry Raymond
No right to be offended? To say to someone "you don't have the right to be 
offended" seems pretty offensive in itself. It seems to imply that their 
cultural norms are somehow inferior or unacceptable. 

With the global reach of Wikipedia, there are obviously many points of view on 
what is or isn't offensive in what circumstances. Offence may not be intended 
at first, but, if after a person is told their behaviour is offensive and they 
persist with that behaviour, I think it is reasonable to assume that they 
intend to offend. Which is why the data showing there is a group of experienced 
users involved in numerous personal attacks demands some human investigation of 
their behaviour.

Similarly for a person offended, if there is a genuinely innocent 
interpretation to something they found offensive and that is explained to them 
(perhaps by third parties), I think they need to be accepting that no offence 
was intended on that occasion. Obviously we need a bit of give and take. But I 
think there have to be limits on the repeated behaviour (either in giving the 
offence or taking the offence).

Kerry




 




Re: [Wiki-research-l] [Analytics] Wikipedia Detox: Scaling up our understanding of harassment on Wikipedia

2017-06-23 Thread Pine W
Kerry, I think that I agree with you. A while back, my impression from
English Wikipedia arbitration pages was that there is a relatively small
number of users who stir up trouble repeatedly and are sometimes sanctioned
but rarely blocked. I don't want to speak for the Arbitration Committee,
and since ArbCom changes membership periodically I'm reluctant to criticize
current ArbCom members for decisions of the committee in prior years. My
impression is that over the years ArbCom has become more willing to
sanction administrators who use their admin tools in ways that ArbCom feels
are not okay, which I think is progress, but there's much more, beyond
dealing with problematic administrators, that ideally would be done to
address incivility, personal attacks, and harassment.

That brings me to Chris' email, and unfortunately I don't have answers for
most of his points. Differing interpretations and values are likely to be a
fact of life in the Wikiverse regardless of good intentions. I think that
some of us have more emotional armor than others, and some of us are more
willing than others to participate in uncomfortable or contentious
discussions. Similarly, people have a variety of emotional triggers that,
from my perspective, have little to do with reason and a lot to do with
other factors, some of which we probably don't control any more than we
control our autonomic reflexes. I don't think it's other people's
responsibility to try to delicately work around someone's reflexes (which
I would guess vary significantly from person to person and are often
unpredictable), but neither should one intentionally try to trigger someone
else, and people who overreact when triggered should apologize for doing so
(I can recall making such an apology myself on one occasion, and I think
I've gotten better over the years about handling myself in difficult
situations). Public discourse in the Wikiverse, in politics, and in any
number of other venues requires a certain amount of willingness to take
risks and to hear things that we might not want to hear and might find
offensive. In attempting to reduce the frequency and intensity of personal
attacks and harassment, I think that we need to be careful that we don't go
so far as to say that people "have a right not to be offended", since
others' beliefs and statements are very likely to seem different or strange
or alienating from time to time. However, I also hope that we can reduce
some of the more aggressive behavior which, I think there is consensus,
serves no purpose that is compatible with -- or at least not opposed to --
Wikimedia's goals.

That brings me back to the training of the AI, and what it will be flagging
for admins to review. I recall getting the impression from Maggie's
presentation at a metrics meeting that the AI was catching some edits that
come across to me as very likely to meet the ENWP definition of a personal
attack, and I think that having an AI that could help admins might indeed
be useful. However, there's another dimension to this problem which we
haven't addressed, which is the limited human resource capacity of the
admin corps, and the limited number of individuals who are willing to spend
their free time policing Wikimedia and dealing with controversial or even
dangerous situations. So I think that the AI, and attempts to detoxify
Wikimedia, if designed well, can indeed be good -- but I can't help but
wonder whether they will be insufficient unless the capacity of the admin
corps is also increased, with skilled and selfless administrators, in
proportion to the need, and I'm not sure what the solution to that problem
will be. Human resources are a constraint throughout the Wikiverse, and I
think that they may be a problem for detoxification efforts as well.

Chris, returning to your point about emotional literacy: I don't know how
to address that systemically, although perhaps training might be
beneficial. I get the impression that in the western world, police officers
and military personnel (who seem to be disproportionately male, although
perhaps slightly less so than Wikipedia's population) are increasingly
trained in emotional resilience, communications, and other psychological
issues. Perhaps training is something that we could think about doing on a
large scale, although that would be complicated. WMF has already started
some limited training for functionaries, and I think that expanding
training might indeed be useful. Training probably won't be a cure, but it
might help to move the needle a bit. I would encourage WMF to consider
doing research into what kind of training might be beneficial for
Wikimedia's social environment, and how best to deliver that training, on a
large scale.

Pine


Re: [Wiki-research-l] [Analytics] Wikipedia Detox: Scaling up our understanding of harassment on Wikipedia

2017-06-22 Thread Chris Koerner
>but that
doesn't necessarily mean that we should use policy and admin tools instead
of persuasion and other tools (such as content policies about verifiability
and notability) to address them
...

>I had an
experience myself when I made a statement to someone which from my
perspective was a statement of fact, and the other party took it as an
insult. I don't apologize for what I said since from my perspective it was
valid, and the other party has not apologized for their reaction, but the
point is that defining what constitutes a personal attack or harassment can
be a very subjective business and I'm not sure to what extent I would trust
an AI to evaluate what constitutes a personal attack or harassment in a
wide range of contexts.

Hey Pine,
A little persuasive rhetoric from a friend here. :)

I do agree with you that talking about these things with one another is
probably more fruitful than Yet Another Policy. So how do we make space for
that? How do we encourage open, honest, emotionally available discussions
around what can be very hard conversations? Talking about feelings is still
very difficult in many cultures. Even here in the Midwest of the United
States, guys talking about how they feel is still seen as effeminate by many.
Unfortunately.

How can we elevate the awareness that, despite our intent, we can sometimes
insult people? Knowing how to discuss feelings, and being comfortable doing
so, may help greatly with both the perception and the actuality of harassment
on our projects.

I'm thinking of the example you give in the context of my own experiences
working with folks in the movement. It's important to talk about these things
and try to
figure out the nuance in our behaviors and how folks reading our often
public discourse can get an impression of us that isn't representative of
our individual selves or the movement as a whole.

Semi-related, I just read this interesting article about how to apologize.
I'm not trying to admonish you here! It just seemed relevant. :) How can we
build a toolkit of awareness for emotionally-connected responses like what
is expressed in this article around apologizing?

http://nymag.com/scienceofus/2017/06/these-apology-critics-want-to-teach-you-how-to-say-sorry.html


Yours,
Chris Koerner
Community Liaison - Discovery
Wikimedia Foundation


Re: [Wiki-research-l] [Analytics] Wikipedia Detox: Scaling up our understanding of harassment on Wikipedia

2017-06-22 Thread Kerry Raymond
I agree you can probably never pin down these terms to everyone's satisfaction.
But, at the end of the day, is the real issue here the definition of
harassment, or is it the issue of people leaving Wikipedia because of
unpleasant interactions with other people, or perhaps retaliating in some
inappropriate way? Harassment may not even be occurring on a Talk page. If
someone stalks you on-wiki and reverts each of your edits, you are probably
being harassed without a word being said on Talk.

This is the problem. Two people can see the same set of events or the same 
commentary from very different points of view. The question of "harassment" 
isn't completely decidable in the real world for the same reasons. But if we 
train the algorithms based on human assessments (provided that a wide range of 
people were making those assessments), we do have something useful to work with 
to begin to test hypotheses in the lab before taking real-world action.
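
(To make that concrete, here is a minimal sketch of the kind of classifier
that could be trained on those human assessments. The file name and columns
are my own assumptions about how aggregated labels might be stored; the
character n-gram plus logistic regression setup is a common baseline for this
sort of task, not necessarily the exact pipeline the Detox team used.)

# Minimal sketch: train a text classifier on human-labelled comments.
# Assumes a hypothetical labels.csv with columns "comment" and "attack" (0/1),
# where "attack" aggregates the judgements of several annotators.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

data = pd.read_csv("labels.csv")
train, test = train_test_split(data, test_size=0.2, random_state=0)

# Character n-grams are robust to misspellings and obfuscated insults.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5), max_features=50000)
X_train = vectorizer.fit_transform(train["comment"])
X_test = vectorizer.transform(test["comment"])

model = LogisticRegression(max_iter=1000)
model.fit(X_train, train["attack"])

print("ROC AUC:", roc_auc_score(test["attack"], model.predict_proba(X_test)[:, 1]))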

For example, I find it very interesting that a small group of experienced users
appear responsible for a lot of apparently obvious personal attacks. It does
indeed suggest that these people think themselves unstoppable, whether that is
because they believe themselves "unblockable" or perhaps because they feel safe
in the knowledge that their less-experienced victim is unlikely to know how to
complain. Or perhaps they are just bantering among themselves, like a bunch of
mates at the pub? But it certainly seems to suggest that there is a way to
start identifying potential problem users for a human-based investigation.

But does the "community" really care enough about harassment to investigate
them? Would it really take action against experienced users who engaged in
harassment? Past events suggest not.

Kerry

-Original Message-
From: Wiki-research-l [mailto:wiki-research-l-boun...@lists.wikimedia.org] On 
Behalf Of Pine W
Sent: Thursday, 22 June 2017 10:04 AM
To: A mailing list for the Analytics Team at WMF and everybody who has an 
interest in Wikipedia and analytics. <analyt...@lists.wikimedia.org>; Wiki 
Research-l <wiki-research-l@lists.wikimedia.org>
Subject: Re: [Wiki-research-l] [Analytics] Wikipedia Detox: Scaling up our 
understanding of harassment on Wikipedia

I'm glad that work on detecting and addressing harassment is moving forward.

At the same time, I'd appreciate getting a more precise understanding of how 
WMF is defining the word "harassment". There are legal definitions and 
dictionary definitions, but I don't think that there is One Definition to Rule 
Them All. I'm hoping that WMF will be careful to distinguish debate and freedom 
to express opinions from harassment; we may disagree with minority or fringe 
views (even views that are offensive to some) but that doesn't necessarily mean 
that we should use policy and admin tools instead of persuasion and other tools 
(such as content policies about verifiability and notability) to address them 
(and in some cases Wikipedia may not be a good place for these discussions). 
Other distinctions include (1) the distinction between a personal attack and 
harassment ( 
https://blog.wikimedia.org/2017/02/07/scaling-understanding-of-harassment/
appears to have conflated the two, while English Wikipedia policy 
distinguishes between them), and (2) the distinction between a personal 
attack and an evidence-based critique.

Also note that definitions of what constitutes an attack may vary between 
languages; for example an expression which sounds insulting to someone in one 
place, culture, or language may mean something very different or relatively 
benign in a different place, culture, or language. I had an experience myself 
when I made a statement to someone which from my perspective was a statement of 
fact, and the other party took it as an insult. I don't apologize for what I 
said since from my perspective it was valid, and the other party has not 
apologized for their reaction, but the point is that defining what constitutes 
a personal attack or harassment can be a very subjective business and I'm not 
sure to what extent I would trust an AI to evaluate what constitutes a personal 
attack or harassment in a wide range of contexts. I get the impression that WMF 
intends to flag potentially problematic edits for admins to review, which I 
think could be a good thing, but I hope that there is great care being invested 
in how the AI is being trained to define personal attacks and harassment, and I 
wouldn't necessarily want admins to be encouraged to substitute the opinion of 
an AI for their own.
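
(The sort of workflow I would be more comfortable with is one where the model
only queues comments for human eyes and never acts on its own. A rough sketch
follows; the scoring function, thresholds, and review queue are placeholders I
am assuming for illustration, not anything WMF has described.)

# Sketch of a "flag for human review, never act automatically" triage step.
# score_comment is a placeholder for whatever model is used; the thresholds
# and the review queue are assumptions for illustration only.
REVIEW_THRESHOLD = 0.7   # above this, ask an admin to take a look
IGNORE_THRESHOLD = 0.3   # below this, do nothing

def triage(comment_text, score_comment, review_queue):
    score = score_comment(comment_text)
    if score >= REVIEW_THRESHOLD:
        # The model never blocks or removes anything itself; it only surfaces
        # the comment, with its score, for a human admin to judge in context.
        review_queue.append({"comment": comment_text, "score": score})
    elif score > IGNORE_THRESHOLD:
        # Uncertain middle band: could be sampled for additional human labelling.
        pass
    return score

# Toy usage with a stand-in scorer:
queue = []
triage("example comment text", lambda text: 0.9, queue)
print(queue)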

I understand the desire to tone down some of the more heated discourse around 
Wikipedia for the sake of improving our user population statistics, and at the 
same time I'm hoping that we can continue to have very strong support for 
freedom of expression and differences of opinion. This is a difficult balancing 
act. I think t

Re: [Wiki-research-l] [Analytics] Wikipedia Detox: Scaling up our understanding of harassment on Wikipedia

2017-06-21 Thread Leila Zia
Hi Dan,

Thanks for your note. :)

On the Research end, Dario is still a big supporter of the efforts
around research to help us better understand harassment (as you
noticed in our commitments to the annual plan), and with Ellery's
departure, I've been helping him a bit to make sure we can move
forward on this front. More specifically, while we're continuing
the research with Nithum and Lucas, who were Ellery's collaborators on
the Detox project, we recently initiated
https://meta.wikimedia.org/wiki/Research:Study_of_harassment_and_its_impact
with Cristian and Yiqing from Cornell University. We are very excited
about this new collaboration, as Cristian has years of experience in
spaces that are very relevant to the socio-technical problems related
to harassment. I think you will enjoy reading that page, which signals
the early directions of the research.

The whole harassment research team meets every two weeks; if you're
curious about what's going on on this front on our end and you want to
listen in, please ping me. And thank you for the offer to help. We
may take you up on that. :)

Best,
Leila

--
Leila Zia
Senior Research Scientist
Wikimedia Foundation


On Wed, Jun 21, 2017 at 7:55 PM, Toby Negrin  wrote:
> Hi Dan -- we are actually in touch with Detox as part of the Community
> Health initiative. They are doing their first quarterly check in this
> quarter so expect some updates then. Ping me offlist if you want more info.
>
> -Toby
>
> On Wed, Jun 21, 2017 at 10:48 AM, Dan Andreescu 
> wrote:
>>
>> I'm reflecting on this work and how awesome it was.  I see that it's
>> continued in our annual plan under the Community Health Initiative, but I
>> am afraid it's taking a secondary role without Ellery and others to drive
>> it.  On
>> https://meta.wikimedia.org/wiki/Community_health_initiative/AbuseFilter
>> it's only featured as a question under the #Functionality section.
>>
>> I just wanted to point this out and offer to help if I can be of use.
>>
>> On Tue, Feb 7, 2017 at 5:16 PM, Ellery Wulczyn 
>> wrote:
>>
>> > Today we are announcing
>> >
>> > 
>> > the
>> > first results of the collaboration between Wikimedia Research and Jigsaw
>> > on
>> > modeling personal attacks and other forms of harassment on English
>> > Wikipedia. We have released
>> >  a corpus of 95M
>> > user
>> > and article talk page comments as well as over 1M human labels produced
>> > by
>> > 4000 crowd-workers for a set of 100k comments. Documentation on our
>> > methodology and future work can be found in our paper Ex Machina:
>> > Personal Attacks Seen at Scale  (to
>> > appear at WWW2017) and on our project page on meta
>> > . If you are interested
>> > in contributing to the project, please get in touch via the project talk
>> > page . Another
>> > great
>> > way to get involved is to label a set of comments in the Wikilabels
>> > discussion quality campaign .


Re: [Wiki-research-l] [Analytics] Wikipedia Detox: Scaling up our understanding of harassment on Wikipedia

2017-06-21 Thread Dan Andreescu
I'm reflecting on this work and how awesome it was.  I see that it's
continued in our annual plan under the Community Health Initiative, but I
am afraid it's taking a secondary role without Ellery and others to drive
it.  On
https://meta.wikimedia.org/wiki/Community_health_initiative/AbuseFilter
it's only featured as a question under the #Functionality section.

I just wanted to point this out and offer to help if I can be of use.

On Tue, Feb 7, 2017 at 5:16 PM, Ellery Wulczyn 
wrote:

> Today we are announcing
>  
> the
> first results of the collaboration between Wikimedia Research and Jigsaw on
> modeling personal attacks and other forms of harassment on English
> Wikipedia. We have released
>  a corpus of 95M user
> and article talk page comments as well as over 1M human labels produced by
> 4000 crowd-workers for a set of 100k comments. Documentation on our
> methodology and future work can be found in our paper Ex Machina:
> Personal Attacks Seen at Scale  (to
> appear at WWW2017) and on our project page on meta
> . If you are interested
> in contributing to the project, please get in touch via the project talk
> page . Another great
> way to get involved is to label a set of comments in the Wikilabels
> discussion quality campaign .