Re: [Foundation-l] [Wiki-research-l] Motivations to Contribute to Wikipedia

2012-03-19 Thread Dario Taraborelli
James,

I think I have replied consistently to your requests, both on wiki and by mail. 
This is the de facto standard procedure introduced with the creation of the 
RCom, pending a formal (as in voted) policy, and the expectation is that 
whoever runs a survey or subject recruitment campaign complies with it. I 
appreciate that it involves a bit of bureaucracy, but it's the best solution we 
can offer: it helps the community understand who is running a study and why, 
and it helps the researcher/investigator meet some basic requirements.

Dario 

On Mar 19, 2012, at 10:06 AM, James Salsman wrote:

 Lane,
 
 Thanks for your message:
 
 James: I made the edit stating the research should get approval,
 and I did that by jumping into the game and just making the edit
 based on what I read in discussion boards. I did not consider it
 to be a new requirement
 
 For the benefit of those who haven't clicked on the link, you edited
 [[meta:Research:Subject recruitment]] to read, at the top:
 
 If you are doing research which involves contacting Wikimedia project
 editors or users then you must first notify the Wikimedia Research
 Committee by describing your project. After your project gets approval
 then you may begin.
 
 How could that not be seen as a requirement?  Do you think there is a
 way to phrase it so that it would not be seen as a requirement?
 
 Certainly this is not your fault.  As you read, Dario Taraborelli
 stated on February 15, "this is a policy that we're enforcing ...
 approval is required":
 http://meta.wikimedia.org/w/index.php?title=Research_talk%3AFAQ&diff=3441309&oldid=3440848
 
 And after you made that edit, Dario thanked you for it, saying, "I
 appreciate the documentation on the review procedure" -- even though
 the Research Committee had explicitly rejected an approval policy
 requirement in September 2010, has not discussed it since, and neither
 the community nor the Foundation has ever endorsed any of the earlier
 policy proposals.
 
 I would not be so upset about this if I hadn't been repeatedly accused
 of misconduct in failing to obtain RCom approval.
 
 Given the ease and lack of remorse with which Dr. Taraborelli, Mr.
 Walling, and Mr. Beaudette have all repeatedly lied about me while
 accusing me of misconduct, I have lost all confidence in the ability
 of Foundation staff to adhere to basic ethics. I intend to continue to
 raise this issue until it is addressed sufficiently.
 
 Sincerely,
 James Salsman
 
 ___
 Wiki-research-l mailing list
 wiki-researc...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wiki-research-l


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] [Wiki-research-l] Motivations to Contribute to Wikipedia

2012-03-19 Thread Dario Taraborelli
Hi Lane,

your proposed workflow is a good description of how I would like the SR 
procedure to function in an ideal world. I am not myself at the forefront of SR 
discussions, but I'd definitely like to see a more streamlined process and a 
better way of signaling to participants which projects are flagged as reviewed 
and which aren't. Part of the discussion we had during the last RCom meeting 
was focused precisely on this issue [1].

If you want to contribute to the SR discussion, I strongly recommend you post 
your proposal on this page [2] so it can be seen and discussed by others. It 
would also probably make sense to move the entire SR discussion to a dedicated 
list, as I suspect many wiki-research-l subscribers are not interested in 
following this thread. I'll also forward this to the RCom members who have been 
involved in SR, as they will be able to make a better judgment on these matters 
than I can.


Dario

[1] http://etherpad.wikimedia.org/RComDec2011
[2] 
http://meta.wikimedia.org/wiki/Research_talk:Committee/Areas_of_interest/Subject_recruitment


 Is such a flagging system already in place? If not, shall we start one?
 
 This is what I imagine we have consensus to do -- is this how it is 
 supposed to work?
 
 1. A researcher jumps on Wikipedia unannounced and starts recruiting for surveys.
 2. Some Wikipedian tells the researcher to submit their project for review.
 3. The researcher goes to a landing page and completes a form for their proposal.
 4. The proposal is posted publicly.
 5. Any volunteer can check the proposal to see if all fields are completed.
 6. Volunteers tag the form as complete or incomplete -- no quality review.
 7. Completed forms eventually get reviewed by RCom, according to criteria which 
 are currently undefined.
 8. Approved projects get a template to put on their project page.
 9. Researchers must show their research page to all research recruitment 
 candidates, who would be able to see the completed form, the flagging by a 
 volunteer, and the approval by RCom. The approval template would also link to 
 more information about research on Wikipedia.
 10. Research subjects would only be able to agree to participate in research by 
 following instructions at the bottom of the research description form, so 
 they would see default notices like "unflagged" or "unreviewed" if no one has 
 checked it.

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] [Wikidata-l] introduction (community communications for Wikidata)

2012-03-09 Thread Dario Taraborelli
Welcome Lydia, it's an exciting time to work with WMDE, best of luck in your 
new role.

Dario

On Mar 9, 2012, at 9:13 AM, Lydia Pintscher wrote:

 Hi everyone!
 
 I wanted to take a moment to introduce myself. I'm Lydia and just
 started working for Wikimedia Germany. Some of you might know me from
 my work in Free Software projects.
 
 I'll be a part of the team working on Wikidata
 (http://meta.wikimedia.org/wiki/Wikidata) - the goal of the project is
 to create something similar to Wikimedia Commons, but for data. It's a huge
 undertaking for the German and global community. Wikidata is a project
 I am passionate about and I am even more passionate about doing it
 right. Doing it right in this case obviously means making sure
 everyone's input is heard and taken into consideration. My
 responsibility will be exactly that - working with all of you to make
 it a successful project. A lot of things concerning how, when and
 where this will be used in Wikipedia are still up for discussion, and
 decisions need to be made by the community. I will be here to
 facilitate this.
 
 I assume many of you have not heard from me before so let me tell you
 a bit about myself. I studied computer science at the Karlsruhe
 Institute of Technology. There I worked on a program to plan
 robot-assisted laser surgeries on human skulls and wrote my diploma
 thesis on collaborative and transparent Free Software development. I'm
 passionate about enabling people to make awesome happen around Free
 Culture. I've spent most of my spare time in the last 7 years on
 community work in KDE (http://kde.org). This includes running its
 mentoring programs, co-founding its community working group, serving
 on the board of the non-profit behind it and generally making sure
 everything is running smoothly. I've also occasionally helped out other
 projects, like Kubuntu, VLC/VideoLAN and openSUSE, in that capacity.
 Not long ago I released a free book called Open Advice
 (http://open-advice.org) that is a collaborative effort to make it
 easier for people to start contributing to Free Software. You probably
 know 4 of the authors from around Wikipedia. When it comes to
 MediaWiki I have done developer engagement for Semantic MediaWiki Plus
 for the last two years and am on the steering committee of the
 non-profit behind Semantic MediaWiki. Due to my day only having 24
 hours (even if some people claim otherwise) I have not had a chance to
 get into contributing to Wikipedia. Thankfully that's going to change
 now. (As a very regular user: Thank you!)
 
 For the next days/weeks my focus will be:
 * collecting ideas/doubts/other input for Wikidata that you already
 have for me now (I'll work through any existing discussions I can find
 - if you want to make sure I see something please do send me a link.)
 * creating some resources to explain the project better
 * setting up some infrastructure to keep everyone updated on the
 status and able to contribute
 * collecting input in a structured manner and addressing it together
 
 If you have any questions please let me know. I'll be around on the
 English and German Wikipedia, IRC, XMPP, Skype or whatever else you
 prefer ( http://en.wikipedia.org/wiki/User:Lydia_Pintscher_(WMDE) ).
 You can subscribe to the Wikidata mailing list at
 https://lists.wikimedia.org/mailman/listinfo/wikidata-l and join the
 IRC channel #wikimedia-wikidata on freenode.
 
 
 Cheers
 Lydia, who is really looking forward to working with you
 
 -- 
 Lydia Pintscher - http://about.me/lydia.pintscher
 Community Communications for Wikidata
 
 Wikimedia Deutschland e.V.
 Eisenacher Straße 2
 10777 Berlin
 www.wikimedia.de
 
 Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
 
 Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
 unter der Nummer 23855 Nz. Als gemeinnützig anerkannt durch das
 Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.
 
 ___
 Wikidata-l mailing list
 wikidat...@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikidata-l


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Fw: Strike against the collection of personal data through edit links

2012-02-10 Thread Dario Taraborelli
I put together a short explanation of how clicktracking works, what data it 
stores and why we use it. I'll work with Oliver to make sure this is also 
captured in the AFT5 FAQ. Feel free to contact me off-list if you have specific 
questions that I haven't answered here.

Dario


* What is clicktracking?

Clicktracking is an extension developed by the Wikimedia Foundation during the 
Usability initiative [1]. It has been used since then to test a number of 
features or to run some small-scale usability experiments.

* How does it work?

The extension collects click-through data (e.g. it counts clicks on a call to 
action after posting article feedback) that is typically not stored in the 
database. An example of the data collected by this extension can be found here 
[2].
 
* Why do we use clicktracking in AFT?

We use clicktracking to measure aggregate click-through/completion rates as 
part of our analysis of AFT [3]. We randomly assign users to different 
buckets or experimental conditions (e.g. a specific AFT design) or to a 
control group. This allows us to measure how each condition performs relative 
to the others. For example, we want to know how a specific AFT design 
affects editing behavior, or how many people who see the AFT widget at a 
specific placement take a call to action. The two main reasons why we use the 
clicktracking extension for this purpose are (1) to capture bucket 
information, which is not stored in the database, and (2) to measure drop-off 
rates for specific funnels (e.g. how many users browse away after clicking on a 
button).

As such, the extension is used to count events for groups of users; it is not 
designed to track individuals, let alone store personally identifiable 
information. For example, it does NOT store user IDs or usernames for 
registered editors; instead it assigns and stores a randomly generated token 
for every user.
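
To make the bucketing concrete, here is a minimal Python sketch of the idea 
(the extension itself is a MediaWiki PHP/JavaScript extension; the names below 
are illustrative, not the actual implementation): a random token is generated 
per user, and the experimental condition is derived from that token, so no 
username or user ID ever enters the log.

    import hashlib
    import uuid

    BUCKETS = ["control", "aft_design_1", "aft_design_2"]  # hypothetical conditions

    def new_token():
        # A randomly generated token is stored instead of any username/user ID.
        return uuid.uuid4().hex

    def bucket_for(token):
        # Hash the token so the same user always lands in the same condition.
        digest = int(hashlib.sha1(token.encode()).hexdigest(), 16)
        return BUCKETS[digest % len(BUCKETS)]

    # Events are then logged only as (bucket, event name) counts -- aggregate
    # data with no personally identifiable information attached.
    token = new_token()
    print(token, bucket_for(token))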

* Why these ugly URLs when I click on a section link?

Clicktracking is usually implemented via JavaScript and session cookies, but in 
some cases it's easier to just pass a URL parameter when a form is submitted. 
We appreciate that the AFT5 implementation of clicktracking is not very elegant, 
and we will disable it as soon as we've collected the data needed for the 
analysis.
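
Purely as an illustration of the URL-parameter variant (the parameter and event 
names below are invented, not the actual AFT5 ones), the idea is simply to 
append a tracking token and an event name to the link:

    from urllib.parse import urlencode

    def tracked_url(base_url, token, event):
        # Append the event name and the random token as query parameters --
        # the low-tech alternative to JavaScript plus session cookies.
        params = urlencode({"clicktracking": event, "trackingtoken": token})
        return base_url + ("&" if "?" in base_url else "?") + params

    print(tracked_url(
        "http://en.wikipedia.org/w/index.php?title=Example&action=edit&section=1",
        "4fdc129a",           # randomly generated token
        "edit-section-link",  # hypothetical event name
    ))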

* What is the status of data collected via clicktracking? 

Data collected via the clicktracking extension is subject to the privacy policy 
[4] and as such is not publicly released, except in a fully anonymized or 
aggregated form.

[1] http://www.mediawiki.org/wiki/Extension:ClickTracking
[2] 
http://meta.wikimedia.org/wiki/Research:Article_feedback/Clicktracking#Log_format_specification
[3] http://meta.wikimedia.org/wiki/Research:Article_feedback/Data_and_metrics
[4] http://wikimediafoundation.org/wiki/Privacy_policy

On Feb 5, 2012, at 9:32 PM, Howie Fung wrote:

 We would be able to look at just the edit summaries, but that would only
 provide us with analysis on edits that were successfully completed.  By
 including the actual clicks in the tracking, we can do analysis on the
 edit/save ratio (% of total edit attempts that were successfully saved).
 
 Howie
 
 On Sat, Feb 4, 2012 at 6:09 PM, Brandon Harris bhar...@wikimedia.org wrote:
 
 
   I'm not sure why this couldn't be done if that were all that is
 being measured.  I suspect there are other behaviors being tracked.
 
   As I said, I'm not the person who knows most about this, so you
 have to take what I am saying with a grain of salt.
 
 
 
 On 2/4/12 5:21 PM, WereSpielChequers wrote:
 
 Hi Brandon, thanks for the explanation, but wouldn't it be easier to just
 analyse edit summaries? If you edit by section, the edit summary defaults
 to start with the section heading...
 
 Were SpielChequers
 
 Message: 7
 Date: Sat, 04 Feb 2012 14:51:49 -0800
 From: Brandon Harris bhar...@wikimedia.org
 To: foundation-l@lists.wikimedia.org
 Subject: Re: [Foundation-l] Fw: Strike against the collection of
   personal data through edit links
 Message-ID: 4f2db685.70...@wikimedia.org
 Content-Type: text/plain; charset=ISO-8859-1; format=flowed
 
 
   (This may not be 100% accurate; the person who knows most about this
 is on vacation, but I'll try to explain to the best of my understanding.)
 
   Those weird URLs are part of a clicktracking process.  It's a test to
 see how people go about editing the page *most often* (by section, or by
 edit tab) and further to see how effective various calls-to-action (such
 as those given by Article Feedback) are.
 
   The longevity of the data isn't something I can comment on, but I'd be
 surprised if it lasted even 3 months.  I do not know if there are
 identity markers connected to them but I wouldn't be surprised.
 
   To that end, the data is only useful in roll-ups, and wouldn't be
 something published anywhere except in aggregate.
 
 
 
 On 2/4/12 2:27 PM, Philippe Beaudette 

Re: [Foundation-l] Regarding Berkman/Sciences Po study

2011-12-11 Thread Dario Taraborelli
Kim,

how about we stop naming and shaming and start thinking about how to solve the 
problems instead? Let's sit down and discuss how to fix the various issues that 
have been raised on the lists, obtain community feedback, and allow the 
researchers to resume collecting the responses they need to complete this 
study -- which, as I understand it, you are not substantially objecting to.

I appreciate your contribution on the talk page of the project and I am happy 
to host a conference call with Jerome some time this week if you wish to help 
us out.

Dario

On Dec 11, 2011, at 11:48 AM, Kim Bruning wrote:

 On Mon, Dec 12, 2011 at 12:27:34AM +0400, Yaroslav M. Blanter wrote:
 I will do it right now, but to make it clear, we have 2 (TWO: twee, zwei,
 deux, dos ...) RCOM members in total who are involved: Dario and Mayo. I do
 not think anybody else would be able to answer any questions. 
 And the last contribution of Mayo on en.wp, from what I see, is from June.
 So I guess it would be difficult to have two RCom members answering
 questions.  
 
 That is most unfortunate.
 
 Btw trolling on mailing lists is also not just a bad idea, it
 goes against a basic policy for beginners.
 
 With the greatest possible respect; I would suggest that the research
 committee does not have the kind of standing required to accuse others of
 disruptive behaviour, at this point in time.
 
 sincerely,
   Kim Bruning
 
 
 
 
 -- 
 
 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


[Foundation-l] Regarding Berkman/Sciences Po study

2011-12-09 Thread Dario Taraborelli
 with a month notice for this reason. 

• Is this campaign running at 100% on the English Wikipedia?
No, the banner has been designed to target a subsample of the English Wikipedia 
registered editor population. Based on estimates by the research team, the 
eligibility criteria apply to about 10,000 very active contributors and about 
30,000 new editors of the English Wikipedia. The target number of completed 
responses is 1500.
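
As a back-of-the-envelope check of these figures (assuming, purely for 
illustration, that every eligible editor sees the banner once), the campaign 
needs roughly a 4% completion rate:

    eligible = 10_000 + 30_000  # very active contributors + new editors
    target = 1_500              # completed responses sought
    print(f"required completion rate: {target / eligible:.1%}")  # -> 3.8%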

• Why does the banner include logos of organizations not affiliated with 
Wikimedia?
The design of the banner was based on the decision to give participants as much 
information as possible about the research team running the project and to set 
accurate expectations about the study.


==What we are doing now==

We realize that, despite an extensive review, the launch of this project was not 
fully advertised on community forums. We plan to resume the campaign shortly 
(for the time needed by the researchers to complete their data collection) after 
a full redesign of the recruitment protocol, in order to address the concerns 
raised by many of you over the last 24 hours. Here’s what we are doing:

• Provide you with better information about the project
We asked the research team to promptly set up a FAQ section on the project page 
on Meta [13], and to be available to address any concerns about the study on the 
discussion page of the project. The project page on Meta will be linked from 
the recruitment banner itself.

• Redesign the banner
We understand that the banner design has been interpreted by some as ad-like 
(even though the goal was to make clear that this study was not being run by 
WMF, since it implied a redirection to a third-party website for performing the 
experiment). In coordination with the research team, we will come up with a 
banner design that better addresses the concerns expressed by the community 
(for instance by removing the logos from the banner).

• Make privacy terms as transparent as possible
Upon clicking on the banner, participants agree to share their username, edit 
count and user privileges with the research team. The previous version didn’t 
make this explicit, and we are working to address the problem. To make the 
process fully transparent, we will make the acceptance of these terms explicit 
in the banner itself.

Once redirected to the landing page, participants will have to accept the terms 
of participation in order to enter the study. The project is funded by the 
European Research Council: the data collected in this study is subject to 
strict European privacy protocols. The research team will use this data for 
research purposes only. The research team is not exposed to and does not record 
participants’ IP addresses. 

==How you can help==

We would like to hear from you on the redesign of the banner to make sure it 
meets the expectations of the community and doesn’t lend itself to any kind of 
confusion. We will post the new banners to Meta and try to address all pending 
questions before we resume the campaign.

This is one of the first times we’re supporting a complex, important research 
initiative like this one, and I apologize for the bumps in the road. We believe 
that supporting research is part of our mission: it helps advance our 
understanding of ourselves. So thanks again for all the support you can give in 
making this a success.


Dario Taraborelli
Senior Research Analyst, Wikimedia Foundation

[1] http://blog.wikimedia.org/2011/12/08/experiment-decision-making/ 
[2] 
http://en.wikipedia.org/wiki/Wikipedia:Administrators%27_noticeboard/Incidents#Harvard.2FScience_Po_Adverts
[3] 
http://en.wikipedia.org/wiki/Wikipedia:Village_pump_%28technical%29#Search_banner_Wikipedia_Research_Committee
[4] http://lists.wikimedia.org/pipermail/foundation-l/2011-December/070742.html
[5] 
https://lists.wikimedia.org/mailman/private/internal-l/2011-December/018842.html
[6] 
http://en.wikipedia.org/wiki/Wikipedia:Administrators%27_noticeboard/Archive222#Researchers_requesting_administrators.E2.80.99_advices_to_launch_a_study
[7] 
http://meta.wikimedia.org/wiki/Research_talk:Dynamics_of_Online_Interactions_and_Behavior#RCom_review
[8] http://lists.wikimedia.org/pipermail/foundation-l/2011-May/065580.html
[9] http://lists.wikimedia.org/pipermail/foundation-l/2011-May/065558.html
[10] http://meta.wikimedia.org/wiki/CentralNotice_banner_guidelines
[11] 
http://meta.wikimedia.org/w/index.php?title=CentralNotice/Calendaroldid=3056067
[12] http://meta.wikimedia.org/wiki/Research:Subject_recruitment
[13] 
http://meta.wikimedia.org/wiki/Research:Dynamics_of_Online_Interactions_and_Behavior

___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Office Hours on the article feedback tool

2011-10-26 Thread Dario Taraborelli
The AFT v.4 data is documented here: 
http://www.mediawiki.org/wiki/Article_feedback/Data

Dario

On Oct 26, 2011, at 5:53 AM, Tom Morris wrote:

 On Wed, Oct 26, 2011 at 11:09, David Gerard dger...@gmail.com wrote:
 *slaps own forehead*
 
 So is the data to be thrown away too?
 
 (Is there anywhere to look up the data en masse?)
 
 
 It's all on the Toolserver and should be in the dumps too.
 
 If you have any specific requirements for retrieving certain subsets
 of the data, do ask and someone with Toolserver access can run queries
 against the data and provide the results.
 
 -- 
 Tom Morris
 http://tommorris.org/
 
 ___
 foundation-l mailing list
 foundation-l@lists.wikimedia.org
 Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Office Hours on the article feedback tool

2011-10-26 Thread Dario Taraborelli
Hi WereSpielChequers,

I worked on the data analysis for previous AFT versions, and I believe I've 
already answered your questions on a number of occasions as to what we could 
and couldn't test in the previous phase, but I am happy to do so here and 
clarify what the research plans for the next version are.

Subjective ratings

We have definitely seen a lot of love/hate rating happen in the case of popular 
articles (e.g. Lady Gaga, Justin Bieber). Teasing apart ratings of the quality 
of an article from rater attitudes towards its topic is pretty hard, given that 
the average enwiki article receives a very small number of ratings per day and 
that articles which do get a sufficient number of ratings tend to attract 
particularly opinionated or polarized visitors.

To give you a measure of the problem: of the 3.7M articles in the main 
namespace of the English Wikipedia, only 40 articles (0.001%) obtain 10 or more 
ratings per day. The vast majority of articles don't get any ratings for days, 
weeks, or ever. Finding ways to increase the volume of ratings per article is 
one of the issues we're discussing in the context of v.5.

The second problem is that we don't have enough observations of multiple 
ratings by the same user. Only 0.02% of unique raters rate more than one 
article, which means that on a single-article basis we cannot easily filter 
out users who only rated a topic they love or hate and still have enough good 
data to process. This is unfortunate: the more rating data we can get per 
rater, the better we can identify gaming or rating biases and control for them 
in public article feedback reports.
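
Figures like these are straightforward to reproduce from a ratings log; here is 
a pandas sketch, with assumed column names rather than the actual AFT schema:

    import pandas as pd

    # Assumed columns: article_id, rater_token, timestamp (one row per rating).
    ratings = pd.read_csv("aft_ratings.csv", parse_dates=["timestamp"])

    days = max((ratings.timestamp.max() - ratings.timestamp.min()).days, 1)
    per_article = ratings.groupby("article_id").size() / days
    # Only articles with at least one rating appear in the log; divide the
    # count by the full article total (3.7M) to match the 0.001% figure.
    print("articles with >= 10 ratings/day:", int((per_article >= 10).sum()))

    per_rater = ratings.groupby("rater_token")["article_id"].nunique()
    print(f"raters rating more than one article: {(per_rater > 1).mean():.2%}")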

Effects of AFT on participation

I ran a number of pre/post analyses comparing editing activity before and after 
AFT was activated on a random sample of English Wikipedia articles, controlling 
for page views before and after the activation, and found no statistically 
significant difference in the volume of edits. As I noted elsewhere, the 
comparison between two random samples of articles is problematic because we 
cannot easily control for the multiple factors that affect editing activity in 
independent samples of articles, so any result obtained from such a coarse 
analysis would be questionable. I agree that this is a very important issue, 
and the proper way to address it is by a/b testing different AFT interfaces 
(including no AFT widget whatsoever) for the same article and measuring the 
effects on edit activity for the same articles across different user groups. 
This is one of the plans we are considering for v.5.
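
For what it's worth, the per-article a/b comparison could look something like 
the sketch below (illustrative only; a rank test is one reasonable choice given 
how skewed per-article edit counts are):

    import pandas as pd
    from scipy.stats import mannwhitneyu

    # Assumed columns: article_id, bucket ("aft" or "no_widget"), edits --
    # edit activity on the same articles under different user groups.
    df = pd.read_csv("aft_ab_edits.csv")

    treated = df.loc[df.bucket == "aft", "edits"]
    control = df.loc[df.bucket == "no_widget", "edits"]

    stat, p = mannwhitneyu(treated, control, alternative="two-sided")
    print(f"U = {stat:.0f}, p = {p:.4f}")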

Another important limitation of AFT v.4 is that we only collected aggregate 
event counts for calls to action, and we didn't mark edits or new accounts 
created via AFT, which means that we couldn't directly study the effects of AFT 
as an on-ramping tool for new editors (e.g. how many readers it converts to 
registered users and what the quality of edits generated via AFT is). How many 
users who create an account via an AFT call to action actually end up becoming 
editors? What is their survival rate compared to users who create an account in 
the standard way? How many of the edits created via AFT are vandalism? How many 
are good-faith tests that get reverted? These are all questions that we will be 
addressing as of v.5.
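
Once edits and accounts are marked as AFT-originated, the funnel questions 
reduce to simple conditional counts; a sketch with hypothetical event names:

    import pandas as pd

    # Assumed columns: token, event. Event names here are hypothetical.
    events = pd.read_csv("aft_funnel_events.csv")

    def tokens(name):
        return set(events.loc[events.event == name, "token"])

    shown = tokens("cta_shown")
    created = tokens("account_created") & shown
    edited = tokens("edit_saved") & created

    print(f"CTA shown -> account created: {len(created) / len(shown):.2%}")
    print(f"account created -> first edit saved: {len(edited) / len(created):.2%}")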

We'll still be working on analyzing the current AFT data to support the design 
of v.5. In particular, we will be focusing on (1) correlations between 
consistently low ratings and poor quality, vandalism, or the likelihood of an 
article being nominated for deletion, and (2) the relation between ratings and 
changes in other quality-related metrics on a per-article basis.

I have also pitched the existing data to a number of external researchers 
interested in article quality measurements and/or rating systems, and I invite 
you to do the same.

Hope this helps. I look forward to a more in-depth discussion during the office 
hours.

Dario

On Oct 26, 2011, at 7:33 AM, WereSpielChequers wrote:

 --
 
 Message: 6
 Date: Wed, 26 Oct 2011 11:11:57 +0100
 From: Oliver Keyes scire.fac...@gmail.com
 Subject: Re: [Foundation-l] Office Hours on the article feedback tool
 To: Wikimedia Foundation Mailing List
   foundation-l@lists.wikimedia.org
 Message-ID:
   capyupwa34cujyan_vv_chgyxwfct3ejnb4d-nrav_u20qej...@mail.gmail.com
 
 Content-Type: text/plain; charset=ISO-8859-1
 
 No, the data will remain; you can find it at
 http://toolserver.org/~catrope/articlefeedback/ (we really need to
 advertise that more widely, actually).
 
 To be clear, we're not talking about junking the idea; we will still
 have an Article Feedback Tool that lets readers provide feedback to
 editors. The goal is more to move away from a subjective rating system,
 and towards something the editors can look at and go "huh, that's a
 reasonable suggestion as to how to fix the article, I'll go do that" or
 "aw, that's
 

Re: [Foundation-l] Editor Survey, 2011

2011-03-11 Thread Dario Taraborelli
Nikola, Amir,

let me answer your points, as I am one of the people behind the "expert
barriers" survey. The design, with two blocks of questions with different
framings, is intentional and is based on the results of a long pilot that we
ran for one month (Dec 2010-Jan 2011) prior to the official launch. If you
check what is asked at the top of each block, you'll see that we are
expecting participants to answer different types of questions (A: the
perception of factors affecting WP participation among one's peers; B: one's
individual agreement/disagreement with these statements about WP
participation; C: the relation between one's agreement/disagreement and
one's motivation to contribute). This is designed to allow you, in principle,
to give 3 different answers to A, B and C, and that's precisely what we want
to test for. The design is in no way meant to ask the same question twice just
for the sake of it, or because we assume respondents are lazy or inaccurate,
but to help us turn anecdotes into data we can actually study. I am sorry to
hear this didn't work for you and others; the vast majority of respondents
seem to have correctly understood the task, and we have had quite an amazing
response rate so far from experts, scholars and research students from a
broad range of disciplines. We also have a surprising gender and age balance
among participants, and respondents are almost perfectly split into two
groups: people with previous experience as Wikipedia contributors and
people who have never edited a single page.

Those of you interested in following the developments and the early results
of this study should keep an eye on this page:
http://meta.wikimedia.org/wiki/Research_Committee/Areas_of_interest/Expert_involvement/2011_survey
or get in touch for feedback or any other issue related to the survey at:
expert_barri...@nitens.org

Thanks,
Dario

On Fri, Mar 11, 2011 at 10:05 AM, Nikola Smolenski smole...@eunet.rs wrote:

 On 03/11/2011 10:52 AM, Amir E. Aharoni wrote:
  I noticed the "Take a WMF-sponsored survey on barriers to expert
  participation in Wikipedia." banner on the top of English Wiktionary
  the other day. I clicked it and answered a whole page of questions
  that were interesting and relevant. And the next page presented the
  same bunch of questions again, somewhat rephrased. I hate it when that
  happens and i immediately closed the survey; my answers to the

 This is sometimes done so that if someone is not answering the form
 seriously, their answers to similar questions will differ, and so they
 can be disregarded. But yeah, experts are probably not going to answer
 the form frivolously.

  relevant questions on the first page probably went to the drain.

 You could've just clicked 'Next' to the end.


___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l


Re: [Foundation-l] Editor Survey, 2011

2011-03-11 Thread Dario Taraborelli
 The simple answer: Maybe, but how could i know that?
 
 The smartass answer: Maybe, but how could i know that after clicking
 'Next' i wouldn't be presented with a stupid JavaScript error message,
 punishing me for clicking 'Next' before filling the required fields?

On the front page you can read, in a prominent box:

"Please note that you can skip any question (or select 'No answer') that you do 
not wish to answer or that you think does not apply."

There is not a single required field in this survey.

Dario
___
foundation-l mailing list
foundation-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l