Re: [Wikimedia-l] Evaluation by WMF of Wiki Loves Monuments is failing to understand the community

2015-05-07 Thread Edward Galvez
Thanks, Lodewijk, for answering my questions. I don't find your feedback
moot; it's actually quite helpful. From what you're saying, it sounds like
opening up feedback to those who reported data would help to solidify the
content of the report before pushing the announcement publicly. We have 8
more program reports to publish and I'm starting to think of ways we might
include a window for this kind of feedback, but I would need to check with
the rest of the team and our timelines to know what is feasible. We have
been extra busy these last few months.

Also, to clarify: we don't assume that all Wiki Loves Monuments contests
are the same. The metrics we collect are fairly broad, not exhaustive, so
that we can first know the collective impact of the program and then dig
deeper and learn about the contests in greater detail
afterward. In the coming months, we will be doing one-on-one interviews
with organizers to surface the processes and goals of several photo
contests, and learn what works and what doesn't in different contexts. This
would be the opportunity to explore these assumptions and questions.

Thanks again for your helpful suggestions,
Edward



On Thu, May 7, 2015 at 9:46 PM, Lodewijk wrote:

> Hi Edward,
>
> Thanks for the questions. The Wiki Loves Monuments mailing list would have
> made a very logical starting place to ask for initial feedback. Another
> option would have been to email the people who shared their data with you
> in the first place, or the people who worked on internal evaluations in
> these projects before.
>
> The feeling has been created that the 'damage is done': the report is
> published, and you have done all you could to make sure that community
> members are as aware as possible of what you consider the conclusions.
> That means that any feedback now becomes somewhat moot. We have seen this
> before with Foundation publications (e.g. statistics on the chapters):
> once a report is announced to the community at large, feedback often
> doesn't get incorporated any more (I hope this time it does!), and even if
> it is, the "facts" have already found their way into other publications
> like the Signpost. Asking for feedback is most valuable *before* you
> announce the results, and proactively. You could (even better) consider
> involving those stakeholders even earlier in the process, which would make
> it less of a black box.
>
> I strongly believe that it would improve the quality of the work you do.
> Still, some of the basic flaws will remain due to the basic setup of the
> evaluation framework (the assumption that all WLM contests are comparable,
> etc.), but others could be managed better.
>
> Best,
> Lodewijk
>
> On Fri, May 8, 2015 at 3:47 AM, Edward Galvez wrote:
>
> > Hi Lodewijk,
> >
> > Thanks for your feedback about the process. It's been very valuable.
> >
> > I have a few follow-up questions below:
> >
> >
> > > Sure, the team did reach out in the collection phase - after all,
> > > without the data such an evaluation would be impossible. But after
> > > that, the conclusions were drafted and shared with the wide community,
> > > rather than with the stakeholders involved to discuss interpretation.
> > >
> >
> > Can you say more about which stakeholders? Do you have ideas how we might
> > include them in the future, for example, through the Wiki Loves Monuments
> > mailing list, or were you thinking in some other way?
> >
> >
> > > Either way, all communication seemed to be aimed at announcing the
> > > evaluation, rather than at asking for active input on whether the
> > > analysis made sense, whether there were misunderstandings, etc. But
> > > maybe you have had a lot of follow-up discussions with the people you
> > > collected data from on a 1-to-1 level, which would be admirable.
> > >
> >
> > We tried to encourage input and questions through the next steps and on
> > the talk page, but it sounds like this might not have been enough. How do
> > you think we can do this better next time? Anything specific that stands
> > out to you, beyond sharing with stakeholders beforehand?
> >
> > Thanks so much,
> > Edward
> >
> >
> >
> >
> > > Again, I do appreciate the effort; I just don't agree with the approach
> > > and process.
> > >
> > > Best,
> > > Lodewijk
> > > ___
> > > Wikimedia-l mailing list, guidelines at:
> > > https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
> > > Wikimedia-l@lists.wikimedia.org
> > > Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> > > 
> > >
> >
> >
> >
> > --
> > Edward Galvez
> > Program Evaluation Associate
> > Wikimedia Foundation
> > ___
> > Wikimedia-l mailing list, guidelines at:
> > https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines

Re: [Wikimedia-l] Evaluation by WMF of Wiki Loves Monuments is failing to understand the community

2015-05-07 Thread Pine W
Hi,

I wasn't involved in this evaluation, but I would like to say that, as
someone who recently worked for WMF Learning and Evaluation, I believe that
the L&E team is interested in producing useful and accurate reports. So, I
am optimistic that feedback from the community about methodology and
communications will be carefully considered in future work plans for the
L&E team.

Also, I will mention that Cascadia Wikimedians plans to participate in
Summer of Monuments, and we will look at L&E reports for ideas and data
about effective practices in Wiki Loves Monuments and other programmatic
work. These reports will be, I hope, not just about numerical
accountability but also about sharing stories, ideas, and qualitative
information.

Regards,

Pine
___
Wikimedia-l mailing list, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 


Re: [Wikimedia-l] Evaluation by WMF of Wiki Loves Monuments is failing to understand the community

2015-05-07 Thread Lodewijk
Hi Edward,

Thanks for the questions. The Wiki Loves Monuments mailing list would have
made a very logical starting place to ask for initial feedback. Another
option would have been to email the people who shared their data with you
in the first place, or the people who worked on internal evaluations in
these projects before.

The feeling has been created that the 'damage is done': the report is
published, and you have done all you could to make sure that community
members are as aware as possible of what you consider the conclusions.
That means that any feedback now becomes somewhat moot. We have seen this
before with Foundation publications (e.g. statistics on the chapters):
once a report is announced to the community at large, feedback often
doesn't get incorporated any more (I hope this time it does!), and even if
it is, the "facts" have already found their way into other publications
like the Signpost. Asking for feedback is most valuable *before* you
announce the results, and proactively. You could (even better) consider
involving those stakeholders even earlier in the process, which would make
it less of a black box.

I strongly believe that it would improve the quality of the work you do.
Still, some of the basic flaws will remain due to the basic setup of the
evaluation framework (the assumption that all WLM contests are comparable,
etc.), but others could be managed better.

Best,
Lodewijk

On Fri, May 8, 2015 at 3:47 AM, Edward Galvez  wrote:

> Hi Lodewijk,
>
> Thanks for your feedback about the process. It's been very valuable.
>
> I have a few follow-up questions below:
>
>
> > Sure, the team did reach out in the collection phase - after all, without
> > the data such an evaluation would be impossible. But after that, the
> > conclusions were drafted and shared with the wide community, rather than
> > with the stakeholders involved to discuss interpretation.
> >
>
> Can you say more about which stakeholders? Do you have ideas how we might
> include them in the future, for example, through the Wiki Loves Monuments
> mailing list, or were you thinking in some other way?
>
>
> > Either way, all communication seemed to be aimed at announcing the
> > evaluation, rather than at asking for active input on whether the
> > analysis made sense, whether there were misunderstandings, etc. But maybe
> > you have had a lot of follow-up discussions with the people you collected
> > data from on a 1-to-1 level, which would be admirable.
> >
>
> We tried to encourage input and questions through the next steps and on the
> talk page, but it sounds like this might not have been enough. How do you
> think we can do this better next time? Anything specific that stands out to
> you, beyond sharing with stakeholders beforehand?
>
> Thanks so much,
> Edward
>
>
>
>
> > Again, I do appreciate the effort; I just don't agree with the approach
> > and process.
> >
> > Best,
> > Lodewijk
> > ___
> > Wikimedia-l mailing list, guidelines at:
> > https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
> > Wikimedia-l@lists.wikimedia.org
> > Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> > 
> >
>
>
>
> --
> Edward Galvez
> Program Evaluation Associate
> Wikimedia Foundation
> ___
> Wikimedia-l mailing list, guidelines at:
> https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
> Wikimedia-l@lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> 
>
___
Wikimedia-l mailing list, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 


Re: [Wikimedia-l] Wikinews and free journalism

2015-05-07 Thread Aleksey Bilogur
There has been coverage in the Signpost. Try following through this link:
https://en.wikipedia.org/w/index.php?title=Wikipedia:Sandbox&action=edit&section=new&preview=yes&preload=Wikipedia:Wikipedia_Signpost/Templates/Index_preload&preloadparams[]=wikinews

On Thu, May 7, 2015 at 11:11 AM, Rodrigo Tetsuo Argenton <
rodrigo.argen...@gmail.com> wrote:

> Hello guys,
>
> I'm struggling to find any kind of discussion/research about Wikinews and
> its relation to free journalism (free as defined here:
> http://freedomdefined.org/Definition).
>
> Do you know of any paper or article about it? Or even articles about
> free journalism in general (free meaning free: providing sources, with the
> result under a free license...)?
>
> Thanks for your attention.
>
>
>
> --
> Rodrigo Tetsuo Argenton
> rodrigo.argen...@gmail.com
> +55 11 979 718 884
> ___
> Wikimedia-l mailing list, guidelines at:
> https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
> Wikimedia-l@lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> 
___
Wikimedia-l mailing list, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 


[Wikimedia-l] Organizational effectiveness research for Wikimedia organizations

2015-05-07 Thread Winifred Olliff
Greetings, Wikimedia colleagues!

In November 2014, we launched a pilot self-assessment tool to help Wikimedia
organizations identify areas where we might leverage our strengths and
address our challenges to achieve better results. We now have some aggregate
data available from the organizations that took the questionnaire.

To learn more about organizational effectiveness:
https://meta.wikimedia.org/wiki/Organizational_effectiveness
Review the benchmarking research here:
https://meta.wikimedia.org/wiki/Organizational_effectiveness/Benchmarking
Read the case studies here:
https://meta.wikimedia.org/wiki/Organizational_effectiveness/Case_studies
Review the results from the tool here:
https://meta.wikimedia.org/wiki/Organizational_effectiveness/Tool/Results/2014/December

*A little background on organizational effectiveness*
We are looking at how organizations are achieving impact in the Wikimedia
movement. Organizational effectiveness includes all the things that make an
organization good at what it does, from strong leadership and systems, to
how an organization chooses and runs programs that lead to results. Through
this lens, we are looking specifically at groups and organizations, so we
can understand how volunteers and staff work together when they are part of
a group or organization.

*Why we care*
Improving organizational effectiveness helps Wikimedia organizations
achieve results in ways that make sense in their local contexts.

*The tool and the results, in brief*
We are exploring this topic of organizational effectiveness in order to
launch a broader conversation among Wikimedia organizations (e.g. Chapters,
User Groups) to explore areas for building capacity. We started by doing
some initial benchmarking research and case studies to understand how
effectiveness is understood within Wikimedia and similar movements, and in
November 2014 we launched a pilot organizational effectiveness
self-assessment tool to help organizations identify their strengths and
challenges. This tool is built around a self-assessment questionnaire.

While individual and organization results are kept confidential, our
partners at the TCC Group provided us with some aggregate data based on the
first round of answers to the questionnaire that is a part of the tool. The
primary purpose of the questionnaire is to aid organizations in their
self-assessments, and not to gather data about organizations; however, the
data the TCC Group collected may be useful in launching further
conversations because it highlights where Wikimedia organizations see their
own strengths and challenges.

*Next steps for organizational effectiveness for Wikimedia organizations*
We are hosting a discussion about the organizational effectiveness tool for
Wikimedia organizations at the upcoming Wikimedia Conference in Berlin, and
will also continue a discussion there about the future of organizational
effectiveness for Wikimedia organizations. Topics to explore within
organizational effectiveness might include areas like volunteer engagement,
raising funds and other resources, supporting online contributors, media
relations, financial management, governance, etc.

*Key questions:*
* What future work are we interested in doing together around
organizational effectiveness?
* Is a self-assessment tool useful for organizations? Should we gather
feedback and work toward improving the tool, or try another approach? Does
the tool need to remain confidential?
* Are there specific topics in the area of organizational effectiveness
that need more exploration? Are there areas where we need more resources?
* Is there anything important revealed by this research or the results of
the questionnaire that we can use to better understand how we can grow?

*Call to action:*
Please offer your feedback, thoughts, and questions on Meta, or participate
in the upcoming discussions at the Wikimedia Conference. If you are
interested in discussing this work, or becoming more involved, but are
unable to attend the session at the conference, we are happy to convene a
hangout or an IRC office hours. Please also feel welcome to contact me
directly if you have ideas about this work, or add your ideas to Meta.

Many thanks to our colleagues Marieke Spence, Rika Gorn, and Deepti Sood,
at the TCC Group (a consulting firm that specializes in organizational
assessments), who created the tool and analyzed the results. Many thanks to
the volunteers and staff who participated, and have provided feedback so
far. Thanks especially to those organizations participating in the pilot
group, the case studies, and in the ongoing conversations about
organizational development and effectiveness at Wikimania and other
movement events.

We look forward to more work together in this area!

Best regards,

Winifred Olliff

-- 
Winifred Olliff
Program Officer
Wikimedia Foundation
___
Wikimedia-l mailing list, guidelines at: https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines

Re: [Wikimedia-l] Evaluation by WMF of Wiki Loves Monuments is failing to understand the community

2015-05-07 Thread Edward Galvez
Hi Lodewijk,

Thanks for your feedback about the process. It's been very valuable.

I have a few follow-up questions below:


> Sure, the team did reach out in the collection phase - after all, without
> the data such an evaluation would be impossible. But after that, the
> conclusions were drafted and shared with the wide community, rather than
> with the stakeholders involved to discuss interpretation.
>

Can you say more about which stakeholders? Do you have ideas how we might
include them in the future, for example, through the Wiki Loves Monuments
mailing list, or were you thinking in some other way?


> Either way, all communication seemed to be aimed at announcing the
> evaluation, rather than at asking for active input on whether the analysis
> made sense, whether there were misunderstandings, etc. But maybe you have
> had a lot of follow-up discussions with the people you collected data from
> on a 1-to-1 level, which would be admirable.
>

We tried to encourage input and questions through the next steps and on the
talk page, but it sounds like this might not have been enough. How do you
think we can do this better next time? Anything specific that stands out to
you, beyond sharing with stakeholders beforehand?

Thanks so much,
Edward




> Again, I do appreciate the effort; I just don't agree with the approach
> and process.
>
> Best,
> Lodewijk
> ___
> Wikimedia-l mailing list, guidelines at:
> https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
> Wikimedia-l@lists.wikimedia.org
> 
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> 
>



-- 
Edward Galvez
Program Evaluation Associate
Wikimedia Foundation
___
Wikimedia-l mailing list, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 


[Wikimedia-l] [Wikimedia Announcements] This week on the Wikimedia Blog

2015-05-07 Thread Fabrice Florin
Hi folks,

Here are some of the stories featured this week on the Wikimedia Blog:

• It’s time for some #tastydata
https://blog.wikimedia.org/2015/05/07/time-for-tasty-data/

• The #100wikidays challenge
https://blog.wikimedia.org/2015/05/06/100wikidays-challenge/

• Editing Wikipedia as community service in Mexico
https://blog.wikimedia.org/2015/05/05/community-service-in-mexico/

• Wikimedians in Brussels map out key issues about the European Union’s digital 
future
https://blog.wikimedia.org/2015/05/04/european-union-issues/

• Wikimedia Research Newsletter, April 2015
https://blog.wikimedia.org/2015/05/03/research-newsletter-april-2015/

More stories on the Wikimedia Blog:
https://blog.wikimedia.org/

Enjoy,


Fabrice


___

Fabrice Florin
Movement Communications Manager
Wikimedia Foundation

https://en.wikipedia.org/wiki/User:Fabrice_Florin_(WMF)
___
Please note: all replies sent to this mailing list will be immediately directed 
to Wikimedia-l, the public mailing list of the Wikimedia community. For more 
information about Wikimedia-l:
https://lists.wikimedia.org/mailman/listinfo/wikimedia-l
___
WikimediaAnnounce-l mailing list
wikimediaannounc...@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikimediaannounce-l
___
Wikimedia-l mailing list, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 


[Wikimedia-l] Evaluation by WMF of Wiki Loves Monuments is failing to understand the community

2015-05-07 Thread Mohammed Bachounda
I organized the Wiki Loves Monuments and Wiki Loves Earth contests in
Algeria, and coordinated with the rest of the Arabic-speaking countries
that organized the contest.

I had a lot of fun organizing them in 2013, 2014, and now 2015.

In Algeria, to my astonishment, many people did not know what Wikipedia
is; those who knew Wikipedia discovered Commons and more. The contest
allowed me to establish relationships that in turn allowed me to create
the Algeria WMUG, which has been a great challenge for me.

All these offline activities help to enhance people's experience and
knowledge of the Wikimedia projects.

Wikipedia is an encyclopedia of knowledge, and yet it remains a great
unknown.

Wiki Loves is good as it is, but it may need to be renewed.

I suggest only one thing: a single contest each year, with a committee
that reflects on and proposes the theme of the year.

That would be more focused: if the committee believes that this year we
need to photograph flowers, rare plants, or animals for Commons, then
that will be the theme.

Call it an international committee or commission or whatever you like;
the weak link until now has been that the community was missing from it.

Wikimedia must help with building this infrastructure without
influencing it.

Best,
-- 

*Mohammed Bachounda*
Leader Wikimedia Algérie UG
___
Wikimedia-l mailing list, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 


Re: [Wikimedia-l] Evaluation by WMF of Wiki Loves Monuments is failing to understand the community

2015-05-07 Thread Lodewijk
On Thu, May 7, 2015 at 3:14 PM, Maria Cruz  wrote:

> 
>
> > All in all it is good to have something 'to shoot at', but I would prefer
> > that these reports were produced more in concert with the stakeholders
> > involved and affected, rather than 'announced' and 'presented' to the
> > wide community.
>
>
> This isn't true. We always reach out to program leaders to engage in data
> collection. Further, had you taken part in the event, or even watched it,
> or read the blog post we wrote [6], you would have seen that nothing is
> presented or announced; rather, everything is open for discussion and
> conversation.
>
>
>
> 

Sure, the team did reach out in the collection phase - after all, without
the data such an evaluation would be impossible. But after that, the
conclusions were drafted and shared with the wide community, rather than
with the stakeholders involved to discuss interpretation. And I do admit
to not watching the full video (the event itself was during working hours
in Europe - not compatible with my job), only parts of it - and what I saw
felt very much like a presentation to me. But maybe I was unlucky in that.
Either way, all communication seemed to be aimed at announcing the
evaluation, rather than at asking for active input on whether the analysis
made sense, whether there were misunderstandings, etc. But maybe you have
had a lot of follow-up discussions with the people you collected data from
on a 1-to-1 level, which would be admirable.

Again, I do appreciate the effort; I just don't agree with the approach and
process.

Best,
Lodewijk
___
Wikimedia-l mailing list, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 


[Wikimedia-l] Fwd: Invitation to WMF April 2015 Metrics & Activities Meeting: Thursday, May 7, 18:00 UTC

2015-05-07 Thread Praveena Maharaj
REMINDER: This meeting starts in 30 minutes.


-- Forwarded message --
From: Praveena Maharaj 
Date: Tue, May 5, 2015 at 1:55 PM
Subject: Invitation to WMF April 2015 Metrics & Activities Meeting:
Thursday, May 7, 18:00 UTC
To: Wikimedia Mailing List 

Dear all,

Apologies for the late invitation! The next WMF metrics and activities
meeting will take place on Thursday, May 7, 2015 at 6:00 PM UTC (11 AM
PDT). The IRC channel is #wikimedia-office on irc.freenode.net, and
the meeting will be broadcast as a live YouTube stream.

Each month at the metrics meeting, we will:

* Welcome recent hires
* Present reports/updates that are focused on a key theme or topic. For
May, we will do a Strategy & Operations update.
* Engage in questions/discussions

Please review
https://meta.wikimedia.org/wiki/WMF_Metrics_and_activities_meetings for
further information about how to participate.

We’ll post the video recording publicly after the meeting.

Thank you,
Praveena

-- 
Praveena Maharaj
Executive Assistant to the Vice President of Engineering
Wikimedia Foundation \\ www.wikimediafoundation.org
___
Wikimedia-l mailing list, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 


[Wikimedia-l] Wikinews and free journalism

2015-05-07 Thread Rodrigo Tetsuo Argenton
Hello guys,

I'm struggling to find any kind of discussion/research about Wikinews and
its relation to free journalism (free as defined here:
http://freedomdefined.org/Definition).

Do you know of any paper or article about it? Or even articles about
free journalism in general (free meaning free: providing sources, with the
result under a free license...)?

Thanks for your attention.



-- 
Rodrigo Tetsuo Argenton
rodrigo.argen...@gmail.com
+55 11 979 718 884
___
Wikimedia-l mailing list, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 


Re: [Wikimedia-l] [Wikitech-l] GRAPH extension is now live everywhere!

2015-05-07 Thread Maria Cruz
This is great, Yuri, thank you so much! =)



*María Cruz * \\  Community Coordinator, PE&D Team \\ Wikimedia Foundation,
Inc.
mc...@wikimedia.org | @marianarra_

On Wed, May 6, 2015 at 2:19 PM, Jonathan Morgan wrote:

> This is wicked exciting. Thanks to everyone involved!
>
> - J
>
> On Tue, May 5, 2015 at 1:24 PM, Yuri Astrakhan 
> wrote:
>
> > Starting today, editors can use the <graph> tag to include complex
> > graphs and maps inside articles.
> >
> > *Demo:* https://www.mediawiki.org/wiki/Extension:Graph/Demo
> > *Vega's demo:* http://trifacta.github.io/vega/editor/?spec=scatter_matrix
> > *Extension info:* https://www.mediawiki.org/wiki/Extension:Graph
> > *Vega's docs:* https://github.com/trifacta/vega/wiki
> > *Bug reports:* https://phabricator.wikimedia.org/ - project tag #graph
> >
> > The graph tag supports template parameter expansion. There is also a
> > Graphoid service to convert graphs into images. Currently, Graphoid is
> > used when the browser does not support modern JavaScript, but I plan to
> > use it for all anonymous users - downloading the large JS code needed to
> > render graphs is significantly slower than showing an image.
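
[As a rough illustration - not from the original mail - a <graph>
invocation could look something like the sketch below, written against the
Vega 1-style grammar the extension used at the time. The data values and
field names here are made up; see the Extension:Graph demo page linked
above for authoritative examples.]

<graph>
{
  "width": 300, "height": 120,
  "data": [{
    "name": "table",
    "values": [
      {"x": "2012", "y": 35}, {"x": "2013", "y": 52}, {"x": "2014", "y": 41}
    ]
  }],
  "scales": [
    {"name": "x", "type": "ordinal", "range": "width",
     "domain": {"data": "table", "field": "data.x"}},
    {"name": "y", "range": "height", "nice": true,
     "domain": {"data": "table", "field": "data.y"}}
  ],
  "axes": [
    {"type": "x", "scale": "x"},
    {"type": "y", "scale": "y"}
  ],
  "marks": [{
    "type": "rect",
    "from": {"data": "table"},
    "properties": {"enter": {
      "x": {"scale": "x", "field": "data.x"},
      "width": {"scale": "x", "band": true, "offset": -1},
      "y": {"scale": "y", "field": "data.y"},
      "y2": {"scale": "y", "value": 0}
    }}
  }]
}
</graph>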
> >
> > Potential future growth (developers needed!):
> > * Documentation and better tutorials
> > * Visualize as you type - show changes in graph while editing its code
> > * Visual Editor's plugin
> > * Animation
> >
> > Project history: Exactly one year ago, Dan Andreescu (milimetric) and Jon
> > Robson demoed Vega visualization grammar (https://trifacta.github.io/vega/)
> > usage in MediaWiki. The project stayed dormant for almost half a year,
> > until the Zero team decided it was a good solution for on-wiki graphs. The
> > project was rewritten, and gained many new features, such as template
> > parameters. Yet doing graphs just for the Zero portal seemed silly. A
> > wider audience meant that we now had to support older browsers, and thus
> > the Graphoid service was born.
> >
> > This project could not have happened without the help from Dan Andreescu,
> > Brion Vibber, Timo Tijhof, Chris Steipp, Max Semenik,  Marko Obrovac,
> > Alexandros Kosiaris, Jon Robson, Gabriel Wicke, and others who have
> helped
> > me develop,  test, instrument, and deploy Graph extension and Graphoid
> > service. I also would like to thank the Vega team for making this amazing
> > library.
> >
> > --Yurik
> > ___
> > Wikitech-l mailing list
> > wikitec...@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
>
>
>
> --
> Jonathan T. Morgan
> Community Research Lead
> Wikimedia Foundation
> User:Jmorgan (WMF) 
> jmor...@wikimedia.org
> ___
> Wikimedia-l mailing list, guidelines at:
> https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
> Wikimedia-l@lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
> 
>
___
Wikimedia-l mailing list, guidelines at: 
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, 


Re: [Wikimedia-l] Evaluation by WMF of Wiki Loves Monuments is failing to understand the community

2015-05-07 Thread Maria Cruz
On Thu, May 7, 2015 at 6:34 AM, Lodewijk wrote:

>
>
> I hope that at some point WLM organizers can be given the tools, enthusiasm
> and support to create their own evaluation on a larger scale. That way I
> hope that some of the flaws can be avoided thanks to a better understanding
> of the collaborations, structures and the projects in general.
>

The Evaluation portal on Meta [1] has all the resources we use, open to
organizers of any program. There is a guide to using the portal resources
[2]. We also regularly host virtual meet-ups to develop capacity around
evaluation, which are recorded and available on our YouTube channel [3]
under a CC license. The Learning and Evaluation team is open to having
one-on-one conversations as well! =)

We are always encouraging program leaders to engage in this conversation:
what metrics matter to a program, and what is relevant to measure. Happily,
this is the conversation we had with some WLM organizers yesterday [4],
which is also taking place on the WLM Report talk page [5].



> All in all it is good to have something 'to shoot at', but I would prefer
> that these reports were produced more in concert with the stakeholders
> involved and affected, rather than 'announced' and 'presented' to the wide
> community.


This isn't true. We always reach out to program leaders to engage in data
collection. Further, had you taken part in the event, or even watched it,
or read the blog post we wrote [6], you would have seen that nothing is
presented or announced; rather, everything is open for discussion and
conversation.



*María Cruz * \\  Community Coordinator, PE&D Team \\ Wikimedia Foundation,
Inc.
mc...@wikimedia.org | @marianarra_

[1] https://meta.wikimedia.org/wiki/Grants:Evaluation
[2] https://meta.wikimedia.org/wiki/Grants:Evaluation/Introduction
[3] https://www.youtube.com/user/WikiEvaluation/
[4] https://www.youtube.com/watch?v=PN3TN4wrFZs
[5]
https://meta.wikimedia.org/wiki/Grants_talk:Evaluation/Evaluation_reports/2015/Wiki_Loves_Monuments

[6]
http://blog.wikimedia.org/2015/04/22/first-2015-wikimedia-programs-evaluations/




>
> Best,
> Lodewijk (effeietsanders)
> member of the international coordinating team 2011-2013
>
> On Wed, May 6, 2015 at 4:40 PM, Samuel Klein  wrote:
>
> > Claudia, I share your concerns about reducing subtle things to a few
> > numbers.  Data can also be used in context-sensitive ways.  So I'm
> > wondering if there are any existing quantitative summaries that you find
> > useful? Or qualitative descriptions that draw from  more than one
> project?
> >
> > Figuring out what ideas are repeatable, scalable, or awesome but one-time
> > only, is complex. We probably need many different approaches, not one
> > central approach, to understand and compare.
> >
> > I'm glad to see data being shared, and again it might help to have many
> > different datasets, to limit conceptual bias in what sort of data is
> > relevant.
> > On May 6, 2015 9:59 AM, "Claudia Garád" wrote:
> >
> > > Hi Sam,
> > >
> > > I am sure there are figures and stories that the various orgs collect
> and
> > > publish. But they are spread across different wikis and websites and/or
> > > languages. E.g. many of the FDC orgs are looking into ways to
> demonstrate
> > > these more qualitative aspects of our work (e.g. by storytelling) in
> > their
> > > reports.
> > > But this information does not get the same attention and publicity in
> > the
> > > wider community as the evaluation done by the WMF. Many WMAT volunteers
> > and
> > > I myself share the concerns expressed by Romaine that these
> > unidimensional
> > > numbers and lack of context foster misconceptions or even prejudices
> > > especially in the parts of the community that are not closely involved
> in
> > > the work of the respective groups and orgs.
> > >
> > > Best
> > > Claudia
> > >
> > >
> > >
> > > Am 06.05.2015 um 13:40 schrieb Sam Klein:
> > >
> > >> Hi Romaine,
> > >>
> > >> Are there other evals of WLM projects that capture the complexity you
> > >> want?
> > >>
> > >> Perhaps single-community evaluations done by the WLM organizers there?
> > >>
> > >> Sam
> > >>
> > >> On Wed, May 6, 2015 at 7:21 AM, Romaine Wiki 
> > >> wrote:
> > >>
> > >>  Hi all,
> > >>>
> > >>> In the past months the Wikimedia Foundation has been writing an
> > >>> evaluation
> > >>> about Wiki Loves Monuments. [1]
> > >>>
> > >>> As such it is fine that WMF is writing an evaluation; however, they
> > >>> fail in actually understanding Wiki Loves Monuments, and that is shown
> > >>> in the evaluation report.
> > >>>
> > >>> As a result, a discussion has grown on the Wiki Loves Monuments
> > >>> mailing list about the various problems the evaluation has.
> > >>>
> > >>> As the Learning and Evaluation team at the Wikimedia Foundation had
> > >>> already released the first Programs Reports for Wiki Loves Monuments,
> > >>> we are now presented with a fait accompli in this evaluation report.
> > >>>
> > >>> 

Re: [Wikimedia-l] Evaluation by WMF of Wiki Loves Monuments is failing to understand the community

2015-05-07 Thread Anders Wennersten

Editor retention really consists of three components:
* New temporary contributors. WLM helps here, and even if they leave after a
few edits this is of value for the projects. They have learned to edit, and
will be more open to correcting an error or complementing an article much
later when using Wikipedia.
* New regular contributors. WLM has low impact here, but this is the key and
only parameter being measured.
* Making regular contributors stay on (longer). Here too WLM has a positive
effect. It is a stimulus for long-timers to see the new images, the
real-life activities around WLM, and that something of value is happening.
This is of course impossible to measure. Personally, I believe that making
the work environment fun and stimulating is the most cost-effective way to
keep up the editor base. The Thanks notification is a wonderful example of a
high effect on retention from a very limited investment in software.

Anders


Tomasz Ganicz wrote on 2015-05-07 13:06:

Regarding measurement of editor retention - this is tricky, as in fact many
participants created new accounts only to join the contest. Some of them had
accounts on Wikipedia (but different ones); some others abandoned their
accounts and created new ones for various reasons (the most trivial: they
had forgotten their passwords). There are also users who are active only
during contests, also for various reasons - not only the possibility of
winning attractive prizes, but also because the normal upload process is too
tricky for them, or they don't know what to photograph if there is no
easy-to-use list of objects.

In fact, measurement of editor retention is tricky even for workshops if it
is only based on lists of nicknames. I have seen this many times: people
create accounts during the workshop and then abandon them, but later create
new ones. The only effective way to follow the retention of users after a
workshop is to collect their e-mails and then survey them some time after
the workshop. It might produce a completely different picture than studies
based on following the activity of accounts created during workshops...



2015-05-07 11:34 GMT+02:00 Lodewijk :


Hi Sam,

The main misconception (which is understandable, but also often pointed out
already) is that Wiki Loves Monuments can be fundamentally different
projects from a goals-and-outcomes point of view, based on the interests
and strengths of the local organizers and the local situation. In some
countries, the main outcome of the competition is that it brings together
organizers for a first project, that can then move on, and leverage their
collaboration in other projects. In other countries it fosters
collaborations with other organizations.

In some countries, it is a very grassroots competition, with low budget and
big focus on getting a lot of photos. In other countries, there is a lot of
effort (and funding) going into catching editors, setting up structures or
overcoming the local challenges, or raising awareness of the concepts.

Aside from the fact that many of these outcomes are qualitative, which
seems to get no attention in the (summaries of the) reports, but do get
described in the reports of the individual contests, the local competitions
are too diverse to try and catch as one group.

This is a fundamental flaw (pointed out before) in the approach. The work
is appreciated of course, and the numbers can be useful - but the way they
are presented is very susceptible to major misunderstandings.

Besides this, there are several very specific flaws in the number crunching
that have been pointed out, which are for example messing up the numbers on
editor retention.

I hope that at some point WLM organizers can be given the tools, enthusiasm
and support to create their own evaluation on a larger scale. That way I
hope that some of the flaws can be avoided thanks to a better understanding
of the collaborations, structures and the projects in general.

All in all it is good to have something 'to shoot at', but I would prefer
that these reports were produced more in concert with the stakeholders
involved and affected, rather than 'announced' and 'presented' to the wide
community.

Best,
Lodewijk (effeietsanders)
member of the international coordinating team 2011-2013

On Wed, May 6, 2015 at 4:40 PM, Samuel Klein  wrote:


Claudia, I share your concerns about reducing subtle things to a few
numbers.  Data can also be used in context-sensitive ways.  So I'm
wondering if there are any existing quantitative summaries that you find
useful? Or qualitative descriptions that draw from  more than one

project?

Figuring out what ideas are repeatable, scalable, or awesome but one-time
only, is complex. We probably need many different approaches, not one
central approach, to understand and compare.

I'm glad to see data being shared, and again it might help to have many
different datasets, to limit conceptual bias in what sort of data is
relevant.
On May 6, 2015 9:59 AM, "Claudia Garád" wrote:


Hi Sam,

I 

Re: [Wikimedia-l] Evaluation by WMF of Wiki Loves Monuments is failing to understand the community

2015-05-07 Thread Tomasz Ganicz
Regarding measurement of editor retention - this is tricky, as in fact many
participants created new accounts only to join the contest. Some of them had
accounts on Wikipedia (but different ones); some others abandoned their
accounts and created new ones for various reasons (the most trivial: they
had forgotten their passwords). There are also users who are active only
during contests, also for various reasons - not only the possibility of
winning attractive prizes, but also because the normal upload process is too
tricky for them, or they don't know what to photograph if there is no
easy-to-use list of objects.

In fact, measurement of editor retention is tricky even for workshops if it
is only based on lists of nicknames. I have seen this many times: people
create accounts during the workshop and then abandon them, but later create
new ones. The only effective way to follow the retention of users after a
workshop is to collect their e-mails and then survey them some time after
the workshop. It might produce a completely different picture than studies
based on following the activity of accounts created during workshops...
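
[To make the measurement pitfall above concrete, here is a purely
illustrative sketch in Python - not anything from the evaluation itself;
all names, dates, and data shapes are hypothetical - of a nickname-based
retention count, showing how a person who returns under a new account is
missed:]

# Illustrative only: a toy version of account-name-based retention counting.
from datetime import date, timedelta

def retained_accounts(contest_accounts, edits_by_account, window_days=90):
    """Count contest accounts with at least one edit made more than
    window_days after the contest ended. Because it matches on account
    names, it misses people who come back under a brand-new account."""
    retained = 0
    for account, contest_end in contest_accounts:
        cutoff = contest_end + timedelta(days=window_days)
        if any(d > cutoff for d in edits_by_account.get(account, [])):
            retained += 1
    return retained

# Two uploaders took part; the second returned under a different account
# (say, after forgetting a password), so the metric reports 1, not 2.
contest = [("PhotoFan", date(2014, 9, 30)), ("OldAccount", date(2014, 9, 30))]
edits = {"PhotoFan": [date(2015, 2, 1)], "NewAccount": [date(2015, 3, 1)]}
print(retained_accounts(contest, edits))  # -> 1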



2015-05-07 11:34 GMT+02:00 Lodewijk :

> Hi Sam,
>
> The main misconception (which is understandable, but also often pointed out
> already) is that Wiki Loves Monuments can be fundamentally different
> projects from a goals-and-outcomes point of view, based on the interests
> and strengths of the local organizers and the local situation. In some
> countries, the main outcome of the competition is that it brings together
> organizers for a first project, that can then move on, and leverage their
> collaboration in other projects. In other countries it fosters
> collaborations with other organizations.
>
> In some countries, it is a very grassroots competition, with low budget and
> big focus on getting a lot of photos. In other countries, there is a lot of
> effort (and funding) going into catching editors, setting up structures or
> overcoming the local challenges, or raising awareness of the concepts.
>
> Aside from the fact that many of these outcomes are qualitative, which
> seems to get no attention in the (summaries of the) reports, but do get
> described in the reports of the individual contests, the local competitions
> are too diverse to try and catch as one group.
>
> This is a fundamental flaw (pointed out before) in the approach. The work
> is appreciated of course, and the numbers can be useful - but the way they
> are presented is very susceptible to major misunderstandings.
>
> Besides this, there are several very specific flaws in the number crunching
> that have been pointed out, which are for example messing up the numbers on
> editor retention.
>
> I hope that at some point WLM organizers can be given the tools, enthusiasm
> and support to create their own evaluation on a larger scale. That way I
> hope that some of the flaws can be avoided thanks to a better understanding
> of the collaborations, structures and the projects in general.
>
> All in all it is good to have something 'to shoot at', but I would prefer
> that these reports were produced more in concert with the stakeholders
> involved and affected, rather than 'announced' and 'presented' to the wide
> community.
>
> Best,
> Lodewijk (effeietsanders)
> member of the international coordinating team 2011-2013
>
> On Wed, May 6, 2015 at 4:40 PM, Samuel Klein  wrote:
>
> > Claudia, I share your concerns about reducing subtle things to a few
> > numbers.  Data can also be used in context-sensitive ways.  So I'm
> > wondering if there are any existing quantitative summaries that you find
> > useful? Or qualitative descriptions that draw from  more than one
> project?
> >
> > Figuring out what ideas are repeatable, scalable, or awesome but one-time
> > only, is complex. We probably need many different approaches, not one
> > central approach, to understand and compare.
> >
> > I'm glad to see data being shared, and again it might help to have many
> > different datasets, to limit conceptual bias in what sort of data is
> > relevant.
> > On May 6, 2015 9:59 AM, "Claudia Garád" wrote:
> >
> > > Hi Sam,
> > >
> > > I am sure there are figures and stories that the various orgs collect
> and
> > > publish. But they are spread across different wikis and websites and/or
> > > languages. E.g. many of the FDC orgs are looking into ways to
> demonstrate
> > > these more qualitative aspects of our work (e.g. by storytelling) in
> > their
> > > reports.
> > > But this information does not get the same attention and publicity in
> > the
> > > wider community as the evaluation done by the WMF. Many WMAT volunteers
> > and
> > > I myself share the concerns expressed by Romaine that these
> > unidimensional
> > > numbers and lack of context foster misconceptions or even prejudices
> > > especially in the parts of the community that are not closely involved
> in
> > > the work of the respective groups and orgs.
> > >
> > > Best
> > > Claudia

Re: [Wikimedia-l] Evaluation by WMF of Wiki Loves Monuments is failing to understand the community

2015-05-07 Thread Lodewijk
Hi Sam,

The main misconception (which is understandable, but also often pointed out
already) is that Wiki Loves Monuments can be fundamentally different
projects from a goals-and-outcomes point of view, based on the interests
and strengths of the local organizers and the local situation. In some
countries, the main outcome of the competition is that it brings together
organizers for a first project, that can then move on, and leverage their
collaboration in other projects. In other countries it fosters
collaborations with other organizations.

In some countries, it is a very grassroots competition, with low budget and
big focus on getting a lot of photos. In other countries, there is a lot of
effort (and funding) going into catching editors, setting up structures or
overcoming the local challenges, or raising awareness of the concepts.

Aside from the fact that many of these outcomes are qualitative, which
seems to get no attention in the (summaries of the) reports, but do get
described in the reports of the individual contests, the local competitions
are too diverse to try and catch as one group.

This is a fundamental flaw (pointed out before) in the approach. The work
is appreciated of course, and the numbers can be useful - but the way they
are presented is very susceptible to major misunderstandings.

Besides this, there are several very specific flaws in the number crunching
that have been pointed out, which are for example messing up the numbers on
editor retention.

I hope that at some point WLM organizers can be given the tools, enthusiasm
and support to create their own evaluation on a larger scale. That way I
hope that some of the flaws can be avoided thanks to a better understanding
of the collaborations, structures and the projects in general.

All in all it is good to have something 'to shoot at', but I would prefer
that these reports were produced more in concert with the stakeholders
involved and affected, rather than 'announced' and 'presented' to the wide
community.

Best,
Lodewijk (effeietsanders)
member of the international coordinating team 2011-2013

On Wed, May 6, 2015 at 4:40 PM, Samuel Klein  wrote:

> Claudia, I share your concerns about reducing subtle things to a few
> numbers.  Data can also be used in context-sensitive ways.  So I'm
> wondering if there are any existing quantitative summaries that you find
> useful? Or qualitative descriptions that draw from  more than one project?
>
> Figuring out what ideas are repeatable, scalable, or awesome but one-time
> only, is complex. We probably need many different approaches, not one
> central approach, to understand and compare.
>
> I'm glad to see data being shared, and again it might help to have many
> different datasets, to limit conceptual bias in what sort of data is
> relevant.
> On May 6, 2015 9:59 AM, "Claudia Garád" wrote:
>
> > Hi Sam,
> >
> > I am sure there are figures and stories that the various orgs collect and
> > publish. But they are spread across different wikis and websites and/or
> > languages. E.g. many of the FDC orgs are looking into ways to demonstrate
> > these more qualitative aspects of our work (e.g. by storytelling) in
> their
> > reports.
> > But this information does not get the same attention and publicity in
> the
> > wider community as the evaluation done by the WMF. Many WMAT volunteers
> and
> > I myself share the concerns expressed by Romaine that these
> unidimensional
> > numbers and lack of context foster misconceptions or even prejudices
> > especially in the parts of the community that are not closely involved in
> > the work of the respective groups and orgs.
> >
> > Best
> > Claudia
> >
> >
> >
> > Am 06.05.2015 um 13:40 schrieb Sam Klein:
> >
> >> Hi Romaine,
> >>
> >> Are there other evals of WLM projects that capture the complexity you
> >> want?
> >>
> >> Perhaps single-community evaluations done by the WLM organizers there?
> >>
> >> Sam
> >>
> >> On Wed, May 6, 2015 at 7:21 AM, Romaine Wiki 
> >> wrote:
> >>
> >>  Hi all,
> >>>
> >>> In the past months the Wikimedia Foundation has been writing an
> >>> evaluation
> >>> about Wiki Loves Monuments. [1]
> >>>
> >>> As such it is fine that WMF is writing an evaluation; however, they
> >>> fail in actually understanding Wiki Loves Monuments, and that is shown
> >>> in the evaluation report.
> >>>
> >>> As a result, a discussion has grown on the Wiki Loves Monuments
> >>> mailing list about the various problems the evaluation has.
> >>>
> >>> As the Learning and Evaluation team at the Wikimedia Foundation had
> >>> already released the first Programs Reports for Wiki Loves Monuments,
> >>> we are now presented with a fait accompli in this evaluation report.
> >>>
> >>> Therefore I am writing here so that the rest of the worldwide Wikimedia
> >>> community is informed that this is not going right.
> >>>
> >>> Wiki Loves Monuments is not just a bunch of uploads done in September;
> >>> the report is too simplified, without actual understanding