[Wikitech-l] GSOC 2014 idea

2014-02-28 Thread Roman Zaynetdinov
Hello, I would like to participate in GSOC this year for the first time,
but I am a little bit worried about choosing an idea. I have one, and I am
not sure whether it suits this program. I would be very glad if you could
take a quick look at my idea and share your thoughts. I will be happy to
receive any feedback. Thank you.

Project Idea

What is the purpose?

Help people read complex texts by providing inline translations for
unknown words. As a non-native English-speaking student, I sometimes find
it hard to read complicated texts or articles, which is why I have to
search for a translation or description every time. Why not simplify this
and change the flow from translate and understand to translate, learn and
understand?

How will the inline translation appear?

While reading an article, a user may come across unknown words or words
whose meaning is confusing. At this point they click on the word, and the
inline translation appears.

What should the inline translation include?

Since it is not just a translator, it should offer not just one
translation but a couple or more. More data, such as synonyms, could also
be included; this can be discussed during the project.

From which source should the data be gathered?

Wiktionary is the best candidate: it is open and has a wide
database. It is also well suited to growing the project by adding
different languages.

Evaluation needs

I have two approaches in mind right now. The first is to build a website on
Node.js with an open API for users. Parsoid, which is well suited to Node,
could be used for parsing data from the Wiktionary API. A small JavaScript
widget would also be required for the front end.

The second is to make a standalone library which can be used on other
sites as an add-on or in browser extensions. Unfortunately, this option
is less clear to me at this point.

Growth opportunities

I am living in Finland right now, and I don't know Finnish as well as I
should to understand the locals; this project could therefore be expanded
with support for more languages, helping people like me read, learn, and
understand texts in foreign languages.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] GSOC 2014 idea

2014-02-28 Thread Brian Wolff
On 2/28/14, Roman Zaynetdinov romanz...@gmail.com wrote:
 snip

Interesting.

I actually did something kind of like this a long time ago, where the
user could double click on a word, and the definition would pop up
from wiktionary. (The thing I made was very hacky and icky, and
stopped working quite some time ago. Some people might like to have a
similar tool, but a version that doesn't suck). You can see a
screenshot at https://meta.wikimedia.org/wiki/Wiktionary/Look_Up_tool

 Parsoid, which is well suited to Node, could be used for parsing data
 from the Wiktionary API

Just as a warning, parsing data from wiktionary into usable form is a
lot harder than it looks, so don't underestimate this step. (Or at
least it was several years ago when I last tried.)

--bawolff


Re: [Wikitech-l] Adventures in creating new repos / jenkins jobs

2014-02-28 Thread Antoine Musso
On 28/02/2014 01:36, Mark Holmquist wrote:
 I don't see the code getting checked out on Gallium, and the jobs are all
 marked LOST with no logs. I'm hopeful that this is an issue related to
 the repository still being empty, but this may be too much wishful thinking.
 
 https://gerrit.wikimedia.org/r/116008

A job reported as LOST always means that Zuul could not find the build
result in Jenkins.  This can be caused by various situations:

 - Jenkins died and thus did not report anything back

 - The job is not registered on the Gearman bus (the Gearman server is
integrated in Zuul and Jenkins is a client of it; Jenkins is supposed to
register its jobs with the Zuul Gearman server)


In this case, the created jobs have not been properly registered, because
job creation via Jenkins Job Builder did not work as expected.  Although
the jobs did get created, the Jenkins hook to register them in Gearman did
not trigger :-(



-- 
Antoine "hashar" Musso



[Wikitech-l] MediaWiki Language Extension Bundle 2014.02 release

2014-02-28 Thread Kartik Mistry
Hello all,

I would like to announce the release of MediaWiki Language Extension
Bundle 2014.02. This bundle is compatible with MediaWiki 1.22.2 and
MediaWiki 1.21.5 releases.

* Download: 
https://translatewiki.net/mleb/MediaWikiLanguageExtensionBundle-2014.02.tar.bz2
* sha256sum: 5c5636332b38a7ce9ac12fac74f0402afdc592aa58795b51dc4747877db340da

Quick links:
* Installation instructions are at: https://www.mediawiki.org/wiki/MLEB
* Announcements of new releases will be posted to a mailing list:
https://lists.wikimedia.org/mailman/listinfo/mediawiki-i18n
* Report bugs to: https://bugzilla.wikimedia.org
* Talk with us at: #mediawiki-i18n @ Freenode

Release notes for each extension are below.

-- Kartik Mistry

== Babel, CLDR, CleanChanges ==
* Only localisation updates.

== LocalisationUpdate ==
* README was updated to include better installation instructions.

== Translate ==
=== Noteworthy changes ===
* Allow capital letters in MediaWiki style variables (insertables)
* Bug 60500: Added AppleFFS module for iOS/Mac OS X Localizable.strings files
* Remove shortcut activated from "paste source". It's similar to the
"revert changes" button, which does not have the insertable class and
thus the number indicating the shortcut key won't be visible.
* Added new hook TranslateMessageGroupPathVariables
* Bug 61459: Removed $wgTranslateExtensionDirectory option.
* The magic-export.php script was updated to handle failures more gracefully.
* Bug 50954: In the translation interface, 'Add documentation' link
now changes to 'Edit documentation' as soon as documentation is added.
* Bug 54194: The ApiQueryMessageCollection module no longer throws
exceptions on invalid input.

== UniversalLanguageSelector ==
=== Noteworthy changes ===
* Detect tofu before applying any default fonts. See:
https://www.mediawiki.org/wiki/Universal_Language_Selector/WebFonts#How_webfonts_are_applied.3F
for technical documentation about how tofu detection works in ULS.
* Bug 60304: Added enableWebfonts preference. Each wiki can be
configured to load the fonts by default using the new global variable
$wgULSWebfontsEnabled. Its default value is true (to load fonts).
* ULS is now much lighter for the browser thanks to many changes:
** Bug 56292: All SVG images were optimized to reduce their size, by up
to 50% in some cases.
** I18n-related jquery.i18n and messages code is now loaded only after
the user interacts with ULS.
** We removed a dependency on a big JavaScript module which was no
longer needed to support anonymous preferences.
* Bug 60815: Add Marwari (rwr) and Ottoman Turkish (ota) to the
languages supported by ULS.

=== Fonts ===
* Added Iranian Serif and Iranian Sans Bold fonts.
* Removed Amiri font from Persian.
* Replaced Xerxes font with Artaxerxes.

=== Input methods ===
* Bug 53695: For languages which have no input methods, the "Use
native keyboard" option is now shown as selected by default.
* Added Venetian input method.

-- 
Kartik Mistry/કાર્તિક મિસ્ત્રી | IRC: kart_
{kartikm, 0x1f1f}.wordpress.com


[Wikitech-l] Eure Teilnahme wird bezahlt

2014-02-28 Thread Leonie Ehrl
Dear community,

My name is Leonie, I am 26 years old and a student of media and
communication science at the Université de Fribourg (Switzerland). In my
master's thesis I am studying YOU, the German-speaking Wikipedians from
Germany, Liechtenstein, Austria, and German-speaking Switzerland.

My research interest is to better understand the composition of the
German-speaking community. For this reason I depend on your help and would
be grateful for lively participation until Friday, 28 March 2014.

The following link to the online questionnaire takes you directly to the
survey:
https://student.unifr.ch/survey/go/index.php/341639/lang-de-informal

Answering it takes around 10 minutes. Your anonymity is, of course,
guaranteed.

In the spirit of unrestricted access to information, I will place my thesis
under a free license in July 2014. As a thank-you for your time and
support, I will also donate one euro or one Swiss franc per usable
questionnaire to the respective Wikimedia chapter.

Feel free to contact me with any questions: leonie.e...@outlook.com

Best regards,
Leonie

Re: [Wikitech-l] Eure Teilnahme wird bezahlt

2014-02-28 Thread Andre Klapper
Hi,

just for your interest, you sent this to wikitech-l@lists.wikimedia.org
which is an English language mailing list.

andre


On Fri, 2014-02-28 at 14:26 +0100, Leonie Ehrl wrote:
 snip

-- 
Andre Klapper | Wikimedia Bugwrangler
http://blogs.gnome.org/aklapper/



Re: [Wikitech-l] Eure Teilnahme wird bezahlt

2014-02-28 Thread Leonie Ehrl
Hi Andre,
thanks for your message. Indeed, I didn't know that this is an international 
mailing list. Rookie mistake! Wikimedia remains to be discovered :)
Cheers,
Leonie

 From: aklap...@wikimedia.org
 To: wikitech-l@lists.wikimedia.org
 Date: Fri, 28 Feb 2014 15:14:17 +0100
 Subject: Re: [Wikitech-l] Eure Teilnahme wird bezahlt
 
 Hi,
 
 just for your interest, you sent this to wikitech-l@lists.wikimedia.org
 which is an English language mailing list.
 
 andre
 
 
  On Fri, 2014-02-28 at 14:26 +0100, Leonie Ehrl wrote:
   snip

Re: [Wikitech-l] MediaWiki Security and Maintenance Releases: 1.22.3, 1.21.6 and 1.19.12

2014-02-28 Thread Chris Steipp
That was a mistake this release. We'll continue those going forward.
On Feb 27, 2014 7:56 PM, Matthew Walker mwal...@wikimedia.org wrote:

 I note that there are security fixes in these releases -- did I miss
 Chris' email about these patches, or are we moving away from the model
 where we send out an email to the list a couple of days before a release?

 ~Matt Walker
 Wikimedia Foundation
 Fundraising Technology Team


 On Thu, Feb 27, 2014 at 6:55 PM, Brian Wolff bawo...@gmail.com wrote:

   * (bug 61346) SECURITY: Make token comparison use constant time. It seems
     like our token comparison would be vulnerable to timing attacks. This
     will take constant time.
 
  Not to be a grammar nazi, but that should presumably be something
  along the lines of "Using constant time comparison will prevent this"
  instead of "This will take constant time", as that could be
  interpreted as "the attack would take constant time".
 
  --bawolff
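
For context on the fix being discussed: a constant-time comparison examines
every byte regardless of where the first mismatch occurs, so response timing
does not reveal how long a matching prefix an attacker has already guessed.
Here is a minimal sketch of the idea in TypeScript, using Node's
crypto.timingSafeEqual; this is only an illustration, not the actual
MediaWiki patch (which is in PHP):

  import { timingSafeEqual } from 'crypto';

  // Compare a secret token against user input in constant time.
  // timingSafeEqual requires equal-length buffers, so a length mismatch
  // is rejected up front without comparing any contents.
  function tokensMatch(expected: string, given: string): boolean {
    const a = Buffer.from(expected);
    const b = Buffer.from(given);
    return a.length === b.length && timingSafeEqual(a, b);
  }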
 

Re: [Wikitech-l] Eure Teilnahme wird bezahlt

2014-02-28 Thread Daniel Kinzler
On 28.02.2014 15:27, Leonie Ehrl wrote:
 Hi Andre,
 thanks for your message. Indeed, I didn't know that this is an international
 mailing list. Rookie mistake! Wikimedia remains to be discovered :)
 Cheers,
 Leonie

Not only is it international, it's also about MediaWiki, the software that runs
Wikimedia wikis like Wikipedia.

If you want the German language Wikipedia community, try the wikide-l list.

-- daniel



[Wikitech-l] Labs migration starts on Tuesday

2014-02-28 Thread Andrew Bogott
Starting on Tuesday, March 4th, the new Labs install in the eqiad data 
center will be open for business.  Two dramatic things will happen on 
that day:  Wikitech will gain the ability to create instances in eqiad, 
and Wikitech will lose the ability to create new instances in pmtpa.


About a month from Tuesday, the pmtpa labs install will be shut down. 
If you want your project to still be up and running in April, you must 
take action!


We are committed to not destroying any instances or data during the 
shutdown, but projects that remain untouched by human hands during the 
next few weeks will be mothballed by staff: the data will be preserved 
but most likely compressed and archived, and instances will be left in a 
shutdown state.


(Note:  Toollabs users can sit tight for a bit; Coren will provide 
specific migration instructions for you shortly.)


I've written a migration guide, here: 
https://wikitech.wikimedia.org/wiki/Labs_Eqiad_Migration_Howto It's a 
work in progress, so check back frequently.  Please don't hesitate to 
ask questions on IRC, make suggestions for guide improvements, or 
otherwise question this process.  Quite a few of the suggested steps in 
that guide require action on the part of a Labs op -- for that purpose 
we've created a bugzilla tracking bug, 62042.  To add a migration bug 
that links to the tracker, use this link: 
https://bugzilla.wikimedia.org/enter_bug.cgi?product=Wikimedia%20Labs&component=Infrastructure&blocked=62042



At the very least, please visit this page and edit it with your project 
migration plans: 
https://wikitech.wikimedia.org/wiki/Labs_Eqiad_Migration_Progress 
Projects that have no activity on that page will be early candidates for 
mothballing.  If you want me to delete your project, please note that as 
well -- that will allow us to free up resources for future projects.


I am cautiously optimistic about this migration.  Most of our testing 
has gone fairly well, so a lot of you should find the process smooth and 
easy.  That said, we're all going to be early adopters of this tech, so 
I appreciate your patience and understanding when inevitable bugs shake 
out.  I look forward to hearing about them on IRC!


-Andrew


Re: [Wikitech-l] Adventures in creating new repos / jenkins jobs

2014-02-28 Thread Antoine Musso
On 28/02/2014 01:28, Matthew Walker wrote:
 Hey all,
 
 I recently had a new repository created; and I wanted to create some jobs
 for it.
 
 I dutifully created and had merged:
 https://gerrit.wikimedia.org/r/#/c/115968/
 https://gerrit.wikimedia.org/r/#/c/115967/
 
 Hashar told me I then needed to follow the instructions on [1] to push the
 jobs to jenkins. Running the script myself was nothing but pain; it kept
 erroring out while trying to create the job. Marktraceur managed to create
 the jobs after much kicking down the door, a.k.a. running the script
 multiple times.
 
 It appears that the problem is that
 https://integration.mediawiki.org/ci/createItem?name=mwext-FundraisingChart-lint
 301s to
 https://integration.mediawiki.org/?...
 
 So that's a problem? We're still not sure why Mark was able to create the
 jobs with perseverance though.
snip


The proper URL is https://integration.wikimedia.org/ci/ ;
integration.mediawiki.org redirects to / (though it does not discard
the query string, which is a bug).

I have updated the wiki page; the jenkins_jobs.ini file should have:

 [jenkins]
 url=https://integration.wikimedia.org/ci/
 user=...
 password=...  # actually a user API token


While deploying some jobs today, I was hit by the issue of jobs being
created but not registered in Gearman.  When posting to the Jenkins API,
it issues a redirect to a status page which is cached by the misc Varnish.
So we need to send headers to prevent page caching :/


-- 
Antoine "hashar" Musso



Re: [Wikitech-l] Adventures in creating new repos / jenkins jobs

2014-02-28 Thread Antoine Musso
On 28/02/2014 01:28, Matthew Walker wrote:
 
 Would it make sense to have QChris / ^demon create the standard jobs when
 they create the repository?


Hello,

That is a good idea.  Moreover, we could ensure Bugzilla has a component.
We might want to automate a lot of the workflow as well.

Two things that could help a bit are to run actions after merges to the
Zuul and Jenkins Job Builder configuration repositories: reload Zuul
automatically, and generate jobs post-merge.
cheers,

-- 
Antoine "hashar" Musso



Re: [Wikitech-l] GSOC 2014 idea

2014-02-28 Thread Gabriel Wicke
Hi Roman!

On 02/28/2014 01:24 AM, Brian Wolff wrote:
 On 2/28/14, Roman Zaynetdinov romanz...@gmail.com wrote:
 Help people read complex texts by providing inline translations for
 unknown words. As a non-native English-speaking student, I sometimes find
 it hard to read complicated texts or articles, which is why I have to
 search for a translation or description every time. Why not simplify this
 and change the flow from translate and understand to translate, learn and
 understand?

This sounds like a great idea.

 I have two approaches in mind right now. The first is to build a website on
 Node.js with an open API for users. Parsoid, which is well suited to Node,
 could be used for parsing data from the Wiktionary API. A small JavaScript
 widget would also be required for the front end.

You could basically write a Node service that pulls in the Parsoid HTML
for a given Wiktionary term, extracts the info you need from the DOM,
and returns it in a JSON response to a client-side library.
Alternatively (or as a first step), you could download the Parsoid HTML
of the Wiktionary article on the client and extract the info there. This
could even be implemented as a gadget. We recently set liberal CORS
headers to make this easy.
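
To make the gadget variant concrete, here is a rough TypeScript sketch of
the client-side flow. The Parsoid URL shape follows [1] below; the
'ol > li' selector and the showTooltip callback are illustrative
assumptions, not a stable API:

  // Fetch the Parsoid HTML for a Wiktionary page over CORS and pull
  // definition text out of the DOM (browser environment assumed).
  async function lookUp(term: string): Promise<string[]> {
    const url = 'http://parsoid-lb.eqiad.wikimedia.org/enwiktionary/'
      + encodeURIComponent(term);
    const resp = await fetch(url);
    if (!resp.ok) {
      throw new Error('lookup failed: ' + resp.status);
    }
    const doc = new DOMParser().parseFromString(await resp.text(), 'text/html');
    // Definitions are rendered as ordered-list items; collect their text.
    return Array.from(doc.querySelectorAll('ol > li'),
      li => (li.textContent || '').trim());
  }

  // e.g. in a click handler on the selected word:
  //   lookUp('foo').then(defs => showTooltip(defs.slice(0, 3)));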

 Parsoid, which is well suited to Node, could be used for parsing data
 from the Wiktionary API
 
 Just as a warning, parsing data from wiktionary into usable form is a
 lot harder than it looks, so don't underestimate this step. (Or at
 least it was several years ago when I last tried.)

The Parsoid rendering (e.g. [1]) has pretty much all semantic
information in the DOM. There might still be wiktionary-specific issues
that we don't know about yet, but tasks like extracting template
parameters or the rendering of specific templates (IPA,..) are already
straightforward. Also see the DOM spec [2] for background.

Gabriel

[1]: http://parsoid-lb.eqiad.wikimedia.org/enwiktionary/foo
 Other languages via frwiktionary, fiwiktionary, ...
[2]: https://www.mediawiki.org/wiki/Parsoid/MediaWiki_DOM_spec


[Wikitech-l] captcha idea: proposal for gnome outreach for women 14

2014-02-28 Thread Mansi Gokhale
Hello,

These are some approaches I can think of instead of a text-based captcha.

The image idea where users are asked to spot the odd one out, as
demonstrated, or find all the similar images, as mentioned here:
https://www.mediawiki.org/wiki/CAPTCHA

Also, a picture with a part chipped out could be shown, and chipped pieces
could be given as options,

like finding the missing part of a jigsaw puzzle.

The image which would be shown is http://imgur.com/uefeb08

http://imgur.com/KEJqCg3 is the picture which would be the correct option.

The other options could be rotated versions of this, which would not be so
easy for a bot to match (unless it somehow worked some digital
processing algorithm and matched the color gradients or something like
that).

This is a good option for people who do not know English or are illiterate
and maybe would not understand questions like "is this a bird, a plane,
superman?" after being shown a picture.

Tell me what you think.

(Sorry to upload those images to imgur; I don't know how to put them on the
wiki. Hope that is ok.)

I have posted this on the CAPTCHA talk page also:
https://www.mediawiki.org/wiki/Talk:CAPTCHA

Re: [Wikitech-l] captcha idea: proposal for gnome outreach for women 14

2014-02-28 Thread Arthur Richards
I think this is an intriguing approach - particularly for use cases on
mobile devices. We display captchas as necessary through MobileFrontend
when they are triggered, but the mobile experience is horrible (arguably
the whole captcha experience is horrible regardless of the medium, but
that's another conversation). As long as we need to surface captchas,
something non-text based, especially if it didn't require typing, would be
preferable.


On Fri, Feb 28, 2014 at 10:07 AM, Mansi Gokhale gokhalemans...@gmail.com wrote:

 snip




-- 
Arthur Richards
Software Engineer, Mobile
[[User:Awjrichards]]
IRC: awjr
+1-415-839-6885 x6687

Re: [Wikitech-l] [Labs-l] Labs migration starts on Tuesday

2014-02-28 Thread Petr Bena
I am confused about the /data mountpoint.

You say:

"The contents of your shared /data/project or /home directories will
not be immediately available in eqiad."

Does it mean that if I decide to move the content by hand, using scp,
it will be overwritten anyway sooner or later? How do I decide whether I
want to have this content moved by ops or by myself? What if I want to
move just some items from /data/project and the remaining data can be
safely nuked?

On Fri, Feb 28, 2014 at 4:59 PM, Andrew Bogott abog...@wikimedia.org wrote:
 snip


Re: [Wikitech-l] GSOC 2014 idea

2014-02-28 Thread Niklas Laxström
2014-02-28 11:09 GMT+02:00 Roman Zaynetdinov romanz...@gmail.com:
 From which source should the data be gathered?

 Wiktionary is the best candidate: it is open and has a wide
 database. It is also well suited to growing the project by adding
 different languages.

It's not obvious why you have reached this conclusion.

1) There are many Wiktionaries, and they do not all work the same or
have the same content.
2) The Wiktionary data is relatively free-form text, so it is hard to
parse to find the relevant bits.
3) Dozens of people have mined Wiktionary already. It would make sense
to see if they have made the resulting databases available.
4) There are many sources of data, some of them also open, which can
have better coverage, or coverage of speciality areas where
Wiktionaries are lacking.
5) I expect that best results will be achieved by using multiple data sources.

 Growth opportunities

 I am living in Finland right now, and I don't know Finnish as well as I
 should to understand the locals; this project could therefore be expanded
 with support for more languages, helping people like me read, learn, and
 understand texts in foreign languages.

I hope you have enjoyed your stay here. I do not know how much Finnish you
have learned, but after a while it should be obvious that just searching
for the exact string the user clicked or selected will not work, because
of the agglutinative nature of the language. I advocate for features which
work in all languages (or at least in many :). If you implement this for
English only first, it is likely that you will have to rewrite it to
support other languages.

  -Niklas


Re: [Wikitech-l] Adventures in creating new repos / jenkins jobs

2014-02-28 Thread Mark Holmquist
On Fri, Feb 28, 2014 at 04:58:51PM +0100, Antoine Musso wrote:
 The proper URL is https://integration.wikimedia.org/ci/ ;
 integration.mediawiki.org redirects to / (though it does not discard
 the query string, which is a bug).
 
 I have updated the wiki page; the jenkins_jobs.ini file should have:
 
  [jenkins]
  url=https://integration.wikimedia.org/ci/
  user=...
  password=...  # actually a user API token

I have had this the entire time we were trying to create the jobs -- it
did not help; I still saw the issue.

-- 
Mark Holmquist
Software Engineer, Multimedia
Wikimedia Foundation
mtrac...@member.fsf.org
https://wikimediafoundation.org/wiki/User:MHolmquist



Re: [Wikitech-l] Adventures in creating new repos / jenkins jobs

2014-02-28 Thread Antoine Musso
On 28/02/2014 18:39, Mark Holmquist wrote:
 I have had this the entire time we were trying to create the jobs - it
 did not help, I still saw the issue.

Got any trace to share?  On job creation, a POST is sent which is then
redirected to a GET; the GET has previously been cached by Varnish and
says the job does not exist.  That causes Jenkins Job Builder to choke
with an error saying the created job does not exist :-(

Workaround: disable Varnish caching entirely.

-- 
Antoine "hashar" Musso



Re: [Wikitech-l] Adventures in creating new repos / jenkins jobs

2014-02-28 Thread Mark Holmquist
On Fri, Feb 28, 2014 at 06:43:04PM +0100, Antoine Musso wrote:
 Got any trace to share?

marktraceur@midvalley-the-hornfreak:~/projects/wikimedia/integration/jenkins-job-builder$ jenkins-jobs --conf etc/jenkins_jobs.ini update config/ 'mwext-MultimediaViewer-do-something'
INFO:root:Updating jobs in config/ (['mwext-MultimediaViewer-do-something'])
INFO:jenkins_jobs.builder:Creating jenkins job mwext-MultimediaViewer-do-something
https://integration.wikimedia.org/ci/createItem?name=mwext-MultimediaViewer-do-something
Traceback (most recent call last):
  File "/usr/local/bin/jenkins-jobs", line 9, in <module>
    load_entry_point('jenkins-job-builder==0.0.584.07fa712', 'console_scripts', 'jenkins-jobs')()
  File "/home/marktraceur/projects/wikimedia/integration/jenkins-job-builder/jenkins_jobs/cmd.py", line 127, in main
    jobs = builder.update_job(options.path, options.names)
  File "/home/marktraceur/projects/wikimedia/integration/jenkins-job-builder/jenkins_jobs/builder.py", line 581, in update_job
    self.jenkins.update_job(job.name, job.output())
  File "/home/marktraceur/projects/wikimedia/integration/jenkins-job-builder/jenkins_jobs/builder.py", line 476, in update_job
    self.jenkins.create_job(job_name, xml)
  File "/usr/local/lib/python2.7/dist-packages/python_jenkins-0.2.1-py2.7.egg/jenkins/__init__.py", line 400, in create_job
    raise JenkinsException('create[%s] failed' % (name))
jenkins.JenkinsException: create[mwext-MultimediaViewer-do-something] failed
marktraceur@midvalley-the-hornfreak:~/projects/wikimedia/integration/jenkins-job-builder$

I just made a dummy job - commit here:
https://gerrit.wikimedia.org/r/116123

Obviously nothing special, but the issue is in the HTTP request code
anyway.

Cheers,

-- 
Mark Holmquist
Software Engineer, Multimedia
Wikimedia Foundation
mtrac...@member.fsf.org
https://wikimediafoundation.org/wiki/User:MHolmquist



[Wikitech-l] Captcha Idea Proposal for GSOC 2014

2014-02-28 Thread Aalekh Nigam
snip

Re: [Wikitech-l] Captcha Idea Proposal for GSOC 2014

2014-02-28 Thread Sumana Harihareswara
Hi and thanks for being interested in Wikimedia!

Please take a look at how your email looked to a lot of people:
http://imgur.com/4OuPSyN

(You can see it in our mailing list archives:
http://lists.wikimedia.org/pipermail/wikitech-l/2014-February/074812.html )

Could you re-send it with your numbered points separated better, so we can
read it?  Thanks!

Sumana Harihareswara
Engineering Community Manager
Wikimedia Foundation

Re: [Wikitech-l] Captcha Idea Proposal for GSOC 2014

2014-02-28 Thread Aalekh Nigam
I figured out the following ways we can approach the project:
1) Alphabetical order captcha: We can use HTML5's drag and drop API to sort a
particular set of images into one category. For example, in the demo here, I
made a collection of different words starting with the letters A, B, and C;
as an output I grouped words starting with the letter A apart from words
starting with the letters B and C. As I used text in this example, we could
instead use images of different animals, such as cats and dogs, and by drag
and drop group the images of cats and those of dogs into different
categories.
2) Annotation captcha: We can use images with annotations from Commons,
determine the subcategory the annotations belong to, and then give relevant
options to the users. For example, in the file we can search the names of the
different annotations to find what they correspond to on Wikipedia (the names
given here are those of mountains) and then give the options most relevant to
the image.
3) Effect captcha: We can use as the question an image which has been changed
by an effect produced by PHP's GD library, then use the same file with
another effect and ask the user to match the two files. For example, image1
can be used as a question asking the user to click on the image that matches
the question image, and as the answer we can give this spiral version of the
original image. Similarly, we can apply filters to different images,
producing different options, and ask the user for the right answer.
4) Direct captcha: We can ask the user direct questions, like asking them to
select a cat out of options consisting of images of cats and humans; an
example by pginer demonstrates this.
5) Ask the user to click on a given effect: ask the user to click on the
images with a spiral effect out of options consisting of images with spiral
and other effects (for example, greyscale).
6) Drag and drop characters into the correct place: We can use HTML5's drag
and drop API to ask the user to form a particular letter or digit out of the
character pieces provided. Here is an example of forming the character "A"
and the digit "8" out of the same pieces.
This drag and drop capability can be further enhanced to form particular
shapes, for example forming a clip art from a particular set of shape pieces;
the image given here inserts the correct nose, as asked in the question, out
of the possible options provided.
Most importantly, I think the creation of an index system would be fruitful,
since it would rank inappropriate images on the basis of users' responses to
a provided captcha (an image's rank goes negative if users need to reload the
captcha). As time passes, this will provide us with relevant images which are
user-friendly and equally secure to use.
In addition, I sincerely appreciate the point mentioned by Gmansi about
creating a jigsaw puzzle from the images, but in my view there should be a
listing of some particular categories of images, and those ranked higher in
the indexing system should be used for the jigsaw puzzle.
As additional help we can use the Asirra extension to make our extension
smarter.
Please give your valuable suggestions so we can work to improve this amazing
project, at https://www.mediawiki.org/wiki/Talk:CAPTCHA :)
Thank you,
Aalekh Nigam
aalekhN
From: Aalekh Nigam <aalekh1...@rediffmail.com>
Sent: Fri, 28 Feb 2014 23:32:16 
To: wikitech-l@lists.wikimedia.org <wikitech-l@lists.wikimedia.org>
Subject: Captcha Idea Proposal for GSOC 2014
snip

Re: [Wikitech-l] Drop support for PHP 5.3

2014-02-28 Thread Matthew Flaschen

On 02/25/2014 10:05 PM, Brad Jorsch (Anomie) wrote:

Namespaces do have opportunity to allow for shortened references within the
extension. Although potentially with confusion, particularly if the
shortened reference is hiding a global class of the same name (e.g.
aliasing Extension\User to User).


Yes, that's the advantage.  I wouldn't be so contrary as to make 
MyExtension\User, though.  But I do have a PageFilter class safely 
namespaced like this, which could easily end up used as a name by core 
(but currently isn't) or an extension.


Matthew Flaschen


Re: [Wikitech-l] captcha idea: proposal for gnome outreach for women 14

2014-02-28 Thread Brad Jorsch (Anomie)
On Fri, Feb 28, 2014 at 12:07 PM, Mansi Gokhale gokhalemans...@gmail.com wrote:

 The image idea where users are asked to spot the odd one out, as
 demonstrated, or find all the similar images, as mentioned here:
 https://www.mediawiki.org/wiki/CAPTCHA


If you display 8 images and the user has to pick one, then even by random
guessing the attacker has a 12.5% chance of passing the captcha. That's not
good at all. Finding all matching is slightly better since it reduces the
guessability (1/256 for 8 images), but still not very good. A traditional
captcha using only A-Z is 1/308915776. To do as well with image picking,
you'd need to ask the user to choose the matches from a set of about 28.
Adding in numbers 2-9 is 1/1544804416, needing a set of about 31 images.
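
The arithmetic behind those figures, as a quick TypeScript check (a
find-all captcha over n images has 2^n possible answer subsets, versus
alphabet^6 for a 6-character text captcha):

  const sixLetterAZ = Math.pow(26, 6);  // 308915776 possible answers
  const withDigits = Math.pow(34, 6);   // A-Z plus 2-9 = 34 chars: 1544804416
  // Picking 1 of 8 images: 1/8 = 12.5%. Find-all over 8: 1/2^8 = 1/256.
  // To match the text captchas, a find-all set needs 2^n >= the above:
  console.log(Math.log2(sixLetterAZ));  // ~28.2 -> a set of about 28 images
  console.log(Math.log2(withDigits));   // ~30.5 -> a set of about 31 images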

The set of possible images also needs to be very large and the
categorization private.
https://www.mediawiki.org/wiki/Talk:Requests_for_comment/CAPTCHA#Issue:_image_classification_CAPTCHAs_need_a_secret_corpus
goes into much more detail on this issue.

Then there's the issue of different interpretation. Take for example
https://www.mediawiki.org/wiki/File:Find-all-captcha-idea.png. Is the
second image wearing glasses? Or is that a lorgnette or something like
opera glasses, both of which are held in front of the eyes rather than worn?

https://www.mediawiki.org/wiki/File:Find-the-different-captcha-idea.png has
a similar problem. The first image is the only one with a cigarette, and
the only one with non-realistic coloring. The second is the only bald one,
and the only one with something resembling a lorgnette, and the only one
not looking in the general direction of the camera, and the only one with a
book. The fourth is the only child. The sixth is the only obvious female
(I'm not sure about the cat). The eighth is the only one smiling, and the
only one with visible teeth.

 Also, a picture with a part chipped out could be shown, and chipped
 pieces could be given as options, like finding the missing part of a
 jigsaw puzzle.

  The image which would be shown is http://imgur.com/uefeb08

  http://imgur.com/KEJqCg3 is the picture which would be the correct option.

  The other options could be rotated versions of this, which would not be
  so easy for a bot to match (unless it somehow worked some digital
  processing algorithm and matched the color gradients or something like
  that).


That seems very simple for a computer to solve. Just find the option with
minimal difference along the join edges, which is probably easier than what
they already do for OCRing text captchas.
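
A sketch of that attack in TypeScript, assuming the solver can read out the
pixel values along each candidate join edge (the Edge representation here
is an illustrative assumption):

  type Edge = number[];  // flattened channel values along one join edge

  // Sum of absolute differences between two equal-length edges.
  function edgeDistance(a: Edge, b: Edge): number {
    return a.reduce((sum, v, i) => sum + Math.abs(v - b[i]), 0);
  }

  // Pick the candidate whose join edge best matches the question image's.
  function solveJigsaw(questionEdge: Edge, options: Edge[]): number {
    let best = 0;
    for (let i = 1; i < options.length; i++) {
      if (edgeDistance(questionEdge, options[i])
          < edgeDistance(questionEdge, options[best])) {
        best = i;
      }
    }
    return best;
  }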


As far as captchas go, I still think https://xkcd.com/810/ is the way to go.


-- 
Brad Jorsch (Anomie)
Software Engineer
Wikimedia Foundation

Re: [Wikitech-l] captcha idea: proposal for gnome outreach for women 14

2014-02-28 Thread Brad Jorsch (Anomie)
On Fri, Feb 28, 2014 at 1:29 PM, Brad Jorsch (Anomie) bjor...@wikimedia.org
 wrote:

 A traditional captcha using only A-Z is 1/308915776.


That should be "a traditional *6 letter* captcha using only A-Z".

Sorry for the noise.

-- 
Brad Jorsch (Anomie)
Software Engineer
Wikimedia Foundation

Re: [Wikitech-l] Captcha Idea Proposal for GSOC 2014

2014-02-28 Thread Brad Jorsch (Anomie)
Your links didn't work at all, so I can't give specific comments.

On Fri, Feb 28, 2014 at 1:02 PM, Aalekh Nigam aalekh1...@rediffmail.com wrote:

 1) Alphabetical order captcha: We can use HTML5's drag and drop API to sort
 a particular set of images into one category. For example, in the demo
 here, I made a collection of different words starting with the letters A,
 B, and C; as an output I grouped words starting with the letter A apart
 from words starting with the letters B and C. As I used text in this
 example, we could instead use images of different animals, such as cats and
 dogs, and by drag and drop group the images of cats and those of dogs into
 different categories.


What if someone thinks your picture of a dog is a "wolf", or "puppy", or
"hound", or "terrier", or "animal", etc.? Or what if the user identifies
your images in Spanish or Chinese rather than English, resulting in a
different order?

Also, how easy would it be for the spambot to download the entire list of
images+names and just brute force it?

And what are the bot's chances by randomly guessing? If there are 8 images
to sort, it's a 1/40320 chance. Which isn't very good as far as captchas
go; 6 letters A-Z is 1/308915776.
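
Checking those odds quickly in TypeScript (8 images can be arranged in 8!
different orders):

  const factorial = (n: number): number => (n <= 1 ? 1 : n * factorial(n - 1));
  console.log(factorial(8));     // 40320 -> a 1/40320 random-guess chance
  console.log(Math.pow(26, 6));  // 308915776 for 6 letters A-Z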


 2) Annotation captcha: We can use images with annotations from Commons,
 determine the subcategory the annotations belong to, and then give relevant
 options to the users. For example, in the file we can search the names of
 the different annotations to find what they correspond to on Wikipedia (the
 names given here are those of mountains) and then give the options most
 relevant to the image.


What's to stop the spambot from finding the image on Commons?

And looking at that category, are users really going to be able to reliably
identify the Fiat Grande Punto in
https://commons.wikimedia.org/wiki/File:%22_01_-_ITALY_-_ALFA_ROMEO_SPIDER_SILVER_15.jpg,
or figure out WTF UP 5 and UP 6 are supposed to be in
https://commons.wikimedia.org/wiki/File:%22_12_-_ITALY_-_Serie_UP_di_Gaetano_Pesce_UP_5_e_6_al_Triennale_Design_Museum_di_Milano_4.jpg,
or Colli Euganei in
https://commons.wikimedia.org/wiki/File:%22_12_-_ITALY-_Sunset_in_Cavarzere_08.JPG,
or identify the birds by scientific name in
https://commons.wikimedia.org/wiki/File:-_Plastic_boxes_-.jpg, or guess
which chloroplast (in German!) to pick in
https://commons.wikimedia.org/wiki/File:03-10_Mnium2.jpg?



 3) Effect captcha: We can use as the question an image which has been
 changed by an effect produced by PHP's GD library, then use the same file
 with another effect and ask the user to match the two files. For example,
 image1 can be used as a question asking the user to click on the image that
 matches the question image, and as the answer we can give this spiral
 version of the original image. Similarly, we can apply filters to different
 images, producing different options, and ask the user for the right answer.


Spambots already solve this sort of thing when OCRing text captchas.


 4) Direct captcha: We can ask the user direct questions, like asking them
 to select a cat out of options consisting of images of cats and humans; an
 example by pginer demonstrates this.


I just replied to this idea at
http://lists.wikimedia.org/pipermail/wikitech-l/2014-February/074816.html


 5) Ask the user to click on a given effect: ask the user to click on the
 images with a spiral effect out of options consisting of images with spiral
 and other effects (for example, greyscale).


That requires people to actually know what the effects' names are, which
doesn't seem particularly accessible. And again, OCRing is probably harder
for bots.


 6) Drag and drop characters into the correct place: We can use HTML5's
 drag and drop API to ask the user to form a particular letter or digit out
 of the character pieces provided. Here is an example of forming the
 character "A" and the digit "8" out of the same pieces. This drag and drop
 capability can be further enhanced to form particular shapes, for example
 forming a clip art from a particular set of shape pieces; the image given
 here inserts the correct nose, as asked in the question, out of the
 possible options provided. Most importantly, I think the creation of an
 index system would be fruitful, since it would rank inappropriate images
 on the basis of users' responses to a provided captcha (an image's rank
 goes negative if users need to reload the captcha). As time passes, this
 will provide us with relevant images which are user-friendly and equally
 secure to use. In addition, I sincerely appreciate the point mentioned by
 Gmansi about creating a jigsaw puzzle from the images, but in my view there
 should be a listing of some particular categories of images, and those
 ranked higher in the indexing system should be used for the jigsaw puzzle.
snip

Re: [Wikitech-l] GSOC 2014 idea

2014-02-28 Thread Roman Zaynetdinov
Hi Niklas, I know that in Finnish each word is inflected, the same as in
Russian, which is why it causes problems for translation. Right now I am
looking for solutions which can help find the original word. I used this
language as an example to show the purpose of the tool; of course, after
English is implemented, other languages could be added with wider support.


2014-02-28 19:30 GMT+02:00 Niklas Laxström niklas.laxst...@gmail.com:

 2014-02-28 11:09 GMT+02:00 Roman Zaynetdinov romanz...@gmail.com:
  From which source gather the data?
 
  Wiktionary is the best candidate, it is an open source and it has a wide
  database. It also suits for growing your project by adding different
  languages.

 It's not obvious why you have reached this conclusion.

 1) There are many Wiktionaries, and they do not all work the same or
 have the same content.
 2) The Wiktionary data is relatively free form text, so it is hard to
 parse to find the relevant bits.
 3) Dozens of people have mined Wiktionary already. It would make sense
 to see if they have put the resulting database available.
 4) There are many sources of data, some of them also open, which can
 have better coverage, or coverage on speciality areas where
 Wiktionaries are lacking.
 5) I expect that best results will be achieved by using multiple data
 sources.

  Growth opportunities
 
  I am leaving in Finland right now and I don't know Finnish as I should to
  understand locals, therefore this project can be expanded by adding more
  languages support for helping people like me reading, learning and
  understanding texts in foreign languages.

 I hope you have enjoyed your stay here. I do not know how much Finnish
 you have learned, but after a while it should be obvious that just
 searching for the exact string the user clicked or selected will not
 work, because of the agglutinative nature of the language. I advocate
 features which work in all languages (or at least in many :). If you
 implement this for English only first, it is likely that you will have
 to rewrite it to support other languages.
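
To make the agglutination problem concrete: the simplest workaround is a
lookup that falls back from the exact surface form to a handful of
stripped case endings. A deliberately naive sketch (the ending list is
incomplete and ignores consonant gradation; a real implementation would
want a proper morphological analyser, e.g. Omorfi, rather than string
surgery):

  // dictionary is a plain object mapping known base forms to entries.
  var FINNISH_CASE_ENDINGS = ['ssa', 'ssä', 'sta', 'stä', 'lla', 'llä',
                              'lta', 'ltä', 'n'];

  function findBaseForm(word, dictionary) {
    if (dictionary[word]) {
      return word; // exact match
    }
    for (var i = 0; i < FINNISH_CASE_ENDINGS.length; i++) {
      var ending = FINNISH_CASE_ENDINGS[i];
      if (word.length > ending.length &&
          word.slice(-ending.length) === ending) {
        var candidate = word.slice(0, -ending.length);
        if (dictionary[candidate]) {
          return candidate; // e.g. "talossa" -> "talo"
        }
      }
    }
    return null; // this is where real morphology is needed
  }

  // findBaseForm('talossa', { talo: 1 }) === 'talo'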

   -Niklas

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Adventures in creating new repos / jenkins jobs

2014-02-28 Thread Matthew Walker
Yep; that's what I was saying above: the REST call to create the job
301-redirects back to integration.wikimedia.org/ when it should redirect
to integration.wikimedia.org/ci.

~Matt Walker
Wikimedia Foundation
Fundraising Technology Team


On Fri, Feb 28, 2014 at 9:55 AM, Mark Holmquist mtrac...@member.fsf.org wrote:

 On Fri, Feb 28, 2014 at 06:43:04PM +0100, Antoine Musso wrote:
  Got any trace to share?

 marktraceur@midvalley-the-hornfreak:~/projects/wikimedia/integration/jenkins-job-builder$
 jenkins-jobs --conf etc/jenkins_jobs.ini update config/ 'mwext-MultimediaViewer-do-something'
 INFO:root:Updating jobs in config/ (['mwext-MultimediaViewer-do-something'])
 INFO:jenkins_jobs.builder:Creating jenkins job mwext-MultimediaViewer-do-something
 https://integration.wikimedia.org/ci/createItem?name=mwext-MultimediaViewer-do-something
 Traceback (most recent call last):
   File "/usr/local/bin/jenkins-jobs", line 9, in <module>
     load_entry_point('jenkins-job-builder==0.0.584.07fa712', 'console_scripts', 'jenkins-jobs')()
   File "/home/marktraceur/projects/wikimedia/integration/jenkins-job-builder/jenkins_jobs/cmd.py", line 127, in main
     jobs = builder.update_job(options.path, options.names)
   File "/home/marktraceur/projects/wikimedia/integration/jenkins-job-builder/jenkins_jobs/builder.py", line 581, in update_job
     self.jenkins.update_job(job.name, job.output())
   File "/home/marktraceur/projects/wikimedia/integration/jenkins-job-builder/jenkins_jobs/builder.py", line 476, in update_job
     self.jenkins.create_job(job_name, xml)
   File "/usr/local/lib/python2.7/dist-packages/python_jenkins-0.2.1-py2.7.egg/jenkins/__init__.py", line 400, in create_job
     raise JenkinsException('create[%s] failed' % (name))
 jenkins.JenkinsException: create[mwext-MultimediaViewer-do-something] failed
 marktraceur@midvalley-the-hornfreak:~/projects/wikimedia/integration/jenkins-job-builder$

 I just made a dummy job - commit here:
 https://gerrit.wikimedia.org/r/116123

 Obviously nothing special, but the issue is in the HTTP request code
 anyway.

 Cheers,

 --
 Mark Holmquist
 Software Engineer, Multimedia
 Wikimedia Foundation
 mtrac...@member.fsf.org
 https://wikimediafoundation.org/wiki/User:MHolmquist


 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] GSOC 2014 idea

2014-02-28 Thread Roman Zaynetdinov
Thanks a lot for the feedback; I think I can discuss these options with
my mentor, I hope :).


2014-02-28 18:51 GMT+02:00 Gabriel Wicke gwi...@wikimedia.org:

 Hi Roman!

 On 02/28/2014 01:24 AM, Brian Wolff wrote:
  On 2/28/14, Roman Zaynetdinov romanz...@gmail.com wrote:
  Help people in reading complex texts by providing inline translation for
  unknown words. For me as a non-native English speaker student sometimes
 is
  hard to read complicated texts or articles, that's why I need to search
 for
  translation or description every time. Why not to simplify this and
 change
  the flow from translate and understand to translate, learn and
 understand?

 This sounds like a great idea.

  There are two ways in my mind right now. First is to make a web-site
 built
  on Node.js with open API for users. Parsoid could be used for parsing
 data
  from Wiktionary API which is suitable for Node. A small JavaScript
 widget
  is also required for front-end representation.

 You could basically write a node service that pulls in the Parsoid HTML
 for a given wiktionary term and extracts the info you need from the DOM
 and returns it in a JSON response to a client-side library.
 Alternatively (or as a first step), you could download the Parsoid HTML
 of the wiktionary article on the client and extract the info there. This
 could even be implemented as a gadget. We recently set liberal CORS
 headers to make this easy.
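
A rough sketch of that gadget variant (the parsoid-lb endpoint is the one
from [1] below; the 'ol > li' selector is only a guess at where the
numbered definitions land and would need checking against real
enwiktionary output):

  // Fetch the Parsoid HTML for a term over CORS and pull out what look
  // like definition list items. Selector and response handling are
  // assumptions, not a tested recipe.
  function fetchDefinitions(term, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'http://parsoid-lb.eqiad.wikimedia.org/enwiktionary/' +
        encodeURIComponent(term));
    xhr.onload = function () {
      var doc = new DOMParser().parseFromString(xhr.responseText, 'text/html');
      var items = doc.querySelectorAll('ol > li');
      var definitions = [];
      for (var i = 0; i < items.length; i++) {
        definitions.push(items[i].textContent.trim());
      }
      callback(null, { term: term, definitions: definitions });
    };
    xhr.onerror = function () {
      callback(new Error('request failed'));
    };
    xhr.send();
  }

  // fetchDefinitions('foo', function (err, data) { console.log(data); });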

  Parsoid could be used for parsing data
  from Wiktionary API which is suitable for Node
 
  Just as a warning, parsing data from wiktionary into usable form is a
  lot harder than it looks, so don't underestimate this step. (Or at
  least it was several years ago when I last tried.)

 The Parsoid rendering (e.g. [1]) has pretty much all semantic
 information in the DOM. There might still be wiktionary-specific issues
 that we don't know about yet, but tasks like extracting template
 parameters or the rendering of specific templates (IPA,..) are already
 straightforward. Also see the DOM spec [2] for background.

 Gabriel

 [1]: http://parsoid-lb.eqiad.wikimedia.org/enwiktionary/foo
  Other languages via frwiktionary, fiwiktionary, ...
 [2]: https://www.mediawiki.org/wiki/Parsoid/MediaWiki_DOM_spec

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Fwd: [Wikimedia-l] Call for Individual Engagement Grant proposals and committee members

2014-02-28 Thread Quim Gil
fyi


 Original Message 
Subject: [Wikimedia-l] Call for Individual Engagement Grant proposals
and committee members
Date: Fri, 28 Feb 2014 11:04:47 -0800
From: Siko Bouterse sboute...@wikimedia.org
Reply-To: Wikimedia Mailing List wikimedi...@lists.wikimedia.org
To: wikimedi...@lists.wikimedia.org

Hi all,

The Wikimedia Foundation and the Individual Engagement Grants Committee
invite you to submit and review proposals for community-led experiments to
improve Wikimedia!

Individual Engagement Grants support individuals and small teams to
organize projects for 6 months. You can get funding to turn your idea for
improving Wikimedia projects into action, with a grant for online community
organizing, outreach and partnerships, tool-building, or research. Funding
is available for a few hundred dollars up to $30,000.

Proposals for this round are due 31 March 2014:

https://meta.wikimedia.org/wiki/Grants:IEG

We're also seeking new committee members to help review and recommend
proposals for funding. Candidates are invited to sign up by 9 March 2014:

https://meta.wikimedia.org/wiki/Grants:IEG/Committee

Some examples of projects we've funded in the past:

*Organizing social media for Chinese Wikipedia ($350 for materials)[1]

*Improving gadgets for Visual Editor ($4500 for developers)[2]

*Coordinating free access to reliable sources for Wikipedians ($7500 for
project management, consultants and materials)[3]

*Building community and strategy for Wikisource (EURO 1 for organizing
and travel)[4]

You can read more on the WMF blog:

https://blog.wikimedia.org/tag/individual-engagement-grants/

Hope to have your participation in this round!

Best wishes,

Siko


[1]
https://meta.wikimedia.org/wiki/Grants:IEG/Build_an_effective_method_of_publicity_in_PRChina

[2]
https://meta.wikimedia.org/wiki/Grants:IEG/Visual_editor-_gadgets_compatibility

[3] https://meta.wikimedia.org/wiki/Grants:IEG/The_Wikipedia_Library
[4]
https://meta.wikimedia.org/wiki/Grants:IEG/Elaborate_Wikisource_strategic_vision

-- 
Siko Bouterse
Wikimedia Foundation, Inc.

sboute...@wikimedia.org

*Imagine a world in which every single human being can freely share in the
sum of all knowledge. *
*Donate https://donate.wikimedia.org or click the edit button today,
and help us make it a reality!*
___
Wikimedia-l mailing list
wikimedi...@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
mailto:wikimedia-l-requ...@lists.wikimedia.org?subject=unsubscribe




___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Roadmap and Deployment highlight - week of March 3rd

2014-02-28 Thread Greg Grossmeier
Hello and welcome to the latest edition of the WMF Roadmap and
Deployments update.

See the full roadmap for next week and beyond here:
https://wikitech.wikimedia.org/wiki/Deployments#Week_of_March_3rd

Some important call outs:

== Monday ==

The migration of WMF Labs from pmtpa to eqiad begins
* new instance creation disabled in pmtpa, only available in eqiad
* See the emails from Andrew and Marc for more details:
** http://lists.wikimedia.org/pipermail/labs-l/2014-February/002152.html
** http://lists.wikimedia.org/pipermail/labs-l/2014-February/002153.html


We will be disabling ArticleFeedBack on all wikis.
* https://bugzilla.wikimedia.org/show_bug.cgi?id=61163


== Tuesday ==

MediaWiki upgrades
* group1 to 1.23wmf16: All non-Wikipedia sites (Wiktionary, Wikisource,
  Wikinews, Wikibooks, Wikiquote, Wikiversity, and a few other sites)
* see also:
** 
https://www.mediawiki.org/wiki/MediaWiki_1.23/Roadmap#Schedule_for_the_deployments
** https://www.mediawiki.org/wiki/MediaWiki_1.23/wmf16


== Wednesday ==

The new search cluster will be upgraded (to ElasticSearch 1.0.1).
* This will begin at 0:00 UTC March 6th/4pm Pacific March 5th and will
  take a few hours to complete.
* All wikis currently using the new search (CirrusSearch) will be
  temporarily switched back to the old search (lsearchd)
* You shouldn't see much of a change in search behavior (CirrusSearch is
  mostly at feature parity with lsearchd) if your wiki is on new search, but
  to see a list of wikis that currently have CirrusSearch enabled (and
  in what way: Beta Feature or Primary), see:
** https://www.mediawiki.org/wiki/Search#Wikis


== Thursday ==

MediaWiki upgrades
* group2 to 1.23wmf16 (all Wikipedias)
* group0 to 1.23wmf17 (test/test2/testwikidata/mediawiki)


As always, questions welcome,

Greg

-- 
| Greg Grossmeier        GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg            A18D 1138 8E47 FAC8 1C7D |


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] GSOC 2014 idea

2014-02-28 Thread Brian Wolff
On Feb 28, 2014 12:52 PM, Gabriel Wicke gwi...@wikimedia.org wrote:

 The Parsoid rendering (e.g. [1]) has pretty much all semantic
 information in the DOM. There might still be wiktionary-specific issues
 that we don't know about yet, but tasks like extracting template
 parameters or the rendering of specific templates (IPA,..) are already
 straightforward. Also see the DOM spec [2] for background.

 Gabriel


Last time I tried doing anything like this was before Parsoid existed,
and I'll admit my approach was probably the worst possible. However, the
issue was that each language formatted its pages differently, and some
languages did not format things consistently. I think there is a limit to
how much Parsoid (or anything that's not AI) can help with that
situation.

-bawolff
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Two factor auth reset needed on wikitech

2014-02-28 Thread Matthew Walker
Wikitech admin peoples!

I was doing bad things to my phone last night (reflashing it) and I lost
the 2 factor auth metadata for my authentication app. Because of this I can
no longer log in to wikitech.

I wasn't able to find any documentation on wikitech about how to reset it
-- so I think I need your help to do that? I still know my password, so
I'm not looking to reset that -- maybe just temporarily disable two
factor auth on my account (Mwalker) and I'll re-enroll myself?

~Matt Walker
Wikimedia Foundation
Fundraising Technology Team
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Two factor auth reset needed on wikitech

2014-02-28 Thread Jeremy Baron
On Fri, Feb 28, 2014 at 9:15 PM, Matthew Walker mwal...@wikimedia.org wrote:
 I wasn't able to find any documentation on wikitech about how to reset it
 -- so I need your help to do that I think? I still know my password; so I'm
 not looking to reset that -- maybe just temporarily disable two factor auth
 on my account (Mwalker) and I'll re-enroll myself?

I don't know that much about the process, but I believe step one is to
find the slips of paper on which you wrote down the codes that you're
supposed to use in this very situation.

-Jeremy

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Two factor auth reset needed on wikitech

2014-02-28 Thread Matthew Walker
Don't have them :p

~Matt Walker
Wikimedia Foundation
Fundraising Technology Team


On Fri, Feb 28, 2014 at 1:23 PM, Jeremy Baron jer...@tuxmachine.com wrote:

 On Fri, Feb 28, 2014 at 9:15 PM, Matthew Walker mwal...@wikimedia.org
 wrote:
  I wasn't able to find any documentation on wikitech about how to reset it
  -- so I need your help to do that I think? I still know my password; so
 I'm
  not looking to reset that -- maybe just temporarily disable two factor
 auth
  on my account (Mwalker) and I'll re-enroll myself?

 I don't know that much about the process but I believe step one is to
 find the slips of paper that you wrote down the codes that you're
 supposed to use in this very situation.

 -Jeremy

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Two factor auth reset needed on wikitech

2014-02-28 Thread Chris Steipp
Correct, the scratch codes are the only way to log in.

If you don't have them, you'll have to get someone to remove your
preference in the db.
On Feb 28, 2014 1:32 PM, Matthew Walker mwal...@wikimedia.org wrote:

 Don't have them :p

 ~Matt Walker
 Wikimedia Foundation
 Fundraising Technology Team


 On Fri, Feb 28, 2014 at 1:23 PM, Jeremy Baron jer...@tuxmachine.com
 wrote:

  On Fri, Feb 28, 2014 at 9:15 PM, Matthew Walker mwal...@wikimedia.org
  wrote:
   I wasn't able to find any documentation on wikitech about how to reset
 it
   -- so I need your help to do that I think? I still know my password; so
  I'm
   not looking to reset that -- maybe just temporarily disable two factor
  auth
   on my account (Mwalker) and I'll re-enroll myself?
 
  I don't know that much about the process but I believe step one is to
  find the slips of paper that you wrote down the codes that you're
  supposed to use in this very situation.
 
  -Jeremy
 
  ___
  Wikitech-l mailing list
  Wikitech-l@lists.wikimedia.org
  https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Adventures in creating new repos / jenkins jobs

2014-02-28 Thread Antoine Musso
On 28/02/2014 18:55, Mark Holmquist wrote:
 marktraceur@midvalley-the-hornfreak:~/projects/wikimedia/integration/jenkins-job-builder$
  jenkins-jobs --conf etc/jenkins_jobs.ini update config/ 
 'mwext-MultimediaViewer-do-something'
 INFO:root:Updating jobs in config/ (['mwext-MultimediaViewer-do-something'])
 INFO:jenkins_jobs.builder:Creating jenkins job 
 mwext-MultimediaViewer-do-something
 https://integration.wikimedia.org/ci/createItem?name=mwext-MultimediaViewer-do-something
snip stack trace
 jenkins.JenkinsException: create[mwext-MultimediaViewer-do-something] failed
 marktraceur@midvalley-the-hornfreak:~/projects/wikimedia/integration/jenkins-job-builder$
  
 
 I just made a dummy job - commit here:
 https://gerrit.wikimedia.org/r/116123
 
 Obviously nothing special, but the issue is in the HTTP request code
 anyway.

Hi,

Jenkins job builder uses the python-jenkins module to check for the
existence of jobs using a simple GET request:

 GET /ci/job/mwext-MultimediaViewer-do-something/api/json?tree=name

That returns a 404, and JJB then creates the job:

 POST /ci/createItem?name=mwext-MultimediaViewer-do-something

python-jenkins then verifies that the job got created, using the GET
request above.  I found out tonight that our misc Varnish caches the 404
error for up to a minute, and hence the second GET is served the cached
404 by Varnish.  End result: JJB considers that the job hasn't been
created and bails out with the above stack trace.

The way to fix it: stop caching 404s on the misc Varnish, at least when
using the gallium backend.  There is a cache4xx parameter on the
varnish::instance puppet class.  I will have to check with some Varnish
guru how best to fix it.
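
For illustration, here is the failing sequence reduced to its HTTP
skeleton, with a cache-busting query parameter as a purely client-side
workaround sketch -- this is not the VCL fix described above, it just
gives each existence probe a distinct URL so a cached 404 cannot be
replayed (assumes a fetch() implementation such as node-fetch):

  var BASE = 'https://integration.wikimedia.org/ci';

  // Existence probe: the request python-jenkins issues before and after
  // job creation. The throwaway "_" parameter defeats URL-keyed caching.
  function jobExists(name) {
    return fetch(BASE + '/job/' + encodeURIComponent(name) +
        '/api/json?tree=name&_=' + Date.now())
      .then(function (res) { return res.ok; });
  }

  function createJob(name, configXml) {
    return jobExists(name).then(function (exists) {
      if (exists) { throw new Error('job already exists: ' + name); }
      return fetch(BASE + '/createItem?name=' + encodeURIComponent(name), {
        method: 'POST',
        headers: { 'Content-Type': 'application/xml' },
        body: configXml
      });
    }).then(function () {
      // The verification step that the cached 404 broke.
      return jobExists(name);
    }).then(function (created) {
      if (!created) { throw new Error('create[' + name + '] failed'); }
    });
  }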

Meanwhile I am entering sleep() mode..

-- 
Antoine hashar Musso


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] CirrusSearch outage Feb 28 ~19:30 UTC

2014-02-28 Thread Nikolas Everett
CirrusSearch flaked out Feb 28 around 19:30 UTC, and I brought it back
from the dead around 21:25 UTC.  While it was flaking out, searches that
used it (mediawiki.org, wikidata.org, ca.wikipedia.org, and everything in
Italian) took a long, long time or failed immediately with a message
about this being a temporary problem we're working on fixing.

Events:
We added four new Elasticsearch servers on Rack D (yay) around 18:45 UTC
The Elasticsearch cluster started serving simple requests very slowly
around 19:30 UTC
I was alerted to a search issue on IRC at 20:45 UTC
I fixed the offending Elasticsearch servers around 21:25 UTC
Query times recovered shortly after that

Explanation:
We very carefully installed the same version of Elasticsearch and Java as
we use on the other machines then used puppet to configure the
Elasticsearch machines to join the cluster.  It looks like they only picked
up half the configuration provided by puppet
(/etc/elasticsearch/elasticsearch.yml but not
/etc/default/elasticsearch).  Unfortunately for us, that is the bad half
to miss, because /etc/default/elasticsearch contains the JVM heap
settings.

The servers came online with the default amount of heap, which worked
fine until Elasticsearch migrated a sufficiently large index to them.  At
that point the heap filled up, and Java did what it does in that case:
spun forever trying to free garbage.  It pretty much pegged one CPU and
rendered
the entire application unresponsive.  Unfortunately (again) pegging one CPU
isn't that weird for Elasticsearch.  It'll do that when it is merging.  The
application normally stays responsive because the rest of the JVM keeps
moving along.  That doesn't happen when heap is full.

Knocking out one of those machines caused tons of searches to block,
presumably waiting for those machines to respond.  I'll have to dig
around to see if I can find the timeout, but we're obviously using the
default, which in our case is way, way too long.  We then filled the pool
queue and started rejecting requests to search altogether.

When I found the problem, all I had to do was kill -9 the Elasticsearch
servers and restart them.  -9 is required because JVMs don't catch the
regular signal if they are too busy garbage collecting.

What we're doing to prevent it from happening again:
* We're going to monitor the slow query log and have icinga start
complaining if it grows very quickly.  We normally get a couple of slow
queries per day, so this shouldn't be too noisy.  We're also going to
have to monitor error counts, especially once we get more timeouts.  (
https://bugzilla.wikimedia.org/show_bug.cgi?id=62077)
* We're going to sprinkle more timeouts all over the place -- certainly
in Cirrus while waiting on Elasticsearch -- and figure out how to tell
Elasticsearch what the shard timeouts should be as well. (
https://bugzilla.wikimedia.org/show_bug.cgi?id=62079)
* We're going to figure out why we only got half the settings.  This is
complicated because we can't let puppet restart Elasticsearch because
Elasticsearch restarts must be done one node at a time.

Nik
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Two factor auth reset needed on wikitech

2014-02-28 Thread Matthew Walker
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Please reset my 2 factor auth preference in the wikitech database.

My GPG key is available from the MIT keyserver [0]. Establishment of
ownership of the Mwalker LDAP account by this email can occur via
gerrit [1], or the edit history of my user page on [2].

(Incidentally, I should probably get more signatures on my key...
anyone in the office want to sign it?)

Thanks,
Matt Walker

[0] D731C1C0 -- available from
http://pgp.mit.edu/pks/lookup?search=mwalker%40wikimedia.org&op=index
[1] https://gerrit.wikimedia.org/r/#/admin/groups/28,members
[2] https://wikitech.wikimedia.org/wiki/User:Mwalker

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.11 (GNU/Linux)

iQIcBAEBAgAGBQJTEQxXAAoJEM++CSTXMcHA2oAP/2J+O/MiF1TiF0QYGiGxyeUr
i7JIEvlU29GxaLiSg6BSsnlXOyZbXUcqWMY2tVKoqWM+YCy9QacboPOsrNHZ0tEo
QyVbCohrlk5RCeG24APx7rqh40RUAjzbkE2OQvVK5mqLEdK7cmA09q6hUYPnj1wT
ghIPI7FU9AHkfkRQiizVsOOVq4A8L+lQcspPRgHhATLE/K1mEsqsSBLw9hp2yWwf
5Hh9lO7L4sph7z+gkEJaAFqqnMbSKwsazN4MVjLaandnKDtteLsRZvIgkyjDBJ6s
DNc3DVQpMi+xjKnYd5wtfwhsn9BHJdxRpqSnKvo91G9nqvsnQb8UAosTLJvmeDIl
49dEarqQMHmEE/gEwbLj9I6RhDC9y5ScbfuA6CUHEBbIBqaB3nrRdJoZvlDLXlrd
8ulv8v6ym9gRsdM/RA3jQdoj25f5dDS8+e0NNG8d1oyPmR/L7Qb6fZ1RDslBq56F
Pjy6bULR51lSzvjQhmi8oH2+FEFXprUiYbs8IgAXZYA96UFJA+r3h9q7vCOXl8HG
uqzZdmKfuSP76rHrij3FYr+VDZaDNMdL+gc8Msu8cFZixiBf0LEGYlvNqaWwg6E7
OG02ydwiNwjHrMmeNUrmpmoB/YTR/X02+LzBc1LK33jPEi/9DdDdEJKy6J+HZZdM
xmVsW91PrGfxWCXG/qAB
=bgH5
-END PGP SIGNATURE-
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Two factor auth reset needed on wikitech

2014-02-28 Thread Jeremy Baron
On Fri, Feb 28, 2014 at 10:23 PM, Matthew Walker mwal...@wikimedia.org wrote:
 Please reset my 2 factor auth preference in the wikitech database.

 My GPG key is available from the MIT keyserver [0]. Establishment of
 ownership of the Mwalker LDAP account by this email can occur via
 gerrit [1], or the edit history of my user page on [2].

I'm not sure how any of that establishes anything?

 (Incidently; I should probably get more signatures on my key...
 anyone in the office want to sign it?)

The simplest option, if you're in the office, is to just tell an op in
person (who can verify who you are because they know you).

 [0] D731C1C0 -- available from
 http://pgp.mit.edu/pks/lookup?search=mwalker%40wikimedia.org&op=index

Please don't use short key IDs. Also, any other user could make a key
with the same address you used and submit it to the keyservers, and
then it would also show up in search results for your address. (Plus,
we shouldn't trust the keyservers themselves so much.)

More about short key IDs:
http://www.asheesh.org/note/debian/short-key-ids-are-bad-news.html

 [1] https://gerrit.wikimedia.org/r/#/admin/groups/28,members
 [2] https://wikitech.wikimedia.org/wiki/User:Mwalker

[2] redirects (somehow??) to another domain. Maybe better to link
straight to the history page.
https://wikitech.wikimedia.org/w/index.php?title=user:mwalker&action=history

-Jeremy

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] CirrusSearch outage Feb 28 ~19:30 UTC

2014-02-28 Thread Andrew Otto
 * We're going to figure out why we only got half the settings.  This is
 complicated because we can't let puppet restart Elasticsearch because
 Elasticsearch restarts must be done one node at a time.
 
Ah, I think I see it in elasticsearch/init.pp.  If you don't want to
subscribe the service to its config files, you should at the very least
require them, so that the config files are put in place before the
service is started by puppet during the first install.

e.g.  
https://github.com/wikimedia/puppet-kafka/blob/master/manifests/server.pp#L207



On Feb 28, 2014, at 5:11 PM, Nikolas Everett never...@wikimedia.org wrote:

 snip full quote of the original message


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Two factor auth reset needed on wikitech

2014-02-28 Thread Matthew Walker
On Fri, Feb 28, 2014 at 2:43 PM, Jeremy Baron jer...@tuxmachine.com wrote:

 On Fri, Feb 28, 2014 at 10:23 PM, Matthew Walker mwal...@wikimedia.org
 wrote:
  Please reset my 2 factor auth preference in the wikitech database.
 
  My GPG key is available from the MIT keyserver [0]. Establishment of
  ownership of the Mwalker LDAP account by this email can occur via
  gerrit [1], or the edit history of my user page on [2].

 I'm not sure how any of that establishes anything?


I'm attempting to establish, I think the term is, a preponderance of
evidence from less trusted authorities. Beyond this point, though, the
argument becomes silly, because if I own those accounts (and I do), I can
submit and +2 things, deploy to the site (because I'm part of the
deployment group), etc.


  (Incidently; I should probably get more signatures on my key...
  anyone in the office want to sign it?)

 The simplest option if you're in the office is to just tell an op in
 person. (who can verify who you are because they know you)


I'm assuming that not all ops people know how to do this, or are willing
to find out. And not all opsens are located in the office. Additionally,
we submit SSH key revocation requests via email -- I'm just doing the
same thing on a public list because this is a more public resource, and I
started with the assumption that I didn't need a root to do this.


 [2] redirects (somehow??) to another domain. Maybe better to link
 straight to the history page.

 https://wikitech.wikimedia.org/w/index.php?title=user:mwalkeraction=history


It's using #REDIRECT; you're right, though, that it should be a soft
redirect. I'd change it, but... I can't... :p
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Roadmap and Deployment highlight - week of March 3rd

2014-02-28 Thread MZMcBride
Greg Grossmeier wrote:
We will be disabling ArticleFeedBack on all wikis.
* https://bugzilla.wikimedia.org/show_bug.cgi?id=61163

ArticleFeedbackv5, rather. ArticleFeedback was already disabled on
Wikimedia wikis (cf. https://bugzilla.wikimedia.org/43892).

MZMcBride



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Two factor auth reset needed on wikitech

2014-02-28 Thread MZMcBride
Jeremy Baron wrote:
On Fri, Feb 28, 2014 at 10:23 PM, Matthew Walker mwal...@wikimedia.org
wrote:
 [2] https://wikitech.wikimedia.org/wiki/User:Mwalker

[2] redirects (somehow??) to another domain.

That page contains #REDIRECT [[meta:User:Mwalker (WMF)]]. Presumably the
wiki at wikitech.wikimedia.org has $wgDisableHardRedirects set to false.

https://www.mediawiki.org/wiki/Manual:$wgDisableHardRedirects

MZMcBride



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Labs-l] Labs migration starts on Tuesday

2014-02-28 Thread Andrew Bogott

On 3/1/14 1:25 AM, Petr Bena wrote:

I am confused about /data mounpoint

OK, with luck I will not confuse you further.


You say:

The contents of your shared /data/project or /home directories will
not be immediately available in eqiad.

Yep.  eqiad labs is, for now, a blank slate.


Does it mean that if I decide to move the content by hand, using SCP,
it will be overwritten anyway sooner or later?
No.  Indeed, you are encouraged to move that content by hand -- just 
please coordinate with us so we know what you're doing.



How do I decide if I
want to have this content moved by ops or by myself? What if I want to
move just some items from /data/project, and the remaining data can be
safely nuked?
The next two weeks are designated for you to do exactly that -- move 
files by hand, and select which things to abandon.  This is strongly 
encouraged!  Once you're done and ready to abandon the other files,
please make a note to that effect on the migration progress page (
https://wikitech.wikimedia.org/wiki/Labs_Eqiad_Migration_Progress ).


If in two weeks there's no note on that page and I see that your eqiad
shared dirs are still empty, then I'll make a unilateral copy of
everything.


(There's one caveat here:  Because the file copies are going to take a 
super long time, I've already started a job that will haphazardly copy 
files over to eqiad and stow them in obviously-named subdirs, e.g. 
'glustercopy' or 'nfscopy'.  Those are there to save time as part of a 
future migration... you should leave them be and otherwise ignore them.
If you opt for self-migration, then you or I can just erase those dirs
later on if you have them.)


I hope this makes sense!  Please let me know if I'm still being unclear.

-Andrew


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l