Re: [Wikitech-l] Running database tests with jenkins

2014-05-05 Thread Moritz Schubotz
Hi,

I'm trying to resend this message that got lost.

Best
Physikerwelt

On Tue, Apr 15, 2014 at 1:13 PM, Moritz Schubotz schub...@tu-berlin.de wrote:
 Dear all,

 I had some trouble getting the database tests, which worked well
 locally, to run on Jenkins.
 In the onLoadExtensionSchemaUpdates hook I check for mysql/sqlite via

     $type = $updater->getDB()->getType();
     if ( $type == 'sqlite' ) {
         $type = 'mysql'; // The commands used from the updater are the same
     }
     if ( $type == 'mysql' ) {

 Jenkins, which uses SQLite, modifies

     -- Timestamp of the last update
     math_timestamp timestamp NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE
     CURRENT_TIMESTAMP,

 to

     math_timestamp TEXT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE
     CURRENT_TIMESTAMP,

 which is not valid SQLite, since SQLite does not support ON UPDATE
 CURRENT_TIMESTAMP
 (http://stackoverflow.com/questions/6578439/on-update-current-timestamp-with-sqlite).
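The workaround described in the linked Stack Overflow thread is to emulate ON UPDATE CURRENT_TIMESTAMP with a trigger. A minimal sketch using Python's sqlite3 module, assuming a simplified table (the table and column names here are illustrative stand-ins, not the extension's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE math (
    math_inputhash TEXT PRIMARY KEY,
    -- SQLite has no ON UPDATE clause: default on INSERT, refresh via trigger
    math_timestamp TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
);
CREATE TRIGGER math_touch AFTER UPDATE ON math
BEGIN
    UPDATE math SET math_timestamp = CURRENT_TIMESTAMP
    WHERE math_inputhash = NEW.math_inputhash;
END;
""")
conn.execute("INSERT INTO math (math_inputhash) VALUES ('abc123')")
# Any UPDATE now refreshes math_timestamp via the trigger (recursive
# triggers are off by default in SQLite, so the trigger does not re-fire
# itself when it performs its own UPDATE).
conn.execute("UPDATE math SET math_inputhash = 'abc123' WHERE math_inputhash = 'abc123'")
ts = conn.execute("SELECT math_timestamp FROM math").fetchone()[0]
print(ts)
```

This keeps the MySQL and SQLite schemas semantically equivalent without pretending the two dialects are the same.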

 I think the assumption that MySQL and SQLite behave the same is wrong,
 and it is not good style either. Is there an example of an extension
 that runs database tests with Jenkins?

 Are there any usage statistics on how frequently the database types are
 used? Does it make sense to maintain

 1) mysql
 2) sqlite
 3) mssql
 4) oracle
 5) pg

 or are some of those databases seldom used, like DB2, which was removed
 a while ago?

 Best
 Physikerwelt

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] recent changes stream

2014-05-05 Thread Ori Livneh
On Sun, May 4, 2014 at 9:09 PM, Tyler Romeo tylerro...@gmail.com wrote:

 Just wondering, but has any performance testing been done on different
 socket.io implementations? IIRC, Python is pretty good, so I definitely
 approve, but I'm wondering if there are other implementations that are more
 performant (specifically, servers that have better parallelism and no GIL).


You still get the parallelism here, it just happens outside the language,
by having Nginx load-balance across multiple application instances. The
Puppet class, Upstart job definitions, and supporting shell scripts were
all designed to manage a process group of rcstream instances.
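For illustration, the load-balancing Ori describes could look roughly like this in nginx. This is a hypothetical sketch: the upstream name, ports, and location path are invented, not the actual Puppet-managed configuration:

```nginx
# Hypothetical sketch: nginx fanning connections out across several
# rcstream worker processes; names and ports are illustrative only.
upstream rcstream_workers {
    ip_hash;  # keep a client on the same worker, needed for long polling
    server 127.0.0.1:10080;
    server 127.0.0.1:10081;
    server 127.0.0.1:10082;
}

server {
    listen 80;
    location /rcstream {
        proxy_pass http://rcstream_workers;
        proxy_http_version 1.1;
        # WebSocket upgrade headers must be forwarded explicitly
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

Each worker is a single-threaded Python process, so the GIL never becomes a bottleneck: parallelism comes from running several processes and letting the proxy distribute clients among them.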

Re: [Wikitech-l] recent changes stream

2014-05-05 Thread Daniel Kinzler
On 05.05.2014 07:20, Jeremy Baron wrote:
 On May 4, 2014 10:24 PM, Ori Livneh o...@wikimedia.org wrote:
 an implementation for a recent changes
 stream broadcast via socket.io, an abstraction layer over WebSockets that
 also provides long polling as a fallback for older browsers.

[...]

 How could this work overlap with adding pubsubhubbub support to existing
 web RC feeds? (i.e. atom/rss. or for that matter even individual page
 history feeds or related changes feeds)
 
 The only pubsubhubbub bugs I see atm are
 https://bugzilla.wikimedia.org/buglist.cgi?quicksearch=38970%2C30245

There is a Pubsubhubbub implementation in the pipeline, see
https://git.wikimedia.org/summary/mediawiki%2Fextensions%2FPubSubHubbub. It's
pretty simple and painless. We plan to have this deployed experimentally for
wikidata soon, but there is no reason not to roll it out globally.

This implementation uses the job queue - which in production means redis, but
it's pretty generic.

As to an RC *stream*: Pubsubhubbub is not really suitable for this, since it
requires the subscriber to run a public web server. It's really a
server-to-server protocol. I'm not too sure about web sockets for this either,
because the intended recipient is usually not a web browser. But if it works,
I'd be happy anyway, the UDP+IRC solution sucks.
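To make the server-to-server point concrete: subscribing via PubSubHubbub means POSTing a form-encoded request to the hub, which then verifies and pushes content to a publicly reachable callback URL. A sketch in Python, where only the hub.* parameter names come from the PubSubHubbub spec and the URLs are made-up placeholders:

```python
from urllib.parse import urlencode

# Hypothetical subscription request body; hub.mode, hub.topic and
# hub.callback are the parameter names defined by the PubSubHubbub
# spec, while the URLs below are illustrative placeholders only.
params = {
    "hub.mode": "subscribe",
    "hub.topic": "https://example.org/wiki/Special:RecentChanges?feed=atom",
    "hub.callback": "https://subscriber.example.org/push-endpoint",
}
body = urlencode(params)
# A real subscriber would POST this to the hub and then answer the
# hub's GET verification request on the callback URL -- which is why
# the subscriber must run a public web server.
print(body)
```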

Some years ago, I started to implement an XMPP based RC stream, see
https://www.mediawiki.org/wiki/Extension:XMLRC. Have a look and steal some
ideas :)

-- daniel




Re: [Wikitech-l] Image scaling proposal: server-side mip-mapping

2014-05-05 Thread Gilles Dubuc

 Buttons is French: Suiv. - Make it English


That's a bug in SurveyMonkey: the buttons are in French because I was using
the French version of the site when the survey was created, and now the
text on those buttons can't be fixed. I'll make sure to switch
SurveyMonkey to English before creating the next one.

No swap or overlay function for being able to compare


SurveyMonkey is quite limited and can't do that, unfortunately. The
alternative would be to build my own survey from scratch, but that would
require a lot of resources for little benefit. This is really a one-off
need.


 I wonder if the mip-mapping approach could somehow be combined with tiles?
 If we want proper zooming for large images, we will have to split them up
 into tiles of various sizes, and serve only the tiles for the visible
 portion when the user zooms on a small section of the image. Splitting up
 an image is a fast operation, so maybe it could be done on the fly (with
 caching for a small subset based on traffic), in which case having a chain
 of scaled versions of the image would take care of the zooming use case as
 well.


Yes, we could definitely have the reference thumbnail sizes be split up on
the fly to generate tiles when we get around to implementing proper
zooming. It's as simple as making Varnish cache the tiles and the PHP
backend generate them on the fly by splitting the reference thumbnails.
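The tile arithmetic such an on-the-fly backend would need is small. A hedged sketch, assuming square tiles on a fixed grid (the tile size and addressing scheme are assumptions, not an existing implementation):

```python
def visible_tiles(viewport_x, viewport_y, viewport_w, viewport_h, tile_size=256):
    """Return the (col, row) indices of tiles intersecting a viewport.

    Illustrative sketch only: assumes square tiles on a fixed grid,
    which is one way a generate-on-the-fly, Varnish-cached backend
    could address tiles of a reference thumbnail.
    """
    first_col = viewport_x // tile_size
    last_col = (viewport_x + viewport_w - 1) // tile_size
    first_row = viewport_y // tile_size
    last_row = (viewport_y + viewport_h - 1) // tile_size
    return [(c, r)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]

# An 800x600 viewport at offset (300, 100) touches a 4x3 block of tiles,
# so only those 12 tiles need to be cut and served, not the whole image.
tiles = visible_tiles(300, 100, 800, 600, tile_size=256)
print(len(tiles))
```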

Regarding the survey I ran on wikitech-l, so far there are 26 respondents.
It seems that on the images with a lot of edges (the test images provided
by Rob) at least 30% of people can tell the difference in terms of
quality/sharpness. On regular images people can't really tell. Thus, I
wouldn't venture to do the full chaining, as a third of visitors will be
able to tell that there's a quality degradation. I'll run another survey
later in the week where, instead of full chaining, all the thumbs are
generated from the biggest thumb.




On Sat, May 3, 2014 at 1:25 AM, Gergo Tisza gti...@wikimedia.org wrote:

 On Thu, May 1, 2014 at 7:02 AM, Gilles Dubuc gil...@wikimedia.org wrote:

  Another point about picking the one true bucket list: currently Media
  Viewer's buckets have been picked based on the most common screen
  resolutions, because Media Viewer tries to always use the entire width of
  the screen to display the image, so trying to achieve a 1-to-1 pixel
  correspondence makes sense, because it should give the sharpest result
  possible to the average user.
 

 I'm not sure the current size list is particularly useful for MediaViewer,
 since we are fitting images into the screen, and the huge majority of
 images are constrained by height, so the width of the image on the screen
 will be completely unrelated to the width bucket size. Having common screen
 sizes as width buckets would be useful if we were filling instead of
 fitting (something that might make sense for paged media).

 --

 I wonder if the mip-mapping approach could somehow be combined with tiles?
 If we want proper zooming for large images, we will have to split them up
 into tiles of various sizes, and serve only the tiles for the visible
 portion when the user zooms on a small section of the image. Splitting up
 an image is a fast operation, so maybe it could be done on the fly (with
 caching for a small subset based on traffic), in which case having a chain
 of scaled versions of the image would take care of the zooming use case as
 well.

Re: [Wikitech-l] recent changes stream

2014-05-05 Thread Petr Bena
Given the current specifications, I can support this change only as
long as the current IRC feed is preserved, since IRC, evil as it looks,
is IMHO more suitable for this than WebSockets.

I am not saying that IRC is suitable for this, and I know that people
really wanted to get rid of it or replace it with something better,
but I just can't see how this is better.

On Mon, May 5, 2014 at 10:37 AM, Daniel Kinzler dan...@brightbyte.de wrote:
 On 05.05.2014 07:20, Jeremy Baron wrote:
 On May 4, 2014 10:24 PM, Ori Livneh o...@wikimedia.org wrote:
 an implementation for a recent changes
 stream broadcast via socket.io, an abstraction layer over WebSockets that
 also provides long polling as a fallback for older browsers.

 [...]

 How could this work overlap with adding pubsubhubbub support to existing
 web RC feeds? (i.e. atom/rss. or for that matter even individual page
 history feeds or related changes feeds)

 The only pubsubhubbub bugs I see atm are
 https://bugzilla.wikimedia.org/buglist.cgi?quicksearch=38970%2C30245

 There is a Pubsubhubbub implementation in the pipeline, see
 https://git.wikimedia.org/summary/mediawiki%2Fextensions%2FPubSubHubbub. 
 It's
 pretty simple and painless. We plan to have this deployed experimentally for
 wikidata soon, but there is no reason not to roll it out globally.

 This implementation uses the job queue - which in production means redis, but
 it's pretty generic.

 As to an RC *stream*: Pubsubhubbub is not really suitable for this, since it
 requires the subscriber to run a public web server. It's really a
 server-to-server protocol. I'm not too sure about web sockets for this either,
 because the intended recipient is usually not a web browser. But if it works,
 I'd be happy anyway, the UDP+IRC solution sucks.

 Some years ago, I started to implement an XMPP based RC stream, see
 https://www.mediawiki.org/wiki/Extension:XMLRC. Have a look and steal some
 ideas :)

 -- daniel




[Wikitech-l] Please welcome Filippo Giunchedi to Wikimedia TechOps

2014-05-05 Thread Mark Bergsma
I'm very happy to announce that Filippo Giunchedi is joining us as an 
Operations Engineer in the Technical Operations team. Filippo is Italian, but 
he lives in Dublin where he interned at Google and worked at Amazon before 
coming to Wikimedia. He's gained a lot of experience working with large scale 
distributed systems and infrastructure there.

Filippo will be working with us remotely. Today is his start day, but we were 
lucky to have him join us at our Ops off-site meeting in Athens a few weeks 
ago, where he helped improve our monitoring of system metrics with Graphite.

Fiddling with machines has always been his passion - it led to being fascinated 
by computers in the late 90s. He got involved in free software projects (e.g. 
Debian, as a Debian Developer) in the mid-2000s. System level technologies, 
infrastructure, distributed systems and networking are his main interests. On a 
different level, he's also interested in online privacy and secure/anonymous 
communications (e.g. Tor).

You can find Filippo on IRC (Freenode), using the nickname godog.

Please join me in welcoming Filippo!

— 
Mark Bergsma m...@wikimedia.org
Lead Operations Architect
Director of Technical Operations
Wikimedia Foundation



Re: [Wikitech-l] Help! Phabricator and our code review process

2014-05-05 Thread Quim Gil
Hi, please check this draft plan for the next steps in the Phabricator RfC
at

https://www.mediawiki.org/wiki/Requests_for_comment/Phabricator/Plan

This aims to be a starting point for the next round of discussion to be
held online and at the Wikimedia hackathon in Zürich this weekend. Edits,
questions, and feedback welcome.


On Friday, May 2, 2014, C. Scott Ananian canan...@wikimedia.org wrote:


 [cscott] James_F: I'm arguing for a middle path. devote *some*
 resources, implement *some* interoperability, decide at *some later*
 point when we have a more functional instance.


This is basically the same as "Decide now on a plan identifying the
blockers, commit resources to fix them, proceed with the plan unless we get
stuck with a blocker." We have identified blockers, but we are not seeing
any that could not be solved with some work (from the very active upstream
and/or ourselves).

We need RfC approval to go confidently from http://fab.wmflabs.org to a
production-like Wikimedia Phabricator. If that happens, the Platform
Engineering team will commit resources to plan, migrate, and maintain the
Phabricator instance that will deprecate five or more tools.

The Labs instance has been set up and is being fine-tuned basically on a
volunteer basis, which says a lot about Phabricator's simplicity of
administration and maintenance. As it is now, it is good enough to run
simple projects with a short-term deadline, e.g.

Chemical Markup for Wikimedia Commons
http://fab.wmflabs.org/project/view/26/ (a GSoC project -- hint, hint)

Analytics-EEVS
http://fab.wmflabs.org/project/board/15/

Please play with it and provide feedback. Other contributors critical of
Phabricator are doing this, and it is being extremely helpful for everybody.


-- 
Quim Gil
Engineering Community Manager @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil

Re: [Wikitech-l] Help! Phabricator and our code review process

2014-05-05 Thread Tyler Romeo
OK, so I'm sorry if this information is duplicated anywhere, but between
the Project Management Tools review page, the Phabricator RFC, the various
sub-pages of the RFC, and the content on the Phabricator instance itself,
it would take me at least a couple of hours to organize my thoughts. So
I'll just ask directly:

Phabricator still does not work directly with Git, right? Or has that been
implemented since I last checked? If not, what is the planned workaround
for Phabricator? The default workflow is to use arcanist to merge the code
into Git directly. Does that handle merge conflicts? What is the rebase
process?

It's not that I'm opposed to the new system. I'm just confused as to what
the new workflow would actually be.

*-- *
*Tyler Romeo*
Stevens Institute of Technology, Class of 2016
Major in Computer Science


On Mon, May 5, 2014 at 12:02 PM, Quim Gil q...@wikimedia.org wrote:

 Hi, please check this draft plan for the next steps in the Phabricator RfC
 at

 https://www.mediawiki.org/wiki/Requests_for_comment/Phabricator/Plan

 This aims to be a starting point for the next round of discussion to be
 held online and at the Wikimedia hackathon in Zürich this weekend. Edits,
 questions, and feedback welcome.


 On Friday, May 2, 2014, C. Scott Ananian canan...@wikimedia.org wrote:

 
  [cscott] James_F: I'm arguing for a middle path. devote *some*
  resources, implement *some* interoperability, decide at *some later*
  point when we have a more functional instance.
 

 This is basically the same as Decide now on a plan identifying the the
 blockers, commit resources to fix them, proceed with the plan unless we get
 stuck with a blocker. We have identified blockers, but we are not seeing
 any that could not be solved with some work (from the very active upstream
 and/or ourselves).

 We need a RfC approval to go confidently from http://fab.wmflabs.org to a
 production-like Wikimedia Phabricator. If that happens, the Platform
 Engineering team will commit resources to plan, migrate, and maintain the
 Phabricator instance that will deprecate five tools or more.

 The Labs instance has been setup and is being fine-tuned basically on a
 volunteering basis, which tells a lot about Phabricator's simplicity of
 administration and maintenance. As it is now, it is good enough to run
 simple projects with a short term deadline e.g.

 Chemical Markup for Wikimedia Commons
 http://fab.wmflabs.org/project/view/26/ (a GSoC project -- hint, hint)

 Analytics-EEVS
 http://fab.wmflabs.org/project/board/15/

 Please play with it and provide feedback. Other contributors critic with
 Phabricator are doing this, and it is being extremely helpful for
 everybody.


 --
 Quim Gil
 Engineering Community Manager @ Wikimedia Foundation
 http://www.mediawiki.org/wiki/User:Qgil

Re: [Wikitech-l] recent changes stream

2014-05-05 Thread Erik Bernhardson
I think we need to be clearer about what the goal is here; as it is, I
think we are all taking our personal idea of what we want to do with a
feed and applying it to this implementation.  Personally, I have been
working on an external watchlist service that I would love to hook up to
a feed, but without any guarantee of receiving every single event, my
particular use case is better off continuously scanning the XML feeds of
800 wikis.  I'm certain other people are thinking of completely
different things as well.

Erik B.


On Mon, May 5, 2014 at 2:29 AM, Petr Bena benap...@gmail.com wrote:

 Given the current specifications I can only support this change as
 long as current IRC feed is preserved as IRC is IMHO, as much as evil
 it looks, more suitable for this than WebSockets.

 I am not saying that IRC is suitable for this and I know that people
 really wanted to get rid of it or replace it with something better,
 but I just can't see how is this better.

 On Mon, May 5, 2014 at 10:37 AM, Daniel Kinzler dan...@brightbyte.de
 wrote:
  On 05.05.2014 07:20, Jeremy Baron wrote:
  On May 4, 2014 10:24 PM, Ori Livneh o...@wikimedia.org wrote:
  an implementation for a recent changes
  stream broadcast via socket.io, an abstraction layer over WebSockets
 that
  also provides long polling as a fallback for older browsers.
 
  [...]
 
  How could this work overlap with adding pubsubhubbub support to existing
  web RC feeds? (i.e. atom/rss. or for that matter even individual page
  history feeds or related changes feeds)
 
  The only pubsubhubbub bugs I see atm are
  https://bugzilla.wikimedia.org/buglist.cgi?quicksearch=38970%2C30245
 
  There is a Pubsubhubbub implementation in the pipeline, see
  https://git.wikimedia.org/summary/mediawiki%2Fextensions%2FPubSubHubbub.
 It's
  pretty simple and painless. We plan to have this deployed experimentally
 for
  wikidata soon, but there is no reason not to roll it out globally.
 
  This implementation uses the job queue - which in production means
 redis, but
  it's pretty generic.
 
  As to an RC *stream*: Pubsubhubbub is not really suitable for this,
 since it
  requires the subscriber to run a public web server. It's really a
  server-to-server protocol. I'm not too sure about web sockets for this
 either,
  because the intended recipient is usually not a web browser. But if it
 works,
  I'd be happy anyway, the UDP+IRC solution sucks.
 
  Some years ago, I started to implement an XMPP based RC stream, see
  https://www.mediawiki.org/wiki/Extension:XMLRC. Have a look and steal
 some
  ideas :)
 
  -- daniel
 
 
 

Re: [Wikitech-l] Please welcome Filippo Giunchedi to Wikimedia TechOps

2014-05-05 Thread Sumana Harihareswara
On 05/05/2014 10:48 AM, Mark Bergsma wrote:
 I'm very happy to announce that Filippo Giunchedi is joining us as an 
 Operations Engineer in the Technical Operations team. Filippo is Italian, but 
 he lives in Dublin where he interned at Google and worked at Amazon before 
 coming to Wikimedia. He's gained a lot of experienced working with large 
 scale distributed systems and infrastructure there.
 
 Filippo will be working with us remotely. Today is his start day, but we were 
 lucky to have him join us at our Ops off-site meeting in Athens a few weeks 
 ago, where he helped improve our monitoring of system metrics with Graphite.
 
 Fiddling with machines has always been his passion - it led to being 
 fascinated by computers in the late 90s. He got involved in free software 
 projects (e.g. Debian, as a Debian Developer) in the mid-2000s. System level 
 technologies, infrastructure, distributed systems and networking are his main 
 interests. On a different level, he's also interested in online privacy and 
 secure/anonymous communications (e.g. Tor).
 
 You can find Filippo on IRC (Freenode), using the nick name godog.
 
 Please join me in welcoming Filippo!
 
 — 
 Mark Bergsma m...@wikimedia.org
 Lead Operations Architect
 Director of Technical Operations
 Wikimedia Foundation

Welcome, Filippo! Enjoying the palindromic IRC nick. :-) And it's great
to have another engineer joining us who's interested in secure and
private online communications!

-- 
Sumana Harihareswara
Senior Technical Writer
Wikimedia Foundation


Re: [Wikitech-l] Help! Phabricator and our code review process

2014-05-05 Thread Andre Klapper
On Mon, 2014-05-05 at 12:18 -0400, Tyler Romeo wrote:
 Phabricator still does not work directly with Git, right?

This topic is covered in http://fab.wmflabs.org/T207

andre
-- 
Andre Klapper | Wikimedia Bugwrangler
http://blogs.gnome.org/aklapper/



Re: [Wikitech-l] Help! Phabricator and our code review process

2014-05-05 Thread Quim Gil
On Monday, May 5, 2014, Tyler Romeo tylerro...@gmail.com wrote:

 OK, so I'm sorry if this information is duplicated anywhere, but between
 the Project Management Tools review page, the Phabricator RFC, the various
 sub-pages of the RFC, and the content on the Phabricator instance itself,
 it would take me at least a couple of hours to organize my thoughts.


This is perfectly understandable. In just 2-3 weeks there has been an
explosion of content, in addition to all the content that was compiled
before the RfC. There is a high percentage of signal and not much noise.
Things will eventually settle.

I created
https://www.mediawiki.org/wiki/Requests_for_comment/Phabricator/versus_Bugzilla
to consolidate the relevant information for bug reporters. It would be
useful to do the same for code contributors and reviewers, but I'm not
qualified. Any volunteers?


 So I'll just ask directly:


 Phabricator still does not work directly with Git, right? Or has that been
 implemented since I last checked? If not, what is the planned workaround
 for Phabricator?


Relevant discussion at

Find way to use Differential with plain git (i.e.: without requiring arc)
http://fab.wmflabs.org/T207



 The default workflow is to use arcanist to merge the code
 into Git directly. Does that handle merge conflicts? What is the rebase
 process?

 It's not that I'm opposed to the new system. I'm just confused as to what
 the new workflow would actually be.



-- 
Quim Gil
Engineering Community Manager @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil

Re: [Wikitech-l] recent changes stream

2014-05-05 Thread Victor Vasiliev
On 05/05/2014 05:29 AM, Petr Bena wrote:
 I am not saying that IRC is suitable for this and I know that people
 really wanted to get rid of it or replace it with something better,
 but I just can't see how is this better.
 

Most programming languages have an implementation of WebSockets, and, well,
those that don't will eventually have one.  I heard C++ has plenty of them,
since most browsers are written in C++.  Almost any reasonable programming
language will have an implementation of JSON; some even have one in the
standard library.

(If that's really an issue, and the language you are writing in is not INTERCAL
or Perl, I can probably even write a client for you)

I don't see how a well-defined, standardized exchange format is better than
awkwardly screen-scraping colored lines of text from an IRC feed, which works
only as long as you don't exceed the IRC message size limit.
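To make the comparison concrete, here is a sketch of consuming one structured event. The field names are invented for illustration and are not the actual rcstream schema:

```python
import json

# Hypothetical sample of a JSON recent-changes event; the field names
# are illustrative placeholders, not the real rcstream schema.
event = json.loads("""
{"type": "edit", "title": "Example", "wiki": "enwiki",
 "user": "Alice", "comment": "fix typo",
 "length": {"old": 100, "new": 120}}
""")

# Every field is directly addressable -- no regexes over color codes,
# and no truncation at the IRC message size limit.
delta = event["length"]["new"] - event["length"]["old"]
print(event["title"], delta)
```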

-- Victor.


Re: [Wikitech-l] recent changes stream

2014-05-05 Thread Petr Bena
I said this once in a Gerrit comment and I will say it here as well:
most people have a different opinion on what is good for them as an RC
stream. We should not go for anything specific, but rather for a very
abstract solution that could be multiplexed into multiple RC feed
providers using a number of popular formats (including the current IRC
format, just for backward compatibility). In the end, users would be
able to pick the format and protocol they want, just as they can with
api.php.

An ideal RC stream would be so flexible that it could match any possible
use case.
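The multiplexing idea can be sketched in a few lines of Python. The formats, field names, and the legacy line layout below are invented for illustration:

```python
import json

# One abstract event fanned out to several formatters, so consumers
# pick a format the way they pick one with api.php. The formatter
# names and output layouts are illustrative only.
def to_json(ev):
    return json.dumps(ev)

def to_irc(ev):
    # Legacy-style line, kept only for backward compatibility
    return "[[{title}]] * {user} * {comment}".format(**ev)

FORMATTERS = {"json": to_json, "irc": to_irc}

def multiplex(ev, fmt):
    """Render one abstract RC event in the requested format."""
    return FORMATTERS[fmt](ev)

ev = {"title": "Example", "user": "Alice", "comment": "fix typo"}
print(multiplex(ev, "irc"))
print(multiplex(ev, "json"))
```

New formats become a one-line registration in FORMATTERS rather than a new stream service.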

On Mon, May 5, 2014 at 6:45 PM, Erik Bernhardson
ebernhard...@wikimedia.org wrote:
 I think we need to be clearer about what the goal is here, as is I think we
 are all taking our personal idea of what we want to do with a feed and
 applying that to this implementation.  Personally I have been working on an
 external watchlist service that i would love to hook up to a feed, but
 without any guarantees of receiving every single event my particular use
 case is better off continuously scanning the xml feeds of 800 wikis.  I'm
 certain other people are thinking of completely different things as well.

 Erik B.


 On Mon, May 5, 2014 at 2:29 AM, Petr Bena benap...@gmail.com wrote:

 Given the current specifications I can only support this change as
 long as current IRC feed is preserved as IRC is IMHO, as much as evil
 it looks, more suitable for this than WebSockets.

 I am not saying that IRC is suitable for this and I know that people
 really wanted to get rid of it or replace it with something better,
 but I just can't see how is this better.

 On Mon, May 5, 2014 at 10:37 AM, Daniel Kinzler dan...@brightbyte.de
 wrote:
  On 05.05.2014 07:20, Jeremy Baron wrote:
  On May 4, 2014 10:24 PM, Ori Livneh o...@wikimedia.org wrote:
  an implementation for a recent changes
  stream broadcast via socket.io, an abstraction layer over WebSockets
 that
  also provides long polling as a fallback for older browsers.
 
  [...]
 
  How could this work overlap with adding pubsubhubbub support to existing
  web RC feeds? (i.e. atom/rss. or for that matter even individual page
  history feeds or related changes feeds)
 
  The only pubsubhubbub bugs I see atm are
  https://bugzilla.wikimedia.org/buglist.cgi?quicksearch=38970%2C30245
 
  There is a Pubsubhubbub implementation in the pipeline, see
  https://git.wikimedia.org/summary/mediawiki%2Fextensions%2FPubSubHubbub.
 It's
  pretty simple and painless. We plan to have this deployed experimentally
 for
  wikidata soon, but there is no reason not to roll it out globally.
 
  This implementation uses the job queue - which in production means
 redis, but
  it's pretty generic.
 
  As to an RC *stream*: Pubsubhubbub is not really suitable for this,
 since it
  requires the subscriber to run a public web server. It's really a
  server-to-server protocol. I'm not too sure about web sockets for this
 either,
  because the intended recipient is usually not a web browser. But if it
 works,
  I'd be happy anyway, the UDP+IRC solution sucks.
 
  Some years ago, I started to implement an XMPP based RC stream, see
  https://www.mediawiki.org/wiki/Extension:XMLRC. Have a look and steal
 some
  ideas :)
 
  -- daniel
 
 
 

[Wikitech-l] Vagrant CentralAuth role

2014-05-05 Thread Chris Steipp
Hi all,

I'm planning to spend some time in Zurich getting a centralauth role for
vagrant working (part of
https://www.mediawiki.org/wiki/Z%C3%BCrich_Hackathon_2014/Topics#Production-like_Vagrant).
I wanted to get opinions (probably more bikeshed) about how you would like
to access multiple wikis on a single vagrant instance. If anyone is
interested in using CentralAuth on vagrant for development/testing, please
chime in!

We can either use a different subdirectory per wiki, or different domain
per wiki.

Different domains is closer to how we run things in production, but it would
require copying DNS settings to your host (there doesn't seem to be a good,
cross-platform way to do this from Vagrant itself, but if anyone has a
solution, that would make the decision easy). So you would work on,

http://localhost:8080/ (main vagrant wiki)
http://loginwiki.dev:8080/ (loginwiki)
etc.

Different subdirectories is how I currently do development and I personally
don't mind it, but it makes turning CentralAuth on and off more of a
challenge, since the current wiki is in the web root. So

http://localhost:8080/wiki/ (main vagrant wiki)
http://localhost:8080/loginwiki/ (loginwiki)
etc.

Preferences?

Chris

Re: [Wikitech-l] Vagrant CentralAuth role

2014-05-05 Thread Bryan Davis
On Mon, May 5, 2014 at 1:17 PM, Chris Steipp cste...@wikimedia.org wrote:
 Different domains is closer to how we run thing in production, but it would
 require copying dns settings to your host (there doesn't seem to be a good,
 cross-platform way to do this from vagrant itself, but if anyone has a
 solution, that would make the decision easy).

We have a public wildcard DNS record for *.local.wmftest.net that
resolves to 127.0.0.1 for just this sort of thing. The Wikimania
Scholarships role uses it to set up a named vhost for
http://scholarships.local.wmftest.net:8080/.
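With that wildcard record, a per-wiki named vhost needs no host-side DNS changes at all. A rough Apache sketch, where the ServerName and paths are illustrative assumptions, not the actual role's configuration:

```apache
# Hypothetical named vhost relying on the public wildcard record
# *.local.wmftest.net -> 127.0.0.1; ServerName and DocumentRoot are
# placeholders for illustration only.
<VirtualHost *:8080>
    ServerName loginwiki.local.wmftest.net
    DocumentRoot /var/www/loginwiki
</VirtualHost>
```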

Bryan
-- 
Bryan Davis  Wikimedia Foundationbd...@wikimedia.org
[[m:User:BDavis_(WMF)]]  Sr Software EngineerBoise, ID USA
irc: bd808v:415.839.6885 x6855


Re: [Wikitech-l] Vagrant CentralAuth role

2014-05-05 Thread Chris Steipp
I just found out about that from Ori too. Problem solved. Thanks!


On Mon, May 5, 2014 at 12:42 PM, Bryan Davis bd...@wikimedia.org wrote:

 On Mon, May 5, 2014 at 1:17 PM, Chris Steipp cste...@wikimedia.org
 wrote:
  Different domains is closer to how we run thing in production, but it
 would
  require copying dns settings to your host (there doesn't seem to be a
 good,
  cross-platform way to do this from vagrant itself, but if anyone has a
  solution, that would make the decision easy).

 We have a public wildcard DNS record for *.local.wmftest.net that
 resolves to 127.0.0.1 for just this sort of thing. The Wikimania
 Scholarships role uses it to setup a named vhost for
 http://scholarships.local.wmftest.net:8080/.

 Bryan
 --
 Bryan Davis  Wikimedia Foundationbd...@wikimedia.org
 [[m:User:BDavis_(WMF)]]  Sr Software EngineerBoise, ID USA
 irc: bd808v:415.839.6885 x6855


Re: [Wikitech-l] Help! Phabricator and our code review process

2014-05-05 Thread Matthew Flaschen

On 05/05/2014 01:21 PM, Quim Gil wrote:

I created
https://www.mediawiki.org/wiki/Requests_for_comment/Phabricator/versus_Bugzilla
to consolidate the relevant information for bug reporters.



Phabricator still does not work directly with Git, right? Or has that been
implemented since I last checked? If not, what is the planned workaround
for Phabricator?


arc/arcanist is a wrapper around git (it also uses a Phabricator API 
called Conduit for a few things) that is used mainly for network 
operations (e.g. pushing a new patch).


git is still used for local operations, and the repo is cloneable 
without needing arc.


We have also talked about having a GitHub-Phabricator bridge, so 
drive-by contributors could make a GitHub pull request without learning 
arc right away.



The default workflow is to use arcanist to merge the code
into Git directly. Does that handle merge conflicts? What is the rebase
process?


I'm not sure exactly how conflicts are handled.  However, what I do know 
is that you can amend a differential (which is essentially similar to a 
Gerrit change) with a new diff.  A diff, using Phabricator terminology, 
is one or more commits.


So if there's a conflict, you should be able to amend locally then 
update the differential, similar to Gerrit.  I don't know if they have a 
rebase button on the site similar to Gerrit.
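The amend-and-update cycle described above relies on ordinary local git; only the final upload command differs between the two systems (`arc diff --update D123` is the assumed Phabricator syntax — check the Arcanist docs). A minimal sketch of the local half, in a throwaway repo:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email dev@example.org   # hypothetical identity
git config user.name  "Example Dev"

# First version of the change (one commit == one diff here):
git commit -q --allow-empty -m "Fix conflict-prone change"

# Reviewer asks for changes: resolve/rebase locally, then amend in place.
git commit -q --amend --allow-empty -m "Fix conflict-prone change (v2)"

# History still holds a single commit, ready to re-upload, e.g.:
#   arc diff --update D123    (Phabricator, assumed syntax)
git log --oneline
```

The key point is that amending keeps the differential a single logical unit, just as amending keeps a Gerrit change a single patch set series.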


Matt Flaschen


Re: [Wikitech-l] [Engineering] Help! Phabricator and our code review process

2014-05-05 Thread Matthew Flaschen

On 05/02/2014 03:56 PM, C. Scott Ananian wrote:

[greg-g] cscott: James_F crazy idea here: can some teams use it for
real (I think growth is, kinda?) and export/import to a future real
instance?
frontend...


No, we're not using it for real currently.  We (Growth) have talked 
about potentially being an early adopter, but have not committed to this 
yet.


Matt Flaschen



[Wikitech-l] Mozilla's Wiki Working Group meeting Tues 1600 UTC

2014-05-05 Thread Sumana Harihareswara
Mozilla has a new Wiki Working Group[0] to develop and drive a roadmap
of improvements to http://wiki.mozilla.org. They're currently on
MediaWiki 1.19, and I predict they will want to get onto 1.23 once that
is released (since it's going to be the long-term support
release[1][2]), and that they're going to want VisualEditor and Flow as
soon as possible. However, they have a custom theme to port over.

It would be cool if MediaWiki experts could help out. They meet every
two weeks, and the next meeting is tomorrow at 1600 UTC/9am PT.[3] You
can participate.[4] Check out the agenda.[5]


[0] https://wiki.mozilla.org/Contribute/Education/Wiki_Working_Group
[1] https://www.mediawiki.org/wiki/LTS
[2] https://www.mediawiki.org/wiki/MediaWiki_1.23
[3]
http://arewemeetingyet.com/Los%20Angeles/2014-03-25/09:00/w/Wiki%20Working%20Group
[4]
https://wiki.mozilla.org/Contribute/Education/Wiki_Working_Group#How_to_Participate
[5] https://cbt.etherpad.mozilla.org/wwg-2014-05-06
-- 
Sumana Harihareswara
Senior Technical Writer
Wikimedia Foundation


[Wikitech-l] VE in 1.23? (was: Mozilla's Wiki Working Group meeting Tues 1600 UTC)

2014-05-05 Thread David Gerard
On 5 May 2014 22:59, Sumana Harihareswara suma...@wikimedia.org wrote:

 Mozilla has a new Wiki Working Group[0] to develop and drive a roadmap
  of improvements to http://wiki.mozilla.org. They're currently on
 MediaWiki 1.19, and I predict they will want to get onto 1.23 once that
 is released (since it's going to be the long-term support
 release[1][2]), and that they're going to want VisualEditor and Flow as
 soon as possible. However, they have a custom theme to port over.


Instant thread derail! So ... how is 1.23 and Visual Editor?

* Has anyone sat down and written out how to add VE magic to a 1.23
tarball install?
* VE is big and complicated. I'm not clear on what it needs. Parsoid
as a daemon or something?
* Are our esteemed packagers at Debian and Fedora in the loop?

I ask out of interest for my intranet stuff, where I would *love* a VE
to wave at users who basically can't work computers and are presently
just doing stuff in Google Docs.


- d.


Re: [Wikitech-l] VE in 1.23? (was: Mozilla's Wiki Working Group meeting Tues 1600 UTC)

2014-05-05 Thread James Forrester
On 5 May 2014 15:05, David Gerard dger...@gmail.com wrote:

 So ... how is 1.23 and Visual Editor?

 * Has anyone sat down and written out how to add VE magic to a 1.23
 tarball install?


I do not believe so, no.



 * VE is big and complicated. I'm not clear on what it needs. Parsoid
 as a daemon or something?


VE master needs:

   - MW 1.23
   - Parsoid running as a nodejs daemon

VE users will be happier if you also have the TemplateData extension
installed, but it's not required.
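For reference, a 1.23-era `LocalSettings.php` sketch wiring VE to a local Parsoid daemon — the setting names follow the extension documentation of that period, and the port and prefix values are assumptions for a default Parsoid install, not tested here:

```php
// Load VisualEditor (pre-extension.json era, hence require_once):
require_once "$IP/extensions/VisualEditor/VisualEditor.php";

// Enable VE by default for all users:
$wgDefaultUserOptions['visualeditor-enable'] = 1;

// Point VE at the Parsoid nodejs daemon (assumed default port 8000):
$wgVisualEditorParsoidURL = 'http://localhost:8000';

// Must match the wiki prefix configured in Parsoid's localsettings.js:
$wgVisualEditorParsoidPrefix = 'localhost';
```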



 * Are our esteemed packagers at Debian and Fedora in the loop?


I have no idea (which probably means no).

J.
-- 
James D. Forrester
Product Manager, VisualEditor
Wikimedia Foundation, Inc.

jforres...@wikimedia.org | @jdforrester

Re: [Wikitech-l] VE in 1.23? (was: Mozilla's Wiki Working Group meeting Tues 1600 UTC)

2014-05-05 Thread David Gerard
On 5 May 2014 23:08, James Forrester jforres...@wikimedia.org wrote:
 On 5 May 2014 15:05, David Gerard dger...@gmail.com wrote:

 So ... how is 1.23 and Visual Editor?
 * Has anyone sat down and written out how to add VE magic to a 1.23
 tarball install?
 I do not believe so, no.


If someone could do that, I hereby promise to beta-test their instructions!

(Personally, I think VE is pretty much ready for all users except
English Wikipedia.)


- d.


Re: [Wikitech-l] VE in 1.23? (was: Mozilla's Wiki Working Group meeting Tues 1600 UTC)

2014-05-05 Thread James Forrester
On 5 May 2014 15:11, David Gerard dger...@gmail.com wrote:

 On 5 May 2014 23:08, James Forrester jforres...@wikimedia.org wrote:
  On 5 May 2014 15:05, David Gerard dger...@gmail.com wrote:

  So ... how is 1.23 and Visual Editor?
  * Has anyone sat down and written out how to add VE magic to a 1.23
  tarball install?
  I do not believe so, no.

 If someone could do that, I hereby promise to beta-test their instructions!


To clarify, the existing instructions
(https://www.mediawiki.org/wiki/Extension:VisualEditor#Basic_setup_instructions)
should work just fine, but we've not tested them. If someone says these work for
them, I'm happy to upgrade my reply to yes. :-)

J.
-- 
James D. Forrester
Product Manager, VisualEditor
Wikimedia Foundation, Inc.

jforres...@wikimedia.org | @jdforrester

[Wikitech-l] Wikisource core feature is broken for more than 50 hours

2014-05-05 Thread Luiz Augusto
So sorry for the cross-posting, and for this shout for help that some may
read as forum shopping, but this is really annoying.

https://bugzilla.wikimedia.org/show_bug.cgi?id=64622

In short, on all Wikisource wikis we are unable to start working on new
pages from digitized books (or on newly overwritten uploads) without
poking the server N times in order to generate a single resized image.

Please fix it ASAP. Please see also #c18 on the mentioned bug.

Re: [Wikitech-l] Wikisource core feature is broken for more than 50 hours

2014-05-05 Thread Raylton P. Sousa
Luís... Could you explain this bug to me in more detail?
I'm on IRC.


2014-05-05 19:35 GMT-03:00 Luiz Augusto lugu...@gmail.com:

 So sorry for the cross-posting, and for this shout for help that some may
 read as forum shopping, but this is really annoying.

 https://bugzilla.wikimedia.org/show_bug.cgi?id=64622

 In short, on all Wikisource wikis we are unable to start working on new
 pages from digitized books (or on newly overwritten uploads) without
 poking the server N times in order to generate a single resized image.

 Please fix it ASAP. Please see also #c18 on the mentioned bug.

Re: [Wikitech-l] VE in 1.23? (was: Mozilla's Wiki Working Group meeting Tues 1600 UTC)

2014-05-05 Thread Gerard Meijssen
Hoi,
This does pique my interest: what makes en.wp so special?
Thanks,
 GerardM


On 6 May 2014 00:11, David Gerard dger...@gmail.com wrote:

 On 5 May 2014 23:08, James Forrester jforres...@wikimedia.org wrote:
  On 5 May 2014 15:05, David Gerard dger...@gmail.com wrote:

  So ... how is 1.23 and Visual Editor?
  * Has anyone sat down and written out how to add VE magic to a 1.23
  tarball install?
  I do not believe so, no.


 If someone could do that, I hereby promise to beta-test their instructions!

 (Personally, I think VE is pretty much ready for all users except
 English Wikipedia.)


 - d.
