Re: [Wikitech-l] Video Uploads to Commons

2015-02-04 Thread Nkansah Rexford
Thanks for all the ideas and suggestions. Will start with enabling OAuth
right away.

I'm hopeless at PHP, so I always look for Python-related hooks. I think this
will do for the OAuth? http://pythonhosted.org/mwoauth/
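
For the record, here is roughly what I have in mind, a minimal sketch of the
mwoauth handshake based on its documentation (the consumer key/secret are
placeholders; real ones come from the OAuth consumer registration page):

    # Sketch of the mwoauth owner-authorization handshake.
    # The consumer key/secret below are placeholders, not real credentials.
    from mwoauth import ConsumerToken, Handshaker

    consumer_token = ConsumerToken("my-consumer-key", "my-consumer-secret")
    handshaker = Handshaker("https://commons.wikimedia.org/w/index.php",
                            consumer_token)

    # Step 1: get a URL to send the user to, plus a request token.
    redirect, request_token = handshaker.initiate()
    print("Authorize at:", redirect)

    # Step 2: trade the callback query string for an access token.
    response_qs = input("Paste the response query string: ")
    access_token = handshaker.complete(request_token, response_qs)

    # Step 3: check whom we are now authorized as.
    identity = handshaker.identify(access_token)
    print("Authorized as:", identity["username"])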

As regards converting to WebM instead of Ogg (ogv), I tried but couldn't get
avconv to produce files in the WebM format. Perhaps I'm not putting
things together right. Will read more on that.
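
(In case it helps anyone else, this is the kind of avconv invocation I have
been trying, wrapped in Python since that is what the rest of my script uses;
file names are placeholders, and it assumes an avconv build with libvpx and
libvorbis enabled, which may well be exactly what is missing on my machine:)

    import subprocess

    # Transcode an Ogg Theora file to WebM (VP8 video + Vorbis audio).
    # Requires an avconv build with libvpx and libvorbis enabled.
    subprocess.check_call([
        "avconv",
        "-i", "input.ogv",    # placeholder source file
        "-c:v", "libvpx",     # VP8 video codec
        "-b:v", "1M",         # target video bitrate
        "-c:a", "libvorbis",  # Vorbis audio codec
        "output.webm",        # placeholder output file
    ])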

It's very interesting for me to learn that similar tools already exist. I
wish a tool like https://tools.wmflabs.org/videoconvert/ were listed on the
"ways to upload videos to Commons" page. I would have used it right away.

Really appreciate all your thoughts. With MediaWiki, I wish I were a bit
better at PHP; that's why I always look around for Python packages to
interact with the API. But with the OAuth module found, I think
authentication won't be the 'crude' way I'm doing it now.

Will reach out to 'real' users on VP soon too.
Hi,

there is the tool by Holger which does that using WebM - but it is
hosted on WMF Labs. It uses OAuth for unified login and moving the files
to Commons.

Then, a more elaborate tool for
* storing raw material at the Internet Archive
* generating patent-free WebM proxy clips for editing
* rendering high-quality videos
* moving these rendered videos to Commons directly

is the Video Editing Server, developed by some Wikipedians and an MLT
developer, hosted by the Internet Archive:

https://wikimedia.meltvideo.com/

It also uses OAuth for login and moving files to Commons.

The workflow with this:

# upload all your raw files to the server
## for long-term storage
## to make them available to other editors
## to let the server use them in the rendering process

# the server transcodes all files into WebM "proxy clips"

# editors download the WebM proxy clips
## do the editing on your computer
## create an MLT project file (e.g. using kdenlive or another MLT-based
video editor)

# upload the project file
## server will replace proxy clips with raw material
## server will render video project
## server will move generated file to Commons

It comes with a search engine, metadata forms... it's still pretty new
(development started in December '14) but can be used.
We plan to add some more features: tagging using Wikidata QIDs
(hence allowing multilingual / localised tagging and searching), adding
more project file formats and renderers, making old project file
revisions available for download, giving it a nice vector-based theme,
and giving it a better domain name and SSL certificate...

Play with it and have fun!

For source code or any issues refer to GitHub:
https://github.com/ddennedy/wikimedia-video-editing-server

See also the wiki there for the specs and a deployment guide:
https://github.com/ddennedy/wikimedia-video-editing-server/wiki


/Manuel
--
Wikimedia CH - Verein zur Förderung Freien Wissens
Lausanne, +41 (21) 34066-22 - www.wikimedia.ch

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Wikimedia Hackathon 2015 in Phabricator

2015-02-04 Thread Quim Gil
Hi, just a note to say that the Phabricator project for the Wikimedia
Hackathon 2015 (Lyon, 23-25 May) has been created.

https://phabricator.wikimedia.org/tag/wikimedia-hackathon-2015/

You are invited to ask questions, provide feedback, and get involved.

Important note: even though we are just bootstrapping the project there, the
volunteers at Wikimedia France have done a ton of work already (i.e. the
venue and main services are secured).

-- 
Quim Gil
Engineering Community Manager @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-04 Thread Alexandros Kosiaris
> Good point. Ideally, what we would need to do is provide the right tools to
> developers to create services, which can then be placed "strategically"
> around DCs (in cooperation with Ops, ofc).

Yes. As an organization we should provide good tools that allow
developers to create services. I do fail to understand the
"strategically" around DCs part though.

> For v1, however, we plan to
> provide only logical separation (to a certain extent) via modules which can
> be dynamically loaded/unloaded from RESTBase.

Modules? Care to explain a bit more? AFAIK RESTBase is a revision
storage service, and to be honest I am struggling to understand what
modules you are referring to and the architecture behind those
modules.

> In return, RESTBase will
> provide them with routing, monitoring, caching and authorisation out of the
> box. The good point here is that this 'modularisation' eases the transition
> to a more-decomposed orchestration SOA model. Going in that direction,
> however, requires some prerequisites to be fulfilled, such as [1].

While revision caching can very well be done by RESTBase (AFAIK, that
is one of the reasons it is being created), authorization (not
revision authorization, but the generic authentication/authorization I am
referring to) and monitoring should not be provided by RESTBase to any
service. Especially monitoring. Services (whatever their nature)
should provide discoverable (REST if you like, as I suspect you do)
endpoints that allow monitoring via third-party tools, and not depend
on another service for that. My take is that there should be a
swagger manifest that describes a basic monitoring framework, and
services should each independently implement it (including RESTBase).
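
To make that concrete, here is a toy sketch of such an independently
implemented endpoint; the path and response fields are made up, since the
shared manifest I am proposing does not exist yet:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class MonitoringHandler(BaseHTTPRequestHandler):
        # Hypothetical path; the real one would come from the shared
        # swagger manifest that all services agree to implement.
        def do_GET(self):
            if self.path == "/_health":
                body = json.dumps({"status": "ok",
                                   "service": "example"}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Any third-party monitoring tool can poll /_health directly,
        # with no dependency on another service.
        HTTPServer(("", 8080), MonitoringHandler).serve_forever()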

I am also a bit unclear on the routing aspect. Care to point out an
up-to-date architectural diagram? I have been told in person that the
one at https://www.npmjs.com/package/restbase is not up to date, so I
can't comment on that.


-- 
Alexandros Kosiaris 

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] SOA in .NET, or Microsoft is going open source MIT style

2015-02-04 Thread Yuri Astrakhan


For those not addicted to Slashdot, see here:
http://news.slashdot.org/story/15/02/04/0332238/microsoft-open-sources-coreclr-the-net-execution-engine

Licensed under MIT, plus an additional patents promise.

If Microsoft continues in this open-source direction as before, I
suspect we just might benefit from some good-quality components as
individual services on a completely open source stack.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] SOA in .NET, or Microsoft is going open source MIT style

2015-02-04 Thread Nikolas Everett
On Wed, Feb 4, 2015 at 5:09 AM, Yuri Astrakhan 
wrote:

> 
>
> For those not addicted to Slashdot, see here:
> http://news.slashdot.org/story/15/02/04/0332238/microsoft-open-sources-coreclr-the-net-execution-engine
>
> Licensed under MIT, plus an additional patents promise.
>

I'm not sure how relevant it is, but are promises legally binding?
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] SOA in .NET, or Microsoft is going open source MIT style

2015-02-04 Thread David Gerard
Functionally, yes. If you make a loud public declaration "WE SHALL NOT SUE"
and then you sue, judges *tend* to look upon it very unfavourably. YMMV of
course.

On 4 February 2015 at 13:42, Nikolas Everett  wrote:
> On Wed, Feb 4, 2015 at 5:09 AM, Yuri Astrakhan 
> wrote:
>
>> 
>>
>> For those not addicted to Slashdot, see here:
>> http://news.slashdot.org/story/15/02/04/0332238/microsoft-open-sources-coreclr-the-net-execution-engine
>>
>> Licensed under MIT, plus an additional patents promise.
>>
>
> I'm not sure how relevant it is, but are promises legally binding?
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-04 Thread Brad Jorsch (Anomie)
On Wed, Feb 4, 2015 at 2:33 AM, Erik Moeller  wrote:

> If not, then I think one thing to keep in mind is how to organize the
> transformation code in a manner that it doesn't just become a
> server-side hodgepodge still only useful to one consumer, to avoid
> some of the pitfalls Brian mentions.


I think the MobileFrontend extension has probably run into these pitfalls
already.


> Say you want to reformat infoboxes on the mobile web, but not do all the
> other stuff the mobile app does. Can you just get that specific
> transformation? Are some transformations dependent on others?  Or say we
> want to make a change only for the output that gets fed into the PDF
> generator, but not for any other outputs. Can we do that?
>

Maybe what we really need is a way to register transformation classes (e.g.
something like $wgAPIModules). Then give ApiParse a parameter to
select transformations and apply them to wikitext before and to HTML after
calling the parser. And we'd probably want to do the wikitext-before bit in
ApiExpandTemplates too, and add a new action that takes HTML and applies
only the HTML-after transforms to it.

Or we could go as far as giving ParserOptions (or the ParserEnvironment I
recently heard Tim propose) a list of transformations, to allow for
transformations at some of the points where we have parser hooks. Although
that would probably cause problems for Parsoid.


-- 
Brad Jorsch (Anomie)
Software Engineer
Wikimedia Foundation
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Video Uploads to Commons

2015-02-04 Thread Magnus Manske
I wrote a PHP class specifically to deal with Wikimedia OAuth,
including uploads (not chunked, though). May be helpful.

https://bitbucket.org/magnusmanske/magnustools/src/9dc80c2479a41239b9661b35504dcaaaedf367f7/public_html/php/oauth.php?at=master


On Wed Feb 04 2015 at 09:29:28 Nkansah Rexford  wrote:

> Thanks for all the ideas and suggestions. Will start with enabling OAuth
> right away.
>
> I'm hopeless at PHP, so I always look for Python-related hooks. I think this
> will do for the OAuth? http://pythonhosted.org/mwoauth/
>
> As regards converting to WebM instead of Ogg (ogv), I tried but couldn't get
> avconv to produce files in the WebM format. Perhaps I'm not putting
> things together right. Will read more on that.
>
> It's very interesting for me to learn that similar tools already exist. I
> wish a tool like https://tools.wmflabs.org/videoconvert/ were listed on the
> "ways to upload videos to Commons" page. I would have used it right away.
>
> Really appreciate all your thoughts. With MediaWiki, I wish I were a bit
> better at PHP; that's why I always look around for Python packages to
> interact with the API. But with the OAuth module found, I think
> authentication won't be the 'crude' way I'm doing it now.
>
> Will reach out to 'real' users on VP soon too.
> Hi,
>
> there is the tool by Holger which does that using WebM - but it is
> hosted on WMF Labs. It uses OAuth for unified login and moving the files
> to Commons.
>
> Then, a more elaborate tool for
> * storing raw material at the Internet Archive
> * generating patent-free WebM proxy clips for editing
> * rendering high-quality videos
> * moving these rendered videos to Commons directly
>
> is the Video Editing Server, developed by some Wikipedians and an MLT
> developer, hosted by the Internet Archive:
>
> https://wikimedia.meltvideo.com/
>
> It also uses OAuth for login and moving files to Commons.
>
> The workflow with this:
>
> # upload all your raw files to the server
> ## for long-term storage
> ## to make them available to other editors
> ## to let the server use them in the rendering process
>
> # the server transcodes all files into WebM "proxy clips"
>
> # editors download the WebM proxy clips
> ## do the editing on your computer
> ## create an MLT project file (e.g. using kdenlive or another MLT-based
> video editor)
>
> # upload the project file
> ## server will replace proxy clips with raw material
> ## server will render video project
> ## server will move generated file to Commons
>
> It comes with a search engine, metadata forms... it's still pretty new
> (development started in December '14) but can be used.
> We plan to add some more features: tagging using Wikidata QIDs
> (hence allowing multilingual / localised tagging and searching), adding
> more project file formats and renderers, making old project file
> revisions available for download, giving it a nice vector-based theme,
> and giving it a better domain name and SSL certificate...
>
> Play with it and have fun!
>
> For source code or any issues refer to GitHub:
> https://github.com/ddennedy/wikimedia-video-editing-server
>
> See also the wiki there for the specs and a deployment guide:
> https://github.com/ddennedy/wikimedia-video-editing-server/wiki
>
>
> /Manuel
> --
> Wikimedia CH - Verein zur Förderung Freien Wissens
> Lausanne, +41 (21) 34066-22 - www.wikimedia.ch
>
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-04 Thread Brian Gerstle
TL;DR: this discussion is great, but I think moving to docs/wikis/etc.
instead of continuing the thread could improve communication and give the
people who end up working on this something to reference later. Could just
be my n00b-ness, but I thought others might share the sentiment.

I'm still new here, so please excuse me for possibly going against
convention, but does anyone else think it would be beneficial to move this
problem & proposal into a living document (RFC, wiki, google doc,
whatever)?  In doing so, I hope we can:

   1. Keep track of what the actual problems are along with the proposed
   solution(s)
   2. Group related concerns together, making them easier for those voicing
   them to be heard while also facilitating understanding and resolution
   3. Give us something concrete to go back to whenever we decide to
   dedicate resources to solving this problem, whether it's the next mobile
   apps sprint or something the mobile web team needs more urgently
   4. Prevent the points raised in the email (or the problem itself) from
   being forgotten or lost in the deluge of other emails we get every day

I don't know about you, but I can't mentally juggle the multiple problems,
implications, and the great points everyone is raising—which keeping it in
an email forces me to do.

Either way, looking forward to discussing this further and taking steps to
solve it in the near term.

- Brian


On Wed, Feb 4, 2015 at 10:22 AM, Brad Jorsch (Anomie)  wrote:

> On Wed, Feb 4, 2015 at 2:33 AM, Erik Moeller  wrote:
>
> > If not, then I think one thing to keep in mind is how to organize the
> > transformation code in a manner that it doesn't just become a
> > server-side hodgepodge still only useful to one consumer, to avoid
> > some of the pitfalls Brian mentions.
>
>
> I think the MobileFrontend extension has probably run into these pitfalls
> already.
>
>
> > Say you want to reformat infoboxes on the mobile web, but not do all the
> > other stuff the mobile app does. Can you just get that specific
> > transformation? Are some transformations dependent on others?  Or say we
> > want to make a change only for the output that gets fed into the PDF
> > generator, but not for any other outputs. Can we do that?
> >
>
> Maybe what we really need is a way to register transformation classes (e.g.
> something like $wgAPIModules). Then give ApiParse a parameter to
> select transformations and apply them to wikitext before and to HTML after
> calling the parser. And we'd probably want to do the wikitext-before bit in
> ApiExpandTemplates too, and add a new action that takes HTML and applies
> only the HTML-after transforms to it.
>
> Or we could go as far as giving ParserOptions (or the ParserEnvironment I
> recently heard Tim propose) a list of transformations, to allow for
> transformations at some of the points where we have parser hooks. Although
> that would probably cause problems for Parsoid.
>
>
> --
> Brad Jorsch (Anomie)
> Software Engineer
> Wikimedia Foundation
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>



-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Fwd: wmf15 performance regression

2015-02-04 Thread Greg Grossmeier
FYI. A performance regression was found and we rolled back all wikis to
1.25wmf14. Follow the bug mentioned for updates.

- Forwarded message from Ori Livneh  -

> Date: Wed, 4 Feb 2015 02:49:37 -0800
> From: Ori Livneh 
> To: "Development and Operations engineers (WMF only)" 
> 
> Subject: [Engineering] wmf15 performance regression
> 
> Hello,
> 
> The roll-out of wmf15 to non-Wikipedias appears to have introduced a
> significant performance regression, which registered as a 1-second increase
> in median page load times across the cluster. The regression abated when I
> rolled back .
> 
> http://i.imgur.com/2sPxYNg.png
> http://i.imgur.com/NFbyOBu.png
> 
> There should be no MediaWiki deployments until this bug is isolated and
> resolved.
> Tracked in .
> 
> Ori

> ___
> Engineering mailing list
> engineer...@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/engineering


- End forwarded message -

-- 
| Greg Grossmeier            GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg           A18D 1138 8E47 FAC8 1C7D |


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Microservices/SOA: let's continue the discussion

2015-02-04 Thread Giuseppe Lavagetto

Hi all,

Ever since the Dev Summit discussions on SOA/microservices[1] I have been
pondering the outcomes, and I want to post some afterthoughts to these
lists. Having been one of the most vocal in raising concerns about
microservices, and having had experience with a heavily service-oriented web
platform before, I think I owe my fellow engineers some lengthier
explanations. Also, let me say that I am very happy with both the discussions
we had at the Dev Summit and their outcomes - including the fact that the Ops
and Services teams both share the desire to work closely together on this.

I tried to write down some thoughts about this, and ended up with a way too 
long email. So I decided to put up a page on wikitech here:

https://wikitech.wikimedia.org/wiki/User:Giuseppe_Lavagetto/MicroServices


Apart from my blabbing, I have three questions on our strategy: how, when,
what? None of this is clear to me as of today, and I wonder whether anyone
has a clear picture of where we want to be in 6-to-12 months with
microservices. If someone has a clear plan, please speak up, so that we can
tackle the challenges ahead of us on a practical basis, and not just based
on some grand principles :)


Cheers

Giuseppe

[1] I prefer the latter term, probably because SOA sounds bloated to me, and 
reminds me of enterprise software architectures that I don’t like.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-04 Thread Bryan Davis
On Wed, Feb 4, 2015 at 8:51 AM, Brian Gerstle  wrote:
> TL;DR: this discussion is great, but I think moving to docs/wikis/etc.
> instead of continuing the thread could improve communication and give the
> people who end up working on this something to reference later. Could just
> be my n00b-ness, but I thought others might share the sentiment.
>
> I'm still new here, so please excuse me for possibly going against
> convention, but does anyone else think it would be beneficial to move this
> problem & proposal into a living document (RFC, wiki, google doc,
> whatever)?  In doing so, I hope we can:
>
>1. Keep track of what the actual problems are along with the proposed
>solution(s)
>2. Group related concerns together, making them easier for those voicing
>them to be heard while also facilitating understanding and resolution
>3. Give us something concrete to go back to whenever we decide to
>dedicate resources to solving this problem, whether it's the next mobile
>apps sprint or something the mobile web team needs more urgently
>4. Prevent the points raised in the email (or the problem itself) from
>being forgotten or lost in the deluge of other emails we get every day
>
> I don't know about you, but I can't mentally juggle the multiple problems,
> implications, and the great points everyone is raising—which keeping it in
> an email forces me to do.
>
> Either way, looking forward to discussing this further and taking steps to
> solve it in the near term.

+1 This sort of major design change is exactly the sort of thing that
I think the RfC process is good at helping with. Start with a straw
man proposal, get feedback from other engineers and iterate before
investing in code changes. The sometimes frustrating part is that
feedback doesn't always come as fast as Product and/or the team would
like but we can try to accelerate that by promoting the topic more
often.

Bryan
-- 
Bryan Davis  Wikimedia Foundation
[[m:User:BDavis_(WMF)]]  Sr Software EngineerBoise, ID USA
irc: bd808v:415.839.6885 x6855

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-04 Thread Gabriel Wicke
On Tue, Feb 3, 2015 at 11:33 PM, Erik Moeller  wrote:

> I think you will generally find agreement that moving client-side
> transformations that only live in the app to server-side code that
> enables access by multiple consumers and caching is a good idea. If
> there are reasons not to do this, now'd be a good time to speak up.
>
> If not, then I think one thing to keep in mind is how to organize the
> transformation code in a manner that it doesn't just become a
> server-side hodgepodge still only useful to one consumer, to avoid
> some of the pitfalls Brian mentions. Say you want to reformat
> infoboxes on the mobile web, but not do all the other stuff the mobile
> app does. Can you just get that specific transformation? Are some
> transformations dependent on others?  Or say we want to make a change
> only for the output that gets fed into the PDF generator, but not for
> any other outputs. Can we do that?
>


Right now the plan is to start from plain Parsoid HTML. The mobile app
service would be called for each new revision to prime the cache / storage.
Chaining transformations might be possible, but right now it's not clear
that it would be worth the complexity. Currently AFAIK only OCG and mobile
apps have strong transformation needs, and there seems to be little overlap
in the way they transform the content. Mobile web still wraps sections into
divs, but we are looking into eliminating that by possibly integrating the
section markup into the regular Parsoid output.

Regarding general-purpose APIs vs. mobile: I think mobile is in some ways a
special case as their content transformation needs are closely coupled with
the way the apps are presenting the content. Additionally, at least until
SPDY is deployed there is a strong performance incentive to bundle
information in a single response tailored to the app's needs. One strategy
employed by Netflix is to introduce a second API layer

on
top of the general content API to handle device-specific needs. I think
this is a sound strategy, as it contains the volatility in a separate layer
while ensuring that everything is ultimately consuming the general-purpose
API. If the need for app-specific massaging disappears over time, we can
simply shut down the custom service / API end point without affecting the
general API.
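
To illustrate the layering, here is a toy sketch; the endpoint URL and the
"massaging" are invented for illustration, and the only point is that the
device-specific layer consumes nothing but the general-purpose API:

    import json
    import urllib.parse
    import urllib.request

    # Invented base URL standing in for the general-purpose content API.
    GENERAL_API = "https://rest.example.org/page/html/"

    def fetch_general_html(title):
        # The only data source: the general-purpose API.
        url = GENERAL_API + urllib.parse.quote(title)
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8")

    def mobile_payload(title):
        # Device-specific layer: bundle what the app needs into a single
        # response, derived entirely from the general API's output.
        html = fetch_general_html(title)
        lead = html.split("<h2", 1)[0]  # crude lead-section extraction
        return json.dumps({"title": title, "lead_html": lead})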

Gabriel
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-04 Thread Chris McMahon
> One strategy
> employed by Netflix is to introduce a second API layer
> <http://techblog.netflix.com/2012/07/embracing-differences-inside-netflix.html>
> on top of the general content API to handle device-specific needs. I think
> this is a sound strategy, as it contains the volatility in a separate layer
> while ensuring that everything is ultimately consuming the general-purpose
> API.


This design appears often enough that it can likely be called a "design
pattern". The Selenium/WebDriver project did exactly the same thing[1].
The API for Selenium v2 has about 1/3 as many functions as Selenium v1.
People who use Selenium v2 build their own high-level APIs based on the
basic core set of functions available.

Defining the scope of the "general content API" can be challenging.

[1] http://w3c.github.io/webdriver/webdriver-spec.html
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-04 Thread Dan Garry
On 4 February 2015 at 08:40, Bryan Davis  wrote:

> +1 This sort of major design change is exactly the sort of thing that
> I think the RfC process is good at helping with. Start with a straw
> man proposal, get feedback from other engineers and iterate before
> investing in code changes. The sometimes frustrating part is that
> feedback doesn't always come as fast as Product and/or the team would
> like but we can try to accelerate that by promoting the topic more
> often.
>

Our plan is to have a spike to experiment and determine whether there are
any early roadblocks in the proposed solution. We're not going to consider
committing to the RESTBase/Node.js path until after that. It seems quite
reasonable to me to also have an RfC alongside our experimentation to try
to think up alternative solutions, and to invest in experimenting with those
solutions too, because we're definitely open to anything that helps us move
forwards at this stage.

We'll start writing up an RfC and see where it takes us.

Dan

-- 
Dan Garry
Associate Product Manager, Mobile Apps
Wikimedia Foundation
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-04 Thread Erik Moeller
On Wed, Feb 4, 2015 at 8:41 AM, Gabriel Wicke  wrote:

> Regarding general-purpose APIs vs. mobile: I think mobile is in some ways a
> special case as their content transformation needs are closely coupled with
> the way the apps are presenting the content. Additionally, at least until
> SPDY is deployed there is a strong performance incentive to bundle
> information in a single response tailored to the app's needs.

A notion of schemas that declare a specific set of transformations to
be applied or not applied might help avoid overcomplicating things early
on, while addressing different transformation needs even within the
growing number of mobile use cases (Android app alpha/beta/stable, iOS
app alpha/beta/stable, mobile web alpha/beta/stable, third-party
apps), and potentially making code re-usable for desktop needs down
the road. Since the number of schemas would be limited, and specifying
the correct schema would result in a single response, performance
could be optimized for each use case.
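
A rough sketch of what I mean (schema and transformation names are invented;
the point is that a consumer names one schema and gets exactly one, fully
determined response):

    # Hypothetical registry: each schema pins down which transformations
    # run, and in which order.
    TRANSFORMS = {
        "strip_edit_links": lambda html: html.replace("mw-editsection", ""),
        "compact_infobox": lambda html: html,  # placeholder no-op
    }

    SCHEMAS = {
        "android-app-stable": ["strip_edit_links", "compact_infobox"],
        "mobile-web-beta": ["strip_edit_links"],
        "pdf-generator": [],  # untransformed output
    }

    def render(html, schema):
        # Apply the schema's transformations in their declared order.
        for name in SCHEMAS[schema]:
            html = TRANSFORMS[name](html)
        return html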

Erik
-- 
Erik Möller
VP of Product & Strategy, Wikimedia Foundation

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Video Uploads to Commons

2015-02-04 Thread Manuel Schneider
Hi Magnus,

On 02/04/2015 04:40 PM, Magnus Manske wrote:
> I wrote a PHP class to specifically deal with the Wikimedia OAuth,
> including uploads (not chunked though). May be helpful.
> 
> https://bitbucket.org/magnusmanske/magnustools/src/9dc80c2479a41239b9661b35504dcaaaedf367f7/public_html/php/oauth.php?at=master

Very good! I have written a PHP library for the MediaWiki API myself - it
does not do OAuth, because it is command-line based, but it can do chunked
uploads!
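
For the Python folks in this thread: the chunked upload protocol itself is
just a loop over action=upload with stash/offset/filekey. Here is a rough
sketch, assuming an already-authenticated requests session and a valid CSRF
token; parameter names follow the API documentation, error handling omitted:

    import os
    import requests

    API = "https://commons.wikimedia.org/w/api.php"

    def chunked_upload(session, path, filename, token, chunk_size=1 << 20):
        # Stash the file in 1 MiB chunks, then publish it under its name.
        filesize = os.path.getsize(path)
        offset, filekey = 0, None
        with open(path, "rb") as f:
            while offset < filesize:
                chunk = f.read(chunk_size)
                params = {"action": "upload", "format": "json", "stash": 1,
                          "filename": filename, "filesize": filesize,
                          "offset": offset, "token": token}
                if filekey:
                    params["filekey"] = filekey  # continue the same stash
                r = session.post(API, data=params,
                                 files={"chunk": (filename, chunk)}).json()
                filekey = r["upload"]["filekey"]
                offset += len(chunk)
        # Final request: publish the stashed upload.
        return session.post(API, data={"action": "upload", "format": "json",
                                       "filename": filename,
                                       "filekey": filekey,
                                       "token": token}).json()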

Maybe there is a way to add OAuth to it; then I will shamelessly steal
from your code.

My code is here:

https://github.com/masterssystems/phpapibot

I know that at least Dan used some of it for inspiration for the video
editing server upload component ;-)


/Manuel
-- 
Wikimedia CH - Verein zur Förderung Freien Wissens
Lausanne, +41 (21) 34066-22 - www.wikimedia.ch

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Scrum of Scrums notes for 2015-02-04

2015-02-04 Thread Dan Andreescu
https://www.mediawiki.org/wiki/Scrum_of_scrums/2015-02-04
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Wikimedia-l] Quarterly reviews of high priority WMF initiatives

2015-02-04 Thread Tilman Bayer
Minutes and slides from last week's quarterly review of the
Foundation's Editing (formerly VisualEditor) team await perusal at
https://meta.wikimedia.org/wiki/WMF_Metrics_and_activities_meetings/Quarterly_reviews/Editing/January_2015

On Wed, Dec 19, 2012 at 6:49 PM, Erik Moeller  wrote:
> Hi folks,
>
> to increase accountability and create more opportunities for course
> corrections and resourcing adjustments as necessary, Sue's asked me
> and Howie Fung to set up a quarterly project evaluation process,
> starting with our highest priority initiatives. These are, according
> to Sue's narrowing focus recommendations which were approved by the
> Board [1]:
>
> - Visual Editor
> - Mobile (mobile contributions + Wikipedia Zero)
> - Editor Engagement (also known as the E2 and E3 teams)
> - Funds Dissemination Committee and expanded grant-making capacity
>
> I'm proposing the following initial schedule:
>
> January:
> - Editor Engagement Experiments
>
> February:
> - Visual Editor
> - Mobile (Contribs + Zero)
>
> March:
> - Editor Engagement Features (Echo, Flow projects)
> - Funds Dissemination Committee
>
> We’ll try doing this on the same day or adjacent to the monthly
> metrics meetings [2], since the team(s) will give a presentation on
> their recent progress, which will help set some context that would
> otherwise need to be covered in the quarterly review itself. This will
> also create open opportunities for feedback and questions.
>
> My goal is to do this in a manner where even though the quarterly
> review meetings themselves are internal, the outcomes are captured as
> meeting minutes and shared publicly, which is why I'm starting this
> discussion on a public list as well. I've created a wiki page here
> which we can use to discuss the concept further:
>
> https://meta.wikimedia.org/wiki/Metrics_and_activities_meetings/Quarterly_reviews
>
> The internal review will, at minimum, include:
>
> Sue Gardner
> myself
> Howie Fung
> Team members and relevant director(s)
> Designated minute-taker
>
> So for example, for Visual Editor, the review team would be the Visual
> Editor / Parsoid teams, Sue, me, Howie, Terry, and a minute-taker.
>
> I imagine the structure of the review roughly as follows, with a
> duration of about 2 1/2 hours divided into 25-30 minute blocks:
>
> - Brief team intro and recap of team's activities through the quarter,
> compared with goals
> - Drill into goals and targets: Did we achieve what we said we would?
> - Review of challenges, blockers and successes
> - Discussion of proposed changes (e.g. resourcing, targets) and other
> action items
> - Buffer time, debriefing
>
> Once again, the primary purpose of these reviews is to create improved
> structures for internal accountability, escalation points in cases
> where serious changes are necessary, and transparency to the world.
>
> In addition to these priority initiatives, my recommendation would be
> to conduct quarterly reviews for any activity that requires more than
> a set amount of resources (people/dollars). These additional reviews
> may however be conducted in a more lightweight manner and internally
> to the departments. We’re slowly getting into that habit in
> engineering.
>
> As we pilot this process, the format of the high priority reviews can
> help inform and support reviews across the organization.
>
> Feedback and questions are appreciated.
>
> All best,
> Erik
>
> [1] https://wikimediafoundation.org/wiki/Vote:Narrowing_Focus
> [2] https://meta.wikimedia.org/wiki/Metrics_and_activities_meetings
> --
> Erik Möller
> VP of Engineering and Product Development, Wikimedia Foundation
>
> Support Free Knowledge: https://wikimediafoundation.org/wiki/Donate
>
> ___
> Wikimedia-l mailing list
> wikimedi...@lists.wikimedia.org
> Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l



-- 
Tilman Bayer
Senior Analyst
Wikimedia Foundation
IRC (Freenode): HaeB

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] More news on Wikidata Query Indexing Strategy

2015-02-04 Thread Nikolas Everett
tl/dr: The technology we started building against (Titan) is probably
dead.  We're reopening the investigation for a backing technology.

Yesterday DataStax announced that they'd acquired ThinkAurelius, the
company for whom almost all the Titan developers work. The ZDNet article
made it pretty clear that they are killing the project:

> "We're not going to do an integration. The play here is we'll take
> everything that's been done on Titan as inspiration, and maybe some of the
> Titan project will make it into DSE Graph," DataStax engineering VP Martin
> Van Ryswyk said.


While it's certainly possible that someone from the community will come out
of the woodwork and continue Titan, it has now lost almost all of its top
developers.  It looks like there are some secret succession discussions
going on, but I'm not holding out hope that anything will come of it.  This
pretty much blows this project's schedule of having a hardware request by
the end of the month and a publicly released beta at the end of March.

Anyway, we're reopening the investigation to pick a new backend.  We're
including more options than we had before, as it's become clear that open
source graph databases are a bit of a wild-west space.  But there are people
waiting on this.  The developer summit made that clear.  So we're not going
to do the month-long dive into each choice like we did last time.  I'm not
100% sure exactly what we'll do, but I can assure you we'll be careful.

I know you might want to talk about other options - you may as well stuff
them on
https://www.mediawiki.org/wiki/Wikibase/Indexing#Other_possible_candidates
and we'll get to them.  As always, you can check out our workboard to
see what we're actually working on.

Titan is still in the running, assuming it gets active maintainers.
OrientDB, which we evaluated last round, is still in there too.  So too are
GraphX and Neo4j.  And ArangoDB.  And Magnus' WDQ, though there we'd get
much more involved in maintenance, I think.  And writing a TinkerPop
implementation on top of Elasticsearch.  That's not a serious contender:
it'd get geo support for free, but it's really just a low bar to compare
all the other options to.

Thanks,

Nik 
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] More news on Wikidata Query Indexing Strategy

2015-02-04 Thread Nikolas Everett
Top posting to add context: this is for the initiative to get a version of
Magnus' wonderful http://wdq.wmflabs.org/ running in production at WMF.

On Wed, Feb 4, 2015 at 4:50 PM, Nikolas Everett 
wrote:

> tl/dr: The technology we started building against (Titan) is probably
> dead.  We're reopening the investigation for a backing technology.
>
> Yesterday DataStax announced that they'd acquired ThinkAurelius, the
> company for whom almost all the Titan developers work. The ZDNet article
> made it pretty clear that they are killing the project:
>
>> "We're not going to do an integration. The play here is we'll take
>> everything that's been done on Titan as inspiration, and maybe some of the
>> Titan project will make it into DSE Graph," DataStax engineering VP Martin
>> Van Ryswyk said.
>
>
> While it's certainly possible that someone from the community will come out
> of the woodwork and continue Titan, it has now lost almost all of its top
> developers.  It looks like there are some secret succession discussions
> going on, but I'm not holding out hope that anything will come of it.  This
> pretty much blows this project's schedule of having a hardware request by
> the end of the month and a publicly released beta at the end of March.
>
> Anyway, we're reopening the investigation to pick a new backend.  We're
> including more options than we had before, as it's become clear that open
> source graph databases are a bit of a wild-west space.  But there are people
> waiting on this.  The developer summit made that clear.  So we're not going
> to do the month-long dive into each choice like we did last time.  I'm not
> 100% sure exactly what we'll do, but I can assure you we'll be careful.
>
> I know you might want to talk about other options - you may as well stuff
> them on
> https://www.mediawiki.org/wiki/Wikibase/Indexing#Other_possible_candidates
> and we'll get to them.  As always, you can check out our workboard
> to see what we're actually working on.
>
> Titan is still in the running, assuming it gets active maintainers.
> OrientDB, which we evaluated last round, is still in there too.  So too are
> GraphX and Neo4j.  And ArangoDB.  And Magnus' WDQ, though there we'd get
> much more involved in maintenance, I think.  And writing a TinkerPop
> implementation on top of Elasticsearch.  That's not a serious contender:
> it'd get geo support for free, but it's really just a low bar to compare
> all the other options to.
>
> Thanks,
>
> Nik 
>

And, to add more context, we chose not to just immediately deploy Magnus'
WDQ because we didn't want to maintain a graph database ourselves.  You
should now be able to appreciate the irony of the situation more
thoroughly.  It's healthy to find humor where you can.

Nik
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Improving our code review efficiency

2015-02-04 Thread Federico Leva (Nemo)

> But this doesn't remove it from the projects review queue (or search
> queries, if you just use "project:mediawiki/extensions/MobileFrontend
> status:open").

IMHO the distinction between cleaning up personal and global dashboards 
is not so important. In the end, code review is performed by 
individuals. If each reviewer has a more relevant review queue, all 
reviewers will be more efficient and the backlog will decrease for everyone.


After all, we see from
https://www.mediawiki.org/wiki/Gerrit/Reports/Code_review_activity that
20 reviewers do 50% of the merging in Gerrit. Some queries, like those
listed at https://www.mediawiki.org/wiki/Gerrit/Navigation, show that
over ten users have over 100 review requests in their dashboards, among
those who merged something in the last month. I doubt that's efficient
for them.


Nemo

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Improving our code review efficiency

2015-02-04 Thread Quim Gil
What about a Gerrit Cleanup Day involving all Wikimedia Foundation
developers and whoever else wants to be involved?

Feedback welcome: https://phabricator.wikimedia.org/T88531

-- 
Quim Gil
Engineering Community Manager @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-04 Thread Corey Floyd
On Wed, Feb 4, 2015 at 11:41 AM, Gabriel Wicke  wrote:

> Regarding general-purpose APIs vs. mobile: I think mobile is in some ways a
> special case as their content transformation needs are closely coupled with
> the way the apps are presenting the content. Additionally, at least until
> SPDY is deployed there is a strong performance incentive to bundle
> information in a single response tailored to the app's needs. One strategy
> employed by Netflix is to introduce a second API layer
> <http://techblog.netflix.com/2012/07/embracing-differences-inside-netflix.html>
> on top of the general content API to handle device-specific needs. I think
> this is a sound strategy, as it contains the volatility in a separate layer
> while ensuring that everything is ultimately consuming the general-purpose
> API. If the need for app-specific massaging disappears over time, we can
> simply shut down the custom service / API end point without affecting the
> general API.
>


I can definitely understand the motivation for providing a mobile-specific
service layer - so if the services team wants to implement the API in this
way and support that architecture, I am totally on board.

My remaining hesitation here is that, from my reading of this proposal, the
mobile team is the owner of implementing this service, not the services
team. (Maybe I am misreading?)

This leads me to ask questions like:
Why is the mobile apps team investigating which is the best server-side
technology? That seems outside of our domain knowledge.
Who will be responsible for maintaining this code?
Who will be testing it to make sure that it is performant?

I'm new, so maybe these answers are obvious to others, but to me they seem
fuzzy when responsibilities are divided between two teams.

I would propose that this be a project that the Services Team owns. And
that the Mobile Apps Team defines specs on what they need the new service
to provide.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] [gerrit] EUREKA!

2015-02-04 Thread Brian Gerstle
Go to a change, click on the gitblit
<https://git.wikimedia.org/commit/apps%2Fios%2Fwikipedia/6532021b4f4b1f09390b1ffc3f09d149b2a8d9d1>
link next to a patch set, then behold: MAGIC!!!
<https://git.wikimedia.org/commitdiff/apps%2Fios%2Fwikipedia/712f033031c3c11fe8d521f7fdac4252986ee741>

GitHub-like diff viewer! No more "All Side-by-Side" w/ 1e6 tabs open.

Enjoy!

Brian


-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [gerrit] EUREKA!

2015-02-04 Thread James Douglas
Hooray!  Thank you for this!  Gerrit's multi-tab diff has been my biggest
pain point in migrating from GitHub.

On Wed, Feb 4, 2015 at 2:34 PM, Brian Gerstle 
wrote:

> Go to a change, click on the gitblit
> <https://git.wikimedia.org/commit/apps%2Fios%2Fwikipedia/6532021b4f4b1f09390b1ffc3f09d149b2a8d9d1>
> link next to a patch set, then behold: MAGIC!!!
> <https://git.wikimedia.org/commitdiff/apps%2Fios%2Fwikipedia/712f033031c3c11fe8d521f7fdac4252986ee741>
>
> GitHub-like diff viewer! No more "All Side-by-Side" w/ 1e6 tabs open.
>
> Enjoy!
>
> Brian
>
>
> --
> EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
> IRC: bgerstle
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-04 Thread Brion Vibber
I think the way we'd want to go is roughly to have a *partnership between*
the Services and Mobile teams produce and maintain the service.

(Note that the state of the art is that Mobile Apps are using Mobile Web's
MobileFrontend extension as an intermediate API to aggregate & format page
data -- which basically means Max has done the server-side API work for
Mobile Apps so far.)

I'd expect to see Max and/or someone else from the Mobile team
collaborating with the Services team to create what y'all need:
1) something that does what Mobile Apps needs it to...
2) and can be maintained like Services needs it to.

In general I'm in favor of more ad-hoc project-specific teams rather than
completely siloing every service to the Services group, or every mobile UI
to the Mobile group.

-- brion

On Wed, Feb 4, 2015 at 2:29 PM, Corey Floyd  wrote:

> On Wed, Feb 4, 2015 at 11:41 AM, Gabriel Wicke 
> wrote:
>
> > Regarding general-purpose APIs vs. mobile: I think mobile is in some
> ways a
> > special case as their content transformation needs are closely coupled
> with
> > the way the apps are presenting the content. Additionally, at least until
> > SPDY is deployed there is a strong performance incentive to bundle
> > information in a single response tailored to the app's needs. One
> strategy
> > employed by Netflix is to introduce a second API layer
> > <http://techblog.netflix.com/2012/07/embracing-differences-inside-netflix.html>
> > on top of the general content API to handle device-specific needs. I think
> > this is a sound strategy, as it contains the volatility in a separate
> layer
> > while ensuring that everything is ultimately consuming the
> general-purpose
> > API. If the need for app-specific massaging disappears over time, we can
> > simply shut down the custom service / API end point without affecting the
> > general API.
> >
>
>
> I can definitely understand the motivation for providing a mobile-specific
> service layer - so if the services team wants to implement the API in this
> way and support that architecture, I am totally on board.
>
> My remaining hesitation here is that, from my reading of this proposal, the
> mobile team is the owner of implementing this service, not the services
> team. (Maybe I am misreading?)
>
> This leads me to ask questions like:
> Why is the mobile apps team investigating which is the best server-side
> technology? That seems outside of our domain knowledge.
> Who will be responsible for maintaining this code?
> Who will be testing it to make sure that it is performant?
>
> I'm new, so maybe these answers are obvious to others, but to me they seem
> fuzzy when responsibilities are divided between two teams.
>
> I would propose that this be a project that the Services Team owns. And
> that the Mobile Apps Team defines specs on what they need the new service
> to provide.
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-04 Thread James Douglas
> In general I'm in favor of more ad-hoc project-specific teams rather than
completely siloing every service to the Services group, or every mobile UI
to the Mobile group.

I strongly agree.  Based on experience on both sides of this spectrum, I
recommend (when feasible) favoring feature teams over functional teams.

On Wed, Feb 4, 2015 at 3:00 PM, Brion Vibber  wrote:

> I think the way we'd want to go is roughly to have a *partnership between*
> the Services and Mobile teams produce and maintain the service.
>
> (Note that the state of the art is that Mobile Apps are using Mobile Web's
> MobileFrontend extension as an intermediate API to aggregate & format page
> data -- which basically means Max has done the server-side API work for
> Mobile Apps so far.)
>
> I'd expect to see Max and/or someone else from the Mobile team
> collaborating with the Services team to create what y'all need:
> 1) something that does what Mobile Apps needs it to...
> 2) and can be maintained like Services needs it to.
>
> In general I'm in favor of more ad-hoc project-specific teams rather than
> completely siloing every service to the Services group, or every mobile UI
> to the Mobile group.
>
> -- brion
>
> On Wed, Feb 4, 2015 at 2:29 PM, Corey Floyd  wrote:
>
> > On Wed, Feb 4, 2015 at 11:41 AM, Gabriel Wicke 
> > wrote:
> >
> > > Regarding general-purpose APIs vs. mobile: I think mobile is in some
> > ways a
> > > special case as their content transformation needs are closely coupled
> > with
> > > the way the apps are presenting the content. Additionally, at least
> until
> > > SPDY is deployed there is a strong performance incentive to bundle
> > > information in a single response tailored to the app's needs. One
> > strategy
> > > employed by Netflix is to introduce a second API layer
> > > <http://techblog.netflix.com/2012/07/embracing-differences-inside-netflix.html>
> > > on top of the general content API to handle device-specific needs. I think
> > > this is a sound strategy, as it contains the volatility in a separate
> > layer
> > > while ensuring that everything is ultimately consuming the
> > general-purpose
> > > API. If the need for app-specific massaging disappears over time, we
> can
> > > simply shut down the custom service / API end point without affecting
> the
> > > general API.
> > >
> >
> >
> > I can definitely understand that motivation for providing mobile specific
> > service layer - so if the services team wants to implement the API in
> this
> > way and support that architecture, I am totally on board.
> >
> > My remaining hesitation here is that from the reading of this proposal,
> the
> > mobile team is the owner of implementing this service, not the services
> > team (Maybe I am misreading?).
> >
> > This leads me to ask questions like:
> > Why is the mobile apps team investigating which is the best server side
> > technology? That seems outside of our domain knowledge.
> > Who will be responsible for maintaining this code?
> > Who will be testing it to make sure that is performant?
> >
> > I'm new, so maybe these answers are obvious to others, but to me they
> seem
> > fuzzy when responsibilities are divided between two teams.
> >
> > I would propose that this be a project that the Services Team owns. And
> > that the Mobile Apps Team defines specs on what they need the new service
> > to provide.
> > ___
> > Wikitech-l mailing list
> > Wikitech-l@lists.wikimedia.org
> > https://lists.wikimedia.org/mailman/listinfo/wikitech-l
> >
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
>
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [gerrit] EUREKA!

2015-02-04 Thread MZMcBride
Nice find. The link used to be labeled "tree" I believe. I'm not sure
it's more discoverable as "gitblit", but some of this is moot as I
imagine Gitblit won't survive 2015. Assuming Gerrit stays around for a
while longer, perhaps it would be best to change the link in the Gerrit
user interface to read "view source code" instead of "diffusion" when we
switch to Phabricator's Diffusion for hosting/viewing repositories.

I don't know if Gerrit will survive 2015. It will one day be replaced by
Phabricator's Differential, probably. I personally don't mind Gerrit.

MZMcBride



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-04 Thread Bryan Davis
On Wed, Feb 4, 2015 at 4:00 PM, Brion Vibber  wrote:
> I think the way we'd want to go is roughly to have a *partnership between*
> the Services and Mobile teams produce and maintain the service.
>
> (Note that the state of the art is that Mobile Apps are using Mobile Web's
> MobileFrontend extension as an intermediate API to aggregate & format page
> data -- which basically means Max has done the server-side API work for
> Mobile Apps so far.)
>
> I'd expect to see Max and/or someone else from the Mobile team
> collaborating with the Services team to create what y'all need:
> 1) something that does what Mobile Apps needs it to...
> 2) and can be maintained like Services needs it to.
>
> In general I'm in favor of more ad-hoc project-specific teams rather than
> completely siloing every service to the Services group, or every mobile UI
> to the Mobile group.

+1. This is the only thing that will scale in my opinion. "Full stack"
teams involving design, front end, back end, ops, release, project
management, and testing resources should be formed to work on vertical
slices of functionality ("features" or "products") that are
prioritized by the entire organization. Thinking that some team can be
called on to fulfill all of the cross-feature/product needs is
madness. Services is 3 people, MediaWiki-Core is 9 people (minus
standing obligations like security and performance reviews). Teams of
this size cannot be expected to service all the "backend" needs of the
myriad product/feature verticals that are under the WMF umbrella. If
we don't have enough people to staff projects this way we are trying
to do too many things at once. (Which I'm pretty sure is actually the
case.)

Bryan
-- 
Bryan Davis  Wikimedia Foundation
[[m:User:BDavis_(WMF)]]  Sr Software EngineerBoise, ID USA
irc: bd808v:415.839.6885 x6855

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Investigating building an apps content service using RESTBase and Node.js

2015-02-04 Thread Dan Garry
On 4 February 2015 at 15:00, Brion Vibber  wrote:
>
> In general I'm in favor of more ad-hoc project-specific teams rather than
> completely siloing every service to the Services group, or every mobile UI
> to the Mobile group.
>

Agreed. This also ensures that the service exactly meets the functional
requirements, no more and no less.

Dan

-- 
Dan Garry
Associate Product Manager, Mobile Apps
Wikimedia Foundation
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikimedia Hackathon 2015 in Phabricator

2015-02-04 Thread Arthur Richards
Thanks for getting this started, Quim. Is this also the appropriate place
and is this the appropriate time to start proposing focus
areas/projects/sessions/etc? Is there already a theme/scope defined for
project ideas (e.g. I see a column on the workboard entitled 'New forms of
editing hack ideas' - is that the general hackathon theme)?

On Wed, Feb 4, 2015 at 2:59 AM, Quim Gil  wrote:

> Hi, just a note to say that the Phabricator project for the Wikimedia
> Hackathon 2015 (Lyon, 23-25 May) has been created.
>
> https://phabricator.wikimedia.org/tag/wikimedia-hackathon-2015/
>
> You are invited to ask questions, provide feedback, and get involved.
>
> Important note: even though we are just bootstrapping the project there, the
> volunteers at Wikimedia France have done a ton of work already (i.e. the
> venue and main services are secured).
>
> --
> Quim Gil
> Engineering Community Manager @ Wikimedia Foundation
> http://www.mediawiki.org/wiki/User:Qgil
> ___
> Wikitech-l mailing list
> Wikitech-l@lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
Arthur Richards
Team Practices Manager
[[User:Awjrichards]]
IRC: awjr
+1-415-839-6885 x6687
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: Phabricator monthly statistics - 2015-01

2015-02-04 Thread Andre Klapper
On Mon, 2015-02-02 at 00:39 -0800, Pine W wrote:
> It would be interesting to compare our trends to those of other open source
> projects with open bug or task trackers.

For trends in other FOSS projects, see data I posted in
https://phabricator.wikimedia.org/T78639#936184


> On Feb 1, 2015 7:47 PM, "James Forrester"  wrote:
> > I'm not entirely sure about whether we should consider this a bad thing, or
> > something to be addressed.

+1. I don't consider it a problem per se; see my comment in that link.

andre
-- 
Andre Klapper | Wikimedia Bugwrangler
http://blogs.gnome.org/aklapper/


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikimedia Hackathon 2015 in Phabricator

2015-02-04 Thread Andre Klapper
On Wed, 2015-02-04 at 17:43 -0700, Arthur Richards wrote:
> Thanks for getting this started, Quim. Is this also the appropriate place
> and is this the appropriate time to start proposing focus
> areas/projects/sessions/etc? Is there already a theme/scope defined for
> project ideas (eg I see a column on the workboard entitled 'New forms of
> editing hack ideas' - is that the general hackathon theme)?

I'd say yes, based on previous discussion in
https://phabricator.wikimedia.org/T87610

andre
-- 
Andre Klapper | Wikimedia Bugwrangler
http://blogs.gnome.org/aklapper/


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] MediaWiki-schroot

2015-02-04 Thread Tim Starling
For the last year, I've been using schroot to run my local MediaWiki
test instance. After hearing some gripes about Vagrant at the Dev
Summit, I decided to share this idea by automating the setup procedure
and committing the scripts I use. You can read about it here:

https://www.mediawiki.org/wiki/MediaWiki-schroot

-- Tim Starling


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l