Re: [Wikitech-l] Event organised on Marathi Wikipedia (photothon and edit-a-thon)

2015-03-11 Thread Federico Leva (Nemo)

Thanks. Is there a link to this Android app?

Nemo

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki-Vagrant now has support for install in Linux Containers (LXC)

2015-03-11 Thread S Page
On Mon, Mar 9, 2015 at 3:11 PM, Brion Vibber  wrote:

> vagrant with the lxc provider inside Ubuntu 14.10 in a Parallels VM ...has
> the plus that you can use a stock VM if you're going to run Linux anyway.


But MediaWiki-Vagrant already uses a "stock VM": its Vagrantfile loads a
trusty-server-cloudimg-amd64-vagrant-disk1.box, which contains a vmdk that,
as far as I know, is much like any other Ubuntu VM.


> This is a plus for me as I use VMs for a lot of testing and
> prefer Parallels for its much better/faster graphics support.
>

Did you try Parallels (via its Vagrant provider) instead of VirtualBox? "The Parallels provider for
Vagrant is a plugin officially supported by Parallels. The plugin allows
Vagrant to power Parallels Desktop for Mac based virtual machines"
http://parallels.github.io/vagrant-parallels/

LXC on a Linux host should be faster than a Vagrant VM, but it's hard to see
how running Vagrant as an LXC container inside a VM on a Mac can be faster
than running Vagrant as its own VM. 8-)

-- 
=S Page  WMF Tech writer
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Tor proxy with blinded tokens

2015-03-11 Thread Gergo Tisza
On Tue, Mar 10, 2015 at 5:40 PM, Chris Steipp  wrote:

> I'm actually envisioning that the user would edit through the third party's
> proxy (via OAuth, linked to the new, "Special Account"), so no special
> permissions are needed by the "Special Account", and a standard block on
> that username can prevent them from editing. Additionally, revoking the
> OAuth token of the proxy itself would stop all editing by this process, so
> there's a quick way to "pull the plug" if it looks like the edits are
> predominantly unproductive.
>

I'm probably missing the point here, but how is this better than a plain
edit proxy, available as a Tor hidden service, which a third party can set
up at any time without needing to coordinate with us (apart from getting an
OAuth key)? Since users connect to it over Tor, the proxy would not learn
any private information; they could be authorized to edit via the normal
OAuth web flow (which is not blocked from a Tor IP); and the edits would
appear to come from the IP address of the proxy, so they would not be
subject to Tor blocking.
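
For concreteness, here is a minimal Python sketch of what the core of such a
proxy might look like, assuming the standard MediaWiki OAuth 1.0a flow and
the action API's edit module; the keys, the target wiki and the Tor
hidden-service plumbing around it are all placeholders:

    # Minimal sketch of a third-party edit proxy: it holds an OAuth 1.0a grant
    # for a user and submits edits to the wiki on that user's behalf.
    # Consumer/access credentials and the wiki URL are placeholders.
    from requests_oauthlib import OAuth1Session

    API_URL = "https://en.wikipedia.org/w/api.php"  # hypothetical target wiki

    def make_edit(consumer_key, consumer_secret, access_key, access_secret,
                  title, text, summary):
        session = OAuth1Session(consumer_key,
                                client_secret=consumer_secret,
                                resource_owner_key=access_key,
                                resource_owner_secret=access_secret)

        # Fetch a CSRF token for the OAuth-authorized account.
        r = session.get(API_URL, params={"action": "query", "meta": "tokens",
                                         "type": "csrf", "format": "json"})
        token = r.json()["query"]["tokens"]["csrftoken"]

        # Submit the edit. It is attributed to the authorized account, and the
        # wiki sees the proxy's IP address rather than the user's.
        r = session.post(API_URL, data={"action": "edit", "title": title,
                                        "text": text, "summary": summary,
                                        "token": token, "format": "json"})
        return r.json()

The code is the easy part; all of the interesting questions are about whether
such a proxy would be tolerated.
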
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Engineering] Wikimedia REST content API is now available in beta

2015-03-11 Thread Marc Ordinas i Llopis
Congratulations! After all the hard work that has gone into this, it's
great to see it up and running. Besides the improvements it will allow in
existing projects, I can't wait to see the new things it will enable.

-- Marc

On Tue, Mar 10, 2015 at 11:23 PM, Gabriel Wicke 
wrote:

> Hello all,
>
> I am happy to announce the beta release of the Wikimedia REST Content API
> at
>
> https://rest.wikimedia.org/
>
> Each domain has its own API documentation, which is auto-generated from
> Swagger API specs. For example, here is the link for the English Wikipedia:
>
> https://rest.wikimedia.org/en.wikipedia.org/v1/?doc
>
> At present, this API provides convenient and low-latency access to article
> HTML, page metadata and content conversions between HTML and wikitext.
> After extensive testing we are confident that these endpoints are ready for
> production use, but have marked them as 'unstable' until we have also
> validated this with production users. You can start writing applications
> that depend on it now, if you aren't afraid of possible minor changes
> before transitioning to 'stable' status. For the definition of the terms
> ‘stable’ and ‘unstable’ see https://www.mediawiki.org/wiki/API_versioning
> .
>
> While general and not specific to VisualEditor, the selection of endpoints
> reflects this release's focus on speeding up VisualEditor. By storing
> private Parsoid round-trip information separately, we were able to reduce
> the HTML size by about 40%. This in turn reduces network transfer and
> processing times, which will make loading and saving with VisualEditor
> faster. We are also switching from a cache to actual storage, which will
> eliminate slow VisualEditor loads caused by cache misses. Other users of
> Parsoid HTML like Flow, HTML dumps, the OCG PDF renderer or Content
> translation will benefit similarly.
>
> But, we are not done yet. In the medium term, we plan to further reduce
> the HTML size by separating out all read-write metadata. This should allow
> us to use Parsoid HTML with its semantic markup
>  directly for
> both views and editing without increasing the HTML size over the current
> output. Combined with performance work in VisualEditor, this has the
> potential to make switching to visual editing instantaneous and free of any
> scrolling.
>
> We are also investigating a sub-page-level edit API for
> micro-contributions and very fast VisualEditor saves. HTML saves don't
> necessarily have to wait for the page to re-render from wikitext, which
> means that we can potentially make them faster than wikitext saves. For
> this to work we'll need to minimize network transfer and processing time on
> both client and server.
>
> More generally, this API is intended to be the beginning of a
> multi-purpose content API. Its implementation (RESTBase
> ) is driven by a declarative
> Swagger API specification, which helps to make it straightforward to extend
> the API with new entry points. The same API spec is also used to
> auto-generate the aforementioned sandbox environment, complete with handy
> "try it" buttons. So, please give it a try and let us know what you think!
>
> This API is currently unmetered; we recommend that users not perform more
> than 200 requests per second and may implement limitations if necessary.
>
> I also want to use this opportunity to thank all contributors who made
> this possible:
>
> - Marko Obrovac, Eric Evans, James Douglas and Hardik Juneja on the
> Services team worked hard to build RESTBase, and to make it as extensible
> and clean as it is now.
>
> - Filippo Giunchedi, Alex Kosiaris, Andrew Otto, Faidon Liambotis, Rob
> Halsell and Mark Bergsma helped to procure and set up the Cassandra storage
> cluster backing this API.
>
> - The Parsoid team with Subbu Sastry, Arlo Breault, C. Scott Ananian and
> Marc Ordinas i Llopis is solving the extremely difficult task of converting
> between wikitext and HTML, and built a new API that lets us retrieve and
> pass in metadata separately.
>
> - On the MediaWiki core team, Brad Jorsch quickly created a minimal
> authorization API that will let us support private wikis, and Aaron Schulz,
> Alex Monk and Ori Livneh built and extended the VirtualRestService that
> lets VisualEditor and MediaWiki in general easily access external services.
>
> We welcome your feedback here:
> https://www.mediawiki.org/wiki/Talk:RESTBase - and in Phabricator
> 
> .
>
> Sincerely --
>
> Gabriel Wicke
>
> Principal Software Engineer, Wikimedia Foundation
>
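
For anyone who wants to poke at the beta right away, here is a minimal Python
sketch against two of the endpoints described above (paths as shown in the
auto-generated docs; treat them as illustrative while the endpoints are still
marked 'unstable'):

    # Minimal sketch against the beta Wikimedia REST content API.
    # Endpoint paths follow the auto-generated docs at /v1/?doc and may
    # change while the endpoints are marked 'unstable'.
    import requests

    BASE = "https://rest.wikimedia.org/en.wikipedia.org/v1"

    # Low-latency Parsoid HTML for an article.
    html = requests.get(BASE + "/page/html/Earth").text

    # Wikitext -> HTML conversion.
    r = requests.post(BASE + "/transform/wikitext/to/html",
                      data={"wikitext": "'''Hello''', [[world]]"})
    print(r.text[:200])

Please keep usage well below the recommended 200 requests per second.
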
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Tor proxy with blinded tokens

2015-03-11 Thread Risker
On 11 March 2015 at 05:23, Gergo Tisza  wrote:

> On Tue, Mar 10, 2015 at 5:40 PM, Chris Steipp 
> wrote:
>
> > I'm actually envisioning that the user would edit through the third
> > party's
> > proxy (via OAuth, linked to the new, "Special Account"), so no special
> > permissions are needed by the "Special Account", and a standard block on
> > that username can prevent them from editing. Additionally, revoking the
> > OAuth token of the proxy itself would stop all editing by this process,
> > so
> > there's a quick way to "pull the plug" if it looks like the edits are
> > predominantly unproductive.
> >
>
> I'm probably missing the point here but how is this better than a plain
> edit proxy, available as a Tor hidden service, which a 3rd party can set up
> at any time without the need to coordinate with us (apart from getting an
> OAuth key)? Since the user connects to them via Tor, they would not learn
> any private information; they could be authorized to edit via normal OAuth
> web flow (that is not blocked from a Tor IP); the edit would seemingly come
> from the IP address of the proxy so it would not be subject to Tor
> blocking.

Those kinds of services are probably already range-blocked, or are likely
to be, because they're one of the main vectors for spam in particular and,
secondarily, for abusive harassment-type vandalism. The user would still
need IPBE or a similar permission to edit through such a service.

Risker/Anne
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Engineering] Wikimedia REST content API is now available in beta

2015-03-11 Thread Hay (Husky)
This is awesome. Congratulations to Gabriel and the rest of the team.
I certainly hope this will provide a stable platform for getting
Wikimedia content onto even more platforms.

-- Hay

Re: [Wikitech-l] [Engineering] Wikimedia REST content API is now available in beta

2015-03-11 Thread Derk-Jan Hartman
Awesome, can't wait to see the 'downstream' effects of such an impressive
new piece in our software puzzle globe.

Great work, team.

DJ


Re: [Wikitech-l] [Engineering] Wikimedia REST content API is now available in beta

2015-03-11 Thread Brian Gerstle
Really impressive work, guys. Love the generated docs!


-- 
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Tor proxy with blinded tokens

2015-03-11 Thread Chris Steipp
On Mar 11, 2015 2:23 AM, "Gergo Tisza"  wrote:
>
> On Tue, Mar 10, 2015 at 5:40 PM, Chris Steipp  wrote:
>
> > I'm actually envisioning that the user would edit through the third
> > party's proxy (via OAuth, linked to the new, "Special Account"), so no
> > special permissions are needed by the "Special Account", and a standard
> > block on that username can prevent them from editing. Additionally,
> > revoking the OAuth token of the proxy itself would stop all editing by
> > this process, so there's a quick way to "pull the plug" if it looks like
> > the edits are predominantly unproductive.
>
> I'm probably missing the point here but how is this better than a plain
> edit proxy, available as a Tor hidden service, which a 3rd party can set
> up at any time without the need to coordinate with us (apart from getting
> an OAuth key)? Since the user connects to them via Tor, they would not
> learn any private information; they could be authorized to edit via
> normal OAuth web flow (that is not blocked from a Tor IP); the edit would
> seemingly come from the IP address of the proxy so it would not be
> subject to Tor blocking.
>

Setting up a proxy like this is definitely an option I've considered. When
I did, I couldn't think of a good way to limit the types of accounts that
could use it, or to come up with acceptable collateral to hold from the
user, that would deter enough spammers to keep the proxy from being blocked
while still leaving it open to the people who need it. The blinded token
approach lets the proxy rely on a trusted assertion about the identity,
made by the people it will impact if they get it wrong. That seemed like a
good thing to me.
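
To make the blinding concrete, here is a toy RSA blind-signature sketch in
Python (illustrative only; not the exact scheme, padding or parameters we
would actually deploy). The certifier signs without seeing which token it is
signing, and the proxy later verifies the signature without being able to
link it back to any particular certification request:

    # Toy RSA blind signature. The certifier asserts "this token belongs to
    # some account in good standing" without being able to link the signature
    # it issued to the token the proxy later redeems.
    # Illustrative only; a real deployment needs proper padding, key handling
    # and token bookkeeping.
    import hashlib
    import math
    import secrets
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Certifier's key pair.
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    priv = key.private_numbers()
    n, e, d = priv.public_numbers.n, priv.public_numbers.e, priv.d

    # 1. User: hash a fresh random token and blind it with a secret factor r.
    token = secrets.token_bytes(32)
    m = int.from_bytes(hashlib.sha256(token).digest(), "big")
    while True:
        r = secrets.randbelow(n)
        if r > 1 and math.gcd(r, n) == 1:
            break
    blinded = (m * pow(r, e, n)) % n      # all the certifier ever sees

    # 2. Certifier: signs the blinded value.
    blind_sig = pow(blinded, d, n)

    # 3. User: unblinds, yielding an ordinary RSA signature over the token hash.
    sig = (blind_sig * pow(r, -1, n)) % n

    # 4. Proxy: verifies with the certifier's public key only.
    assert pow(sig, e, n) == m
    print("token accepted")

In practice the certifier would presumably also scope tokens to a validity
window, and the proxy would redeem each token only once; the sketch is only
meant to show that issuance and redemption are unlinkable.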

However, we could replace the entire blinding process with a public page
that the proxy posts to, saying "this user wants to use Tor to edit; vote
yes or no and we'll allow them based on your opinion", with the proxy only
allowing Tor editing by users who get a passing vote.

That might be more palatable under enwiki's socking policy, with the risk
that if the user's IP has ever been revealed before (even if they went to
the effort of getting it deleted), there is still data linking them to their
real identity; the blinding breaks that correlation. But maybe that's a more
likely first step to actually getting Tor edits?

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] MediaWiki-Vagrant now has support for install in Linux Containers (LXC)

2015-03-11 Thread Matthew Flaschen

On 03/11/2015 05:06 AM, S Page wrote:

On Mon, Mar 9, 2015 at 3:11 PM, Brion Vibber  wrote:


vagrant with the lxc provider inside Ubuntu 14.10 in a Parallels VM ...has
the plus that you can use a stock VM if you're going to run Linux anyway.



But MediaWiki-Vagrant uses a "stock VM". Its Vagrantfile loads a
trusty-server-cloudimg-amd64-vagrant-disk1.box which contains a vmdk that
as far as I know is much like other Ubuntu VMs.


When you use the VirtualBox provider, it starts by copying that stock
VM. However, it then makes it MediaWiki-specific in many ways (running
services, contents of /vagrant, etc.).


My understanding is that the LXC provider allows essentially any Ubuntu 
VM.  Then, you can use that VM for general work, as well as run one or 
more LXC containers (including but not limited to the MWV one) in isolation.


With the VirtualBox provider, you can't run multiple different Vagrants 
(e.g. WordPress, MediaWiki, etc.) on one VM.
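
As a rough illustration (the project paths are hypothetical, and this
assumes the vagrant-lxc plugin is installed), bringing up two unrelated
Vagrant projects side by side with the LXC provider looks like this:

    # Sketch: with the LXC provider, several unrelated Vagrant projects run as
    # containers on the same Linux host (or inside one stock Ubuntu VM),
    # instead of each project getting its own full VirtualBox VM.
    # Assumes `vagrant plugin install vagrant-lxc` has already been run;
    # the project paths are hypothetical.
    import os
    import subprocess

    PROJECTS = ["~/src/vagrant/mediawiki", "~/src/vagrant/wordpress"]

    for project in PROJECTS:
        subprocess.run(["vagrant", "up", "--provider=lxc"],
                       cwd=os.path.expanduser(project), check=True)

    # Each project gets its own container; running `vagrant ssh` in either
    # directory drops you into that project's environment in isolation.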


Matt Flaschen


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l