Re: [Wikitech-l] about bots

2012-09-28 Thread Krinkle
There are three different bot thingies you should know about; I'll briefly describe
each of them:

== The bot user right:

This is the right that grants the user the ability to perform an edit with a
bot flag.

Facts:
* Not all users with this right are bots.
* The flag can be toggled on a per-edit basis. Bot software will activate this
flag, but an account can be used by humans and bot software simultaneously.
Dedicated bot accounts will typically have all their edits bot-flagged, but other
users may contribute regularly and also run a bot from time to time with their
credentials[1].

== The bot user group:

This user group is available in MediaWiki by default to make it possible to
grant a user the bot right (because user management goes by groups, not rights:
to grant a user the bot right, one adds the user to a group that provides that
right).

Facts:
* Group membership can change over time. There are many bot-flagged edits by
users that are no longer in a user group providing the bot right. Likewise there
are many edits not bot-flagged by users that now have the bot right (which they
may or may not use for each edit).
* Not all bots are members of this group (there are other groups that provide
this right; sysop, for example).

== The bot flag:

This is the only reliable factor. This indicates most accurately that the edit
was intended as a bot edit (and that the user could do so because they had the bot
user right when the edit was made).

It is especially reliable because the data is stored with the edit, not
calculated afterwards (so it holds regardless of the user's group memberships at
the time of query).

However, it has one catch: the data is only stored in the recentchanges table,
from which it expires after 30 days. I guess this explains why the best way is
also the least common way to categorize bot edits in analytics (unless only
covering recent data).
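
For illustration, something along these lines would list bot-flagged edits while
they are still within that 30-day window, using the web API (a rough sketch, not
tested):

$.getJSON( mw.util.wikiScript( 'api' ), {
	format: 'json',
	action: 'query',
	list: 'recentchanges',
	rcshow: 'bot', // only edits made with the bot flag
	rcprop: 'user|title|timestamp|flags',
	rclimit: 50
} ).done( function ( data ) {
	$.each( data.query.recentchanges, function ( i, rc ) {
		console.log( rc.timestamp + ' ' + rc.user + ' edited ' + rc.title );
	} );
} );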

There is an open bug to store the bot flag in the revision table, thus making it
permanently available[2].


-- Krinkle


[1] For example on Commons, where I am a sysop, there is a bot I ran that edits
sysop-protected pages; therefore I had to run this bot under my personal account
for a while, marking its edits as bot. Most sysop-bots (including mine now) have
a separate account which is then given membership to the sysop user group,
but this isn't always the case. For example, on Wikipedia I know there are various
admins that use software to automatically block certain IP addresses from time
to time (proxies, TOR, zombies, whatever). Some are run on bot accounts, some
not.

[2] https://bugzilla.wikimedia.org/show_bug.cgi?id=17237


On May 25, 2012, at 7:49 PM, Fabian Kaelin fkae...@wikimedia.org wrote:

 Hi,
 
 Sorry about the length of this mail, it reads faster than it looks.
 
 I am working with the recentchanges and the cu_changes (checkuser)
 mediawiki SQL tables. I would like to be able to filter bot activity,
 unfortunately I am increasingly confused.
 
 Things that I think I know:
 
   - In the recentchanges table
 (http://www.mediawiki.org/wiki/Manual:Recentchanges_table)
   there is a `rc_bot` flag that should indicate whether the edit comes from a
   bot.
   - The checkuser table cu_changes
 (http://www.mediawiki.org/wiki/Extension:CheckUser), which
   is not documented on the mediawiki database layout page
 (http://www.mediawiki.org/wiki/Manual:Database_layout),
   contains mostly the same information as the recentchanges table but for a
   longer period of time. However, there is no bot flag as there is on the
   recentchanges table - I don't know why not.
   - There is a `bot` entry in the user_groups.ug_group field
 (http://www.mediawiki.org/wiki/Manual:User_groups_table).
   A revision/recentchanges/cu_changes entry can be identified as bot by
   joining the original table with user_groups on the user_id and by setting
   ug_group=`bot`.
   - The user_groups method of identifying bots is inefficient and the
   data seems incomplete. For some other projects we have used various other
   bot tables created by hand (on db1047: halfak.bot used during WSOR 2011 or
   declerambaul.erik_bots containing the bots identified by Erik Zachte).
 
 I would like to know the answers to the following questions:
 
 1. *What is the meaning/purpose of the rc_bot flag on recentchanges? *There
 are entries in the recentchanges table from editors that are flagged as
 bots in the user_groups and the other bot tables but still have the rc_bot
 flag set to 0.
 
 mysql> select rc.rc_user_text from recentchanges rc join user_groups ug ON
 (rc.rc_user=ug.ug_user) WHERE ug.ug_group = 'bot' and rc.rc_bot=0 limit 1;
 +--------------+
 | rc_user_text |
 +--------------+
 | ClueBot NG   |
 +--------------+
 
 2. *Why is there no bot flag in the checkuser table? *A lot of the other
 fields seem to be copied from the recentchanges table, why not the rc_bot
 field? The check user table contains both entries that are flagged as bots
 in the recentchanges table and entries

Re: [Wikitech-l] HTMLMultiSelectField as <select multiple=multiple/>

2012-09-28 Thread Krinkle
On Sep 29, 2012, at 4:51 AM, Daniel Friesen dan...@nadir-seen-fire.com wrote:

 On Fri, 25 May 2012 09:11:20 -0700, Daniel Werner 
 daniel.wer...@wikimedia.de wrote:
 
 
 Alright, just submitted this for review to gerrit:
 https://gerrit.wikimedia.org/r/#/c/8924/
 
 [..]
 
 Yes, multiselect is a VERY bad usability choice. Frankly we shouldn't use it 
 anywhere. If we're using JS to make a better UI it would actually be much 
 better to output usable checkboxes and then turn the checkboxes into a 
 special JS multi-select. Instead of outputting an unusable multi-select and 
 compensating for it.
 

+1 :-)

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] New extension: Diff

2012-09-26 Thread Krinkle
On Sep 25, 2012, at 7:54 PM, Tyler Romeo tylerro...@gmail.com wrote:

 Is there a reason we don't just put this in the core?
 

Many points have been made already in a short amount of time, which emphasizes
how touchy this subject can be.

Anyway, a different view that I haven't heard as much is the following.

Being in core is not a stamp of approval. This picture never existed and if it
did, it needs to go away. We're going towards a flexible modular system, which
means components have dependencies and build on each other - as opposed to just
being there.

So unless other existing core functionality would need it, it doesn't make sense
to include it.

Instead, extensions should prove themselves. If an extension provides
functionality that other extensions need, those other extensions will simply add
"Make sure X and Y are installed first" to their instructions and use it that
way.

This gives a few advantages:
* Fair competition. Extensions can decide what they want to use, and it also makes
it easy for developers to fork a utility and improve it (like extensions do).
* Flexibility. Once it is in core, we have to support it, which is especially
awkward if it isn't in use, because that means we have an untracked dependency
on something we don't even use, and can't be easily replaced either because some
extension might use it, somehow. 

It comes at the cost of not having a standard, but I'm not sure a blanket
"standard now, must use this" is what we want here, at least not until it has
proven itself by being used over a long period of time by other extensions.

I mean, things don't have to be in core to be usable. Let it be an extension,
let it grow. Extensions can perfectly depend on other extensions; there is no
need to have it be in core. Make your own decisions.

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Skin pages on MW.org, and Skin repos in Gerrit

2012-09-26 Thread Krinkle
On Sep 26, 2012, at 8:01 PM, Niklas Laxström niklas.laxst...@gmail.com wrote:

 On 26 September 2012 10:08, Krinkle krinklem...@gmail.com wrote:
 Another problem I found in the current setup is that its a bit 
 counter-intuitive how to manage the directory structure for developers. I 
 mean, most of us probably have this:
 
 - mediawiki
 - /core (clone mediawiki/core.git)
 - /extensions (directory with clones of individual extensions or clone of 
 mediawiki/extensions.git tracking repo)
 
 In SVN time extensions were a subdir of mediawiki core and I doubt
 that everyone has suddenly decided to change it. At least I haven't.
  -Niklas

No, not at all. They never were and never will be.

In svn we have:

[mediawiki]
- trunk/
- - phase3/
- - extensions/

Extensions have always been beside core, never inside it.

Of course in unversioned installations (e.g. tarballs) we put extensions in
the extensions subdirectory. And even in versioned installations, one can

* git clone individual extensions in the extensions directory
* git clone extensions next to core and place symlinks for individual extensions 
in the extensions directory

But if someone simply clones the mediawiki/extensions.git tracking repository, 
then it is kind of annoying to have to put symlinks in place. I have my local 
dev environment set up like this:

$wgScriptPath = '/mediawiki/core';
$wgExtensionAssetsPath = '/mediawiki/extensions';
$extDir = dirname( $IP ) . '/extensions';

require_once( "$extDir/Vector/Vector.php" );

Anyway, /offtopic

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MediaWiki 1.20 release candidate (and triage announcement)

2012-09-25 Thread Krinkle
On Sep 23, 2012, at 8:03 PM, Mark A. Hershberger m...@everybody.org wrote:

 On 09/23/2012 12:54 PM, Krinkle wrote:
 Also, this bugzilla query should be empty before release as well (either by 
 fixing bugs,
 or reviewing/merging pending commits that claim to fix stuff, or deferring 
 the bug to
 a later release etc.). People assign and block things usually for a reason:
 
 
https://bugzilla.wikimedia.org/buglist.cgi?query_format=advanced&target_milestone=1.20.0%20release&product=MediaWiki&resolution=---
 
 [..]
 
 I can hold a triage for blockers at UTC1100 on Tue, October 2
 (http://hexm.de/lr).  At this point, there are over 100 non-enhancement
 bugs marked for 1.20.  Please help trim the list before then.
 

Actually, there are ~ 65 (as my link reflected),
the shortened link forgot to exclude resolved bugs.

-- Krinkle

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MediaWiki 1.20 release candidate

2012-09-23 Thread Krinkle
On Sep 22, 2012, at 10:54 PM, Mark A. Hershberger m...@everybody.org wrote:

 On 09/22/2012 02:50 PM, Krinkle wrote:
 On Sep 21, 2012, at 4:13 PM, Mark A. Hershberger m...@everybody.org wrote:
 That commit is not included.  I can merge it in or make a second RC with
 1.20wmf12.
 
 What do you think is the better way to go?
 
 I'd say re-branch from master.
 
 My thinking was to branch from a point that was known -- something whose
 issues we knew.  I think this worked pretty well.
 
 For instance, the issue Niklas raised was because I used a WMF branch.
 
 I'm now thinking that I'll just stick with the 1.20wmf12 and just apply
 the patch he pointed to.  This is in the vein of sticking to known issues.
 
 What do you think?
 
 Mark.


I think master is more stable than whatever wmf branch. I know because of commits
recently merged and whatnot.

If there are problems we'll find them in the release candidate period.
And by that time we'll have had a new wmf branch as well to see how the latest 
code
performs on wmf.

Also, this bugzilla query should be empty before release as well (either by 
fixing bugs,
or reviewing/merging pending commits that claim to fix stuff, or deferring the 
bug to
a later release etc.). People assign and block things usually for a reason:

https://bugzilla.wikimedia.org/buglist.cgi?query_format=advanced&target_milestone=1.20.0%20release&product=MediaWiki&resolution=---

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MediaWiki 1.20 release candidate

2012-09-22 Thread Krinkle
On Sep 21, 2012, at 4:13 PM, Mark A. Hershberger m...@everybody.org wrote:

 On 09/21/2012 03:57 AM, Niklas Laxström wrote:
 Earlier you wrote that it is based on 1.20wmf11 branch. I didn't check
 the tarball but there were pretty severe i18n issues with plurals
 around that time. Do you know whether fixes for those issues are
 already included or not? Most important is
 https://gerrit.wikimedia.org/r/#/c/23900/
 
 That commit is not included.  I can merge it in or make a second RC with
 1.20wmf12.
 
 What do you think is the better way to go?


I'd say re-branch from master.

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Notification bubble system

2012-09-22 Thread Krinkle
On Sep 22, 2012, at 9:31 AM, Daniel Friesen dan...@nadir-seen-fire.com wrote:

 On Sat, 22 Sep 2012 00:10:11 -0700, Isarra Yos zhoris...@gmail.com wrote:
 
 On 21/09/2012 11:46, Rob Moen wrote:
 On Sep 20, 2012, at 3:48 PM, Krinkle wrote:
 
 If they happened as a direct
 consequence of a user action, maybe it should appear inside the interface 
 where
 it was performed?
 Agreed, interaction related notifications should be localized in the 
 interface where the action is being performed.
 This increases visibility and implies a connection to the user action.
 
 Aye, and while putting notices in the things themselves would only be 
 feasible with some, still tying the other notices to the relevant part of 
 the page would probably help. Like if when clicking the star to watch 
 something, the bubble appeared pointing to the watchlist, that would make 
 the connection between the star and the list itself better than any block of 
 text explaining it ever could...
 
 
 https://upload.wikimedia.org/wikipedia/mediawiki/7/7b/Contextual-Notifications-Mockup.png
 
 Hmmm... that's actually a pretty nice idea. Though we'll need a separate 
 system for that.
 

Indeed.

By the way, arguably, in the case of watchlist manipulation it would be 
pointing to the
watchlist (-link) itself, not the watch star.

Similar to how "Install" in the Mac App Store triggers an animation that drops 
an icon
into the Dock. Even if the Dock itself isn't visible, it shows in which general 
direction
it went.

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Initial stab at responsive images for high-density displays

2012-09-20 Thread Krinkle
On Sep 18, 2012, at 5:47 PM, Jon Robson jdlrob...@gmail.com wrote:

 Awesome!
 Correct me if I'm wrong but the way this is currently written an image for
 foo.jpg will first load foo.jpg then replace the src attribute for this
 element then load the image foo-2.0.jpg ?
 

It did that because the javascript function was hooked on window.load, which by
design does not fire until all images are downloaded.

The patch [1] has been revised and now fires on document ready, which should be
early enough to not waste much bandwidth.
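
The rough idea of that approach, as a sketch (not the actual patch; the selector
and the "-2.0" file naming here are placeholders based on the example quoted
above):

$( document ).ready( function () {
	// Only swap sources on high-density displays.
	if ( window.devicePixelRatio && window.devicePixelRatio > 1 ) {
		$( '#mw-content-text img.thumbimage' ).each( function () {
			// e.g. foo.jpg -> foo-2.0.jpg
			this.src = this.src.replace( /\.(jpe?g|png)$/i, '-2.0.$1' );
		} );
	}
} );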

I suggest we build upon it or write our own module, and integrate the
lazy-load principle. In other words, on document ready, fix the images above
the fold, which may or may not have started downloading yet.

Then cancel the rest and set those appropriately just before they come into
view. That saves bandwidth in general (by not loading images when they are not
visible), and makes sure to download the right image based on the environment at
that point in time.

When a standard for srcset (or whatever it will be called) is ready and actually
implemented in some browser we could also opt to keep it without javascript.

Assuming plans won't get worse, the standard will include a natural fallback by
storing the 1.0-ratio image in the src attribute, which is what we'd want on
older browsers/devices anyway.

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Notification bubble system

2012-09-20 Thread Krinkle
On Sep 19, 2012, at 8:41 PM, Trevor Parscal tpars...@wikimedia.org wrote:

 I'm glad this area is getting a lot of interest - unfortunately I haven't
 been able to keep up on this thread but I wanted to give a suggestion
 related to adding icons.
 
 It's reasonable to take an option that provides a URL to an icon image, but
 we should have a common (customizable per skin and language) set of icons
 that can be used by symbolic name, like success, failure,
 information, error, etc.
 
 This helps in a few ways:
 
   - We can make sure they match the skin they are being used in
   - We can internationalize them
   - We can avoid having multiple icons for the same purpose
   - It makes it easier to use the icon feature
 
 - Trevor
 


Interesting idea. Though I must say I was (so far) thinking quite the opposite.
I like your idea as well.

I was thinking not to use icons for the type of message. For one because it
would have to be very well controlled to allow for adaption to skin, language
and avoid duplication (though your proposal seems to handle that quite well).
But also because:
* it is hard to categorize a message in such a specific category
* if possible, avoiding icons is a good thing in my opinion. Icons are nice, but
sometimes a simple message suffices. But having some messages with and others
without an icon doesn't seem nice either as it would break the grid and user
expectation. Would we have a default icon?
* it means we can't have an icon for source, only for type (because I believe
having 2 icons is not an option)

I think source is more important than type, where (groups of) modules in an
extension, gadgets, or core are the sources.
Examples of sources that could be identified by their own icon:

Watchlist:
* Added X to your watchlist.
* An error occurred while removing X from your watchlist.
* John edited X \ (snippet from edit summary)

Discussion:
* John sent you a personal message. # edit on user talk page..
* John started a discussion on subject.
* John commented on thread name.

Countervandalism Network gadgets:
* Blacklisted Jack renamed X to Y. \ (log message)
* John edited monitored page X. (edit summary)

As for messages confirming a user's own page and user actions, I've been
thinking a bit lately. Maybe a notification bubble is not the right way to
communicate confirmations of user's direct own actions. Here's a brief list of
such messages (similar to the kind of messages one would see in the yellow bar
of Google products like Gmail and Gerrit).

* Loading edit screen...
* The conversation has been archived. [learn more] [undo]
* Edit saved!
* The page has been renamed. [undo]

It feels to me like these kinds of messages would only be appropriate to appear
through a notification bubble if they happened externally, or if it was a
scheduled event (so the message lets the user know that the scheduled action
took place and that it succeeded (or failed)). If they happened as a direct
consequence of a user action, maybe it should appear inside the interface where
it was performed?

Anyway, just my 2 cents :)

-- Krinkle




___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] HTML5 and non valid attributes/elements of previous versions (bug 40329)

2012-09-20 Thread Krinkle
On Sep 19, 2012, at 6:23 PM, Derk-Jan Hartman d.j.hartman+wmf...@gmail.com 
wrote:

 I would like to open some discussion about
 https://bugzilla.wikimedia.org/show_bug.cgi?id=40329
 This bug is about the fact that we currently do a 'partial' transform of
 the HTML5-invalid attribute 'align'.
 
 We all agree that this is bad, what we need to figure out is what to do
 next:
 
 1: Disable the transform and output the align attribute even though it's
 not valid HTML5. Solve validness later.
 2: Remove the attribute from HTML5 and 'break' the content. Fix by users
 (or bot).
 3: Disable HTML5, correct the content of the wiki's (possibly with a bot)
 and remove the attribute in HTML5 mode, reenable HTML5.
 4: Fix the transform (not that easy)
 
 My personal preference is with 1, since this is causing trouble now and
 with 1 we solve immediate problems, we just add to the lack of valid HTML5
 output that we already have. In my opinion 2 would be too disruptive and 3
 would take too long.
 
 Danny is of the opinion that we should never transform at the parser side
 and that we should fix the content instead (2 or 3).
 
 So, how best to fix the issue/what should be our strategy with regard to
 content that is not HTML 5 valid in general ?
 Discuss
 

I agree with others, #1 seems to be the best choice.

The W3C validator is not a visitor nor a user of the software. It's a useful 
tool to find problems, but as long as browsers are not standards compliant, and 
the W3C validator stays ignorant of that fact, we have very good reason to 
choose to optimize for real browsers, and not the hypothetical browser in the 
eyes of the validator.

The HTML output of the MediaWiki software is meant for users. Users that have 
browsers in front of them.

All relevant browsers support "align", regardless of whether the page is in 
HTML5 mode.

Having said that, word shall be spread to users to stop using "align" and make 
layouts in CSS instead (through classes), which by design will make use of 
"align" impossible and require usage of "text-align" and "margin" instead.

Even if we could transform it correctly, I would oppose automatic 
transformation (be it from output-only in the parser, or by a bot changing the 
actual wikitext). Because the align attribute is a means to an end that has 
lots of implications and possible unintended side-effects. Contrary to 
text-align and margin, which are very specific and targeted at their purpose. 
By replacing a single align attribute with all kinds of inline styles the 
original intention of that align attribute will be lost at the cost of a lot of 
bloat in the output that we don't really need anyway.

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Notification bubble system

2012-09-19 Thread Krinkle
On Sep 18, 2012, at 5:07 PM, Gerard Meijssen gerard.meijs...@gmail.com wrote:

 On 18 September 2012 00:09, Steven Walling steven.wall...@gmail.com wrote:
 
 Following up on what I said previously about wanting to build on top of
 this, there are some usability enhancements proposed at bug #40307.
 
 On the truncation issue: length is already something of an issue, from my
 perspective. For example: the default watchlist message, which is 37 words,
 seems to have been written on the assumption that the notification
 container would be much wider. I've heard some complaints about it.[1] I'm
 not sure if truncation is the correct method, but we should do something to
 keep messages short.
 
 Steven
 
 1. [10:22:25] Fluffernutter: the javascript flash for watchlisting is
 driving me very slightly nuts - it lasts too long and blocks out text
 https://meta.wikimedia.org/wiki/IRC_office_hours/Office_hours_2012-09-08
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 
 Hoi,
 The default watchlist is 37 words ... these words are probably en.wikipedia
 measurements. To what extent have different scripts and different languages
 been considered ?
 Thanks.
  GerardM
 

Doesn't matter in this case. The point is that the message is oversized. It
contains too much redundant information for a simple confirmation that the page
was added to the watchlist.

Considering that this message is not wiki-content, it doesn't make sense to
truncate it. It simply needs to be shortened at the source. The interface
messages in question are (from ApiWatch.php):

* addedwatchtext [1]
* removedwatchtext [2]

removedwatchtext is short and to the point.

-- Krinkle

[1] https://www.mediawiki.org/wiki/MediaWiki:addedwatchtext/en
[2] https://www.mediawiki.org/wiki/MediaWiki:removedwatchtext/en
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Notification bubble system

2012-09-19 Thread Krinkle
On Sep 19, 2012, at 2:57 AM, Helder . helder.w...@gmail.com wrote:

 On Tue, Sep 18, 2012 at 9:28 PM, Jon Robson jdlrob...@gmail.com wrote:
 On Commons it seems to take 5 seconds to disappear which is too long as at
 this point I'm wondering how to dismiss it.
 I think the time should depend on the length of the message.
 The watchlist notification in Portuguese has ~46 words,
 and if I didn't know what it was saying, that information would be lost.
 Maybe it should allow us to see previous notifications by clicking somewhere.
 
 On Tue, Sep 18, 2012 at 9:28 PM, Rob Moen rm...@wikimedia.org wrote:
 Not sure if this is a known issue, but the notification is at the top of the 
 document regardless of where you are on the page.  Meaning if I'm at the 
 bottom of a long article, I have to scroll up to the top to see the bubble.  
 Should it not be relative to the scroll position?
 
 I noticed this by firing off a mw.notify('hi james') in the console at the 
 bottom of a long article.  This may have gone unnoticed as it seems 
 mw.notify is only triggered by UI components at the top of the page.
 
 +1. This is bothering me as well.
 
 Helder

I agree.

Just for the record though, let's not forget what it was just weeks ago:

* Only one message at a time; a new one wiped the previous one from existence
unconditionally
* No way to close it
* Took up full width
* At the top of the page (still)
* Bumped down the article by the height of the message

So we are making some progress here.

Suggestions I saw so far in this thread:
* Notification queue should follow scroll position (fixed position)
* Add a close button (even when they close regardless of click target, as a 
visual cue)
* Extend base framework for universal layout of messages. We already have 
title/body,
  to be extended with icon and buttons.


One potential problem with making them appear mid-page (fixed queue) is when the
bottom of the page is reached. It should then start a new column to the left.
Otherwise it will continue down the page where it can't be seen due to the queue
following the scroll position.

Another thing I don't like is how they move up in the queue one after another.
What I like about Growl and Notification Center is that messages stay fixed
where they first appear. And new messages are added to the bottom (or next
column), or, when the first position is available again, it will re-use those
positions.

That way users don't have to re-focus constantly and follow where which message
went when dismissing some of them. This gets more important as we start to
introduce notifications that involve user interaction (sticky ones, which should
not move).

However that brings another problem: resizing the window. The spreading of
messages over multiple columns would have to be re-built after the window is
resized.


-- Krinkle




___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Notification bubble system

2012-09-15 Thread Krinkle
On Sep 14, 2012, at 3:11 AM, Daniel Friesen dan...@nadir-seen-fire.com wrote:

 On Thu, 13 Sep 2012 15:03:04 -0700, Steven Walling steven.wall...@gmail.com 
 wrote:
 
 On Wed, Sep 12, 2012 at 11:09 PM, Erik Moeller e...@wikimedia.org wrote:
 
 Is the system documented somewhere already so gadget/script authors
 can start using it?
 
 
 +1 to docs for this, please. :)
 
 My other question is similar to what Andrew asked earlier: what potential
 is there for including something more than just strings?
 
 My team just finished up a test where simple your edit was saved
 confirmation messages lead to a significant bump in the editing activity of
 newbies on English Wikipedia.[1] The only core differences between the
 custom notification we built and bubble notifications are that it was
 center-aligned and that it included a checkmark icon. I would prefer to
 build on top of this if we're going to try and make an edit confirmation
 message a part of MediaWiki.
 
 mw.notify accepts DOM nodes and jQuery objects. So you can add in whatever 
 html you want by parsing it into dom.
 mw.notify also accepts mw.Message objects so you can use `mw.notify( 
 mw.message( 'foo' ) );` and it'll be parsed.
 

... yes, but now that we're on the subject, let's try to aim for standardization 
here instead of encouraging arbitrary HTML for layout (I'd like to phase that 
out sooner rather than later). We can allow HTML inside the body (e.g. for 
hyperlinks which are allowed even in edit summaries), though we could call 
jQuery .text() on html and disallow HTML completely (while still keeping output 
clean of html characters). We'll see about that later. One step at a time.

I propose to implement the following content options (in addition to the 
configuration options we have, like autoHide and tag), inspired by the API for 
Notification Center as found in OS X and iOS:

* icon (optional)
Must be square and transparent. Can potentially be displayed in different 
sizes. Important here to know that this icon is for source identification, not 
message type. I think it is good design to keep images out of notifications. No 
smilies, check marks or the like (unless the icon of a feature contains it).

* title (optional)
Title of the message. If too long, will be auto-ellipsis-ified.

* body (required)
Spans up to 3 or 4 lines. If too long, will be auto-ellipsis-ified (in 
the case of a confirmation it would contain just one or two sentences, in case 
of a notification of an edit it might show (part of an) edit summary).

* buttons (optional, multiple; recommendation is to use 2 buttons, not 1 or 3+)
Similar to jQuery UI dialog buttons: label text and callback function. There 
can be no two buttons with the same label. When a message has buttons it 
basically becomes what would be called an "Alert" (has buttons and doesn't 
autohide) in Notification Center lingo (as opposed to a "Banner", which 
autohides and has no buttons). It makes sense to automatically enforce 
autoHide:false if buttons are set.
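
Something like this inside the notify handler, roughly (a sketch only; the
options object here is hypothetical):

<code>
// Alerts (messages with buttons) should never auto-hide.
if ( options.buttons && options.buttons.length ) {
	options.autoHide = false;
}
</code>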

Applications / Features that send many notifications might abstract part of 
this internally, like:

<code>
var extPath = mw.config.get( 'wgExtensionAssetsPath' ) + '/Feature';
/**
 * @param {mw.Title} title
 * @param {Object} revision Information about revision as given by the API.
 */
Feature.prototype.editNotif = function ( title, revision ) {
	return mw.notify( {
		content: {
			icon: extPath + '/modules/ext.Feature.foo/images/notify.icon.png',
			title: title.getPrefixedText(),
			body: $.parseHTML( revision.parsedcomment )
		},
		config: {
			autoHide: true
		}
	} );
};
</code>
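
And a caller might then invoke such a helper roughly like so (hypothetical
names, just to show the shape of the call):

<code>
// e.g. from an API callback that noticed a new revision on a watched page
feature.editNotif(
	new mw.Title( change.title ),
	change.revision // object with a 'parsedcomment' property
);
</code>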

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] +2 on github (maintaining code quality via an informal review process)

2012-09-10 Thread Krinkle
Indeed, this is by design on GitHub.

These kinds of rules need to be socially applied and controlled.
There is no way to enforce them mechanically through GitHub.
This requires common agreement between developers, prior to granting access, of course.

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] A few questions about adding user preferences via the API

2012-09-10 Thread Krinkle
I think MZMcBride is referring to the fact[1] that the current preferences 
backend
stores whatever it is given, even keys that don't exist anywhere and have
no registry or default value.

This is inherited by how User.php implements setOption(), saveSettings() and
saveOptions().

I personally think it is a bad thing to allow arbitrary stuff with no 
documentation
and conventions to be stored through a simple api.php request into the 
user_options
table.

On the other hand, do keep in mind that Gadgets (not the gadget scripts but the
extension itself) (ab)uses this by checking whether a preference is set for
"gadget-" + gadgetId.

It may be useful to require a server-side registry; for Gadgets this wouldn't 
be an issue since
all gadget IDs are known. For backwards compatibility, the array_keys of 
wgDefaultOptions could
be registered automatically / implicitly.
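
For the record, reading and writing such an arbitrary preference from a gadget
currently looks roughly like this (a sketch from memory; the preference name is
a placeholder and the exact parameter names of the options API module may differ):

// Read, with a fallback default.
var limit = Number( mw.user.options.get( 'gadget-foo-limit', 25 ) );

// Write through the options API module; assumes the user token module is loaded.
$.post( mw.util.wikiScript( 'api' ), {
	format: 'json',
	action: 'options',
	optionname: 'gadget-foo-limit',
	optionvalue: '50',
	token: mw.user.tokens.get( 'editToken' )
} );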

-- Krinkle



On Sep 10, 2012, at 5:23 AM, Tyler Romeo tylerro...@gmail.com wrote:

 Someone can feel free to correct me if I'm wrong, but currently user
 preferences uses an HTMLForm object to handle everything, and new
 preferences can be added arbitrarily using the GetPreferences hook. I'd
 imagine this will definitely be supported in the future as it is the
 primary way for extensions to add new user preferences.
 
 As such, the only real restriction on the key size is that of the database,
 which is currently varbinary(32). Since each preference gets its own row,
 there is no limit on how many extensions can be added (unless you count
 limits on the size of the database).
 
 *--*
 *Tyler Romeo*
 Stevens Institute of Technology, Class of 2015
 Major in Computer Science
 www.whizkidztech.com | tylerro...@gmail.com
 
 
 
 On Sun, Sep 9, 2012 at 8:37 PM, MZMcBride z...@mzmcbride.com wrote:
 
 Forwarding from https://bugzilla.wikimedia.org/show_bug.cgi?id=40124#c0:
 
 ---
 A lot more functionality around user preferences was added with
 https://gerrit.wikimedia.org/r/#/c/5126/
 
 Thank you for that.
 
 Recently I found that one can add new options (''keys''). This is very
 useful for Community scripts. So I hope this was intentional behaviour and
 this bug is about clarifying that.
 
 What in particular, I'd like to know is:
 
 * Will it be supported in future?
 
 * Which max. size (i.e. how many bytes) can the pref have?
 
 * How many prefs can be added?
 
 * Is it possible to remove one particular pref without having to reset all?
 
 Again, it would be really useful if I could get started using this feature.
 Currently I have to publicly store user prefs for my scripts using (
 https://commons.wikimedia.org/wiki/MediaWiki:Gadget-SettingsManager.js )
 in
 their common or skin.js. Having a private place to store them would be
 great, especially because one can retrieve them with both, the API and they
 are by default in mw.user.options ­ JavaScript RL module.
 ---
 
 Any help in answering these questions on this list and/or on the bug would
 be great.
 
 MZMcBride
 
 
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] 5 tips to get your code reviewed faster

2012-08-30 Thread Krinkle
On Aug 30, 2012, at 9:59 AM, Antoine Musso hashar+...@free.fr wrote:

 It is also a good practice to add a cover message on the new patchset to
 explain what it changes compared to the previous one. 

Yes, very important. If you submit a patch set, please do leave a quick comment
explaining what you changed. I personally like to use bullet points for those
comments like:

* Rebased

or

* Addressed comments from John.
* Removed redundant code in foo.js.

 
  PS3: rebased on latest master
  PS4: fix the, now obsolete, function() call
 
 Where PS is used instead of PatchSet.
 

This is not needed, because if you leave a comment (and do it right, as in,
click "Review" on the main gerrit change page under the correct "Patch set"
heading) gerrit prefixes this to your comments automatically.

The only reason to need such a prefix is if you're putting it somewhere else,
such as the commit-msg – where such comments don't belong in the first place.

Putting it in the commit message is imho annoying because:
* It is not at all helpful when the commit is merged because only the last
version is merged. The rest are in-between steps during the review process and
discussion and details of that process belong in the review thread, not in the
commit message.
* And, as the author, it is kinda hard because you don't know the patch version
number until it is submitted, and someone else can submit a version while you're
working.

Having said that, do make sure that your commit message is still accurate and
explains the full change (not just the first version of the patch, nor just the
last amendment to the patch).

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] $( '<div>' ) vs. $( '<div />' ) in coding conventions (and methods for building DOMs with jQuery)

2012-08-28 Thread Krinkle
Okay, sorry for being away for 30 minutes while I enjoyed dinner.

Someone[1] pointed me to this thread and suggested I chime in, so here I go.


On Aug 28, 2012, at 2:50 AM, Daniel Friesen dan...@nadir-seen-fire.com wrote:

 Either way $( '<div>' ) is NOT something officially supported by jQuery [..]
 

This is simply incorrect.
* It has been explicitly supported (whether or not intentionally/officially) by 
jQuery for years, as can be seen in the source code.
* It has been used by jQuery core and other jQuery projects for years (not just 
sporadically, but pretty much everywhere, consistently).

And I'm happy to announce that as of today, by popular demand, the jQuery team 
has finally updated[4] the 3-year old documentation to reflect this feature.

Up until now the documentation for the root jQuery() function still reflected 
the situation as it was 3 years ago, where the string always had to be fully 
valid with a closing tag, with the exception of <input> and <img/> because the 
native parsers in the browsers supported it that way (not because jQuery wanted 
us to).

But for a while now jQuery features element creation through the native 
createElement() by passing a single tag (with optional closing tag or 
quick-closing[2]). As such I've reverted this edit[3].



On Aug 28, 2012, at 9:57 AM, Tim Starling tstarl...@wikimedia.org wrote:

 Personally, I would use document.getElementById() to do that. It's
 standard, and it's faster and more secure. More complex selectors
 derived from user input can be replaced with jQuery.filter() etc. with
 no loss of performance.
 
 -- Tim Starling
 


Indeed.
Moreover, aside from the performance and security, there's another important 
factor to take into account. And that is the fact that IDs can contain 
characters that have special meaning in CSS selectors (such as dots).

We've seen this before when dealing with a MediaWiki heading (where the 
ID-version of the heading can (or could) contain dots). So whenever you have 
what is supposed to be an element ID in a variable, use document.getElementById 
(even if you don't care about performance or security).




On Aug 28, 2012, at 6:39 AM, Chris Steipp cste...@wikimedia.org wrote:

 On Mon, Aug 27, 2012 at 4:37 PM, Mark Holmquist mtrac...@member.fsf.org 
 wrote:
 I also tried to get an answer about the better between $( '<div
 class="a-class" />' ) and $( '<div />' ).addClass( 'a-class' ), but
 apparently there's little difference. At least when creating dynamic
 interfaces, I'd like to have some guidance and consistency if anyone is
 interested in chatting about it.
 
 I'm going to try and put some guidelines for secure javascript code
 together soon, but it's a much better habit to use .addClass(
 'a-class' ) and the other functions to add attributes.
 

I'm looking forward to that.

Note that it is perfectly fine and secure to use:
 $( '<div class="a-class"></div>' );

But when working with variables (whether from user input or not), then methods 
like addClass should be used instead. Both for security as well as 
predictability:
$( '<div class="' + someVar + '"></div>' ); // Bad

If the variable contains any unexpected characters it can for example cause the 
jQuery object to be a collection of 2 or 3 elements instead of 1.
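
So, roughly, prefer building the element first and then applying variables
through methods (someVar, someText and the target selector here are just
placeholders):

$( '<div>' )
	.addClass( someVar )  // the value is treated as data, not markup
	.text( someText )
	.appendTo( '#output' );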



On Aug 28, 2012, at 8:00 PM, Ryan Kaldari rkald...@wikimedia.org wrote:

 In that case, perhaps we should just say that all of the options are fine:
 $( '<div>' )
 $( '<div/>' )
 $( '<div></div>' )
 

Indeed[5].



On Aug 28, 2012, at 2:50 AM, Daniel Friesen dan...@nadir-seen-fire.com wrote:

 If you don't like the XHTML-ish shortcut that jQuery provides, then our 
 coding conventions should be to use `$( '<div></div>' );`.
 

I agree we shouldn't use XHTML-ish shortcuts because it looks confusing:
 $('<ul><li/></ul>');

That works because jQuery converts <tag/> to <tag></tag>.
But just because jQuery allows that doesn't mean we should do it.
I'd recommend we keep it simple and always use valid HTML syntax when writing 
HTML snippets for parsing. 

Either use the <tag> syntax to create a plain element, or use fully valid 
XML/HTML syntax (with no shortcuts) for everything else.


-- Timo

[1] Well, actually, almost a dozen someones.

[2] http://api.jquery.com/jQuery/?purge=1#jQuery2

[3] 
https://www.mediawiki.org/w/index.php?title=Manual%3ACoding_conventions%2FJavaScriptdiff=576860oldid=576443

[4] 
https://github.com/jquery/api.jquery.com/commit/ea8d2857cd23b2044948a15708a26efa28c08bf2

[5] 
https://www.mediawiki.org/w/index.php?title=Manual%3ACoding_conventions%2FJavaScriptdiff=576924oldid=576860

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Proposed new core feature: page with limited or no history

2012-08-28 Thread Krinkle
Although for a different use case, I find myself facing a related problem with 
my bot on Commons.

https://commons.wikimedia.org/wiki/Commons:Auto-protected_files/wikipedia/zh

Those pages get updated periodically whenever a commons image starts or ends 
being used on a Main Page.
Aside from the fact that this in particular could be a native feature[1], this 
page will get a lot of revisions.

Also, as an aside, these old revisions are quite useless. Even if someone 
were to make an edit in between, the bot would overwrite it. But I don't care so 
much about the wasted space, since it is a relatively small waste.

The problem comes in when these pages need maintenance. I find myself 
periodically checking up on my bot generated pages to make sure I move them to 
/Archive_# and delete those and start clean. Because when the revision count 
reaches 2,500, it can no longer be deleted because of the limit we implemented.

So to keep them mobile and usable, I always fight the limit by moving the page to a 
subpage (without a redirect) before it reaches that limit, deleting it there, 
and starting a new page at the old name.

Would be nice if these kinds of bot-generated pages could avoid that. And if in 
doing so we save revision space, that's nice.

Other examples:
* https://commons.wikimedia.org/wiki/Commons:Database_reports (and subpages)
* https://commons.wikimedia.org/wiki/Commons:Auto-protected_files (and subpages)

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Getting rid of leave page warning

2012-08-12 Thread Krinkle
On Aug 11, 2012, at 11:05 PM, Jeroen De Dauw wrote:

 Hey,
 
 I'm tried fixing some issue where a leaving page warning is shown when it
 should not, but have not been able to find how to get rid of it (as I'm not
 familiar with the relevant code at all).
 
 https://bugzilla.wikimedia.org/show_bug.cgi?id=39158
 
 Would be awesome if someone that is familiar with this stuff can have a
 look at it :)
 
 Cheers

This is part of the Vector extension[1].

Use the "useeditwarning" option.

-- Krinkle


[1] https://www.mediawiki.org/wiki/Extension:Vector
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] gerrit tab width

2012-08-09 Thread Krinkle
Regarding how to calculate the line-width (for the 80-100 convention), the
way my old editor used to do it made the most sense to me: Consider a tab 1
character (because it is).

That also gives the most flexibility in indenting/outdenting blocks without
having to worry frequently about it being too long for the 80-100
convention.

The downside of that approach is that a block with 40 characters per line
that is indented 8 levels may appear to be longer than a block
that is not indented at all and has 100 characters per line (e.g. if your
editor shows tabs as 8 spaces, 8 * 8 + 40 = 104).


On Thu, Aug 9, 2012 at 1:24 PM, Platonides platoni...@gmail.com wrote:

 For a review tool, we want to spot spaces instead of tabs, so we
 should use an uncommon tab size.
 I vote for 7 spaces.


Gerrit already shows this very clearly by showing the red tab symbols.

This is enabled by default and can be changed by setting Differences > 
Preferences > Show Tabs.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Periodic updates from bits.wikimedia.org

2012-08-09 Thread Krinkle
On Aug 9, 2012, at 4:49 PM, Eugene Zelenko wrote:

 Hi!
 
 I noticed that content from  bits.wikimedia.org (including WikiEditor)
 is updated quite regularly - ~ every 20 minutes on Commons.
 
 Such behavior is definitely creates problem for users with slow
 connections or with payed data traffic.
 
 Are JavaScript/CCS are really updated so often?
 
 Eugene.
 

Can you elaborate a bit? (urls, timestamps, http headers, ..)

-- Krinkle

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Deprecation levels

2012-08-07 Thread Krinkle
On Tue, Aug 7, 2012 at 11:26 AM, Jeroen De Dauw jeroended...@gmail.comwrote:

 Hey,

 This is something I've come across several times before when deprecating
 functions: people want different behaviour with regard to the warnings they
 get. Typically what people want can be split into two groups:


Didn't we solve this already by being able to pass a version to
wfDeprecated() and allowing users to set $wgDeprecationReleaseLimit to
hide/show from whatever cut-off point they desire?

-- Krinkle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Dropping Alias as a recommended way to setup Short URLs

2012-08-04 Thread Krinkle
On Sat, Aug 4, 2012 at 1:07 PM, Benjamin Lees emufarm...@gmail.com wrote:

 On Fri, Aug 3, 2012 at 11:42 PM, Daniel Friesen
 li...@nadir-seen-fire.comwrote:

  To top off the issues with Alias, it can't be used to setup a 404 image
  thumbnail handler and it can't be used in the future plans of MediaWiki
  handling 404s internally.
 
 
 Don't Wikimedia wikis use Alias?  How are they going to handle this?


Rephrased: how are they handling this?

Also note, by the way, that at the moment this thread is primarily about
changing the recommended setup in our documentation. Afaik MediaWiki can
and will (at least for a long while to come) support both. Even more so
because the default setup out of the box is /w/index.php/Page_name, and
the only way we can make sure existing wikis don't break is by supporting
this.

-- Krinkle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Nightly tarballs?

2012-08-01 Thread Krinkle
We already have hourly snapshots of the stable master though:

https://toolserver.org/~krinkle/mwSnapshots/#!/mediawiki-core/master

(and it includes release branches, feature branches and wmf branches).

That could be expanded to keep old versions (right now it only keeps the
latest one).

-- Krinkle

On Wed, Aug 1, 2012 at 12:29 PM, Mark A. Hershberger m...@everybody.orgwrote:

 Is there any interest in having nightly snapshots of MediaWiki available?

 I realize people could just use git, but this poses a problem for users
 who are familiar with extracting, say, a .zip file, and making MediaWiki
 work, but are stymied by the esoteric nature of git.

 This would be similar to Mozilla's nightlies:
 (http://nightly.mozilla.org/) and may also be a stepping stone for
 people to get into development, or at least patch submission.

 This all came up because I had the chance to provide a snapshot to help
 solve a problem in 1.19
 (http://thread.gmane.org/gmane.org.wikimedia.mediawiki/40011, shortened:
 http://hexm.de/kt).

 I used the make-release script
 (http://svn.wikimedia.org/viewvc/mediawiki/trunk/tools/make-release/,
 shortened: http://hexm.de/ku) and put the snapshot up at
 http://mah.everybody.org/snapshots/.

 I'm willing to set this up to run on wmflabs.org or on my own server if
 there is interest.  This may also be a good way to measure the need
 for a point release -- for example, if the nightly starts including
 fixes for annoying bugs that affect a lot of people, then a point
 release is probably needed.  (I'm looking at you, Bug #24985.)

 --
 http://hexmode.com/

 Human evil is not a problem.  It is a mystery.  It cannot be solved.
   -- When Atheism Becomes Religion, Chris Hedges

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Nightly tarballs?

2012-08-01 Thread Krinkle
On Wed, Aug 1, 2012 at 4:26 PM, Chad innocentkil...@gmail.com wrote:

 On Wed, Aug 1, 2012 at 7:17 PM, Mark A. Hershberger m...@everybody.org
 wrote:
  On 08/01/2012 06:50 PM, Platonides wrote:
  I don't think we need a separate nightly page. If someone wants a
  nightly, he can use this git download. Differences with release
  mediawiki are minimal.
 
  The release *does* include some standard extensions that are not in the
  gitweb link.
 
  But, yes, having the gitweb link makes this less of an issue.
 

 Let's please not link to the gitweb tars. They're not cached, so
 each one would be generated on demand.

 -Chad


That's why I created mwSnapshots as a replacement for vvv's mw-nightly
(which, before it was taken down, afaik still only worked with SVN).

https://toolserver.org/~krinkle/mwSnapshots/#!/mediawiki-core/master

And it does cache :)

-- Krinkle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] 8 simple ways for improving Gerrit

2012-07-28 Thread Krinkle
On Jul 28, 2012, at 5:21 AM, Chad wrote:

 [..]
 
 1) backgroundColor -- pretty self explanatory, default is #FCFEEF for
 anonymous users and #FF for logged in users.
 2) topMenuColor -- the color in the top menu area, the puke green, #D4E9A9
 3) textColor -- self explanatory, default is #00
 4) trimColor -- used for section headers and such, same green as
 topMenuColor by default
 5) selectionColor -- the cream-yellow color used when you're highlighting a
 given field/button/etc in Gerrit, color is #CC
 
 Additionally, [..]:
 
 5) changeTableOutdatedColor -- That glaring red that shows up in the
 dependencies field when a dependency is outdated, default #F08080
 6) tableOddRowColor -- with 2.5, we can do alternating colors for the change
 listings, making it easier to read. By default, transparent.
 7) tableEvenRowColor -- complement of
 
 [..]
 
  Subject: [Wikitech-l] 8 simple ways for improving Gerrit
 

The 7--+1 simple ways for improving Gerrit.
After 5 comes 6, you know ;-)

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Serious alternatives to Gerrit

2012-07-19 Thread Krinkle
On Jul 18, 2012, at 9:35 PM, Roan Kattouw wrote:

 On Wed, Jul 18, 2012 at 9:30 PM, Subramanya Sastry
 ssas...@wikimedia.org wrote:
 (b) Commit amends hide evolution of an idea and the tradeoffs and
 considerations that went into development of something -- the reviews are
 all separate from git commit history.   All that is seen is the final
 amended commit -- the discussion and the arguments even that inform the
 commit are lost.  Instead, if as in (a), a topic branch was explicitly (or
 automatically created when a reviewer rejects a commit), and fixes are
 additional commits, then, review documentation could be part of the commit
 history of git and shows the considerations that went into developing
 something and the evolution of an idea.
 
 There was an email recently on wikitext-l where Mark Bergsma was asked to
 squash his commits (http://osdir.com/ml/general/2012-07/msg20847.html) -- I
 personally think it is a bad idea that is a result of the gerrit review
 model.  In a different review model, a suggestion like this would not be
 made.
 
 
 Although squashing and amending has downsides, there is also an
 advantage: now that Jenkins is set up properly for mediawiki/core.git
 , we will never put commits in master that don't pass the tests. With
 the fixup commit model, intermediate commits often won't pass the
 tests or even lint, which leads to false positives in git bisect and
 makes things like rolling back deployments harder.
 
 Roan
 

I disagree. Contrary to what many think, every single patch set and amendment
goes into the mediawiki/core.git repository, whether reviewed and passing, or
a fresh mistake. This is easily verified by the fact that every patch set
revision has its own gitweb link, and the fact that git-review downloads the
merge request from a remote branch, inside the core.git repo (or whatever the
repo may be).

git-bisect is not an issue now and won't be an issue in the branch-review
model[1], because git-bisect only affects the HEAD's tree (naturally). The way 
to
merge a branch would be to squash the branch into one commit when merging
(e.g. the merge commit). This is also how jQuery lands branches most of the
time, and iirc GitHub (as in, the internal repo github.com/github/github) also
works this way with long-lived branches (based on a presentation they gave:
"How GitHub uses GitHub to build GitHub"[2]).

And, of course, we will stick to the model of only allowing merges when tests
pass. So the HEAD's history will remain free of commits that don't pass
tests.

One could argue that then the branches' history will not be included in the
master's history, but that's not the case now either, because only the last
amendment will be in the master's history (which is just as much a squash,
except that that squash is the result of re-amending over and over again,
instead of subsequent commits being squashed afterwards).

-- Krinkle


[1] branch-review model, as in, a model where a review is about a
topic-branch, whether it contains 1 commit, or many. Or even a branch from a
fork. In other words the pull-request model from GitHub. And yes, on GitHub
one can also create a pull-request from one branch to another, within the same
repository (e.g. mediawiki-core/feature-x - mediawiki-core/master or
mediawiki-core/feature-x-y-z - mediawiki-core/feature-x) .

[2] http://zachholman.com/talk/how-github-uses-github-to-build-github

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] overprinting wikipedia table

2012-07-17 Thread Krinkle
On Jul 10, 2012, at 7:44 AM, Brion Vibber wrote:

 On Tue, Jul 10, 2012 at 10:15 AM, jida...@jidanni.org wrote:
 
 Just hit many CTRL++ in Firefox 15.
 
 
 re: http://en.wikipedia.org/wiki/Language_family#See_also
 
 This is an example of poor, non-scalable layout by wiki authors -- the kind
 of thing that often messes up mobile view, but can happen on the regular
 desktop view as well depending on your window size. Nothing ill-intentioned
 in the code, but as your window gets smaller relative to the text size it
 gets less and less tenable to lay out a floated table next to a two-column
 list.
 
 There are a couple things that can be fixed here:
 * drop the floated table entirely, show it inline
 * drop the multi-column on the list
 * or -- drop the above for smaller screens, while keeping them for larger
 screens (with CSS media queries)
 
 CSS media queries can't be done with inline styles, so that needs to
 probably switch to a class that can be defined in the global styles.
 (Alternatively, we could devise an extension to let you add style
 sections from templates, which might help since templates are being used
 here to add the inline styles.)
 
 -- brion


When using column-count it does indeed seem attractive to try to change the
column-count based on the screen width. However, there is an (imho better)
solution for this.

Use column-width instead of column-count. column-width is basically a
responsive version of column-count: it adds and removes columns dynamically,
with the added bonus that it respects the current parent element's dimensions
rather than the entire window, which is pretty much always what you want.

Check the paragraph example on {{column-width}} at Wikipedia:

https://en.wikipedia.org/wiki/Template:Column-width
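
A minimal sketch of the difference (the selectors are made up for illustration):

  /* Fixed number of columns; can get too narrow on small screens */
  .columns-fixed {
      -moz-column-count: 3;
      -webkit-column-count: 3;
      column-count: 3;
  }

  /* Responsive: the browser fits as many ~20em columns as the parent
     element's width allows, adding and removing them dynamically */
  .columns-responsive {
      -moz-column-width: 20em;
      -webkit-column-width: 20em;
      column-width: 20em;
  }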

-- Krinkle



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Wikidata Infoboxes

2012-07-11 Thread Krinkle
On Jul 10, 2012, at 10:29 PM, jmccl...@hypergrove.com wrote:

 In short it is either 
 
 * no wikipedias
 can be considered part of the semantic web 
 
 * or all wikipedias stand
 at the center of the semantic web 
 
 

No. A conclusion like that seems to conflict with what Wikidata is.

Whether some Wikipedia's output is semantically correct is important, but 
(afaik) has *zero* relationship with Wikidata. And as such is not relevant here.

Centralizing infobox designs is a good idea.
Centralizing only the HTML output for infoboxes while doing the styling locally
sounds good too (as in, better than what we have now).

But neither of those is or should be put in relation with wikidata.

This sounds like one of the many things a template repository wiki will be
doing. But Wikidata is not a template repository, and it is explicitly designed
to disallow anything even resembling one.

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Wikidata Infoboxes

2012-07-11 Thread Krinkle
Replies inline.

On Jul 11, 2012, at 6:23 AM, jmccl...@hypergrove.com wrote:

 
 When you say Whether some Wikipedia's output is
 semantically correct is important, but (afaik) has *zero* relationship
 with Wikidata. And as such is not relevant here then I feel compelled
 to point out that an ontology is most certainly envisioned -- wikidata
 is implementing the SMW Property namespace! Undoubtedly it will use
 Category: for owl:Class representations, just like SMW. And builtin
 Datatypes, just like SMW. So, wikidata actually is *100%* concerned with
 the semantic web. 
 

I agree completely :). Wikidata will most certainly allow MediaWiki sites to
more easily output good and properly organized semantic data that is machine
readable and follows standards.

I am merely pointing out that, from what I've seen so far (note that I am just
observing Wikidata; I'm not on their team, nor do I actively participate in its
development by other means), it is intended to allow including data from a
repository, and to allow that free of format constraints.

One popular example is the population of Berlin. I may want to retrieve the raw
number, or have it formatted according to the user language. Or perhaps I want
to output a table in an article with the yearly population numbers of the last
20 years, and then add ref invocations for the sources as known to Wikidata.

Or perhaps I want an estimate based on different sources (one source may put
the population at 2011-01-01 at number X; another organization may have a
different method and come up with a different number at a different date in
2011). Or a range. Etc. Many variations are possible.

And I might add that having semantic output does not require any form of
centralization. One can output semantic HTML with data attributes or whatever
microformat right from wikitext (as is done on Wikipedia through the
{{Persondata}} template[1]).

And likewise one will be able to output data from Wikidata without having to use
a particular format.

That's not to say that there shouldn't be an HTML view of Wikidata by default;
that could be a very useful feature. I'll leave that up to someone more
involved to comment on.


[1] https://en.wikipedia.org/wiki/Template:Persondata

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] git commit history

2012-07-02 Thread Krinkle
On Jul 2, 2012, at 2:30 AM, Subramanya Sastry wrote:

 
 One thing I just noticed when looking at the git history via gitk (on Ubuntu) 
 is that the history looks totally spaghetti and it is hard to make sense of 
 the history.  This seems to have happened since the switch to git and 
 post-commit review workflow.  It might be worth considering this as well.  
 git pull --rebase (which I assume is being used) usually helps eliminate 
 noisy merge commits, but I suspect something else is going on -- post-review 
 commit might be one reason.  Is this something that is worth fixing and can 
 be fixed?  Is there some gerrit config that lets gerrit rebase before merge 
 to let fast-forwarding and eliminate noisy merges?
 
 Subbu.

Yep, this happens whenever a change is merged from the gerrit interface.

What we use locally to pull from gerrit doesn't influence the repository.

Also, one doesn't need `git pull --rebase` if you work in a topic branch
instead of master (which everybody should). Otherwise, pulling on master might
indeed cause a merge commit. But even then, git-review will warn when trying to
push for review, because it'll have to push two commits instead of one.

So best to always work in a topic branch, keep master clean, and do simple 
pulls from gerrit/master.
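
A rough sketch of that workflow (assuming the Gerrit remote is named "gerrit";
the branch name is made up):

  git checkout master
  git pull gerrit master      # plain pull, stays a fast-forward because master is clean
  git checkout -b bug-12345   # do all the work in a topic branch
  # ... edit, git commit ...
  git review                  # push the topic branch to Gerrit for review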

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] jQuery 2.0 dropping IE 6,7,8 support

2012-07-02 Thread Krinkle
On Jul 2, 2012, at 10:38 AM, Derk-Jan Hartman wrote:

 http://blog.jquery.com/2012/06/28/jquery-core-version-1-9-and-beyond/
 
 jQuery 1.8 should arrive within a month. Here is our thinking about the
 next two versions of jQuery to follow it, and when they’ll arrive:
 
 jQuery 1.9 (early 2013): We’ll remove many of the interfaces already
 deprecated in version 1.8; some of them will be available as plugins or
 alternative APIs supported by the jQuery project. IE 6/7/8 will be
 supported as today.
 jQuery 1.9.x (ongoing in 2013 and beyond): This version will continue to
 get fixes for any regressions, new browser bugs, etc.
 jQuery 2.0 (early 2013, not long after 1.9): This version will support the
 same APIs as jQuery 1.9 does, but removes support for IE 6/7/8 oddities
 such as borked event model, IE7 “attroperties”, HTML5 shims, etc.
 
 
 So what does this mean for us ? I think it's wise if we closely follow
 their approach to make sure we can still deliver the IE 6/7/8 support that
 we probably will still require by that time. If there is anything we need
 to make this as efficient as possible for us, we should probably start
 talking to them about that now, instead of in 2013 ?
 
 DJ

As included in the blog post, jQuery 1.9.x will be supported in the long run.
It is completely fine to just use 1.9.x until we can drop support for old IE as
well.


On Jul 2, 2012, at 11:53 AM, Derk-Jan Hartman wrote:

 Well we can also include both versions and conditionally load them.
 
 DJ
 


I doubt this is possible. 1.9.x is supported in the long run and will continue
to get bug fixes, but it will not get the new features added from that point
onwards.

So unless we can somehow ensure that nobody anywhere accidentally uses those
new features, we'll have to stick with 1.9.x for now.

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Extension Skeleton Program

2012-07-02 Thread Krinkle
Have you looked at the example extensions?

https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/extensions/examples.git;a=tree

-- Krinkle

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] New, lower traffic, announcements only email list for Wikimedia developers

2012-06-30 Thread Krinkle
On Sat, Jun 30, 2012 at 7:23 AM, Gregory Varnum gregory.var...@gmail.comwrote:

 Greetings,

 Following discussions with Wikimedia developers more on the fringe and
 not as engaged in frequent IRC or mailing list conversations, the request
 for an announcements only mailing list came up.  I wanted to let folks know
 that this list has been created and is ready for membership:
 https://lists.wikimedia.org/mailman/listinfo/wikitech-announce  -
 wikitech-annou...@lists.wikimedia.org  There will also be a signup list
 for this and other lists at Wikimania Hackathon.

 Unlike this list, which allows for discussion, or the MediaWiki-announce
 list, which focuses exclusively on MediaWiki release announcements,
 wikitech-announce will be used for occasional announcements on both
 MediaWiki and broader Wikimedia developer related news items.  This
 includes an announcement when monthly tech or engineering reports are
 posted, important news updates on git or developer related tools, and
 information on upcoming sprints, events or other major developer
 collaborations.



A few boolean rules I'd like to have confirmed or denied (I'm not for or
against any of these per se, I just want to make sure it is clear what the
intention is):

* Any and all emails to wikitech-announce will also go to wikitech-l.

Meaning, those that want to stay on the discussion list will not have to
maintain another queue.

* Any and all emails to mediawiki-announce will also go
to wikitech-announce.

I don't mind either way, I just want to know if the audience is considered a
superset of mediawiki-announce. In that case, developers who feel more
integrated and want to get both only have to subscribe to one of them.

* Any and all emails to mediawiki-api-announce will also go
to wikitech-announce.

Again, just curious. I'm not vouching for or against it.




 Thank you!
 -greg aka varnent


Thank you!

-- Krinkle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] New, lower traffic, announcements only email list for Wikimedia developers

2012-06-30 Thread Krinkle
On Jun 30, 2012, at 9:17 AM, Gregory Varnum wrote:

 On 30 Jun, 2012, at 2:59 AM, Platonides platoni...@gmail.com wrote:
 
 On 30/06/12 07:23, Gregory Varnum wrote:
 wikitech-announce will be used for occasional announcements on both 
 MediaWiki and broader Wikimedia developer related news items
 
 How does the new wikitech-announce compare with the existing
 wikitech-ambassadors?
 They both seem to have a similar scope/contents.
 
 
 My understanding is that one is meant for internal, vs. external developers, 
 focusing specifically on WMF projects.
 

Indeed, that's exactly what it is for. When we are planning to deploy something
that requires wikis to change something, we'd reach out through those
ambassadors, who translate it for their own wiki (if needed) and do, or make
sure gets done, whatever needs doing.

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] New branch testing in operations/mediawiki-config

2012-06-29 Thread Krinkle
On Jun 29, 2012, at 1:04 PM, Chad wrote:

 On Fri, Jun 29, 2012 at 6:58 AM, Petr Bena benap...@gmail.com wrote:
 Can we create a new branch which would be speedily merged when changes
 were done to it, so that we could check out on labs and apply the
 change there in order to test if patches submitted by devs works ok?
 Thanks to Antoine we use the same repository on beta project, but
 right now it's really hard to test stuff submitted to gerrit because
 we need to merge stuff by hand.
 
 
 I don't see any problem with this really, as long as the branches
 don't get wildly out of sync like the puppet repo did.
 
 -Chad
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Note that one can also use git-review in labs (if not, let's install it then).

I'm not sure if this sounds crazy, but you could do git review -d 1234
and test it that way.

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] New branch testing in operations/mediawiki-config

2012-06-29 Thread Krinkle
On Jun 29, 2012, at 2:11 PM, Petr Bena wrote:

 Current idea:
 
 someone submit a config change
 this change is merged to testing branch
 we git pull on labs
 people test if change works ok and submit review to gerrit
 we merge to master branch or reject it
 
 On Fri, Jun 29, 2012 at 2:07 PM, Petr Bena benap...@gmail.com wrote:
 Yes but that would probably overwrite any previous tests

No, git-review checks out the remote ref/changeset in a local branch,
preserving the proper context/history of that change set.
These branches are not removed or overwritten.

It does mean, however, that you can't randomly stack different tests upon
each other, but that's a good thing and avoids common errors.

Herein also lies the immediate advantage: if a series of commits is submitted
that depend on each other, git-review'ing the top one will give you all of
them, just like in mediawiki/core.git, because we'd check out a commit pointer
with its parent tree, not a single commit on top of the previous test state
(which is what a testing branch would enforce).

And it saves maintenance by not having to continuously reset the testing branch
to master after (re-)submission and merging it a second time.

I'm not (yet) very active in the beta project on labs, but I imagine it would
make sense not to introduce an unnecessary double-review process here. The
changes are in gerrit pending merge to master, and they can be checked out on
labs as-is; that's a main feature of git-review. Afterwards, check out master
again, leave your comments on gerrit and/or review, and merge it.
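
As a sketch, testing a pending change on a labs instance could look like this
(1234 being a placeholder change number):

  git review -d 1234    # fetch change 1234, with its parent commits, into a local branch
  # ... test it on the beta setup ...
  git checkout master   # done testing, back to the regular state
  # then leave your comments (and your +1/-1) on the change in Gerrit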

I'm not a big fan of Gerrit's overall design, but one of the great aspects is
that each change proposal is a branch by design. So in a way, every time a
merge request is pushed to gerrit, you've got a brand new testing branch ready
to be checked out.

I agree with Chad: there is no problem with an actual testing branch, but if
we can avoid it with (what could be) a better workflow, with less overhead from
rebasing, dependency manipulation, double-merging etc., then..

-- Krinkle



 
 On Fri, Jun 29, 2012 at 1:23 PM, Krinkle krinklem...@gmail.com wrote:
 On Jun 29, 2012, at 1:04 PM, Chad wrote:
 
 On Fri, Jun 29, 2012 at 6:58 AM, Petr Bena benap...@gmail.com wrote:
 Can we create a new branch which would be speedily merged when changes
 were done to it, so that we could check out on labs and apply the
 change there in order to test if patches submitted by devs works ok?
 Thanks to Antoine we use the same repository on beta project, but
 right now it's really hard to test stuff submitted to gerrit because
 we need to merge stuff by hand.
 
 
 I don't see any problem with this really, as long as the branches
 don't get wildly out of sync like the puppet repo did.
 
 -Chad
 
 
 Note that one can also use git-review in labs (if not, lets install it 
 then).
 
 I'm not sure if this sounds crazy, but you could do git review -d 1234
 and test it that way.
 
 -- Krinkle
 


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Inline styles trouble on the mobile site

2012-06-28 Thread Krinkle
Replies inline:

On Jun 28, 2012, at 7:38 PM, Jon Robson wrote:

 # things rely on those inline styles whether we like it or not.
 No... They rely on styles not //inline// styles. This is my main
 problem with the current setup! I believe the majority of things done
 in inline styles that are essential would be better done in
 MediaWiki:Common.css - if we have text that is red this would be
 better down with a class markedRed otherwise some text risks being red
 or and other text #BE3B3B despite meaning the same thing. Most layout
 problems on mobile can be fixed with media queries. You can't do these
 in inline styles.
 

Hold on, we're misunderstanding each other :). We agree.

So, they *do* rely on inline styles. But when I said they rely I meant: they
currently use them to do something that should not be removed. You won't hear
me say we need inline styles (there is not a single layout best done through
inline styles). I'm saying they are in use - right now - and fulfilling valid
needs, to the point that they can make or break an article.

Obviously they should become CSS classes loaded through modules like
MediaWiki:Common.css and whatnot. I will +1 any movement towards deprecating
them entirely from wikitext in MediaWiki core. But not just for mobile, and
not without at least a year of migration time for live sites (possibly with an
opt-in feature before that for users to preview articles without inline styles,
to help fix them).

Indeed, media queries work best with classes as well. But even then, those
media queries belong where the styles are defined (whether or not in a separate
mobile wiki CSS page), but *not* in MobileFrontend, so this should not be a
concern here.
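
To illustrate (the class names, the colour value from Jon's example, and the
breakpoint are only illustrative): instead of an inline style, the wikitext
carries a class, and the styles - including any tweaks for narrow screens -
live in a site-wide stylesheet:

  /* In MediaWiki:Common.css (or a mobile-specific wiki CSS page) */
  .marked-red {
      color: #be3b3b;
  }

  @media screen and (max-width: 550px) {
      .infobox-float {
          float: none;
          width: auto;
      }
  }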

 # I believe beta users of the mobile is a very small number of
 dedicated Wikipedians.  The nostyle=true suggestion by MZMcBride would
 be a great idea but my only worry is with it that no one would use it
 as users would have to add the query string to every page. This is why
 I suggested the beta as the problem would be in front of people's
 faces on every page view and the problems would get surfaced better.
 FWIW I was thinking more of a javascript implementation -
 $([style]).removeAttr(style) - this way disabling javascript would
 get people back their styles in a worst case of scenario and it would
 not effect performance on the server.

From JavaScript it is a lot easier to implement, but it does bring issues with
interactive states (such as display: none;), which are even more common -
although even those could be done as CSS classes.

 I'm not sure what else to say really... I could understand backlash if
 I was suggesting turning off inline styles on the desktop site or even
 the mobile site - but all I'm suggesting here is targeting the beta of
 mobile.

I'm not doubting your judgement. If you believe it is useful to experiment with
this in the beta, I'd say go ahead and deploy it today (it's not like you need
our permission or anything :-P). But as I mentioned earlier, I'm not sure what
we would get out of such an experiment, since we already seem to know what it
will break and what it will improve.


 Thanks for everyones contributions so far on this long thread! I
 really do appreciate this discussion and your patience with me :-).
 


Thanks for making the mobile site awesome!

-- Krinkle



 On Thu, Jun 28, 2012 at 9:37 AM, Brion Vibber br...@pobox.com wrote:
 On Wed, Jun 27, 2012 at 6:35 PM, Krinkle krinklem...@gmail.com wrote:
 
 So, stripping inline styles:
 * will not fix bad layouts made with tables (which are probably at least as
 common as bad layouts made with inline styles).
 * will break unrelated things, because inline styles are not directlty
 related
 to layout, they're used for many things.
 
 I think provided that there is the following documentation:
 * which layout patterns are problematic (whether with inline styles,
 tables or
 by other means),
 * why/how they cause problems
 * how to solve that (and how the solution is indeed better for everyone)
 
 ... then is is a matter of spreading links to that documentation and
 waiting
 for it to be incorporated on the 700+ wikis with the many many portal
 pages,
 and other structures that have bad layouts.
 
 
 I'm generally in agreement with Krinkle on this. But I have to warn that
 just spreading documentation doesn't magically make things happen -- we
 probably have to put some actual human effort into finding and fixing
 broken layouts.
 
 A one-button report bad layout on this page thingy might well be nice for
 that; as could an easy preview this page for mobile from the edit page.
 (Though to be fair, people can switch from desktop to mobile layout with
 one click -- worth trying out!)
 
 -- brion
 
 
 -- 
 Jon Robson
 http://jonrobson.me.uk
 @rakugojon


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] I hate to be that guy

2012-06-27 Thread Krinkle
On Jun 27, 2012, at 10:08 PM, Victor Vasiliev wrote:

 On Thu, Jun 28, 2012 at 12:06 AM, Derric Atzrott
 datzr...@alizeepathology.com wrote:
 So I hate to be that guy who doesn't know the simple things, but what is
 Jenkins?  The server has come up in discussion a few times since I joined
 this mailing list about a month ago.
 
 It is a form of dark magic which automatically runs all test suites
 for all revisions pushed into our revision control.
 
 See https://www.mediawiki.org/wiki/CI and 
 https://integration.mediawiki.org/ci/
 
 —Victor.

.. and it doesn't just run the tests. It also reports back to Gerrit (our code 
review tool)
with a comment linking to the test results and a flag PASSED or FAILED.

For example: https://gerrit.wikimedia.org/r/#/c/13037/ (jenkins-bot)

-- Krinkle

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Inline styles trouble on the mobile site

2012-06-27 Thread Krinkle
On Jun 27, 2012, at 8:39 PM, Jon Robson wrote:

 Bump... does anyone have any objections to this experiment?
 Jon
 
 On Tue, Jun 19, 2012 at 5:28 PM, Jon Robson jdlrob...@gmail.com wrote:
 So crowdsourcing fixes for inline styles doesn't seem to be the most
 effective method [1]. I've been quite swamped with other work just as
 I'm sure others have been. As a result various wiki pages are still
 rendering badly/unreadable. I understand that there are certain
 circumstances where it is useful to be able to have complete control
 over styling as a article writer, but I'd also argue that most article
 writers are not designers so what were are doing in allowing this is
 introducing a huge variable of complexity that allows anyone to
 introduce CSS that could potentially be non-performant (transitions),
 broken or as I am finding stuff that just doesn't work on mobile [2].
 This scares me as someone who cares about providing a good experience
 to all our users on various devices.
 
 I ask you again... //Are inline styles on the __mobile site__ really
 worth the trade off?//



I'm not sure how you conclude that asking the community to fix the issues
didn't work. These things take time, that's how it is. There is a ton of
content, and the community has a lot to do and many different priorities (which
I guess is the responsibility of the community, not the foundation or the
developers!).

And no matter which path is taken, it is going to take time for the bad to
get good (either fixing bad layouts from the current perspective, or
stripping them out and then somehow fixing everything that turned bad).

I think it makes sense to keep the inline styles untouched - as a status quo
(sorta). I've seen many good arguments go by in this thread (and other threads)
about how things rely on those inline styles whether we like it or not. I
believe we've seen enough examples that simply need to have these, to the point
where I think it would be irresponsible to just strip them (sure, there are
better methods than inline styles to give those visual clues, but we already
know those methods and they are getting more common, it just takes time). I
doubt an experiment will teach us anything. We've got a pretty good idea of
what will break and what will get better by stripping them, right?

Also, here's something: inline styles in general are not the problem. Inline
styles are just a general purpose application that can be used for good and bad
(yes, there are better alternatives for the good applications of inline styles,
but that doesn't make them bad).

The real problem is outdated (or bad) layout designs that don't adapt to
different screen resolutions, device orientation and/or window size. That
problem surfaces in various ways, one of which is (certain) inline styles.

1) Some layouts are done with inline styles, but not all inline styles are for
layout (on the contrary).
2) Some layouts are done with tables, but not all tables are for layout.
3) Some layouts are done from stylesheets in MediaWiki:Common.css or
MediaWiki:Skinname.css, or even depend on some JavaScript.
4) Other methods are used to create layouts.

So, stripping inline styles:
* will not fix bad layouts made with tables (which are probably at least as
common as bad layouts made with inline styles).
* will break unrelated things, because inline styles are not directly related
to layout; they're used for many things.

I think provided that there is the following documentation:
* which layout patterns are problematic (whether with inline styles, tables or
by other means),
* why/how they cause problems
* how to solve that (and how the solution is indeed better for everyone)

... then it is a matter of spreading links to that documentation and waiting
for it to be incorporated on the 700+ wikis with their many portal pages
and other structures that have bad layouts.

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Suggestions for tasks for new contributors during hackathon

2012-06-23 Thread Krinkle
On Jun 23, 2012, at 10:28 AM, Sumana Harihareswara wrote:

 On 06/22/2012 10:53 PM, Andrew Garrett wrote:
 On Fri, Jun 22, 2012 at 7:44 AM, Chris McMahon cmcma...@wikimedia.orgwrote:
 
 On the QA front, this came up in a WMF discussion recently, and I proposed
 it as a Weekend Testing Americas session, but it would work equally well at
 Wikimania, and it fits our goal of bringing in more community testing
 nicely:
 
 
 Speaking of QA, I'd love to participate in a test-writing-a-thon. Currently
 I have no idea how to write tests for my code. It would be awesome if I
 could learn that at Wikimania.
 
 —Andrew
 
 This might indeed be a good training session/topic for the pre-Wikimania
 hackathon.  We might be able to repurpose Chad Horohoe's testing
 training from the fall of 2011 - a lecture on how to write tests,
 walking attendees through the documentation and teaching them how to run
 tests. Notes and audio are available:
 
 https://www.mediawiki.org/wiki/NOLA_Hackathon/Sunday#Chad.27s_test_training
 
 https://www.mediawiki.org/wiki/File:Git_notes_-_NOLA_Hackathon_2011.oga
 
I'll find a simple function we still need a test for, and use it as
 an example. I'll briefly touch on setting up PHPUnit (with the caveat
 that *sometimes* it's harder than it should be, so ask if you need extra
 help). Then dive into how to write the test.
 
 You might also like skimming this category and re-filing/moving/creating
 pages as relevant: https://www.mediawiki.org/wiki/Category:Tutorials
 

I could also (co-)host that or another session regarding front-end unit testing.

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] bot activity in #mediawiki on freenode

2012-06-22 Thread Krinkle
On Sat, Jun 23, 2012 at 12:27 AM, Chad innocentkil...@gmail.com wrote:

 On Fri, Jun 22, 2012 at 5:27 PM, Brion Vibber br...@pobox.com wrote:
  In my experience, the bots in the channel are an important part of our
  workflow -- new bug reports, bug updates, and patches in gerrit. When I'm
  discussing things in #wikimedia-dev I usually end up having to manually
 add
  references to something that a bot already sent to #mediawiki, which is
 one
  of the reasons I've always preferred using #mediawiki.
 
  Please don't make yet another split-off channel; that'll be annoying and
  make things more complicated for little if any benefit.
 

 I think I agree with Brion more than anyone else. I find the bots to
 be incredibly useful, and jumping back and forth between channels
 is a pain. Also like I've said multiple times in multiple places over
 the past couple of days--when you move channels you fracture
 discussion. It happened with #wikimedia-dev, and it'll happen again
 here if we don't do this right.

 However, I see the argument to be made that if you're not a regular
 then the bots can be rather annoying to filter out. And honestly,
 someone who drops in for a few minutes to ask a question shouldn't
 be asked to /ignore every random bot they see.

 Is there some middle ground here?

 -Chad


I agree with all the above (hm.. I see a pattern emerging :D ).

I think the middle ground is to keep the bots in a regular discussion
channel, just not the channel where most support and new-user development
takes place, but rather a channel where most core developers hang out. I'm
talking about #wikimedia-dev.

I'll cite myself from CR: https://gerrit.wikimedia.org/r/12388 :

We should make the bots smarter, and spread out to relevant channels
 (rather than everything in one channel). But I don't think one (or more)
 bot channels is going to work. The main concern raised (iirc) is that new
 users don't like them. And I agree the 'user' level doesn't have much use
 for them. A few ideas:



* #mediawiki: wikibugs for Product=MediaWiki (maybe + MediaWiki extensions).
 * #wikimedia-dev: wikibugs for Product=Wikimedia.

* Other bugs are not sent to IRC (unless additional rules are inserted.
 e.g. mobile could have their component output in their channel)

 * #wikimedia-dev: gerrit-wm messages currently assigned for #mediawiki
 could be moved to #wikimedia-dev. That will reduce the flood for support,
 while keeping them in the relevant context of developers and conversations.


-- Krinkle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] How to write a parser

2012-06-20 Thread Krinkle
On Jun 20, 2012, at 1:02 PM, Niklas Laxström wrote:

 No, this is not about a wikitext parser. Rather something much simpler.
 
 Have a look at [1] and you will see rules like:
 n in 0..1
 n is 2
 n mod 10 in 3..4,9 and n mod 100 not in 10..19,70..79,90..99
 
 Long ago when I wanted to compare the plural rules of MediaWiki and
 CLDR I wrote a parser for the CLDR rule format. Unfortunately my
 implementation uses regular expression and eval, which makes it
 unsuitable for production. Now, writing parsers is not my area of
 expertise, so can you please point me how to do this properly with
 PHP. Bonus points if it is also easily adaptable to JavaScript.
 
 [1] 
 http://unicode.org/repos/cldr-tmp/trunk/diff/supplemental/language_plural_rules.html
 
  -Niklas

You may already know this, but santhosh is working on a parser[1] in JavaScript
(as a node module, to be specific). I added a test suite to his repository,
ready to be expanded and built upon!

-- Krinkle

[1] https://github.com/santhoshtr/CLDRPluralRuleParser

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] How to write a parser

2012-06-20 Thread Krinkle
On Jun 21, 2012, at 7:13 AM, Krinkle wrote:

 On Jun 20, 2012, at 1:02 PM, Niklas Laxström wrote:
 
 No, this is not about a wikitext parser. Rather something much simpler.
 
 Have a look at [1] and you will see rules like:
 n in 0..1
 n is 2
 n mod 10 in 3..4,9 and n mod 100 not in 10..19,70..79,90..99
 
 Long ago when I wanted to compare the plural rules of MediaWiki and
 CLDR I wrote a parser for the CLDR rule format. Unfortunately my
 implementation uses regular expression and eval, which makes it
 unsuitable for production. Now, writing parsers is not my area of
 expertise, so can you please point me how to do this properly with
 PHP. Bonus points if it is also easily adaptable to JavaScript.
 
 [1] 
 http://unicode.org/repos/cldr-tmp/trunk/diff/supplemental/language_plural_rules.html
 
  -Niklas
 
 You may already know this, but santhosh is working on a parser[1] in 
 javascript
 (as a node module, to be specific). I added a test suite to his repository. 
 Ready
 to be expanded and build upon!
 
 -- Krinkle
 
 [1] https://github.com/santhoshtr/CLDRPluralRuleParser
 


It would be nice if there were an official test suite to use as input for it,
so we don't have to maintain the test suite manually.

Also useful link, syntax specification:
http://unicode.org/reports/tr35/#Language_Plural_Rules

-- Krinkle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Bugzilla database replication

2012-06-07 Thread Krinkle
On Jun 7, 2012, at 4:15 PM, John wrote:

 https://bugzilla.wikimedia.org/show_bug.cgi?id=28339 has been just sitting
 their stale for quite a while. I know as a toolserver user, that there is a
 potential for a lot of useful tools. Who do I need to bribe or murder in
 order to facilitate this process?
 
 John
 

This is not as easy as setting up replication for other databases, because
it is set up differently and there are special privacy matters to think of.

Meanwhile, may I remind you that BugZilla actually does have an API,
which is also accessible from the Toolserver.

It is a little complicated to use, but provides a lot of features.

-- Krinkle



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Merge of wikitech and labsconsole

2012-06-07 Thread Krinkle
On Jun 7, 2012, at 10:49 AM, Ryan Lane wrote:

 I'm not sure if it makes sense to have the Labs/OpenStack/Nova management
 interface on this same new wikitech wiki though. This means that all the
 community projects running inside labs will/might use this same wiki to 
 document
 their internal structure - which can (and should be) a lot of projects that 
 are
 not Wikimedia engineering projects.
 
 Documentation for labs as being a Wikimedia project makes sense, but the 
 actual
 projects inside and management maybe don't fit well inside the new wikitech. 
 I
 like that of the labsconsole.
 
 
 Do you mean they aren't *staff* engineering projects? Labs is meant to
 be a stepping stone. For most projects, the idea is that people will
 implement something in Labs and it'll get moved into production. The
 documentation for that project will then be the documentation for Labs
 and production.
 
 One of the biggest reasons I wanted to merge the wikis is because I
 feel that volunteer operations engineers should be documenting their
 infrastructure changes in the same place as staff operations
 engineers.
 
 - Ryan

No, that's not what I meant.

Contributions (from whomever) to, for example, the production cluster puppet
repositories (through gerrit), perhaps with an RFC on the wikitech wiki ahead
of time, sound awesome. Stuff can be proposed by whomever, and then implemented
by whomever. Then tested in labs and merged/pushed to production.

I was referring to projects that will not be foundation engineering projects, or
at least do not intend to be that.

*cut 2 paragraphs*

...when trying to come up with examples, it turns out that those examples
(Tool-Labs: early extension development, bot hosting, slow-query tools, ..)
probably wouldn't put their documentation on either wikitech or labsconsole, so
never mind.

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Merge of wikitech and labsconsole

2012-06-06 Thread Krinkle
There's been some conversation about this in the past (not sure on which mailing
list). Yes, we should definitely centralize documentation about these into one
wiki:

* Documentation of production cluster (e.g. 'fenari', 'srv###', 'db###',
upload/scalers, squids etc. )
* Wikimedia engineering projects (status updates, team members etc.)
* Workflow for operations (how to deploy, how to use puppet, ...)
* ..

These are split over labsconsole, wikitech, metawiki and mediawikiwiki.

I'm not sure if it makes sense to have the Labs/OpenStack/Nova management
interface on this same new wikitech wiki though. This means that all the
community projects running inside labs will/might use this same wiki to document
their internal structure - and that can (and should) include a lot of projects
that are not Wikimedia engineering projects.

Documentation for Labs itself as a Wikimedia project makes sense, but the actual
projects inside it, and their management, maybe don't fit well inside the new
wikitech. I like that part of labsconsole.

Also, a general note: beware that you don't confuse beta with Wikimedia
Labs. beta is one of many projects hosted inside the Labs environment. The
beta project is a virtual clone of the production cluster. Any documentation
regarding beta will become obsolete as soon as it has finished reproducing
the production cluster (at which point the wikitech docs are all that is
relevant, and any minor details specific to their application within beta
fit on a small page[1] in labsconsole).

-- Krinkle

[1] simple pages such as https://labsconsole.wikimedia.org/wiki/Deployment/Help
although probably under a different name.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] git review version update, May 2012

2012-05-27 Thread Krinkle
I couldn't get it to update (Mac OS X).

Reported in Bugzilla: https://bugzilla.wikimedia.org/37135


-- Krinkle

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] ResourceManager and Javascript in Mediawiki

2012-05-25 Thread Krinkle
On May 25, 2012, at 10:37 PM, Derric Atzrott wrote:

 Hmmm does not appear to work for me.  I added window.alert(Test); to
 the JavaScript file to make sure.
 
 The relevant sections of each file are listed out below.  All of these
 sections are in sections of their respective files that are being executed.
 
 SpecialMasterPlanForm.php: http://pastebin.com/qKmTUvYd
 MasterPlanForm.php: http://pastebin.com/GaqhP55g
 js/ext.MasterPlanForm.core.js: http://pastebin.com/sNcbrYRW
 

Those files look generally OK.

Are you sure that Special page is working (is the HTML output working)?

A few things that may help you:
* Enable debugging in LocalSettings.php [1] (see the sketch below)
* Check your PHP error_log and/or your Apache error log
* Check the browser console; perhaps there is an exception thrown by
ResourceLoader on the client side?
* Enable debug mode in the front-end by appending [?]debug=true to the url,
then check the browser console again.
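
As a minimal sketch of that first step (the path and the exact set of settings
are just examples; see the manual page for the full list):

  // In LocalSettings.php - for debugging only, not for production wikis
  error_reporting( -1 );
  ini_set( 'display_errors', 1 );

  $wgShowExceptionDetails = true;        // full exception details instead of a generic error
  $wgDebugLogFile = '/tmp/mw-debug.log'; // MediaWiki's own debug log
  $wgResourceLoaderDebug = true;         // serve modules unminified and individually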

-- Krinkle

[1] https://www.mediawiki.org/wiki/Manual:How_to_debug

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] The bugtracker problem, again

2012-05-22 Thread Krinkle
On May 22, 2012, at 12:17 AM, Tomasz Finc wrote:

 We tried the milestones and they were worse then tracking bugs.
 Sharing urls like this
 
 https://bugzilla.wikimedia.org/buglist.cgi?list_id=117138resolution=---resolution=LATERresolution=DUPLICATEquery_format=advancedtarget_milestone=1.2%20releaseproduct=Wikipedia%20App
 
 vs
 
 https://bugzilla.wikimedia.org/show_bug.cgi?id=36745 is a pain. You
 could save the search but then its only available to you. Quickly
 seeing what's resolved/blocking your release without having to go back
 to advanced search is key.
 


The findability of such a url is important, but the length or prettiness should
imho not be a factor. One could even argue that the longer one is actually more
usable because it contains the relevant product and version (rather than a
seemingly arbitrary bug ID) - and makes it possible to manipulate or construct
the url by hand (whether or not assisted by autocompletion in the address bar).

We can build portals, share bookmarks, create wiki pages with links, customize
the bugzilla sidebar, set links as default search for everybody and what not,
use shortcuts in mw-bot/wm-bot, shorten them with a url-shortener to a
human-memorable url (e.g. http://tinyurl.com/mw-12-open). Lots of options for
sharing.

Also, for MediaWiki core we're going more and more towards rolling releases.
Bugs and features will be scheduled and prioritized, but it is rare for
something to truly block a release. Sure we can schedule a feature for a
version, but if we don't make it, the release cycle will likely overrule the
feature implementation (or bug fix) and the bug is re-scheduled (depending on
why it wasn't done it will probably be moved to the next release, wontfixed
or perhaps left unscheduled until there is more interest or an assignee
available).

For myself I created a little portal to make working with our current workflow
easy (for MediaWiki core that is).
https://toolserver.org/~krinkle/wmfBugZillaPortal/


On May 22, 2012, at 4:05 AM, Yuvi Panda wrote:

 Also, Tracking bugs let us make comments about the release itself, something 
 that milestones don't. 

Since we're going more towards rolling releases I don't think we really need
such a place for MediaWiki core releases. Meta-discussion about the use of
milestones in general doesn't belong there anyway. And neither does discussion
about the release process. So what kind of discussion would take place on a
MediaWiki 1.X.Y release (tracking) bug that shouldn't take place on
wikitech-l, mediawiki-l, [[mw:Talk:MediaWiki 1.X]]  or some other place?

The milestones are just the schedule / planning for developers to work from.

For scouting BugZilla, I think this would be the workflow that ops and
developers follow.

Developers: 
* Open bugs in components other than Wikimedia (i.e. MediaWiki core/extensions)
  scheduled for the next deployment
- Cross-component tracker bug for 1.20wmf4: open[1]

* Open bugs scheduled for the currently being developed release
- MediaWiki Milestone: 1.20.0 release: open[2]


Ops:
* General tasks for the current deployment (this list should be empty by the
  time deployment happens, but if things break after deployment that need fixing
  right now, those bugs would be scheduled for the current deployment)
- Wikimedia Milestone: 1.20wmf3: open[3]

* General tasks that need to happen before or at the next deployment window
  (this list has to be empty before deployment. Anything on there either needs 
to
  be done or re-evaluated)
- Wikimedia Milestone: 1.20wmf4: open[4]

I'm curious whether other developers / ops also work (or would like to work)
like this though, and if not, what queries you use a lot. For example, another
way would be to work solely from searches like "Assigned to me" and rely on
someone else to use the above queries and make sure everything high up that
isn't fixed or assigned gets assigned or lowered in priority.

-- Krinkle

[1] [2] [3] [4] These are all linked from 
https://toolserver.org/~krinkle/wmfBugZillaPortal/

[1] https://bugzilla.wikimedia.org/showdependencytree.cgi?hide_resolved=1
[2] 
https://bugzilla.wikimedia.org/buglist.cgi?resolution=---query_format=advancedproduct=MediaWikitarget_milestone=1.20.0%20release
[3] 
https://bugzilla.wikimedia.org/buglist.cgi?resolution=---query_format=advancedproduct=Wikimediatarget_milestone=1.20wmf3%20deployment
[4] 
https://bugzilla.wikimedia.org/buglist.cgi?resolution=---query_format=advancedproduct=Wikimediatarget_milestone=1.20wmf4%20deployment
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] reasons for api json callback restrictions

2012-05-17 Thread Krinkle
On May 17, 2012, at 7:07 PM, Daniel Friesen wrote:

 On Wed, 16 May 2012 19:46:26 -0700, Roan Kattouw roan.katt...@gmail.com 
 wrote:
 
 On Wed, May 16, 2012 at 7:32 PM, Terry Chay tc...@wikimedia.org wrote:
 I thought http://www.mediawiki.org/wiki/Manual:Edit_token protects against 
 this as it is required for an edit: http://www.mediawiki.org/wiki/API:Edit
 
 Not if you can read the data using the Object/Array constructor hacks
 you described. The potential for data leakage includes token leakage,
 and once you get the API to leak a token you can create a hidden form
 on the page that POSTs all the right data (including the token) to the
 action=edit API and call .submit() on the form.
 
 Roan
 
 Actually I don't think the object constructor or getter hacks work.
 
 jQuery('script /', {src: 
 https://en.wikipedia.org/w/api.php?action=queryprop=infotitles=Main%20Pageformat=json}).appendTo('head');
 api.php:1 Uncaught SyntaxError: Unexpected token :
 
 We don't wrap the JSON in ()'s (it would be invalid JSON). And as a result 
 the {} is in a statement scope instead of an expression scope. As a result 
 the JavaScript engine tries to parse this as a block of code rather than an 
 object. Naturally since asdf: is not valid code the JavaScript engine 
 quickly fatals considering this a SyntaxError before it can evaluate anything.
 
 It only works for array because [] doesn't have the ambiguity that {} has.
 
 -- 
 ~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]

Indeed. Browsers that evaluate the JSON response as JavaScript (rather than
JSON, plain text) would quickly fail with a SyntaxError.

For the long-term future, though, since format=json has an application/json
header, browsers shouldn't evaluate it as javascript at all. Mozilla Firefox has
already started to adhere to this practice. Hopefully other browsers will
follow.

Because until then, although an entirely different matter not related to
MediaWiki API, the following would be evaluated in those browsers:

Content-Type:application/json; charset=utf-8

{foo: (function () { var s = document.createElement('script'); /* ... */ }())}

You get the idea..

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] The bugtracker problem, again

2012-05-13 Thread Krinkle
.
And for longer lists, possibly focus on regressions first. That's pretty much
all that matters from an organizational perspective. I plan to create a similar
portal view for the developer perspective of an individual product. There,
component, the bug/feature/regression distinction and milestone would be the
focus of the queries.

If BugZilla made it easier to create advanced searches, such a portal
wouldn't be needed (you'd click a few buttons and have it). Take a look at a
GitHub project issue tracker. All it has is titles, assignee, milestone,
open/closed state and labels. And all of them can be queried from the overview
with a single click.

-- Krinkle


[1] I know that many of these can be disabled in BugZilla's administration
panel, and I think we should indeed start phasing some of them out.
[2] https://toolserver.org/~krinkle/wmfBugZillaPortal/
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Inline styles trouble on the mobile site

2012-05-13 Thread Krinkle
On May 11, 2012, at 11:17 AM, Tei wrote:

 On 11 May 2012 10:24, Ryan Kaldari rkald...@wikimedia.org wrote:
 What about this idea: We could introduce a new CSS class called 'nomobile'
 that functioned similarly to 'noprint' — any element set to this class would
 be hidden completely on mobile devices. If someone noticed a problem with a
 specific template on mobile devices, they could either fix the template or
 set it to 'nomobile'. This would make template creators aware of the problem
 and give them an incentive to fix their inline styles.
 
 
 http://www.stuffandnonsense.co.uk/archives/images/specificitywars-05v2.jpg
 
 I think theres a limitation to that,   .nomobile  .darthvader
 .darthvader   will not work as expected (I think)
 

As far as CSS is concerned this will work just fine. Due to a logic error in 
the mobile site specifically it can fail sometimes. But CSS has no such bug or 
limitation, and in MediaWiki it will work just fine.

Not sure how that link is relevant..

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] The bugtracker problem, again

2012-05-13 Thread Krinkle
On May 14, 2012, at 5:04 AM, K. Peachey wrote:

 On Mon, May 14, 2012 at 12:55 PM, Krinkle krinklem...@gmail.com wrote:
 I'm convinced all other fields can be done without and removing them will
 improve the workflow of the developers and the Bugmeister. Including, but not
 limited to:
 - Platform (Hardware/OS)
 - See also
 - Web browser
 - URL
 These are rarely used and can just be put in the comments. Maybe URL is a 
 nice
 one to have aggregated to the top of the bug view. But even then it is 
 limited
 to a single url only (and has no label). Should still have a comment, so 
 might
 
 URL is nice and used more on the Site request type bugs, I also find
 that it being limited to one url is =(, I know there is the See Also
 field, but for some reasons the BZ devs have decided this should be
 locked to support certain urls only (Unless they have changed their
 view since the last time I went diving in the BZ BZ)

Well, one doesn't need a URL field to link to the local discussion.
One can just put that in a comment. Which many do (even in Site requests).

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] mwSnapshots (NEW)

2012-05-10 Thread Krinkle
On May 10, 2012, at 8:32 AM, Thomas Gries wrote:

 Am 10.05.2012 02:57, schrieb Krinkle:
 The MWNightly tool has been down for a while (since February, around 
 migration to Git), so I took the liberty to write a new tool for this.
 
 https://toolserver.org/~krinkle/mwSnapshots/
 
 Updates hourly, even.
 
 With Git making packages of a repository is a lot easier, but for those 
 without command-line expertise this is still a comfort.
 
 Source code and issue tracker online as usual (see links in the tool)
 
 -- Krinkle
 
 Perhaps you can automatically generate (or show)
 
 - MD5 and
 - SHA1
 
 checksums then ?

Pick a branch (or leave the default master), press Get it! and be surprised!

Okay, I admit this interface is not optimal,
but the page after pressing Get it! really shows those checksums :)

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Inline styles trouble on the mobile site

2012-05-10 Thread Krinkle
)
about securing your application in one of the best ways, while actually being
lazy and not primarily caring about security.

An interesting lesson I learned from that is: address a problem without being
very specific to one particular downside of its cause, because one would apply
best practices for other reasons anyway (practices that happen to naturally also
enforce good security). You wouldn't have to care about security at all to
consider using those practices.

Similarly, back to the mobile subject, those Portal layouts and templates can be
improved in general, not just because they look bad or aren't user friendly on a
mobile device. Some of those are probably also not very usable on a desktop
browser when resizing the window very narrow or when widening it a lot on a
high-resolution monitor: it would either show a scrollbar instead of flowing
the second column of boxes underneath, or (on the large screen) it would make
the two columns very wide instead of letting the boxes underneath flow up
next to the first two on the first row.

This particular example is about a table-based layout vs. row containers with
floating elements.

-- Krinkle


[1] Note that at the time (maybe still today) those [expand]/[collapse] buttons
from those Wikipedia templates are shown regardless of javascript (which is
another problem). So they are broken on mobile without the site scripts.
The absence of site scripts is a known issue, and from what I heard this will be
fixed once MobileFrontend uses the built-in load stack of MediaWiki (with
ResourceLoader), instead of overruling it with a manually maintained stack.


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Wikimedia MediaWiki configuration files are now in a public version control system

2012-05-09 Thread Krinkle
Some of the ops that fulfill shell requests paste diffs into BugZilla comments,
which is awesome.

Hereby a friendly request to those (and others!) to, from now on, paste links to
gerrit change sets (or gitweb commits) - for easy reference :)

-- Krinkle

On May 10, 2012, at 1:55 AM, Patrick Reilly wrote:

 It is fixed now.
 
 — Patrick
 
 On Wed, May 9, 2012 at 4:47 PM, K. Peachey p858sn...@gmail.com wrote:
 
 On Thu, May 10, 2012 at 9:44 AM, Patrick Reilly prei...@wikimedia.org
 wrote:
 We've got passwords in the repo.
 
 
 https://gerrit.wikimedia.org/r/gitweb?p=operations/mediawiki-config.git;a=blob;f=wmf-config/CommonSettings.php;h=ad7c776905e6074a5f415339fdba146ae3f5788a;hb=d4c704fe499bdebf36877c1f3b8cee4a8e9014f7#l2260
 
 Use PrivateSettings.php (or whatever its called for that), and read
 the headers next time  # WARNING: This file is publically viewable on
 the web. Do not put private data here.
 
 CommonSettings.php has been readable for a long time.
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] mwSnapshots (NEW)

2012-05-09 Thread Krinkle
The MWNightly tool has been down for a while (since February, around migration 
to Git), so I took the liberty to write a new tool for this.

https://toolserver.org/~krinkle/mwSnapshots/

Updates hourly, even.

With Git, making packages of a repository is a lot easier, but for those without
command-line expertise this is still a comfort.

Source code and issue tracker online as usual (see links in the tool)

-- Krinkle

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Wikimedia MediaWiki configuration files are now in a public version control system

2012-05-09 Thread Krinkle
On May 10, 2012, at 4:04 AM, Liangent wrote:

 On Thu, May 10, 2012 at 8:45 AM, K. Peachey p858sn...@gmail.com wrote:
 On Thu, May 10, 2012 at 10:41 AM, Krinkle krinklem...@gmail.com wrote:
 Some of the ops that fullfill shell requests paste diffs into BugZilla 
 comments,
 which is awesome.
 Wasn't that only Jeluf(spelling?) that did that, But yes, If you do a
 request and then close it, Please link to the apprioate change set,
 And mention that its ready for merging (and not pushed to the cluster)
 
 Will there by an easy way to track them? eg. a keyword in Bugzilla.
 
 -Liangent

As for the requests themselves, those have always been easily trackable in two 
ways:

* Product: Wikimedia; Component: Site requests
- Open:
  
https://bugzilla.wikimedia.org/buglist.cgi?product=Wikimediacomponent=Site%20requestsresolution=---

- https://bugzilla.wikimedia.org/describecomponents.cgi?product=Wikimedia

* (cross-component) keyword: shell
- Open:
  https://bugzilla.wikimedia.org/buglist.cgi?keywords=shellresolution=---
- https://bugzilla.wikimedia.org/describekeywords.cgi

And now that the repository is public we can also track the actual changes:
* Commits pending review: (currently empty) 
https://gerrit.wikimedia.org/r/#/q/status:open+project:operations/mediawiki-config,n,z
* Actual commit history (accepted changes that are live): 
https://gerrit.wikimedia.org/r/gitweb?p=operations/mediawiki-config.git;a=shortlog;hb=master

-- Krinkle

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] BugZilla Portal (NEW)

2012-05-07 Thread Krinkle
On Fri, May 4, 2012 at 7:29 AM, Antoine Musso hashar+...@free.fr wrote:

 One could most probably write a ton of reports using the JSON API:

  https://bugzilla.wikimedia.org/jsonrpc.cgi

 Doc:

 https://bugzilla.wikimedia.org/docs/en/html/api/Bugzilla/WebService/Server/JSONRPC.html



Yep, however currently I'm hardcoding [1] the versions and milestones
because the API of BugZilla < 4.2 does not expose them (see NOTES[2]).

After bugzilla.wikimedia.org is upgraded to 4.2 I'll let it grab the
versions and milestones dynamically from the API instead, and maybe add
some custom lists as well. That way things won't break if a version or
milestone is renamed and new ones are automatically added.
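
As a rough sketch of what fetching that data dynamically could look like once
the upgrade lands (shown as client-side JavaScript purely for illustration; the
portal itself is PHP, and the method/field names are assumptions based on the
Bugzilla WebService documentation rather than something verified against 4.2):

<code>
// Hypothetical: ask the JSON-RPC endpoint for the Wikimedia product and read
// its versions/milestones from the result (fields assumed to exist in >= 4.2).
var request = {
    method: 'Product.get',
    params: [ { names: [ 'Wikimedia' ] } ],
    id: 1
};

$.ajax( {
    url: 'https://bugzilla.wikimedia.org/jsonrpc.cgi',
    type: 'POST',
    contentType: 'application/json',
    data: JSON.stringify( request ),
    dataType: 'json',
    success: function ( response ) {
        // e.g. response.result.products[0].versions / .milestones
        console.log( response.result );
    }
} );
</code>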

-- Krinkle

[1]
https://github.com/Krinkle/ts-krinkle-wmfBugZillaPortal/blob/c5611c966112d1eb8ff63a5651c7729161cd265a/index.php#L40

[2]
https://github.com/Krinkle/ts-krinkle-wmfBugZillaPortal/blob/553de37ce541b3283126bbe860f45fa7ee6c2295/NOTES#L1
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] BugZilla Portal (NEW)

2012-05-02 Thread Krinkle
Hey all,

Now that we're on a more regular deployment schedule, staying on top of the
blocking bugs and dividing lists into smaller, more manageable chunks is more
and more important.

For that reason I put together a quick tool:

https://toolserver.org/~krinkle/wmfBugZillaPortal/

It is already becoming clear that there is a lot of stuff left behind from past
versions. We should probably start moving stuff to later versions and keep an
eye on it more regularly.

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] BugZilla Portal (NEW)

2012-05-02 Thread Krinkle
On May 3, 2012, at 5:28 AM, Krinkle wrote:

 Hey all,
 
 Now that we're on a more regular deployment schedule, staying on top of the
 blocking bugs and dividing lists into smaller, more manageable chunks is more
 and more important.
 
 For that reason I put together a quick tool:
 
 https://toolserver.org/~krinkle/wmfBugZillaPortal/
 
 It is already becoming clear that there is a lot of stuff left behind from 
 past
 versions. We should probably start moving stuff to later versions and keep an
 eye on it more regularly.
 
 -- Krinkle
 

This tool (among others) is in source control and running on toolserver from 
trunk:

* https://svn.toolserver.org/svnroot/krinkle/trunk/wmfBugZillaPortal/
* https://fisheye.toolserver.org/browse/krinkle/trunk/wmfBugZillaPortal

-- Krinkle

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] @param documentation

2012-04-26 Thread Krinkle
On Apr 26, 2012, at 2:25 AM, Ryan Kaldari wrote:

 Krinkle and I went back and forth on this one last year. Apparently, it's a 
 bit of a bootstrapping problem since all of our comments are currently 
 written the wrong way (due to an old bug in doxygen that has since been 
 fixed), and thus our comment parser expects them to be in the wrong format. 
 Krinkle can elaborate.
 
 I would support moving to the correct syntax though, as we shouldn't keep 
 using the wrong one forever. Plus new hires are going to use the up-to-date 
 syntax anyway (until we train them not to).
 
 Ryan Kaldari
 

What do you mean by "parser expects"? We're just using Doxygen, and Doxygen
doesn't do anything fancy in the HTML output with the values in @param, it
doesn't care about the order. The only thing Doxygen does is parse the whole
text and linkify any classes. But it does that no matter where it is. So in
@param $foo Title it will turn Title into a link to class_Title. But it will
also do that in any of these cases:
* @param $foo Title: foo bar baz quux
* @param $foo Title foo bar baz quux
* @param Title $foo: foo bar baz quux
* @param Title $foo foo bar baz quux
* @param $foo foo bar baz Title quux

It doesn't appear to consider anything to be the type or the $variable.

The only thing Doxygen does is extract the first space-separated part from the
@param line and italicize it and put it in a different table column so that they
align nicely, but whether this is the type or the $variable doesn't matter.

Another aspect that is often inconsistent is that some people prefer to
uppercase primitive types (so instead of string, array, Title, they use
Mixed, Number, String, Array). I find that somewhat confusing, but I'm not
sure if we should enforce that in the conventions.

Few examples of different formats used in core, and how Doxygen parses it.

1)
* Action::factory
- Uses: @param $var type
- Source: 
https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/core.git;a=blob;f=includes/Action.php;h=a25e8aa9ab3698a894f63535d6b6007cbd723044;hb=HEAD#l75
- Doxygen: 
https://svn.wikimedia.org/doc/classAction.html#a1fe6031d033c0721e4e1572938d44990
* Screenshot: http://i.imgur.com/YcJsK.png

I could find only one example of @param type $var that included a class name as
the type and a colon before the description, and it didn't look very good in
Doxygen; no idea what's going on here:

2)
* CategoryViewer::addFragmentToTitle
* Uses: @param Type $var: description
* Source: 
https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/core.git;a=blob;f=includes/CategoryViewer.php;h=dff38028cd0844ded42623c80817029c5f6aa702#l592
* Doxygen: 
https://svn.wikimedia.org/doc/classCategoryViewer.html#ae2fd8eb7171b011437c292b93ff6636b
* Screenshot: http://i.imgur.com/Afuke.png

One other example of @param without a description that didn't break
Doxygen apparently:

3)
* Xml:: languageSelector
* Uses: @param type $var Description
* Source: 
https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/core.git;a=blob;f=includes/Xml.php#l188
* Doxygen: 
https://svn.wikimedia.org/doc/classXml.html#a0d645accd9e487ebfa8d43819ae0d6d4
* Screenshot: http://i.imgur.com/8305q.png

I'm not convinced (yet) that we should migrate to @param type $var:
description. Putting the type after the variable works just fine and we've been
doing it for years, and in at least one case (the second one in the above
summary) the Doxygen parser actually chokes on it for whatever reason.

-- Krinkle

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Commits IDs, change IDs, legacy change IDs, oh my!

2012-04-25 Thread Krinkle
Also note that there are 2 kinds of SHA-1 hashes: there is the Change-Id SHA-1
hash of the merge request that is pushed to gerrit (afaik it stays persistent after
amending patches), and then there is the SHA-1 hash of the commit itself into
the repository.

Quick overview of my observations:

== Gerrit change numbers ==
- Specific to gerrit, requires gerrit to get/resolve them (might become 
problematic in long-term if we drop gerrit)
- Not usable in git cli
* One id per change set, not per patch/commit/amend
+ Easy to link in bugzilla (gerrit (change) #)
+ Link to it can be copied from the address bar in Gerrit

Example links:
* https://gerrit.wikimedia.org/r/5852
* https://gerrit.wikimedia.org/r/#q,5852,n,z
* https://gerrit.wikimedia.org/r/#change,5852

== Gerrit Change-Id (SHA-1) ==
- Specific to gerrit, requires gerrit to get/resolve them (might become 
problematic in long-term if we drop gerrit)
- Not usable in git cli
* One id per change set, not per patch/commit/amend
+ Are included in the commit message
+ Link to it can be copied from the parsed commit message in Gerrit
+ Allows easy tracking of merges of the same change to a different branch, for 
example:
https://gerrit.wikimedia.org/r/#q,Icc924785cdb394adc723666bf9f6a67e9d6a4d0d,n,z
(same Change-Id sha1 for the merge from master to wmf/1.20wmf1, but different 
git commit sha-1)

Example link:
* 
https://gerrit.wikimedia.org/r/#q,I18752aa0fe21dd3c5bc5bd4a830faaa4c836f9cd,n,z

== Git commit sha-1 ==
* Specific to each commit, changes after each amend. But all are traceable to 
the change numbers
+ Not dependent on Gerrit, but gerrit can handle them fine with the #q search
+ Usable in git cli
+ Link to it can be copied from the patch set on the gerrit change page Patch 
set # hash

Example links:
* https://gerrit.wikimedia.org/r/#q,8d6b19d8c2ed041443b9433298aa08a187ad1d83,n,z
* 
https://gerrit.wikimedia.org/r/gitweb?p=mediawiki%2Fcore.git;a=commit;h=8d6b19d8c2ed041443b9433298aa08a187ad1d83

- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Abstract schema work

2012-04-18 Thread Krinkle
On Apr 18, 2012, at 6:09 PM, Greg Sabino Mullane wrote:

 I'm jumping back into the abstract schema project, and was 
 wondering what the workflow is for something like this. Should 
 I simply create a new branch and have everybody (assuming other 
 people have an interest) collaborate on that branch until we 
 are ready to start involving gerrit? Obviously, this will 
 be a large, all-at-once patch once finished.

I'd say put it in Gerrit from the start (in a branch) so that everyone
can check it out and send suggestions (either as a commit or through the
feedback channels on the mailing list, wiki or Gerrit comments).

Gerrit reviews are also enabled for branches, so you don't have to worry
much about clashing with others; a commit to the branch on gerrit will
not end up in the actual branch until it is reviewed.

When it is getting closer to perfection you could push a patchset of the
entire branch as a squashed commit on the master branch to gerrit for
review. Even then there is still plenty of room for people to
cherry-pick that onto their local repositories for testing, and submit
revisions of the patch set to Gerrit etc., until it is approved.


 
 Also, I'm still not clear on which mailing list would be more 
 approriate for discussion of a feature like this. The descriptions 
 of mediawiki-l and wikitech-l both say features and development. 
 I lean towards this list (wikitech) due to the higher traffic.
 Thanks.
 

Starting an RFC page [1] is a good way to centralize the concept for
further collaboration and feedback. Discussion sometimes continues on
wikitech-l, and sometimes on the wiki talk page of the RFC.

-- Krinkle

[1] https://www.mediawiki.org/wiki/RFC
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Policy for one ore more new database tables for extensions

2012-04-17 Thread Krinkle
On Apr 17, 2012, at 9:05 AM, Thomas Gries wrote:

 Hi,
 
 for example, the extension AJAXPoll adds and uses two new database
 tables to a MediaWiki installation.
 This specific extension could be rewritten to use only one new table.
 
 My questions:
 1. Is there a policy, convention, that more than one new table should be
 avoided in extensions ?
 2. Are two or more new tables tolerated?

If it is required, then sure, it's tolerated. Some of the extensions currently
deployed on Wikipedia even have lots more tables.

Of course it goes without saying that if you can optimize the number of tables
without sacrificing performance, then by all means: go for it.

If you could merge the tables and make it still perform well with the right
database indexes, why not :)

On the other hand, if it means the table will be significantly larger, then it
may be better to keep them separate. For example, I'd say it's better to have two
tables (say, 'group' and 'item', where item.it_group refers to group.gr_id), so that
you don't have to repeat all information about the group in each item row, and
if the group has to change, there's no need to change all item rows.

-- Krinkle



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Wikimedia skins directory

2012-04-17 Thread Krinkle
On Apr 17, 2012, at 5:08 AM, MZMcBride wrote:

 Hi.
 
 The URL for CSS used to be something like this:
 
 //bits.wikimedia.org/skins-1.18/monobook/main.css
 
 Now there's apparently this:
 
 //bits.wikimedia.org/skins-1.20wmf1/monobook/main.css
 
 But this also works:
 
 //bits.wikimedia.org/skins/monobook/main.css
 
 Can someone please explain which should be used? Unexpectedly getting rid of
 skins-1.18/ is actively breaking the look of certain pages (such as the www
 portals), but I'm not clear whether I should simply be switching to a new
 MediaWiki version in the URL or if there's something canonical I can use
 instead (which would obviously be preferred).


The MediaWiki skin classes and their resources are bundled with each MediaWiki
release and are not supposed to be separated out like this. And even then, it is
best not to load the raw files directly.

One of the advantages of ResourceLoader is that modules have symbolic names and
the end-user doesn't need to deal with any version specific file names, file
paths or root directory changes.

So loading the 'skins.monobook' module will at any given point in time load that
module in a way that it is compatible with the wiki it is loaded from.

So on www.wikipedia.org, instead of:

<link rel="stylesheet"
href="//bits.wikimedia.org/skins-1.20wmf1/monobook/main.css?303-4"
type="text/css" media="screen, projection" />

Use the same that MediaWiki itself uses (based on sample taken from source code
of https://www.mediawiki.org/wiki/MediaWiki?useskin=monobook )

<link rel="stylesheet"
href="//bits.wikimedia.org/www.mediawiki.org/load.php?debug=false&lang=en&modules=mediawiki.legacy.commonPrint%2Cshared%7Cskins.monobook&only=styles&skin=monobook&*" />

That will be more stable than using the raw file from any directory. First,
because it has no version in it. Secondly, it also doesn't have anything
path-specific in it. So if monobook were to split the css file into multiple files or
rename main.css to screen.css or whatever, the above will still work
because it uses the ResourceLoader module name skins.monobook, and MediaWiki
itself has the module definition for it that contains what it needs and from
where.

This also makes loading the module efficient because it is minified (whereas
loading main.css directly just serves the raw file).

-- Krinkle



___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Policy for one ore more new database tables for extensions

2012-04-17 Thread Krinkle
On Wed, Apr 18, 2012 at 12:16 AM, Roan Kattouw roan.katt...@gmail.com
 wrote:

 On Tue, Apr 17, 2012 at 5:37 PM, Martijn Hoekstra
 martijnhoeks...@gmail.com wrote:
  On Tue, Apr 17, 2012 at 10:51 PM, Krinkle krinklem...@gmail.com wrote:
  On Apr 17, 2012, at 9:05 AM, Thomas Gries wrote:
 
 My questions:
  1. Is there a policy, convention, that more than one new table should
 be
  avoided in extensions ?
  2. Are two or more new tables tolerated?
 
  If it it required, then sure it's tolerated. Some of the extensions
 currently
  deployed on Wikipedia have lots more tables even.
 
  Of course it goes without saying, that if you can optimize the number
 of tables
  without sacrificing performance, then by all means: Go for it.
 
  If you could merge the tables and make it still perform well with the
 right
  database indexes, why not :)
 
  On the other hand, if it means the table will be significantly larger,
 then it
  may be better to keep them separate. For example, I'd say it's better
 two tables
  (say, 'group' and 'item', where item.it_group refers to group.gr_id).
 So that
  you don't have to repeat all information about the group in each
 item-row, and
  if the group has to change, no need to change all item-rows.
 
  -- Krinkle
 
 
  Am I reading this right as suggesting and encouragement of database
  denormalisation in extensions?
 
 Ignore what Krinkle said. We DO NOT encourage denormalization, except

where necessary for performance reasons.

 Your extension should have a sanely designed database schema which is
 normalized in as far as it makes sense. Don't feel bad about creating
 too many or too few tables, just try to design the schema the sanest
 way you can.


Who said anything about denormalization[1]? Maybe I'm missing something here,
but I think we're saying the same thing.

What I meant (and thought I made clear) was that one should put a little bit of
thinking into the database design, using as many or as few tables as it needs to
work well. Preferably avoiding duplication of information, by splitting it into
separate logical tables (such as the 'group' / 'item' example I mentioned, which
is quite common in MediaWiki and in pretty much any other major SQL-backed web
application). Maybe my description of the merge was a bit too vague, but let
me elaborate on what I meant.

I wanted to add to the discussion that creating separate tables is not
inherently good or bad in itself. Sometimes it makes sense to use fewer tables,
sometimes it makes sense to use more tables. In the above-cited mail I mentioned
a group/item relation where it is best to keep them in separate logical tables.
Here is an example where not splitting it up might make sense: a system for
managing lists with items of a certain type (where the types are variable). Then
it may make more sense to have a single table for the list items (with a column
indicating the item type), a table for lists, and a table for types. So, only
one table for items with a column to indicate the item type, rather than having a
separate item table for each item type. Again, it depends on the situation and
on how variable "variable" is.

-- Krinkle

[1] https://en.wikipedia.org/wiki/Denormalization
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MediaWiki.org syntaxhighlight and div of syntaxhighlight lang=php broken

2012-04-14 Thread Krinkle
Bug dupe of https://bugzilla.wikimedia.org/show_bug.cgi?id=35875

Fixed, awaiting review / deployment.

-- Krinkle

On Sat, Apr 14, 2012 at 8:15 AM, Thomas Gries m...@tgries.de wrote:

 Has someone recently changed something with the source lang=php
 extension, style ?
 Layout is broken

 filed as
 *Bug 35968* https://bugzilla.wikimedia.org/show_bug.cgi?id=35968
 -syntaxhighlight layout broken on MediaWiki.org : rendered as i) an
 empty line with borders before and ii) missing border box and background
 around the code (edit https://bugzilla.wikimedia.org/process_bug.cgi#)
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] You are invited to join the wikiHow source code cleanup project

2012-04-07 Thread Krinkle
Replies inline.

On Wed, Apr 4, 2012 at 11:33 PM, Jack Phoenix j...@countervandalism.net wrote:

 To those who don't know me yet, hi! I'm Jack Phoenix, and I've been a
 MediaWiki developer since May 2008.


Hi Jack, nice to hear from you again :)


 Yep... the site's running MediaWiki 1.12 (!), which is
 four years old. In software development, four years is an eternity.


 Just 4 years? I thought it was older but you're right. 1.12 is from 2008
and it does seem an eternity ago.


 Fortunately wikiHow publishes their source code at
 http://src.wikihow.comand a new dump will be generated each Monday.


For those wondering, the URL is http://src.wikihow.com (without the "and"),
and I'd like to add that this awesome service has been up for quite a
while (it is not new); it's been up for at least a few years now, and
doing great! However they are static (.zip) dumps of the wikihow source
code. Although wikiHow does use SVN internally, the actual repo is not
public.


 (User:Lcawte) created a Google Code repository for the project.
 You can see the project page at http://code.google.com/p/wikihow/


Aside from the importing of the weekly dumps of the real source code,
I'm not sure what is going on there or why there is a copy of MediaWiki
1.18 in there. If the goal is to basically re-construct wikiHow on a
MediaWiki 1.18 base instead of 1.12 (without core hacks this time), then
why the copy of MediaWiki? Just an idea, but maybe only keep the
following in the new repository, with instructions for users that are
helping out to install MediaWiki core, then check out the repo and
symlink and/or include these from the wikihow repo:

* ./extensions/* [all awesome wikihow-made extensions]
* ./extensions/WikiHow/WikiHow.php [custom settings of wikihow sites]
* ./skins/WikiHow

Maybe even on top of MediaWiki core master instead of the latest
release so that if you need any additional hooks in MediaWiki core
(which likely will be the case on several occasions), you (or someone
else) can propose them and after they're reviewed/merged into core you
can use them right away.

-- Krinkle

PS: Regarding git, if you're afraid of Gerrit but have no problem with
using a third party for hosting (Google Code in this case), you could
also try GitHub, which tends to be a very friendly introduction to Git
for most people I know. Especially the concept of pull requests is very
well thought through there. And as a bonus, there is no dependency on
GitHub since, contrary to SVN, everybody has the entire repository, so
you can work offline and maybe even one day host it elsewhere.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] GSOC proposal: Native application uploader

2012-04-06 Thread Krinkle
On Apr 5, 2012, at 1:40 AM, Platonides wrote:

 Hello all,
 I'm presenting a GSOC proposal for a native desktop
 application designed for mass uploading files on
 upload campaigns.
 
 This follows the call by Odder at [1] for such a tool,
 and indeed the scope of the tool would be tailored to
 WikiLovesMonuments.
 
 The deliverable is such an application, which shall be:
 * A tiny autocontained program (probably in C++), with
 different versions for each target operating system.
 * Configurable defaults for uploading to Wikimedia Commons
 own images as cc-by-sa with given templates and categories.
 * The user shall be able to change the license / categories
 if needed.
 * Request the monument id for the image.
 * Validation of the monument identifier through a web
 service if available and time permits.
 * Basic documentation of the competition (rules and FAQ)
 * Contains the WLM logo somewhere.
 * Localisable through translatewiki.net for at least the
 28 countries of [2]
 * Save configuration of images description for later upload.
 * Asynchronous upload of the images in the background.
 
 Opinions? Improvements? Sexy names? Mentors?
 
 All of them are welcome!
 
 1-
 http://lists.wikimedia.org/pipermail/wikilovesmonuments/2012-March/002538.html
 2-
 http://commons.wikimedia.org/wiki/Commons:Wiki_Loves_Monuments_2012/Participating_countries
 

Blame me for loving front-end technology, but maybe one of these ideas
is useful to you:

* Not WLM specific internally, please (instead it could come with a
  number of modes, possibly extendable with plugins)

* Perhaps not a desktop application at all (nothing more mobile and
  future proof than the web[1]). Something like a MediaWiki extension or a
  standalone web application. Or extend / improve UploadWizard.

* If none of these, perhaps you can be persuaded to go for a hybrid,
  look at Adobe AIR. With AIR you can use HTML/CSS/JS but not deal with
  traditional web browsers. Instead it runs as a native application, also
  very flexible and cross-OS. And no cross-browser issues since the only
  engine it'd run on is that of AIR (uses WebKit). With AIR it still has
  most desktop application possibilities such as caching files locally,
  updating the application periodically, storing preferences, accessing
  the file system, detailed I/O, and upload/download progress
  meters, etc.

-- Krinkle



[1] disclaimer, disclaimer, ..




___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] I'd prefer that you didn't submit this

2012-03-30 Thread Krinkle

On Mar 30, 2012, at 2:24 AM, Tim Starling wrote:

 On 29/03/12 00:10, Chad wrote:
 Hi everyone,
 
 There's been some comments that the phrasing for a -1 vote in
 Gerrit (I'd prefer that you didn't submit this) is kind of personal
 and we can do better.
 
 I did some testing and this is totally configurable :) It won't change
 for old comments that were already submitted, but we can pick
 some nicer wording going forward.
 
 I really don't have any good suggestions for this, so I'm opening
 this up to the list for a bit of good old fashioned bikeshedding.
 
 I don't really want Gerrit putting words into my mouth regardless of
 how nice they sound. There will always be cases where the phrase is
 inappropriate and offputting, regardless of which one you choose.
 
 How about Set code review score to -1? Then a more personal message
 can be typed by the human doing the review.
 
 -- Tim Starling

I couldn't agree more. So far all proposals carry implications that sometimes
simply aren't appropriate. Either they leave no room for fixing it (Don't
submit it), or are too focused on fixing something small while
implying the overall intention is wanted (Needs improvement), etc.

Just say what you want to say in a comment; the numbers don't add up and
are only a brief summary (also note that you can submit a different score
at any time and it will replace your previous score).

Can we just set it to an empty string and let the numbers and hand-written
comment speak for themselves?

-- Krinkle

On Mar 29, 2012, at 11:23 PM, Krinkle wrote:

 +1 for There is a problem with this patchset
 
 (without , please improve).
 
 I think that keeps it more neutral without saying anything the user doesn't
 intend to say. It also leaves the intention ambiguous (to be disambiguated
 in a comment) between 'wontfix' and 'fixme'.
 
 -- Krinkle

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] I'd prefer that you didn't submit this

2012-03-30 Thread Krinkle
On Mar 30, 2012, at 3:07 PM, Chad wrote:
 On Fri, Mar 30, 2012 at 8:49 AM, Krinkle krinklem...@gmail.com wrote:
 Can we just set it to an empty string and let the numbers and hand-written
 comment speak for themselves?
 
 
 I think this will be more confusing. You need some text for
 the radio field.
 
 In any case these summaries are not meant to replace a
 comment and I've never implied that they should. You
 should always take time to explain your review, especially
 if it's a -1/-2.
 
 -Chad

 On Fri, Mar 30, 2012 at 8:49 AM, Krinkle krinklem...@gmail.com wrote:

 Can we just set it to an empty string and let the numbers and hand-written
 comment speak for themselves?

Yes, the radio buttons would still have the +2/+1/0/-1/-2, just like they
do after submission.


On Mar 30, 2012, at 3:09 PM, Chad wrote:
 On Fri, Mar 30, 2012 at 8:49 AM, Krinkle krinklem...@gmail.com wrote:
 I couldn't agree more. So far all proposal make implications that sometimes
 simply aren't appropriate. Either they leave no room for fixing it (Don't
  submit it), or are too much focused on fixing something small, but
  implying the overall intention is wanted (Needs improvement) etc. etc.
 
 
 Perhaps we could go back to the all-encompassing fixme. If
 we did that, I'd suggest adjusting the other ones to one-word
 summaries as well.
 
 -Chad


I'm not sure fixme is entirely appropriate either :D

A gerrit review rejection (which is what a down vote suggests if it is
unfixable or not fixed) effectively covers both fixme and reverted
(when compared to how we review SVN).

fixme is no different than Needs improvement (except stronger, maybe).

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] jQuery update policy

2012-03-30 Thread Krinkle
On Mar 30, 2012, at 10:14 AM, Amir E. Aharoni wrote:

 Hi,
 
 I made a little localization fix to the jQuery.ui datepicker, which is
 used by the Upload Wizard. I submitted it upstream through GitHub and
 it was merged there.
 
 Krinkle says that jQuery is supposed to be only modified upstream, and
 that is a Good Thing. What is our policy for actually merging upstream
 jQuery changes to MW code?

I wouldn't go as far as calling it a policy, but I'd recommend we don't do
merging of any kind with upstream libraries.

Only update to official (minor or major) releases.

So next time they release, we update the copy in master and from there we
make sure things are still compatible and the unit tests pass.

If they consider it an important fix, they'll make a minor release soon,
and else we'll have wait for them to release.

If they refuse to release (or if the maintainer isn't active anymore), then
we could consider forking it entirely and merging our proposed upstream
fixes to master ahead of time (like we did with the jQuery Tipsy plugin), but
fortunately that isn't a concern for jQuery UI :)

-- Krinkle
 

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] I'd prefer that you didn't submit this

2012-03-29 Thread Krinkle
+1 for There is a problem with this patchset

(without , please improve).

I think that keeps it more neutral without saying anything the user doesn't
intend to say. It also leaves the intention ambiguous (to be disambiguated
in a comment) between 'wontfix' and 'fixme'.

-- Krinkle

On Mar 29, 2012, at 4:07 PM, Chad wrote:

 On Thu, Mar 29, 2012 at 9:42 AM, Jon Robson jrob...@wikimedia.org wrote:
 +1 for There is a problem with this patchset, please improve.
 
 
 Alright, sounds good to me. Thanks for the input everyone.
 I'll get this fixed soon.
 
 -Chad
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Email addresses for wikimedians

2012-03-29 Thread Krinkle
Excuse me, but it seems this avoids the actual problem.

Although I know the answer for the most part, the question I think we should be
asking is more along the lines of Why are they using email
instead of on-wiki discussion threads?

I personally don't see a lot of gain in providing mail aliases for users.

@wikipedia.org seems too Wikipedia-specific and may indeed look too official.
@something.wikipedia.org may not scale either because we also have projects on
other hostnames (wiktionary, wikibooks, but also species, commons, meta and
other *.wikimedia.org wikis).

Also, when we do find a good hostname to use, I think it should be universal and
tied to a SUL username (not per-wiki or per-project), so it shouldn't contain
the name of a project (wikipedia, wiktionary, commons, ..) and not the name of
the software (mediawiki). Something like users.wikimedia.org might be
appropriate.

If that is done though, would it be an alias (forward) address or would it allow
sending (IMAP/POP3)? The latter would probably also cost a significant amount of
storage over time, so alias/forward is probably better.

However that means that after you reply, your original e-mail address is visible,
in which case there is no advantage to using an alias over simply using
[[Special:EmailUser]], which is effectively also an alias for the first mail.

-- Krinkle

On Mar 29, 2012, at 9:41 PM, Petr Bena wrote:

 Hi,
 
 Lot of volunteers are using email to communicate when they discuss
 wikimedia related issues. Even if it's not a big problem to use
 personal email there, lot of people, especially administrators do not
 want to uncover their personal email. Lot of them even have a special
 private mail for wikipedia purposes. Although wikimedia has own email
 server on wikimedia domain, it's being given to paid staff only, so
 question is if it would be worth of having an email service for
 volunteers who request it (it should be probably limited to users who
 match some criteria) or just a forward service which can help us to
 get our personal email hidden (this would not eat space and would be
 very cheap). I understand that people from foundation might have
 concerns that volunteers with wikimedia.org emails could cause some
 troubles or people might be in thought they are employees, so why not
 to use some another domain, like wmflabs.org for developers and
 wikipedia.org for wikipedians for example (other domains for
 respective projects). What do you think?
 
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Using prototypical inheritance in dynamically loaded ResourceLoader modules

2012-03-19 Thread Krinkle
offtopic

On Mon, Mar 19, 2012 at 9:35 AM, Daniel Friesen
li...@nadir-seen-fire.com wrote:

 On Mon, 19 Mar 2012 00:40:54 -0700, Dmitriy Sintsov ques...@rambler.ru
 wrote:
 var jqgmap = [];

 for ( var mapIndex in jqgmap ) {


 This is VERY bad JavaScript coding practice. Please use $.each().


This is rather exaggerated. Even more so when looking at that suggestion.

Arrays should, indeed, not be enumerated with a for-in loop. Arrays in JS
can only contain numeral indices, so they should simply be iterated with a
simple for-loop like this `for (i = 0; i < myArr.length; i += 1) { .. }`
(maybe cache length for slight performance gain by reducing property
lookups).

Using $.each has overhead: when given an array it just does a simple for-loop
with a counter, plus 1+n additional function invocations and context creations.
In most cases there is no use for it. There is one case where it is handy, and
that's when you specifically need a local context for the loop (being careful
not to create later-called functions inside the loop, risking all variables
being in the post-loop state). If you don't need a local scope for your loop,
then using $.each (or the native [].forEach in later browsers) is pointless, as
it only adds function invocations and lowers your position in the scope chain.
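
To make the comparison concrete, here is a minimal sketch of the two patterns
(purely illustrative):

<code>
var i, len,
    myArr = [ 'foo', 'bar', 'baz' ];

// Plain indexed loop; caching the length avoids repeated property lookups.
for ( i = 0, len = myArr.length; i < len; i += 1 ) {
    console.log( myArr[ i ] );
}

// $.each does the same thing, at the cost of one function invocation per item.
// Useful mainly when you want a fresh local scope for each iteration.
$.each( myArr, function ( index, value ) {
    console.log( index, value );
} );
</code>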

When iterating over objects (not arrays), however, $.each is no better
than a for-in loop because (contrary to what some people think) it is not a
shortcut for a for-in + if-hasOwn wrapper. When an object is passed, it
literally just does a plain for-in loop invoking the callback with each
value. jQuery does not support environments where someone extends the
native Object.prototype because that is considered harmful (and therefore
MediaWiki inherently does not support that either), so a plain for-in loop
over an object (excluding array objects) is perfectly fine according to our
conventions.

See also http://stackoverflow.com/a/1198447/319266

 but so much for the good (and bad, evil) parts of javascript :D

-- Krinkle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Using prototypical inheritance in dynamically loaded ResourceLoader modules

2012-03-19 Thread Krinkle
offtopic

On Mon, Mar 19, 2012 at 2:23 PM, Krinkle krinklem...@gmail.com wrote:

 On Mon, Mar 19, 2012 at 9:35 AM, Daniel Friesen li...@nadir-seen-fire.com
  wrote:

 On Mon, 19 Mar 2012 00:40:54 -0700, Dmitriy Sintsov ques...@rambler.ru
 wrote:
 var jqgmap = [];

 for ( var mapIndex in jqgmap ) {


 This is VERY bad JavaScript coding practice. Please use $.each().


 This is rather exaggerated. Even more when looking at that suggestion.

 Arrays should, indeed, not be enumerated with a for-in loop. Arrays in JS
 can only contain numeral indices, so they should simply be iterated with a


s/can/should/: arrays should only contain numeral indices. Arrays are just
objects, so they can indeed contain anything, and inherit functions. Also note
that a for-in loop on arrays will return the keys as strings, not numbers:
`var a = ['foo', 'bar']; for (var b in a) {}; console.log(typeof b, b /*
string, 1 */);`


-- Krinkle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] JS Iteration (was: Using prototypical inheritance in dynamically loaded ResourceLoader modules)

2012-03-19 Thread Krinkle
ontopic :) 
On Mon, Mar 19, 2012 at 3:03 PM, Dmitriy Sintsov ques...@rambler.ru wrote:

 On 19.03.2012 17:23, Krinkle wrote:


 On Mon, Mar 19, 2012 at 9:35 AM, Daniel Friesen
 li...@nadir-seen-fire.com wrote:

 On Mon, 19 Mar 2012 00:40:54 -0700, Dmitriy Sintsov ques...@rambler.ru
 wrote:
 var jqgmap = [];

  for ( var mapIndex in jqgmap ) {

  This is VERY bad JavaScript coding practice. Please use $.each().

  This is rather exaggerated. Even more when looking at that suggestion.

 Arrays should, indeed, not be enumerated with a for-in loop. Arrays in JS
 can only contain numeral indices, so they should simply be iterated with a
 simple for-loop like this `for (i = 0; i < myArr.length; i += 1) { .. }`
 (maybe cache length for slight performance gain by reducing property
 lookups).

  My array is numeric but sparse,
 myArr = [];
 myArr[0] = a;
 myArr[1] = b;
 So I cannot just use incremental key iteration, at least existence of
 element should be checked.


If you use an array, use `myArr[myArr.length] = value;` or
`myArr.push(value);` to add something to it, and `.splice(..)` to remove
something from it (`.slice(..)` only returns a copy). Never set a key directly or
remove a key directly.

If you want a non-linear array, create an object instead. JavaScript allows
to do this (because Array is just an extension of Object in javascript),
but that doesn't mean you should.

If you need non-linear keys, don't create an array!

<code>
var myObj = {}; // not []
myObj.property = value;

var customProp = getPropName();
myObj[customProp] = value;

for ( var key in myObj ) {
  // myObj[key]
}
</code>

-- Krinkle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] Continuous integration workflow post-Git migration

2012-03-17 Thread Krinkle
Hey all,

I've created a thread at MediaWiki.org describing a proposed workflow for
how continous integration works and how it affects the Gerrit workflow
after we finished the migration to Git. Most if not all of the points were
are already discussed, but I wanted to make sure this is indeed what we
want and also to have it properly documented.

Some of it is already being brought into production as we speak.

Feel free to discuss it either on the talk page[1] or here on wikitech-l
and the workflow is also posted to this wiki page:
https://www.mediawiki.org/wiki/Continuous_integration/Workflow
so feel free to edit it as you would any other wiki page.

-- Krinkle

[1] 
https://www.mediawiki.org/wiki/Talk:Continuous_integration#Proposal_for_continuous_integration_.28post-Git_migration.29_13216
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Auto-created gerrit accounts for users that had enough info in USERINFO

2012-03-17 Thread Krinkle

On 17 mrt. 2012, at 23:28, Daniel Friesen li...@nadir-seen-fire.com wrote:

 So, people obfuscate their USERINFO in ways that any half decent bot 
 programmer could get around but no-one even bothers scanning. While the 
 moment they try to post a bug they expose their e-mail in a place that we 
 know bots do look through

Not to mention the public archives of lists.wikimedia.org :-)

--Krinkle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Re-introducing UNCONFIRMED state to bugzilla

2012-03-15 Thread Krinkle

On Mar 16, 2012, at 12:52 AM, Mark A. Hershberger wrote:

 Rob Lanphier ro...@wikimedia.org writes:
 
 Mark and I just discussed this, and he's going to look into having a
 CONFIRMED state.
 
 https://bugzillaupdate.wordpress.com/2010/07/06/bugzilla-4-0-has-a-new-default-status-workflow/
 
 describes this implementation:
 
 UNCONFIRMED -> CONFIRMED -> IN_PROGRESS -> RESOLVED -> VERIFIED
 
 Since UNCONFIRMED is used so many places, I'm worried about changing
 that to NEW.

So up until last month we had this workflow:
1. NEW
2. ASSIGNED[1]
3. RESOLVED
(sometimes) 4. VERIFIED
(or) REOPENED -> step 1

I understand that last week it changed to:
1. UNCONFIRMED
2. NEW
3. ASSIGNED
4. RESOLVED
(sometimes) 5. VERIFIED
(or) REOPENED -> step 2

I don't think it makes sense to use NEW as CONFIRMED, because, agreeing
with [2], NEW is not descriptive. How about using CONFIRMED and dropping
NEW completely?

-- Krinkle


[1] The difference between a confirmed bug having an assignee and status
ASSIGNED is that ASSIGNED means someone has it on their agenda to actively
work on, whereas the assignee in general is just whoever is currently
watching over it. ASSIGNED and IN_PROGRESS are basically the same, except
that IN_PROGRESS is slightly later than ASSIGNED; using both doesn't
make sense and we're already using ASSIGNED.
[2] 
https://bugzillaupdate.wordpress.com/2010/07/06/bugzilla-4-0-has-a-new-default-status-workflow/
[3] http://www.bugzilla.org/docs/4.0/en/html/lifecycle.html
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Secure.w.o in js on wiki sites

2012-02-27 Thread Krinkle
On Feb 17, 2012, at 4:44 AM, Mark A. Hershberger wrote:

 Today, secure.wikimedia.org was offline for a bit and, in my tour of
 Village Pumps after the rollout of 1.19, I saw some problem reports that
 were the result of secure.w.o but were mistakenly attributed to 1.19.
 
 Using Google, I found several instances referring to secure.w.o.  I can't
 fix these, but maybe some of you guys can.
 
 ** https://de.wikipedia.org/wiki/MediaWiki:Common.js
 ** https://en.wikisource.org/wiki/MediaWiki:Gadget-TemplatePreloader.js
 ** https://es.wikisource.org/wiki/MediaWiki:Common.js
 ** https://fr.wikipedia.org/wiki/MediaWiki:Common.js
 ** https://jv.wikipedia.org/wiki/MediaWiki:Common.js
 ** https://pl.wikisource.org/wiki/MediaWiki:Gadget-iw-links.js
 ** https://pt.wikipedia.org/wiki/MediaWiki:Common.js
 ** https://ru.wikisource.org/wiki/MediaWiki:Gadget-urldecoder.js
 ** https://vi.wikipedia.org/wiki/MediaWiki:Common.js
 ** https://zh.wikisource.org/wiki/MediaWiki:Common.js

These have all been taken care of now.

Either
* the redundant code was removed,
* the fix was adjusted/merged with the protocol-relative URL case, or
* some were left alone if they were specifically targeting
  secure.wikimedia.org and are fine doing so as long as the other case is
  protocol-relative.

-- Krinkle


___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] Postponing Git migration until March 21

2012-02-27 Thread Krinkle
On Feb 28, 2012, at 5:44 AM, Rob Lanphier wrote:

 So, there's a machine deployment we need to do as well.  
 The good thing about this
 

The good thing about this… is yes ?

 Thank you everyone for your patience on this transition.
 

Thank you!

 Rob
 
 [1] http://www.mediawiki.org/wiki/Git/Conversion#Unscheduled_items
 [2] 
 https://bugzilla.wikimedia.org/showdependencytree.cgi?id=22596hide_resolved=1
 

-- Krinkle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] MediaWiki RC2UDP feature/bugfix frozen (aka irc recent changes feed)

2012-02-22 Thread Krinkle
Hi,

Recent events unexpectedly broke many wiki bots monitoring
wikis in real time via the IRC recent changes feed (powered by routing
localized strings emitted by MediaWiki's RC2UDP output to an IRC server).

The feed is mostly used by bots which have hardcoded most environmental
variables, and had to do so since MediaWiki never offered a way to get this
parsing information dynamically from an API (i.e. the i18n messages used
and the meaning of the numbered replacement variables).

To avoid future breakages or a mass migration while a replacement[1] is
already on the horizon, I think it's as good a time as any to declare this
feature legacy and therefore feature- and bugfix-frozen until it is
deprecated/superseded by a more modern system[1].

Pretty much the only aspect that is still free to change (and always has been) is
the content of the i18n messages (e.g. it's totally fine if translatewiki
commits a patch that changes [[MediaWiki:1movedto2/de]] from `verschob
„[[$1]]“ nach „[[$2]]“` to `verschieb [[$1]] auf [[$2]]`, which would
affect log comments of German content-language wikis such as in
irc.wikimedia.org/#de.wikipedia), as long as the message is still stored at
message key 1movedto2 and $1 is the origin and $2 is the target. The same goes for
messages like MediaWiki:Revertpage,
MediaWiki:Autosumm-blank and MediaWiki:Autosumm-replace, which aren't
log messages, but are used the same way (the edit summary is parsed and the action
is determined from it).
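
To illustrate what those bots effectively do with such a message (the regex
construction below is just a sketch; only the message key and the meaning of
$1/$2 are what stays stable):

<code>
// Turn an i18n message with numbered placeholders into a regex and use it to
// recognise the action in a feed line.
function messageToRegExp( msg ) {
    // Escape regex metacharacters, then turn $1, $2, ... into capture groups.
    var escaped = msg.replace( /[.*+?^${}()|[\]\\]/g, '\\$&' );
    return new RegExp( escaped.replace( /\\\$\d+/g, '(.+?)' ) );
}

var movedMsg = 'verschob „[[$1]]“ nach „[[$2]]“',
    re = messageToRegExp( movedMsg ),
    match = re.exec( 'verschob „[[Alt]]“ nach „[[Neu]]“' );

// match[1] === 'Alt' (origin), match[2] === 'Neu' (target)
</code>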

I hope we can soon start focussing on the new system [1], start
elaborating on what the needs are, use cases, requirements and come up with
a design specification and implementation.

Related events: bug 34508[2], bug 30245[3].

-- Krinkle

[1] 
https://www.mediawiki.org/wiki/Requests_for_comment/Structured_data_push_notification_support_for_recent_changes
[2] https://bugzilla.wikimedia.org/show_bug.cgi?id=34508#c16
[3] https://bugzilla.wikimedia.org/show_bug.cgi?id=30245
[4] http://etherpad.wikimedia.org/IRCBot-Messages
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MediaWiki 1.19, Resource Loader, Gadgets, and User Scripts

2012-01-21 Thread Krinkle
Hi,

Thanks for bringing this to wikitech-l.

On Sat, Jan 21, 2012 at 4:25 AM, Mark A. Hershberger 
mhershber...@wikimedia.org wrote:


 I adjusted the [[MediaWiki:Gadgets-definition]] on enwiki.beta and let
 the people on enwiki know what I had found, but I think this sort of
 adjustment will be needed in more places.  For proof, just look at
 http://de.wikipedia.beta.wmflabs.org/wiki/Wikipedia:Hauptseite?debug=true
 in FireBug.

 You'll see (or, at least, I see) two un-fulfilled dependencies on
 mw.util.


One exception is thrown due to references to code from the mediawiki.util
module (missing dependency) and two errors due to a missing dependency on
mediawiki.user, which shows that the problem is neither limited nor specific to
the mediawiki.util module.

I've also spotted a fair number of gadgets missing dependencies on
jquery.cookie during the tour. Basically any reference to module code
written by users that didn't know about dependencies is going to fail. As of
writing there are 115 different modules in core that contain one or more
JavaScript files.

Some sort of dependency needs to be added on mw.util -- either just
 preload it or make it log a message when there is an unenumerated
 dependency on it (and other necessary js dependencies).


I think we should attempt to complete the tour (at least the adding of
dependency information) for all wikis before deployment; that's the only real
fix for the problem and should've happened in the first place. If after
deployment it turns out that there is still a large number of scripts
floating around without dependency info, we could add the most common ones
unconditionally to the pre-load startup stack, which currently only contains
jquery and mediawiki.

Such a measure should be temporary, because it would cause confusion if some
modules work without dependency info and others don't; i.e. what if next up
is a bug report about another dozen modules that people often use but that aren't
declared in their dependencies and are causing problems. It should not become
normal that code works without declaring dependencies, especially if new
gadgets are written based on others that don't either. These copied habits
are very common, and are how most users learn what is right. In my perception
this is how the vast majority of JavaScript code has been saved to wiki pages
so far (both to good and bad ends).

Or to put it boldly vice versa: it should be perfectly normal for code to
explode if it's missing a dependency (I'd rather have it explode on save so
it gets fixed right away, than have it silently work because another gadget
happens to be loaded first and provides the required module to the scope).
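
For user scripts and gadgets that can't (yet) declare their dependencies in the
gadget definition, a hedged sketch of the defensive alternative (the module names
and function calls below are only examples of typical consumers):

<code>
// mw.loader.using loads the named modules if needed and only then runs the
// callback, so references to them cannot race ahead of the modules themselves.
mw.loader.using( [ 'mediawiki.util', 'jquery.cookie' ], function () {
    mw.util.addPortletLink( 'p-tb', '/wiki/Special:Random', 'Random page' );
    $.cookie( 'myGadgetSeen', '1' );
} );
</code>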

However such an explosion (or logging a message, as you suggest) about a
missing dependency is unlikely to occur when it should. JavaScript is simply
too dynamic to extract this information. Objects and functions can be aliased,
code can be conditionally loaded, code can be lazy-loaded with inline
dependencies, and more.
And most commonly, modules loaded as a dependency for one module are also
available to all future modules (i.e. if modules A and B both use module C, but
only A declares its dependency on C, then loading A and B will likely
succeed without error because C is only loaded once and when B is executed,
it's already there). To complicate it further, this scenario (only having to
load each module once) is actually a major feature of ResourceLoader, not a
bug.

Some examples here may be exaggerated; I'm not saying they are going to happen
under this or that condition, but I'm being cautious because I'm afraid there's no
harmless way back.

This, plus a message on WP:VPT or the like, would be a way for users and
 gadget authors to update their javascript.  It'd be a great way to notify
 users of the deprecation from 1.18 to 1.20 (or 1.21) without providing a
 horribly shocking experience after we upgrade the cluster.


I agree. I've updated the migration guide[1] with Edokter over the past week,
including common mistakes and their (simple) solutions. So let's spread the
word about it[1] now, so that the work can complete before the 1.19 deployment. We
should mention in the communication that this has been required by ResourceLoader
since 1.17 and becomes extra important with the load queue improvements coming in
1.19, since faster loading makes race conditions in dependency resolution more
likely to surface if dependencies are not declared for all or only a few
gadgets.
Thanks,
- Krinkle

[1] https://www.mediawiki.org/wiki/ResourceLoader/Migration_guide_(users%29
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


[Wikitech-l] [TestSwarm] Mr. Epiphany (Ubuntu 11.04), please leave the swarm

2012-01-17 Thread Krinkle
Hey all,

After two weeks of annoyance I'll just dump this here.
There have been many people who have donated their browser resources to our
TestSwarm[1] continuous integration project for MediaWiki.

First and foremost, thanks to all who've been doing so. Please keep doing
that :)

However there's one little thing I'd like to get rid of:

Someone with Ubuntu has an Epiphany (WebKit-based) browser that is
polluting[4] our swarm with false positives.

Epiphany is not a supported[2] browser, but due to it lying[3] about its
identity it is not blocked from joining the swarm and instead submits
results under Safari.

User agent of suspect:
Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.26+ (KHTML, like Gecko)
Version/5.0 Safari/534.26+ Ubuntu/11.04 (3.0.4-1ubuntu1) Epiphany/3.0.4

Please whoever operates this machine, keep it out of the swarm.

(in the future we might get a system to blacklist this but for now..)

Thanks :)

- Krinkle


[1] http://integration.mediawiki.org/testswarm/
[2] http://www.mediawiki.org/wiki/Compatibility#Browser
[3] http://webaim.org/blog/user-agent-string-history/
[4] http://integration.mediawiki.org/testswarm/user/mediawiki/ (if you
don't see a red column to the right here, it means the problem was fixed in
the mean time)

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] SOPA banner implementation

2012-01-15 Thread Krinkle
On Mon, Jan 16, 2012 at 2:34 AM, Tim Starling tstarl...@wikimedia.org wrote:

 On 15/01/12 06:33, MZMcBride wrote:
  Hi.
 
  Skimming https://en.wikipedia.org/wiki/Wikipedia:SOPA_initiative/Action
 ,
  it seems inevitable that some kind of banner (or blackout banner,
 which is
  apparently equivalent to an extra-large banner) will be implemented.
 
  The question becomes: how will this be implemented? I assume some kind of
  CentralNotice banner with some CSS absolute positioning or something? Is
  that right? Or will it be part of a separate extension?

 I am not aware of any such discussion. I suppose the underlying
 content could be hidden by just overlaying a blackout div with a high
 z-index, but that would cause the content to appear while the site is
 loading, to be removed later, and the scrollbars would be visible.


I've solved the scroll issue in my fork:
https://test.wikipedia.org/?banner=SOPA_blackout_alt
(fork of https://test.wikipedia.org/?banner=blackout)

Using overflow:hidden on the body while the banner is visible.
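
Roughly like this (a simplified sketch of the approach, not the actual banner
code; the element id and styles are illustrative):

<code>
function showBlackout() {
    // Overlay everything and suppress scrolling of the underlying content.
    $( 'body' ).css( 'overflow', 'hidden' ).append(
        $( '<div>' )
            .attr( 'id', 'blackout-overlay' )
            .css( {
                position: 'fixed',
                top: 0, left: 0, right: 0, bottom: 0,
                'z-index': 10000,
                background: '#000'
            } )
    );
}

function hideBlackout() {
    $( '#blackout-overlay' ).remove();
    $( 'body' ).css( 'overflow', '' );
}
</code>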


On Mon, Jan 16, 2012 at 2:34 AM, Tim Starling tstarl...@wikimedia.org
 wrote:

 On 15/01/12 06:33, MZMcBride wrote:
  Primarily I'd like to know if #siteNotice {display:none !important;}
 will
  continue to work. If so, there's no further action that needs to be
 taken.
  If it's going to be put into a weird extension or something, I'd
 personally
  favor an edit count check or a leave me alone user preference. The
  regulars really don't need to be bothered by this obnoxiousness.
 
  And, click-through banner or not, I think obscuring Special:UserLogin is
 a
  poor idea.

 You should raise this on the wiki. I don't see any discussion there
 about whether logged-in users should be allowed to view the site.


The default behavior taken with central notices is a close button
which will set a cookie. Once the cookie is set, the banner
is no longer shown.

Right now the test-wiki banner is using CentralNotice. However
the overlay is inserted outside #centralNotice due to layout constraints.
(although those could perhaps be worked around)

Krinkle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] JavaScript on Special:UserLogin?

2012-01-15 Thread Krinkle
On Thu, Jan 12, 2012 at 11:00 PM, Daniel Friesen
li...@nadir-seen-fire.comwrote:

 On Wed, 11 Jan 2012 14:09:23 -0800, Chad innocentkil...@gmail.com wrote:

  On Wed, Jan 11, 2012 at 4:43 PM, Happy Melon
  happy.melon.w...@gmail.com wrote:
  Yes, no user-editable scripts are run on pages where password forms
  reside,
  because it is trivially easy for users to use them to introduce
  password-sniffing JS attacks, either deliberately or inadvertantly.  Or
  that's the idea, at least; IIRC there's an open bug about gadgets
  running
  somewhere they probably shouldn't, etc.
 
 
  Yep, you're looking at bug 10005[0]. This applies to password reset
  pages,
  preferences (last I checked) and user login.
 
  -Chad
 
  [0] https://bugzilla.wikimedia.org/10005

 That bug appears to be about user-js.

 I used to be in the camp that wanted scripts to not be run on login pages
 for security, and opposed ajax logins on the grounds that scripts would be
 run on the page.

 But awhile ago I learned about the history.pushState api. I found that
 it's almost pointless to hide scripts exclusively from pages with password
 forms on them. Since if a script is run on 'any' page on the wiki it's
 possible to use xhr and pushState together to fake the entire page
 browsing functionality, including the address bar changing as if you
 actually went to the url. So it's possible to hijack the native page
 browsing and make it look like the user went to the user login page when
 in reality the page never changed, the whole login page was actually
 loaded by xhr, and the malicious script is still running ready to swipe
 your password.

 --
 ~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]


Just to point it out explicitly, scripts are not and never were hidden completely
from any page in MediaWiki (at least not for 5+ years).

OutputPage decides what the trust level of queued modules must be for them
to be loaded. SpecialUserLogin and others raise this from the default
"anything" to "not site and below", leaving only core and extensions.

So JavaScript enhancements on the login page would work fine, as long as
their origin is core or extension. I believe a few ajax logins have been
floating around the net in the past already.

I agree the thought behind it is getting dated though. Aside from the malicious
way of faking the user front end, one can actually make calls to the API from
any page directly.

Krinkle
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] MediaWiki 1.18 learnings from a wiki admin extension writer

2012-01-13 Thread Krinkle
On Thu, Jan 12, 2012 at 5:57 PM, Chad innocentkil...@gmail.com wrote:

 On Thu, Jan 12, 2012 at 11:51 AM, Daniel Barrett d...@vistaprint.com
 wrote:
  Me:
  8.  Our MediaWiki:common.js stopped running on the login page. I
 realize this was a security fix; it just took me by surprise.  Fixed by
 writing a custom extension using the hook UserLoginForm to inject the few
 lines of JS we needed, and I'm evaluating other non-JS solutions for more
 security.
 
  Chad writes:
 This hasn't changed any time recently as far as I can tell... we've had
 this in place for quite a while.
 
  Thanks Chad. FYI, MediaWiki:common.js definitely runs on
 Special:UserLogin in 1.17.1, the immediately previous release.
  DanB
 

 Hrm... I distinctly remember users' personal JS being disabled on that page.
 I wonder if ResourceLoader, by grouping the JS, also ends up disabling it.
 In either case, it is a security issue and there's not much we can do about
 it right now.

 -Chad


You're both right. Those special pages have basically always called
OutputPage::disallowUserJs().

In 1.18 Happy-melon implemented something I think we should've had a long
time ago: proper origin recognition on a module/script level. So for each
module it is known where it comes from and to what extent it should be
trusted.

When the security implementation was rewritten from the basic
OutputPage::disallowUserJs to this more elaborate approach (using ORIGIN
constants defined in the ResourceLoader class), it was probably
(unconsciously?) switched from blocking just JS by users to blocking modules
(JS/CSS) by origin = site (which also matches user JS).

I'm not sure if that's how it happened, but that's what I remember, and it
was kept.

Krinkle


Re: [Wikitech-l] should we keep $wgDeprecationWhitelist

2012-01-12 Thread Krinkle
Just my 2 cents:

I don't think we need this kind of deprecation-warning filter in core.

As far as I know deprecation warnings are never shown more than once for
each method, so they aren't going to pollute or obfuscate the error output
when you're working with the code.

And users/production wouldn't be seeing them anyway, right?

And if you maintain compatibility with an older version of MediaWiki, you
might also find it useful to raise $wgDeprecationReleaseLimit occasionally,
to check only whether there's any usage of the more problematic /
longest-deprecated methods, and to lower it again when you want to see them
all.

-Krinkle


Re: [Wikitech-l] Instructions for setting up regression tests on local machine?

2012-01-09 Thread Krinkle
On Fri, Jan 6, 2012 at 8:06 PM, OQ overlo...@gmail.com wrote:

 On Fri, Jan 6, 2012 at 12:56 PM, Dan Nessett dness...@yahoo.com wrote:
  On Thu, 05 Jan 2012 14:03:14 -0600, OQ wrote:
  uninstall the pear version and do a make install.
 
  Didn't work.
 
  # make install
  ./install-phpunit.sh
  Installing phpunit with pear
  Channel pear.phpunit.de is already initialized
  Adding Channel components.ez.no succeeded
  Discovery of channel components.ez.no succeeded
  Channel pear.symfony-project.com is already initialized
  Did not download optional dependencies: pear/Image_GraphViz, pear/Log,
  symfony/YAML, use --alldeps to download automatically
  phpunit/PHPUnit can optionally use package pear/Image_GraphViz (version
 = 1.2.1)
  phpunit/PHPUnit can optionally use package pear/Log
  phpunit/PHPUnit can optionally use package symfony/YAML (version =
  1.0.2)
  downloading PHPUnit-3.4.15.tgz ...
  Starting to download PHPUnit-3.4.15.tgz (255,036 bytes)
  .done: 255,036 bytes
  install ok: channel://pear.phpunit.de/PHPUnit-3.4.15

 Dunno then, it installed 3.6.3 for me. Hopefully somebody here knows a
 bit more about pear :)


I remember having a similar issue when I first installed phpunit on my Mac.
Although I don't know the exact command, I remember having to update some
central installer process of PEAR (a channel?) to the latest version, which
still had an old version in its (local?) database.

Krinkle


Re: [Wikitech-l] JavaScript is interesting (was: Mediawiki 2.0)

2012-01-04 Thread Krinkle
On Wed, Jan 4, 2012 at 7:42 AM, Daniel Friesen li...@nadir-seen-fire.com wrote:

 On Tue, 03 Jan 2012 06:14:36 -0800, Antoine Musso hashar+...@free.fr
 wrote:

  On Fri, 30 Dec 2011 18:31:30 +0100, Krinkle krinklem...@gmail.com
  wrote:
  Since virtually any value other than null and undefined is an object,
  including numbers, strings and functions.
 
  Much like ruby!   http://ruby-doc.org/core/Integer.html
 
 $ irb
  5.upto( 10 ) { |num| print "#{num}ber," }
 5ber,6ber,7ber,8ber,9ber,10ber,=> 5
  print 4.even?
 true=> nil
 
 
  You can change the 'even?' behavior to do something else of course :D
 
 

 ;) Oh no, in Ruby EVERYTHING is an object, there is no 'virtually' or
 'almost'.

  nil.class
 => NilClass
  puts "nil is nil" if nil.nil?
 nil is nil
 => nil
  nil.is_a? NilClass
 => true

 Although, their booleans are awkward.
  true.class
 => TrueClass
  false.class
 => FalseClass
  true.class.superclass
 => Object
  false.class.superclass
 => Object
 Last I checked the way to say "Is this a boolean?" in Ruby was `value ===
 true || value === false`. Ugh.

 In JavaScript we have Boolean instead.

 --
 ~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]


JavaScript:

 true.constructor
 Boolean()

and Boolean inherits from Object too.

The difference, though, is that in contrast to Array and Object, JavaScript
always treats literal strings, numbers and booleans as strictly equal to an
equivalent one.

In that 5 === 5 is true, whereas new Number(5) === new Number(5) is false.

So even though numbers, strings and booleans are objects, they do not have
typeof "object"; only values that are reported as "object" by typeof are
compared by reference.

So both [1, 2] === [1, 2] and new Array(1,2) === new Array(1,2) are false.
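
A quick console sketch of the above (results per the ECMAScript spec):

    5 === 5;                              // true  (primitive, compared by value)
    'foo' === 'foo';                      // true  (primitive, compared by value)
    new Number( 5 ) === new Number( 5 );  // false (two distinct Number objects)
    [ 1, 2 ] === [ 1, 2 ];                // false (two distinct Array objects)
    typeof 5;                             // "number"
    typeof new Number( 5 );               // "object"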

yay!


Re: [Wikitech-l] Mediawiki 2.0

2011-12-30 Thread Krinkle
On Dec 7, 2011, at 10:33 AM, Dmitriy Sintsov wrote:

 * Trevor Parscal tpars...@wikimedia.org [Tue, 6 Dec 2011 17:21:43 
 -0800]:
 The hype of 2.0 aside, is there a guideline for what should 
 constitute
 a
 major version number change?
 
 It looks like we are doing something like: Major.Minor.Release
 
 1.18 = Major: 1, Minor: 18, (alpha|beta|etc.)
 
 I'm just curious what people think would constitute a major version.
 We've
 certainly had major rewrites of systems in the past that didn't seem 
 to
 justify a version bump. Is there anything wrong with having version
 1.249?
 Is there a practical reason for bumping the version at some point (like
 when the minor version hits triple digits)?
 
 Also, a rewrite of MediaWiki should for sure be done in Node.js :)
 
 - Trevor
 
 Is Javascript really that good? Some people dislike prototypical
 inheritance, and it seems that jQuery prefers to use wrappers instead
 (that's a kind of suboptimal architecture). Also, Google had some
 complaints about Javascript flaws (for example primitive types don't
 allow the high performance available in Java / C#), suggesting to replace
 it with something else... Although having a common clientside / serverside
 codebase is a nice thing, for sure. And there's nothing more widespread
 than Javascript on the client side. Also, its object side is strong
 (something like Lisp with C-syntax), however it does not have generics,
 named parameters, etc.
 Dmitriy

I don't know how much you know about JavaScript, but in my opinion it's often
misunderstood. I think it's superior to most other programming languages
because it's very expressive, and its simplicity is what allows great
complexity. Bad parts are kept in for compatibility, but once you start
treating those bad parts like they don't exist (by not using them, ever) one
is left with a language that is still just as powerful. (Btw, that's the
great thing about being a developer: we have the power to basically remove
parts of the language without changing the standards or breaking other
people's code.)

It's fairly easy to use it in a classical-inheritance way (i.e. use classes
that extend from classes and use constructors for creating objects), which
can't be said the other way around for languages built on classical
inheritance: there is no way to do prototypal stuff there.

JavaScript's true power comes into play when using prototypal inheritance
directly (creating objects that inherit directly from other objects).
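
For instance (a small sketch; Object.create is ES5, so older browsers need a
shim):

    // An object that other objects inherit from directly; no class involved.
    var baseButton = {
        label: 'OK',
        click: function () {
            console.log( this.label + ' clicked' );
        }
    };

    // Create a new object whose prototype *is* baseButton.
    var cancelButton = Object.create( baseButton );
    cancelButton.label = 'Cancel';

    cancelButton.click(); // "Cancel clicked"
    console.log( Object.getPrototypeOf( cancelButton ) === baseButton ); // true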

jQuery uses prototypal inheritance as well, it's what makes jQuery what it
is.

jQuery('#bodyContent').find( 'a' ).addClass( 'foobar' );

That chain is possible because the jQuery constructor (which is tucked away;
calling jQuery( .. ) just does return new jQuery.fn.init( .. );) creates an
object that inherits all members of jQuery.prototype, and does so by
reference (not by value).

So when a jQuery plugin (one that defines jQuery.prototype.myPlugin) is
loaded onto the page at any time, all existing jQuery objects will have that
method. And because all those methods return this, you can call another
method on the return value in a chain, and so on.
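
A minimal sketch of such a plugin (the plugin name is made up; jQuery.fn is
the alias for jQuery.prototype):

    // Define a hypothetical plugin on the shared prototype.
    jQuery.fn.highlightLinks = function ( className ) {
        // 'this' is the jQuery object the plugin was called on.
        this.find( 'a' ).addClass( className || 'foobar' );
        // Returning 'this' is what keeps the chain going.
        return this;
    };

    // Every jQuery object, even ones created before the plugin loaded,
    // now has the method and can keep chaining:
    jQuery( '#bodyContent' ).highlightLinks( 'foobar' ).fadeIn();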

jQuery did choose not to extend the browsers' native objects' prototypes,
but that's purely a policy based on how browsers work, not on how the
language itself works; it's technically possible, and other libraries such
as MooTools do exactly that.


Indeed, its functions are first-class citizens, and its object system is
what makes it so strong, since virtually any value other than null and
undefined is an object, including numbers, strings and functions.







Re: [Wikitech-l] Instructions for setting up regression tests on local machine?

2011-12-30 Thread Krinkle
QUnit tests (for our JavaScript modules) can be run by opening

./tests/qunit/index.html

in your browser.

Note that this is about to change in a few days, but check
https://www.mediawiki.org/wiki/Manual:JavaScript_unit_testing
for the latest info.


On Fri, Dec 30, 2011 at 9:29 PM, Roan Kattouw roan.katt...@gmail.com wrote:

 On Fri, Dec 30, 2011 at 9:11 PM, Dan Nessett dness...@yahoo.com wrote:
  I have poked around a bit (using Google), but have not found instructions
  for setting up the MW regression test framework (e.g., CruiseControl or
  Jenkins or whatever is now being used + PHPUnit tests + Selenium tests)
  on a local machine (so new code can be regression tested before
  submitting patches to Bugzilla). Do such instructions exist and if so,
  would someone provide a pointer to them?
 
 Jenkins is only really used to run the tests automatically when
 someone commits. You can run the PHPUnit tests locally without
 Jenkins. Instructions on installing PHPUnit and running the tests are
 at https://www.mediawiki.org/wiki/Manual:PHP_unit_testing .

 I don't have URLs for you offhand, but QUnit and Selenium are probably
 also documented on mediawiki.org .

 Roan



Re: [Wikitech-l] Standards for communication between antivandalism tools

2011-12-27 Thread Krinkle
MediaWiki already has a native interface meant for marking edits as
patrolled. That system is ideal for anti-vandalism communication, and is
already used as such on many wikis (other than en.wikipedia.org).

It works like this:
* Tool gets the edit feed from irc.wikimedia.org, the recentchanges API or
Special:RecentChanges (depending on whether it is a standalone app, IRC bot,
standalone gadget or enhancement on top of Special:RecentChanges)
* Tool gets the diff or links to it
* User marks it as patrolled, OR user fixes the edit / warns the user and
then marks it as patrolled
* Patrolling is done either by following a link to action=markpatrolled from
within the wiki, or by having the tool use the action=patrol API (see the
sketch below)
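
For illustration, a rough sketch of that last step from a gadget's point of
view (assuming jQuery and mediawiki.util are loaded; the rcid and patrol
token would come from a prior recentchanges query, and the exact parameter
names should be checked against the API docs):

    // Hypothetical helper: mark one recent change as patrolled via the API.
    function markPatrolled( rcid, token ) {
        return jQuery.post( mw.util.wikiScript( 'api' ), {
            action: 'patrol',
            rcid: rcid,
            token: token,
            format: 'json'
        } );
    }

    // e.g. markPatrolled( change.rcid, change.patroltoken ).done( function () {
    //     // remove the entry from the tool's live queue
    // } );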

Right now tools using the recentchanges API or Special:RecentChanges to
retrieve the edit list already have a way to filter out patrolled edits
(such as RTRC [1]). Tools using the IRC feed can't do it as easily, although
they could parse Special:Log/patrol actions (which are also sent to IRC),
match the rc_id of the markpatrolled action with the edit action previously
recorded, and then hide that edit from their live queue.

On en.wikipedia.org the markpatrolled feature was disabled in 2005 because
of the red exclamation marks that some admins didn't like (note that at the
time admins were the only users with the ability to see the patrol marks).
As a result of the way things were in 2005, it was disabled by default
globally and still is; wikis have to request it to be enabled individually.
57 wikis have done so already, including Wikimedia Commons and most Dutch,
German, French and Italian projects.

Using the patrol feature for this makes sense for more reasons, as it is
also used internally by MediaWiki. It is also integrated into the Watchlist,
and the user right 'patrol' allows users to see whether a revision is
patrolled and to mark it as such (which can potentially replace the
duplication of 'trusted user' lists: simply check whether the user has this
user right on the target wiki).

Krinkle

[1]
https://meta.wikimedia.org/wiki/User:Krinkle/Tools/Real-Time_Recent_Changes


Re: [Wikitech-l] Standards for communication between antivandalism tools

2011-12-27 Thread Krinkle
On Tue, Dec 27, 2011 at 11:12 AM, Petr Bena benap...@gmail.com wrote:

 Hi, that's a nice feature. But if it's disabled I don't think it's
 useful then... Anyway, this communication protocol would allow us to
 share way more info; we could share the list of potentially
 problematic edits and such. Clue Bot is skipping a lot of edits which
 may be vandalism but which it's not able to detect correctly.


Well, I don't believe in a new big all-in-one system. We have many existing
ones that we should build upon to avoid wasting a lot of effort.

MediaWiki has a patrol flag on each recent change. That should simply be
used; other extensions and gadgets already use it. It also has a stable API
that can be relied on, and any extension written to utilize this MediaWiki
feature will automatically work.

By not using it, double work occurs and time is wasted.

Aside from MediaWiki in general, for the Wikimedia Foundation wikis there
is an unofficial organization named the Countervandalism Network (CVN),
which pretty much manages communication protocols and centralization in
countervandalism efforts. I'd recommend staying in close touch with them[5]
as well.

The CVN already has a global countervandalism database that contains much
useful information that is being kept up to date with recent events[6]:
* a global blacklist (list of usernames with an expiration date for the
entry and a reason)
* a global whitelist (basically a mirror listing all users that are
'patroller' or 'sysop' on one or more wikis)
* a global and a per-wiki watchlist (page names, expiration date and
reason)

These are currently used by all CVNBots and SWBots in the #cvn-* channels
on irc.freenode.net such as #cvn-commons and #cvn-wp-nl as well as
#cvn-wp-en. In those channels edits by blacklisted users and/or on watched
pages are highlighted. So it's a common watchlist for all CVN
vandal-fighters. This database is replicated to the Toolserver and has a
primitive API [3] (could be expanded) allowing gadgets to use this
information as well. For example RTRC [1] and CVN_SimpleOverlay [2] use
this API.

Some ideas I've been having for the long term:

* Switch to a machine-readable push notification service for recent changes
(i.e. the best of the API (XML/JSON) and the best of IRC (push/subscribe)).
Something like WebSockets / PubSubHubbub / Jabber.

* Replace monitoring systems such as irc-vandalism bots, Huggle, STiki and
the like with a Web-based framework (e.g. Extension:ActivityMonitor
providing Special:ActivityMonitor) that allows following this stream live
and ability to filter things out (e.g. patrolled edits) and highlight stuff
(things that are on your watchlist, things on the CVN blacklist /
watchlist) and other things we would expect from a monitor tool.

* Move the CVN database to WMF, making it available through the API.

* Enable cross-domain AJAX whitelist on WMF (CORS)

* Option in Special:ActivityMonitor to monitor multiple wikis (right now the
CVN has an IRC channel #cvn-sw that monitors 600 small wikis that don't
have a stable anti-vandalism team - monitored mostly by stewards and global
sysops as well as global rollbackers, it is also the key to catching
cross-wiki vandalism)

* In addition to monitoring a stream and hand-picking interesting edits,
we should also implement a workflow like wikiHow's Special:RCPatrol [4] for
wikis that are capable of patrolling all edits. Right now they do so
through Special:RecentChanges by hiding unpatrolled edits, starting at the
back and working their way to the top, but that workflow sucks a lot and
often causes double work as people click the same links.

I could go on like this, but I'd better stop now. Hop in on #countervandalism
for more.

Krinkle

[1]
https://meta.wikimedia.org/wiki/User:Krinkle/Tools/Real-Time_Recent_Changes

[2] https://meta.wikimedia.org/wiki/User:Krinkle/Scripts/CVNSimpleOverlay

[3] https://wiki.toolserver.org/view/CVN_API

[4] http://www.wikihow.com/Patrol-Recent-Changes-on-wikiHow

[5] them, or us, as I'm one of them -
https://meta.wikimedia.org/wiki/CVN

[6] CVN has several bots. For example users that are blocked by admins on a
wiki are automatically added to the global blacklist so that if a user is
blocked on one wiki, edits by that account on other wikis will be
highlighted (this is currently the only working defense that I know of
against cross-wiki vandalism). CVN also has an IRC bridge with ClueNet: any
user flagged in the ClueNet stream (which is populated by ClueBot_NG) is
also added to the CVN blacklist for twice the duration that ClueBot monitors
it.


Re: [Wikitech-l] Successor to wikibits.js::ts_makeSortable?

2011-12-27 Thread Krinkle
On Tue, Dec 20, 2011 at 6:49 PM, Bergi a.d.be...@web.de wrote:

 Daniel Barrett schrieb:
  Now that skins/common/wikibits.js is gone, so is the function
 ts_makeSortable. What is the equivalent call to ts_makeSortable(table) in
 MediaWiki 1.18?

 The tablesorter is available as a jQuery plugin function, see

 http://svn.wikimedia.org/viewvc/mediawiki/trunk/phase3/resources/jquery/jquery.tablesorter.js?revision=105727&view=markup#l738

  Here is our code we need to convert:
 
 function makesortable( context ) {
   $('table.sortable', context)
 .each(function(index, table){
   if (!$(table).find('th a span.sortarrow').length) {
 ts_makeSortable(table);
   }
 });
 }
 
  This is from an extension that sorts a table by a specified column when
 the page is rendered.

 I'd propose $('table.sortable:not(.jquery-tablesorter)').tablesorter();

  Bergi



Please keep in mind that if your table is output by PHP and thus in the
document before the document ready event, MediaWiki's standard JavaScript
will pick it up and apply the jQuery tablesorter plugin for you; do not
apply it again!

You only need to call .tablesorter() yourself on elements that you inserted
through AJAX or that for other reasons are not covered by
$( 'table.sortable' ) at document ready.
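
For example (a minimal sketch; the URL and container are made up for
illustration):

    // Load a page fragment via AJAX and make its sortable tables sortable.
    $.get( '/w/index.php?title=Some_report&action=render', function ( html ) {
        $( '#mw-content-text' ).append( html );
        // Tables loaded this way were not in the document at ready time, so
        // apply the plugin manually, skipping tables that already have it.
        $( '#mw-content-text table.sortable:not(.jquery-tablesorter)' ).tablesorter();
    } );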

Krinkle


Re: [Wikitech-l] Proposal for new table image_metadata

2011-12-05 Thread Krinkle
On Thu, Dec 1, 2011 at 6:34 PM, William Lee w...@wikia-inc.com wrote:

 We propose expanding the metadata field into a new table. We propose the
 name image_metadata. It will have three columns: img_name, attribute
 (varchar) and value (varchar). It can be joined with Image on img_name.


Per convention this should probably read "file" instead of "image" (like is
already done with namespaces and the filearchive table). Anyway, that's just
naming.

A major problem, as mentioned before in this thread, is a key. Right now
files (both the file as an abstract thing and its versions) have no unique
key. All they have is a page title and a timestamp.
This is related to the License-integration project[1] (that name is a bit
outdated; it started for license information, but it is basically aiming at
storing all kinds of file properties).

The first blocker bug would be
https://bugzilla.wikimedia.org/show_bug.cgi?id=26741
(image/oldimage to filerevision).

And another one would be to make the file system even more like
page/revision, by implementing file ids and filerevision ids.

- Krinkle

[1] http://www.mediawiki.org/wiki/License_integration

