Re: [Wikitech-l] More fun and games with file position relative code

2009-08-17 Thread Jim Hu
I confess that we keep our extensions in /usr/local/wiki-extensions,
which is an svn checkout of our local repository.  For us, it makes
two things easier:
1) I've been known to stupidly delete the extensions directory when
upgrading MW;
2) different wikis share a single directory.

But when something breaks, and I suspect it's because the extension
needs JavaScript or CSS, I try putting it in the extensions directory.

Jim


On Aug 13, 2009, at 11:25 AM, Brion Vibber wrote:

> On 8/13/09 10:55 AM, Aryeh Gregor wrote:
>> On Thu, Aug 13, 2009 at 1:06 PM, Brion Vibber wrote:
>>> In any case they need to be able to find and reach that setup info,
>>> which is what a stable directory tree provides.
>>
>> Yes, that's the puzzle.  I guess extensions could special-case the
>> trunk checkout case by checking ../../phase3/ if ../../ fails,
>
> Why? Just put your extensions in the extensions folder where they  
> belong.
>
> -- brion

=
Jim Hu
Associate Professor
Dept. of Biochemistry and Biophysics
2128 TAMU
Texas A&M Univ.
College Station, TX 77843-2128
979-862-4054




Re: [Wikitech-l] identifier collisions

2009-08-17 Thread Dmitriy Sintsov
* Brion Vibber  [Mon, 17 Aug 2009 09:27:26 -0700]:
> There should be no need for multiple servers in any case; web software
> is pretty good about coexisting. ;)
>
Wouldn't a truly robust comparison require two separate AMP stacks 
instead of simple Apache virtual hosts?

> Multiple versions of MediaWiki can sit side by side in separate
> directories with no trouble. You can even run multiple versions of PHP
> simply by setting them up as FastCGI applied to different paths.
>
Interesting, I didn't know that was achievable via FastCGI (I've only 
used mod_php). Then, that might be enough. What about using different 
versions of PHP?

> Really though, this thread has gotten extremely unfocused; it's not
> clear what's being proposed to begin with and we've wandered off into
> a lot of confusion.
>
I believe Dan wanted to compare the results of executing different SVN 
snapshots at the function/method level, too, not just the final output. 
What could be used for that? XML-RPC, perhaps?

> We should probably start up a 'testing center' page on mediawiki.org
> and begin by summarizing what we've already got -- then we can move on
> to figuring out what else we need.
>
I remember that random syntax generators were used back in the '90s for 
testing purposes.
Sorry if my messages pollute the mailing list; I'll try to post less 
often. I am definitely not an expert; it's just interesting to read 
along sometimes.
Dmitriy



Re: [Wikitech-l] identifier collisions

2009-08-17 Thread dan nessett
--- On Mon, 8/17/09, Brion Vibber  wrote:

> Really though, this thread has gotten extremely unfocused; it's not
> clear what's being proposed to begin with and we've wandered off into
> a lot of confusion.

I'll take partial responsibility for the confusion. As I said recently, I 
think it is pretty ridiculous that a newbie like me is pushing the discussion 
on MW QA. I am trying to learn the underlying technologies as fast as I can, 
but it is a steep learning curve. Also, let me reiterate: if someone more 
knowledgeable about these technologies than I am is willing to step up and 
lead, I have no problem whatsoever following. On the other hand, I need a MW 
product regression test suite. If getting it means I have to expose my 
ignorance to an international audience, I'm willing to take that hit.

> We should probably start up a 'testing center' page on mediawiki.org
> and begin by summarizing what we've already got -- then we can move on
> to figuring out what else we need.
> 

I think this is a great idea.



Re: [Wikitech-l] Browser sniffing in JavaScript (and elsewhere)

2009-08-17 Thread Brion Vibber
On Aug 17, 2009, at 9:30, Roan Kattouw  wrote:
> Both the usability and new upload JS use jQuery, which hides most of
> the browser sniffing you need behind abstraction layers. To my
> knowledge the jQuery devs do sniff browsers properly, and if something
> should go wrong in jQuery, it's not just Wikipedia that'll break.

Yeah, the current jQuery release claims to rely entirely on capability 
checks rather than agent sniffing, which makes me warm and fuzzy. :)
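
For illustration, here's a minimal sketch of that kind of capability 
check, modeled on jQuery 1.3's internal jQuery.support tests (which 
really do probe opacity this way):

    // Create a throwaway element and test actual behavior instead of
    // guessing from the user-agent string.
    var div = document.createElement( 'div' );
    div.style.cssText = 'opacity:.55';
    var supportsOpacity = div.style.opacity === '0.55';

    // The fragile alternative this thread warns against:
    var isKhtml = navigator.vendor == 'KDE'; // breaks once the bug is fixed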

-- brion vibber (brion @ wikimedia.org)




Re: [Wikitech-l] Browser sniffing in JavaScript (and elsewhere)

2009-08-17 Thread Roan Kattouw
2009/8/17 Aryeh Gregor :
> I tried to remove some browser-sniffing from wikibits.js, but there's
> undoubtedly some I missed.  Especially with the large amounts of JS
> being added recently for usability/new upload/etc., could everyone
> *please* check to make sure that there are no broken browser checks
> being committed?
Both the usability and new upload JS use jQuery, which hides most of
the browser sniffing you need behind abstraction layers. To my knowledge
the jQuery devs do sniff browsers properly, and if something should go
wrong in jQuery, it's not just Wikipedia that'll break.
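
As a sketch of what that abstraction buys extension code, you can branch 
on jQuery's feature flags instead of user-agent strings 
(jQuery.support.opacity is a real jQuery 1.3 flag; the selector and the 
fallback are made-up placeholders):

    if ( jQuery.support.opacity ) {
        jQuery( '#upload-preview' ).css( 'opacity', 0.5 );
    } else {
        // e.g. older IE wants a filter instead; placeholder workaround
        jQuery( '#upload-preview' ).css( 'filter', 'alpha(opacity=50)' );
    }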

Roan Kattouw (Catrope)



Re: [Wikitech-l] identifier collisions

2009-08-17 Thread Brion Vibber
There should be no need for multiple servers in any case; web software  
is pretty good about coexisting. ;)

Multiple versions of MediaWiki can sit side by side in separate  
directories with no trouble. You can even run multiple versions of PHP  
simply by setting them up as FastCGI applied to different paths.

Really though, this thread has gotten extremely unfocused; it's not 
clear what's being proposed to begin with and we've wandered off into a 
lot of confusion.

We should probably start up a 'testing center' page on mediawiki.org  
and begin by summarizing what we've already got -- then we can move on  
to figuring out what else we need.

-- brion vibber (brion @ wikimedia.org)

On Aug 17, 2009, at 4:25, Dmitriy Sintsov  wrote:

> * Platonides  [Mon, 17 Aug 2009 00:06:59 +0200]:
>> In fact, I'm not sure I've understood the problem. I find the
>> proposed options quite bizarre. So if you have understood the
>> "specification", please enlighten me. :)
>>
> If I had to check whether there are any regressions between versions
> on the same (single) server, I'd probably run two Apache instances
> from different installation dirs listening on different ports (e.g.
> one at :80, another at :82) and then compare the results via some
> script, probably with wget|diff or something like that (of course
> without the skin). The biggest question is: which pages are best to
> request for comparison? Some pseudo-random generator of wikitext, or
> maybe special manually-crafted pages, or both, perhaps? I don't know.
> Dmitriy


Re: [Wikitech-l] mediawiki/ IRC cloaks now available

2009-08-17 Thread Brion Vibber
Whee! Thanks for the update, Sean! Lots of folks are very happy to see 
this. :)

-- brion vibber (brion @ wikimedia.org)

On Aug 16, 2009, at 7:10, Sean Whitton  wrote:

> Hi,
>
> By popular demand, mediawiki/ cloaks are now available for all
> developers with access to commit to svn. There will probably be future
> exceptions made if we have lots of artists and translators who want to
> get cloaks too, but initially we're using this as an indicator of
> contributions.
>
> To get such a cloak, make an edit somewhere on mediawiki.org saying "I
> am [[User:Xyz]] and my nick on IRC is xyz". Your userpage is a good
> choice. You can then revert this edit and send a link to the diff to
> one of the IRC Group Contacts when you request your cloak. This is any
> one of seanw, Rjd0060, kibble or dungodung on IRC. Please check idle
> times and contact one who is active. Once they have confirmed your
> access and identity (you'll need to state your on-wiki username on IRC
> too and be identified to NickServ), you'll be cloaked with
> mediawiki/On-Wiki-Username or MediaWiki/On-Wiki-Username (your
> choice).
>
> The reason this has taken so long to come through is that we were
> waiting on our developer to add features to verify this sort of cloak
> automatically in the cloak request system. Unfortunately we seem to
> have lost touch with him, so we are switching to doing all cloaks
> manually.
>
> S
>
> -- 
> Sean Whitton / 
> OpenPGP KeyID: 0x25F4EAB7


Re: [Wikitech-l] Browser sniffing in JavaScript (and elsewhere)

2009-08-17 Thread Brion Vibber
In theory at least, all new checks in the last few years have been 
specific in this way -- either testing for capabilities or, if that's 
not doable, looking for particular versions (like the IE fixes, which 
are tied to known releases of IE, or the Gecko and WebKit fixes, which 
check whether they're on a version of the engine prior to the bug fix).
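
A rough sketch of what such a version-capped check looks like (the 
engine build number and the cutoff are invented for the example; a real 
fix should use the version in which the bug was actually fixed):

    // Apply the workaround only to engine builds known to predate the fix.
    var m = navigator.userAgent.match( /AppleWebKit\/(\d+)/ );
    if ( m && parseInt( m[1], 10 ) < 500 ) { // 500 is a made-up cutoff
        // ...load the fix for old engine builds only...
    }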

Definitely keep an eye out and make sure we don't regress on that,  
everybody! :)

-- brion vibber (brion @ wikimedia.org)

On Aug 17, 2009, at 7:41, Aryeh Gregor wrote:

> Back in May 2004, Gabriel Wicke was creating a neat new skin called
> Monobook.  Unlike the old skins, it used good semantic markup with CSS
> 2 for style.  Gabriel made sure to test in a lot of browsers and made
> up files full of extensive fixes for browsers that had problems.
>
> [...]

[Wikitech-l] Browser sniffing in JavaScript (and elsewhere)

2009-08-17 Thread Aryeh Gregor
Back in May 2004, Gabriel Wicke was creating a neat new skin called
Monobook.  Unlike the old skins, it used good semantic markup with CSS
2 for style.  Gabriel made sure to test in a lot of browsers and made
up files full of extensive fixes for browsers that had problems.

One such browser was the default KDE browser, Konqueror.  Even
relatively web-savvy people have barely heard of it and never used it.
 He was nice and checked whether it worked properly in his new skin
anyway.  Since it didn't, he committed a quick fix in r3532 to
eliminate some horizontal scrollbars.  Then everyone forgot about it,
because nobody uses KHTML.

It turns out there was a slight problem with his fix.  He loaded it
based on this code:

var is_khtml = (navigator.vendor == 'KDE' || ( document.childNodes &&
!document.all && !navigator.taintEnabled ));

The problem here is pretty straightforward.  A bug fix is being
loaded, without checking to see whether the bug exists.  The fix is
loaded for all versions of KHTML past, present, and *future*.  If the
KHTML devs fixed the bug, then they'd have a bogus stylesheet being
loaded that would mess up their display, and they couldn't do anything
about it.

Well, nobody much used or uses KHTML.  But it just so happens that in
2003, Apple debuted a new web browser based on a fork of KHTML.  And
in 2008, Google debuted another browser based on the same rendering
engine.  And if you add them together, they now have 6% market share
or more.  And we've still been serving them this broken KHTML fixes
file for something that was fixed eons ago.

Just recently, in WebKit r47255, they changed their code to better
match other browsers' handling of "almost standards mode".  They
removed some quirk that was allowing them to render correctly despite
the bogus CSS we were serving them.  And so suddenly they're faced
with the prospect of having to use a site-specific hack ("if path ends
in /KHTMLFixes.css, ignore the file") because we screwed up.  See
their bug here: 

I had already killed KHTMLFixes.css in r53141, but it's still in every
MediaWiki release since 1.5.  And this isn't the only time this has
happened.  A while back someone committed some fixes for Opera RTL.
They loaded the fixes for, yes, Opera version 9 or greater, or some
similar check.  When I checked on Opera 9.6, I found that the fix was
degrading display, not improving it.

Sometimes we need to do browser sniffing of some kind, because
sometimes browsers don't implement standards properly.  There are two
ways to do it that are okay:

1) Capability testing.  If possible, just check directly whether the
browser can do it.  This works best with JS functionality, for
instance in getElementsByClassName in wikibits.js:

if ( typeof( oElm.getElementsByClassName ) == "function" ) {
/* Use a native implementation where possible FF3, Saf3.2, Opera 9.5 */

It can also be used in other cases sometimes.  For instance, in r53347
I made this change:

-    // TODO: better css2 incompatibility detection here
-    if(is_opera || is_khtml || navigator.userAgent.toLowerCase().indexOf('firefox/1')!=-1){
-        return 30; // opera & konqueror & old firefox don't understand overflow-x, estimate scrollbar width
+    // For browsers that don't understand overflow-x, estimate scrollbar width
+    if(typeof document.body.style.overflowX != "string"){
+        return 30;

Instead of using a hardcoded list of browsers that didn't support
overflow-x, I checked whether the overflowX property existed.  This
isn't totally foolproof, but it sure beats assuming that no future
version of Opera or KHTML will support overflow-x.  (I'm pretty sure
both already do, in fact.)

2) "Version <= X."  If it's not reasonable to check capabilities, then
at least allow browser implementers to fix their bugs in future
versions.  If you find that all current versions of Firefox do
something or other incorrectly, then don't serve incorrect content to
all versions of Firefox.  In that case, during Firefox 3.6
development, they'll find out that their improvements to standards
compliance cause Wikipedia to break!  Instead, serve incorrect content
to Firefox 3.5 or less, and standard markup to all greater versions.
That way, during 3.6 development, they'll find out that their
*failure* to comply with standards causes Wikipedia to break.  With
any luck, that will encourage them to fix the problem instead of
punishing them.  It's not as good as being able to automatically serve
the right content if they haven't fixed things, but it's better than
serving bad content forever.
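
To make that concrete, here is a minimal sketch of a version-capped 
stylesheet load (the stylesheet name and path are hypothetical):

    // Serve the workaround only to Firefox 3.5 and earlier, so that 3.6
    // and later get the standard markup.
    var ff = navigator.userAgent.match( /Firefox\/(\d+)\.(\d+)/ );
    if ( ff && ( +ff[1] < 3 || ( +ff[1] == 3 && +ff[2] <= 5 ) ) ) {
        var link = document.createElement( 'link' );
        link.rel = 'stylesheet';
        link.href = '/skins/common/FirefoxFixes.css'; // hypothetical file
        document.getElementsByTagName( 'head' )[0].appendChild( link );
    }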


I tried to remove some browser-sniffing from wikibits.js, but there's
undoubtedly some I missed.  Especially with the large amounts of JS
being added recently for usability/new upload/etc., could everyone
*please* check to make sure that there are no broken browser checks
being committed?  This kind of thing hurts our users in the long run.

Re: [Wikitech-l] identifier collisions

2009-08-17 Thread Dmitriy Sintsov
* Platonides  [Mon, 17 Aug 2009 00:06:59 +0200]:
> In fact, I'm not sure I've understood the problem. I find the
> proposed options quite bizarre. So if you have understood the
> "specification", please enlighten me. :)
>
If I had to check whether there are any regressions between versions on 
the same (single) server, I'd probably run two Apache instances from 
different installation dirs listening on different ports (e.g. one at 
:80, another at :82) and then compare the results via some script, 
probably with wget|diff or something like that (of course without the 
skin). The biggest question is: which pages are best to request for 
comparison? Some pseudo-random generator of wikitext, or maybe special 
manually-crafted pages, or both, perhaps? I don't know.
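
As a rough sketch of that wget|diff idea in JavaScript (the ports, the 
page title, and the use of action=render to skip the skin are all 
illustrative assumptions), a small Node.js script could be:

    // compare.js -- fetch the same page from two MediaWiki instances
    // and report whether the skinless HTML output differs.
    var http = require( 'http' );

    function fetchPage( port, title, callback ) {
        http.get( { host: 'localhost', port: port,
                path: '/index.php?title=' + encodeURIComponent( title ) +
                    '&action=render' }, function ( res ) {
            var body = '';
            res.on( 'data', function ( chunk ) { body += chunk; } );
            res.on( 'end', function () { callback( body ); } );
        } );
    }

    fetchPage( 80, 'Sandbox', function ( oldHtml ) {
        fetchPage( 82, 'Sandbox', function ( newHtml ) {
            console.log( oldHtml === newHtml ? 'OK' : 'DIFFERS' );
        } );
    } );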
Dmitriy



Re: [Wikitech-l] identifier collisions

2009-08-17 Thread Trevor Parscal
I feel uncomfortable ignoring this kind of response on this or any  
list, so I will say this at the risk of being personally attacked.

I'm certain that treating people this way detracts from both  
motivation and productivity - for volunteers and staff alike.

Responses like this lack any sense of professionalism, and are  
certainly better off discarded than sent.

Life is too short to be so easily angered and impatient.

- Trevor

On Aug 15, 2009, at 9:08 PM, Tim Starling wrote:

> dan nessett wrote:
>> On the other hand, maybe you would rather code than think
>> strategically. Fine. Commit yourself to fixing the parser so all of
>> the disabled tests run and also all or most of the pages on
>> Wikipedia do not break and I will shut up about the CPRT. Commit
>> yourself to creating a test harness that other developers can use
>> to write unit tests and I will gladly stop writing emails about it.
>> Commit yourself to develop the software the organizes the unit
>> tests into a product regression test that developers can easily run
>> and I will no longer bother you about MW QA.
>
> Maybe you should apply for the CTO job, if you get it then you'll be
> able to make such demands for my time.
>
> For the time being I'll continue to develop my own priorities, in
> consultation with Wikimedia management and the community, and you can
> continue to post your stirring diatribes. Right now, we urgently need
> review, deployment and release, and I'm not going to delay any of that
> just to make you shut up.
>
> -- Tim Starling


Re: [Wikitech-l] Batik SVG-to-PNG server revisited

2009-08-17 Thread David Gerard
2009/8/17 Hk kng :

> New test results were added at
> http://www.mediawiki.org/wiki/SVG_benchmarks
> This looks even better than my first attempt. Nonetheless, it is clear
> that batikd is not yet ready to use and needs more work.


Nice one!

Has anyone set up Inkscape so it doesn't have to be set up/torn down
for each image? (Is this even possible?)


- d.
