Re: [Wikitech-l] [Selenium] How to use?
----- Original Message -----
From: Benedikt Kaempgen benedikt.kaemp...@kit.edu
Newsgroups: gmane.science.linguistics.wikipedia.technical
To: wikitech-l@lists.wikimedia.org
Sent: Wednesday, February 02, 2011 6:27 PM
Subject: [Selenium] How to use?

I got the following as an answer for you.

Hi Janesh,

We checked with the latest trunk, and the test scripts are available only at tests/selenium. Earlier the tests were located at maintenance/tests/selenium, but they were later moved one level up, so the tests should now be available only at the tests/selenium level.

The tests were written against the latest code, because the idea is to regression-test the system after the latest changes. We can use the scripts against older versions if there are no major changes which would break the scripts.

Details of the Selenium framework are available at http://www.mediawiki.org/wiki/SeleniumFramework and there is a readme file which describes the behavior of the installer test scripts.

Regards,
Jinesh De Silva

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] [Pywikipedia-l] Humans vs bots: 0:1
2011/2/3 John phoenixoverr...@gmail.com:
> On Wed, Feb 2, 2011 at 8:01 PM, Marcin Cieslak sa...@saper.info wrote:
>> [snip] This was a human mistake and it was reverted later. However, there seems to be no way to tell all the interwiki bots running to stop re-adding this removed link to articles.
> Yeah, all you need to do is remove the incorrect links from all affected articles.

Of course, the right way to solve this problem once and for all is to fix bug 15607: https://bugzilla.wikimedia.org/show_bug.cgi?id=15607

Installing the Interlanguage extension will revolutionize the way humans fight bots. What is currently stopping the developers from implementing it?
Re: [Wikitech-l] [Pywikipedia-l] Humans vs bots: 0:1
The first issue is indeed that the wrong interwiki has to be removed in _all_ languages to stop it from returning. But even then you could still run into problems, because there might be bots that visited some languages _before_ your removal and others _after_ it. They would then consider the wrong interwiki to be missing in the languages visited afterward, and re-add it there.

Working with {{nobots}} as you have done is not a good solution, I think. Adding it on the Polish page could be justified, but on the English one it also stops a good amount of correct edits.

This particular issue I have now resolved by finding that there is a Dutch page on the same subject as the Polish one, and adding an interwiki to that one. This way, even if someone mistakenly adds the incorrect link again, the bots will see an interwiki conflict, so they will not automatically propagate the wrong link any more.

--
André Engels, andreeng...@gmail.com
Re: [Wikitech-l] NNTP access for Wikimedia mailing lists
In article 87wrlh3et9@jidanni.org, jida...@jidanni.org wrote:
> Better set the Reply-to headers, like they do on http://article.gmane.org/gmane.org.wikimedia.mediawiki/36699/raw !

What do you think it should be set to? Gmane retains the original Reply-To header from the mail (which is set to the list address by Mailman), but this means that anyone who replies to a Usenet article by email will actually end up replying to the mailing list. If they wanted to do that, they would have just followed up in the group. So, I explicitly remove any Reply-To header in the original mail. (Unfortunately this means we have to drop any Reply-To header the user might have set, but I see no other way to work around this list configuration error.)

- river.
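The gateway behavior River describes can be sketched with the standard library's email module. This is a hypothetical illustration in Python, not the gateway's actual code; the addresses are placeholders:

```python
from email.message import EmailMessage

def prepare_for_usenet(msg):
    """Strip Reply-To before gatewaying a list mail to the newsgroup.

    Mailman sets Reply-To to the list address; left in place, a
    Usenet 'Reply' (meant to be a private email to the author) would
    bounce the response back to the list. Dropping the header makes
    'Reply' go to the From address, at the cost of also losing any
    Reply-To the author set themselves.
    """
    del msg["Reply-To"]   # deleting an absent header is a no-op
    return msg

mail = EmailMessage()
mail["From"] = "author@example.org"
mail["Reply-To"] = "wikitech-l@lists.wikimedia.org"  # added by Mailman
mail = prepare_for_usenet(mail)
print(mail["Reply-To"])  # None
```

This mirrors the trade-off River mentions: there is no way to distinguish a Mailman-added Reply-To from an author-set one, so both are dropped.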
[Wikitech-l] Wikipedia Dump
Dear All,

I have used two dumps from the English Wikipedia, as below, and the count results turn out like this. Would you please let me know which one is complete and can be analyzed? And I am confused why the 2001-2009 counts differ between the two. Thanks very much!!

select count(1), to_char(rev_timestamp,'YYYY')
from enwiki.revision
group by to_char(rev_timestamp,'YYYY')
order by to_char(rev_timestamp,'YYYY')

Resource: http://download.wikimedia.org/enwiki/20100130/enwiki-20100130-stub-meta-history.xml.gz

+----------+---------------------+
| count(1) | year(rev_timestamp) |
+----------+---------------------+
|    57559 | 2001                |
|   616878 | 2002                |
|  1598363 | 2003                |
|  6999869 | 2004                |
| 20697477 | 2005                |
| 57214741 | 2006                |
| 75235972 | 2007                |
| 74757575 | 2008                |
| 70600627 | 2009                |
|  6017974 | 2010                |
+----------+---------------------+

Resource: http://download.wikimedia.org/enwiki/20101011/enwiki-20101011-stub-meta-history.xml.gz

+----------+---------------------+
| count(1) | year(rev_timestamp) |
+----------+---------------------+
|    64305 | 2001                |
|   616257 | 2002                |
|  1596612 | 2003                |
|  6979494 | 2004                |
| 20642853 | 2005                |
| 57043694 | 2006                |
| 74936692 | 2007                |
| 74387391 | 2008                |
| 70085652 | 2009                |
| 53054853 | 2010                |
+----------+---------------------+
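For anyone wanting to cross-check counts like these without loading the dump into a database, the per-year revision counts can also be computed by streaming the stub-meta-history XML directly. A minimal sketch, assuming the MediaWiki export format where each revision carries an ISO 8601 <timestamp> element (the sample XML below is fabricated for illustration):

```python
import io
from collections import Counter
from xml.etree import ElementTree

def revisions_per_year(xml_stream):
    """Count <revision> entries per year in a MediaWiki stub dump.

    Streams with iterparse, so even a full enwiki history dump is
    processed in roughly constant memory. Tag names are
    namespace-stripped, since export XML carries a version-specific
    namespace.
    """
    counts = Counter()
    for _, elem in ElementTree.iterparse(xml_stream, events=("end",)):
        tag = elem.tag.rsplit("}", 1)[-1]   # drop any namespace prefix
        if tag == "timestamp":
            counts[elem.text[:4]] += 1      # ISO 8601: year is chars 0-3
        elem.clear()                        # free finished elements
    return counts

sample = io.BytesIO(b"""<mediawiki>
  <page><revision><timestamp>2001-05-01T00:00:00Z</timestamp></revision>
        <revision><timestamp>2002-06-01T00:00:00Z</timestamp></revision>
        <revision><timestamp>2002-07-01T00:00:00Z</timestamp></revision></page>
</mediawiki>""")
print(revisions_per_year(sample))  # Counter({'2002': 2, '2001': 1})
```

To run against the real dump, pass `gzip.open(path, "rb")` as the stream rather than decompressing the whole file first.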
[Wikitech-l] [Selenium] Issue with importing a test database
Hello everybody,

for the Selenium Framework I have a very specific database-related issue which is hard for me to decide. This is the problem: In order to have a fresh state for every test, we agreed to have a test database (and image folder, but this is a sidetrack now) for every test suite run. The fresh database is created from a SQL file which can be attached to a test as a resource. Now, to make the creation of such SQL files as easy as possible, I wanted to be able to simply use SQL dumps created with mysqldump. The import of the data is done via the existing database abstraction layer. I use the method DatabaseBase::sourceFile, which in turn calls DatabaseBase::sourceStream.

The problem now is that some of the SQL INSERT statements seem to be too long for this method. Platonides pointed me to the source of the problem (thanks!). It lies currently in line 2506 (Database.php):

$line = trim( fgets( $fp, 1024 ) );

So the lines read are limited to 1024 characters. If I remove this limitation, everything works fine. The PHP manual tells me that the length parameter is optional as of PHP version 4.2.0. Since I don't know enough about how fgets works and what its security issues are, I wonder: is there a reason not to remove the parameter?

Cheers,
Markus (mglaser)
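The underlying issue is reading a dump statement-by-statement without a fixed per-line buffer. This is a Python sketch of the idea, not MediaWiki's PHP sourceStream; the splitting is deliberately naive (it assumes a trailing ';' ends a statement, which holds for mysqldump output but not for arbitrary SQL with embedded semicolons):

```python
def iter_statements(lines):
    """Yield SQL statements from mysqldump output, one at a time.

    Unlike fgets() with a fixed 1024-byte buffer, iterating a Python
    file object returns each line whole, however long mysqldump's
    extended INSERTs get.
    """
    buf = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("--"):
            continue                 # skip blanks and SQL comments
        buf.append(line)
        if line.endswith(";"):
            yield " ".join(buf)      # a complete statement
            buf = []

# An 80,000-character extended INSERT is no problem: the line comes
# back whole rather than in 1024-byte chunks.
dump = ["-- MySQL dump",
        "INSERT INTO page VALUES " + "(1,'x')," * 9999 + "(2,'y');"]
stmts = list(iter_statements(dump))
print(len(stmts), len(stmts[0]) > 1024)
```

To use it against a real dump file, pass `open(path)` directly, since file objects iterate line by line.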
Re: [Wikitech-l] [Selenium] How to use?
Thanks for the quick answer. Unfortunately, I still don't know how to apply testing to older MW versions. I am familiar with the documentation; it is good, but it does not answer all the relevant questions. But I will figure it out... Keep up the good work!

Best,
Benedikt

--
Karlsruhe Institute of Technology (KIT)
Institute of Applied Informatics and Formal Description Methods (AIFB)
Benedikt Kämpgen
Research Associate
Kaiserstraße 12, Building 11.40
76131 Karlsruhe, Germany
Phone: +49 721 608-47946 (new since 1 January 2011)
Fax: +49 721 608-46580 (new since 1 January 2011)
Email: benedikt.kaemp...@kit.edu
Web: http://www.kit.edu/
KIT - University of the State of Baden-Wuerttemberg and National Research Center of the Helmholtz Association

-----Original Message-----
From: wikitech-l-boun...@lists.wikimedia.org [mailto:wikitech-l-boun...@lists.wikimedia.org] On Behalf Of Janesh Kodikara
Sent: Thursday, February 03, 2011 9:11 AM
To: Wikimedia developers
Subject: Re: [Wikitech-l] [Selenium] How to use?

[snip]
Re: [Wikitech-l] [Selenium] How to use?
Hi Benedikt,

at the moment, the framework is still a work in progress, so it is not shipped with any current releases (afaik). Also, using it requires some changes in the includes folder as well as the new maintenance class, which is not available until MW 1.16. But there is hope for you; I know of at least one implementation of the framework with MW 1.15.3 ;) I put some notes on backporting the framework at http://www.mediawiki.org/wiki/Selenium_Framework#Backporting, although these may not yet be exhaustive.

Cheers,
Markus

-----Original Message-----
From: wikitech-l-boun...@lists.wikimedia.org [mailto:wikitech-l-boun...@lists.wikimedia.org] On Behalf Of Benedikt Kaempgen
Sent: Thursday, February 3, 2011 16:17
To: Janesh Kodikara; Wikimedia developers
Subject: Re: [Wikitech-l] [Selenium] How to use?

> Thanks for the quick answer. Unfortunately, I still don't know how to apply testing to older MW versions. [snip]
Re: [Wikitech-l] [Selenium] Issue with importing a test database
Markus Glaser wrote:
> [snip] The problem now is that some of the SQL INSERT statements seem to be too long for this method. [...] It lies currently in line 2506 (Database.php): $line = trim( fgets( $fp, 1024 ) ); So the lines read are limited to 1024 characters. If I remove this limitation, everything works fine. [...] is there a reason not to remove the parameter?

I think it can be removed safely. Although in this case I would just run mysqldump with --skip-extended-insert so that it doesn't create such long lines.
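Platonides' suggestion sidesteps the long-line problem at dump time rather than at import time. A sketch of the invocation (the database name and user are placeholders; --skip-extended-insert is a standard mysqldump option):

```shell
# Emit one INSERT per row instead of one giant multi-row INSERT,
# so no single line exceeds fgets()'s 1024-byte read buffer.
mysqldump --skip-extended-insert \
    -u wikiuser -p wikidb > testdb.sql
```

The trade-off is that the dump gets larger and restores more slowly, since each row pays the per-statement overhead.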
Re: [Wikitech-l] NNTP access for Wikimedia mailing lists
River Tarnell wrote:
> What do you think it should be set to? Gmane retains the original Reply-To header from the mail (which is set to the list address by Mailman), but this means that anyone who replies to a Usenet article by email will actually end up replying to the mailing list. If they wanted to do that, they would have just followed up in the group. So, I explicitly remove any reply-to header in the original mail. (Unfortunately this means we have to drop any reply-to header the user might have set, but I see no other way to work around this list configuration error.)
> - river.

I don't understand the problem. It doesn't matter whether the MUA sends the email to the list or to the news server (mine seems to prefer the news server, even in gmane), as they both arrive at the same place.
Re: [Wikitech-l] NNTP access for Wikimedia mailing lists
In article iietdr$2sm$1...@dough.gmane.org, Platonides platoni...@gmail.com wrote:
> River Tarnell wrote:
>> What do you think it should be set to? Gmane retains the original Reply-To header from the mail (which is set to the list address by Mailman), but this means that anyone who replies to a Usenet article by email will actually end up replying to the mailing list.
> I don't understand the problem. It doesn't matter if the MUA sends the email to the list or to the news server (mine seems to prefer the news server even in gmane), as they both arrive at the same place.

Yes, that's exactly the problem. Unlike email, when responding to a post on Usenet you have two options: Followup and Reply. Following up posts a reply to the group, while replying sends a (private) reply, by email, to the author of the post (or to the address in the Reply-To header, if one is present).

If I left Mailman's Reply-To header in place, then trying to send a private reply by email would actually send the post back to the group, unless the user manually edited the To: address. Not only would this be confusing (because it's not how Usenet normally works), it's also pointless, because if someone wants to reply to the group, they will just follow up instead of replying.

- river.
Re: [Wikitech-l] Planned 1.17 deployment on February 8
Roan Kattouw (2011-02-01 10:14):
> 2011/2/1 Rob Lanphier ro...@robla.net:
>> Can you explain why you're rolling out when it's the middle of the night where Wikimedia is headquartered? I have a few different theories (site traffic, time zones of the operations team, etc.), but a clarification here would be good.
> We lost the game of rock/paper/scissors. :) We decided to do this very late U.S. west coast time so that our European and Australian contingents would be well rested in case there are problems. Given that we have key personnel pretty much all over the globe, there wasn't going to be a great time for this, and this has the added advantage of being a relatively low-traffic time for us. Look at http://torrus.wikimedia.org/torrus/CDN?path=%2FTotals%2F and you'll see that, for the past two days, the time of lowest traffic was between 06:00 and 07:00 UTC. This has been a quite reliable pattern for quite some time now (except that it shifts by an hour in Northern Hemisphere summer, due to DST), and we've also used this time for the first few Vector deployments. [...]

Can you set a different deploy date for different projects? E.g. 18:00 UTC for Poland. I will not be able to be there when hell breaks loose, as I will be working, and I'm sure most of the Polish tech admins will be too.

Note that we were able to test Vector with our current scripts before the deployment, so this is a bit different. And I still remember the amount of complaints bouncing here and there when Vector came and broke Wikipedia and whatnot... Not that they were all valid and could have been avoided, but maybe some could. Not that I'm complaining ;-), but prototype is... well, it's empty for now, and it would be good if we could test scripts and do it as fast (and as soon) as possible. I've already asked Leinad, but maybe someone could import the current MediaWiki namespace to prototype quicker.

For one thing, I think our script for moving the search bar to the left side panel will probably be broken (as you make it wider now) and will probably have to be fixed right after deployment...

Regards, Nux.
Re: [Wikitech-l] Planned 1.17 deployment on February 8
On Thu, Feb 3, 2011 at 2:48 PM, Maciej Jaros e...@wp.pl wrote:
> Can you set a different deploy date for different projects? E.g. 18:00 UTC for Poland.

Not easily. As a general rule, all software goes to all sites at the same time.

> I will not be able to be there when hell breaks loose, as I will be working, and I'm sure most of the Polish tech admins will be too.

Well, luckily none of you need to be there for this ;-) This is a normal software deployment of the latest fixes and features. There's nothing special or different about this one from the dozens that have happened over the years.

-Chad
Re: [Wikitech-l] Planned 1.17 deployment on February 8
On Thu, Feb 3, 2011 at 12:02 PM, Maciej Jaros e...@wp.pl wrote:
> Chad (2011-02-03 20:52):
>> On Thu, Feb 3, 2011 at 2:48 PM, Maciej Jaros e...@wp.pl wrote:
>>> Can you set a different deploy date for different projects? E.g. 18:00 UTC for Poland.
>> Not easily. As a general rule all software goes to all sites at the same time.
> How hard is "not easily"? :-)

Hard enough that the ops team would rather not try to rush it together at the last minute, as the infrastructure tweaks needed could break more stuff than the upgrade...

[Fun fact: there *are* some leftover bits in the Wikimedia server infrastructure from our big 1.4-1.5 upgrade in '05 for serving different sites out of different versions, but it hasn't been exercised since. Some of the pieces are gone, others are ignored by other bits running maintenance scripts and such, and some would need to be recreated differently to deal with today's higher-scale traffic. It might happen for the next quarterly release, but not this time around.]

> Theoretically you're right ;-). But to my knowledge RL is in this release, and that makes this release almost as special as making Vector the default. And Vector was easier, because we were able to test it live with our scripts. But maybe I'm just superstitious ;-).

Think of the first day or two after the upgrade as your chance to collaboratively track down anything you didn't miss! ;) In theory, folks should be testing their scripts already on the various prototype wikis and any personal test wikis y'all might have set up, but of course that's never going to catch everything.

-- brion
Re: [Wikitech-l] NNTP access for Wikimedia mailing lists
In article 20110202090948.gd97...@ilythia.tcx.org.uk, River Tarnell r.tarn...@ieee.org wrote:
> I've added all current public mailing lists, and retention is set to forever, so it also acts as an archive.

I've now added a basic web interface: http://news.tcx.org.uk/group/wikimedia, as well as a search index: http://news.tcx.org.uk/search (since lists.wikimedia.org disables Google searching and doesn't provide its own search interface). Obviously this will become more useful once the archives are imported.

- river.
Re: [Wikitech-l] Planned 1.17 deployment on February 8
Maciej Jaros wrote:
> Can you set a different deploy date for different projects? E.g. 18:00 UTC for Poland. [snip] For one thing - I think our script for moving the search bar to the left side panel will probably be broken (as you make it wider now) and this will probably have to be fixed right after deployment...

You could also test the scripts in advance. If you think a particular script will break or need to be disabled with the switchover, you can always wrap it in an if ( wgVersion == '1.16wmf4' ) { ... } check. That kind of action shouldn't be needed, though.

I don't see that plwiki has its search bar in the left sidebar, so I don't know what that script does.
[Wikitech-l] WMF and IPv6
I just checked and determined that there appear to be no AAAA records yet for the WMF servers.

I have to admit to having been negligent in examining the IPv6 readiness of the MediaWiki software. Is it generally working and ready to go on IPv6? Does the Foundation have an IPv6 support plan ready to go?

The importance of this is going to be high in the Asia-Pacific region within a few months: http://www.potaroo.net/tools/ipv4/rir.jpg (APNIC runs out of IPv4 space to give to providers somewhere around August, statistically; RIPE in Feb or March 2012, ARIN in July 2012). In each region, ISPs will then start running out of IPv4 to hand out within a month to three months of the registry exhaustion. We have a few months, but by the end of 2012, any major site needs to be serving IPv6.

Out of curiosity, is anyone from the Foundation on the NANOG mailing lists?

--
-george william herbert
george.herb...@gmail.com
Re: [Wikitech-l] WMF and IPv6
----- Original Message -----
From: George Herbert george.herb...@gmail.com
> I just checked and determined that there appear to be no records yet for the WMF servers. I have to admit to having been negligent in examining the IPv6 readiness of the Mediawiki software. Is it generally working and ready to go on IPv6?

Is Apache? That's the base question, is it not? I think the answer is yes.

> The importance of this is going to be high in the Asia-Pacific region within a few months: http://www.potaroo.net/tools/ipv4/rir.jpg (APNIC runs out of IPv4 space to give to providers somewhere around August, statistically; RIPE in Feb or March 2012, ARIN in July 2012).

ARIN issued the last 5 available /8s to RIRs *today*; we've been talking about it all day on NANOG.

> In each region, ISPs then will start running out of IPv4 to hand out within a month to three months of the registry exhaustion. We have a few months, but by the end of 2012, any major site needs to be serving IPv6. Out of curiosity, is anyone from the Foundation on the NANOG mailing lists?

Oh yeah; that's what triggered this. :-)

Cheers,
-- jra
Re: [Wikitech-l] WMF and IPv6
I believe the WMF intends to participate in World IPv6 Day [1]; additionally, they publish some IPv6 statistics [2]. See also the IPv6 deployment page [3].

[1] http://isoc.org/wp/worldipv6day/
[2] http://ipv6and4.labs.wikimedia.org/
[3] http://wikitech.wikimedia.org/view/IPv6_deployment

Robert

On 2011-02-03, George Herbert wrote:
> I just checked and determined that there appear to be no records yet for the WMF servers. [snip]
Re: [Wikitech-l] WMF and IPv6
In article 19663836.4613.1296766691647.javamail.r...@benjamin.baylink.com, Jay Ashworth j...@baylink.com wrote:
> ----- Original Message -----
> From: George Herbert george.herb...@gmail.com
>> I have to admit to having been negligent in examining the IPv6 readiness of the Mediawiki software. Is it generally working and ready to go on IPv6?
> Is Apache? That's the base question, is it not?

It doesn't matter if Apache supports IPv6, since the Internet-facing HTTP servers for the wikis are reverse proxies, either Squid or Varnish. I believe the version of Squid that WMF is using doesn't support IPv6. As long as the proxy supports IPv6, it can continue to talk to Apache via IPv4; since WMF's internal network uses RFC 1918 addresses, it won't be affected by IPv4 exhaustion. Apache does support IPv6, though; some other content which is served using Apache, like lists.wm.o, is available over IPv6.

MediaWiki itself supports IPv6 fine, including for blocking. This was implemented a while ago. Training admins to handle IPv6 IPs could be interesting.

>> (APNIC runs out of IPv4 space to give to providers somewhere around August, statistically; RIPE in Feb or March 2012, ARIN in July 2012).
> ARIN issued the last 5 available /8s to RIRs *today*; we've been talking about it all day on NANOG.

Not exactly. IANA issued the last 5 /8s to the RIRs, of which ARIN is one, today. But George is talking about RIR exhaustion, which is still some months away.

>> Out of curiosity, is anyone from the Foundation on the NANOG mailing lists?
> Oh yeah; that's what triggered this. :-)

Does any useful discussion still take place on that list?

- river.
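On the "training admins to handle IPv6 IPs" point: much of the difficulty is that one IPv6 address has many textual spellings. A Python sketch of the kind of normalization and range matching involved (this is an illustration using the stdlib ipaddress module, not MediaWiki's PHP block code; the 2001:db8::/32 range is the RFC 3849 documentation prefix, used here as a stand-in block):

```python
import ipaddress

def is_blocked(addr, blocked_ranges):
    """Check an incoming address against a list of blocked CIDR ranges.

    ipaddress normalizes the many spellings of an IPv6 address
    (upper/lower case, zero compression), so the string-comparison
    pitfalls that trip up human admins don't trip up the code.
    """
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in blocked_ranges)

blocked = [ipaddress.ip_network("2001:db8::/32")]    # hypothetical block
print(is_blocked("2001:DB8:0:0::1", blocked))        # True: same prefix, different spelling
print(is_blocked("2620:0:862::1", blocked))          # False: outside the range
```

The same function works unchanged for IPv4 addresses and ranges, since ip_address and ip_network dispatch on the input format.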
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 12:58 PM, Jay Ashworth j...@baylink.com wrote:
> [snip]
>> Out of curiosity, is anyone from the Foundation on the NANOG mailing lists?
> Oh yeah; that's what triggered this. :-)

Yes, I know YOU are, Jay, and presumably I count, as I was on NANOG in 1995, but I was asking about WMF staff / the ops department.

--
-george william herbert
george.herb...@gmail.com
Re: [Wikitech-l] Planned 1.17 deployment on February 8
On Thu, Feb 3, 2011 at 3:19 PM, Brion Vibber br...@pobox.com wrote:
> ... It might happen for the next quarterly release, but not this time around.]

Haha, quarterly releases :p

-Chad
Re: [Wikitech-l] WMF and IPv6
----- Original Message -----
From: River Tarnell r.tarn...@ieee.org
> It doesn't matter if Apache supports IPv6, since the Internet-facing HTTP servers for wikis are reverse proxies, either Squid or Varnish. I believe the version of Squid that WMF is using doesn't support IPv6.

Oh, of course.

> As long as the proxy supports IPv6, it can continue to talk to Apache via IPv4; since WMF's internal network uses RFC1918 addresses, it won't be affected by IPv4 exhaustion.

It might; how would a 6to4 NAT affect blocking?

> Apache does support IPv6, though; some other content which is served using Apache, like lists.wm.o, is available over IPv6. MediaWiki itself supports IPv6 fine, including for blocking. This was implemented a while ago. Training admins to handle IPv6 IPs could be interesting.

I mused on NANOG yesterday as to what was going to happen when network techs started realizing they couldn't carry around a bunch of IPs in their heads anymore...

>> ARIN issued the last 5 available /8s to RIRs *today*; we've been talking about it all day on NANOG.
> Not exactly. IANA issued the last 5 /8s to RIRs, of which ARIN is one, today. But George is talking about RIR exhaustion, which is still some months away.

His phrasing seemed a bit... insufficiently clear, to me. That was me, attempting to clarify.

> Does any useful discussion still take place on that list?

Sure. The S/N is still lower than the Hats would prefer, but that's the nature of an expanding universe.

Cheers,
- jra
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 1:05 PM, River Tarnell r.tarn...@ieee.org wrote: Does any useful discussion still take place on that list? - river. I don't know; did any ever? 8-) It doesn't matter if Apache supports IPv6, since the Internet-facing HTTP servers for wikis are reverse proxies, either Squid or Varnish. I believe the version of Squid that WMF is using doesn't support IPv6. As long as the proxy supports IPv6, it can continue to talk to Apache via IPv4; since WMF's internal network uses RFC1918 addresses, it won't be affected by IPv4 exhaustion. Ah, yes. That problem. We're using that hacked up Squid 2.7, right? I'm not as involved as I was a couple of years ago, but I was running a large Squid 3.0 and experimental 3.1 site for about 3 years. Squid wiki says we need any 3.1 release (latest have some significant bugfixes): http://wiki.squid-cache.org/Features/IPv6 -- -george william herbert george.herb...@gmail.com ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 1:11 PM, Jay Ashworth j...@baylink.com wrote: (APNIC runs out of IPv4 space to give to providers somewhere around August, statistically; RIPE in Feb or March 2012, ARIN in July 2012). ARIN issued the last 5 available /8s to RIRs *today*; we've been talking about it all day on NANOG. Not exactly. IANA issued the last 5 /8s to RIRs, of which ARIN is one, today. But George is talking about RIR exhaustion, which is still some months away. His phrasing seemed a bit.. insufficiently clear, to me. That was me, attempting to clarify. I was trying to explain the situation without trying to braindump the totality of how IP space allocation works structurally, globally, politically, and organizationally, which would have us up all day attempting to get people to understand it all (much less what the acronym list expands to). This list is fortunately not NANOG, and hopefully never will be 8-) -- -george william herbert george.herb...@gmail.com ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
In article 30181972.4621.1296767510190.javamail.r...@benjamin.baylink.com, Jay Ashworth j...@baylink.com wrote: - Original Message - From: River Tarnell r.tarn...@ieee.org As long as the proxy supports IPv6, it can continue to talk to Apache via IPv4; since WMF's internal network uses RFC1918 addresses, it won't be affected by IPv4 exhaustion. It might No, it won't. The internal network IPs (which are used for communication between the proxy and the back-end Apache) are not publicly visible and are completely inconsequential to users. how would a 6to4NAT affect blocking? ISP NATs are a separate issue, and might be interesting; if nothing else, as one reason (however small) for ISPs to provide IPv6 to end users. (Help! I can't edit Wikipedia because my ISP's CGNAT pool was blocked!.) The general situation with existing ISPs that use transparent proxies is that sometimes users just can't edit. Admins try to document such addresses and avoid blocking them for too long. (APNIC runs out of IPv4 space to give to providers somewhere around August, statistically; RIPE in Feb or March 2012, ARIN in July 2012). ARIN issued the last 5 available /8s to RIRs *today*; we've been talking about it all day on NANOG. Not exactly. IANA issued the last 5 /8s to RIRs, of which ARIN is one, today. But George is talking about RIR exhaustion, which is still some months away. His phrasing seemed a bit.. insufficiently clear, to me. That was me, attempting to clarify. Okay. I feel your clarification was not very clear ;-) ARIN didn't issue any /8s today, IANA did. ARIN was one of the *recipients* of those /8s. - river. ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
[Wikitech-l] Skin specific logos
Our site has 4 skins that display the logo - 3 standard and 1 site- specific. The site-specific skin uses rounded edges for the individual page area frames, while the standard skins use square edges. This means a logo with square edges looks fine for the standard skins, but not for the site-specific skin. A logo with rounded edges has the opposite characteristic. The programmer who designed the site-specific skin solved this problem with a hack. The absolute url to a different logo with rounded edges is hardwired into the skin code. Therefore, if we want to reorganize where we keep the site logos (which we have done once already), we have to modify the site-specific skin code. While it is possible that no one else has this problem, I would imagine there are skins out there that would look better if they were able to use a skin specific logo (e.g., using a different color scheme or a different font). My question is: has this issue been addressed before? If so, and there is a good solution, I would appreciate hearing of it. Regards, -- -- Dan Nessett ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 1:11 PM, Jay Ashworth j...@baylink.com wrote: - Original Message - From: River Tarnell r.tarn...@ieee.org It doesn't matter if Apache supports IPv6, since the Internet-facing HTTP servers for wikis are reverse proxies, either Squid or Varnish. I believe the version of Squid that WMF is using doesn't support IPv6. Oh, of course. As long as the proxy supports IPv6, it can continue to talk to Apache via IPv4; since WMF's internal network uses RFC1918 addresses, it won't be affected by IPv4 exhaustion. It might; how would a 6to4NAT affect blocking? It's not really a 6to4 NAT per se - it's a 6to4 application level proxy. The question is, what does Squid hand off to Apache via a IPv4 back end connection if the front end connection is IPv6. Which, frankly, I have no idea (and am off investigating...). -- -george william herbert george.herb...@gmail.com ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
In article aanlktikbwloyhzy4jln6jwkphfjotgo-ppqxfwupf...@mail.gmail.com, George Herbert george.herb...@gmail.com wrote: It doesn't matter if Apache supports IPv6, since the Internet-facing HTTP servers for wikis are reverse proxies, either Squid or Varnish. I believe the version of Squid that WMF is using doesn't support IPv6. Ah, yes. That problem. We're using that hacked up Squid 2.7, right? As far as I know, yes. I don't know if the plan is to update to a newer Squid, or to switch to Varnish entirely. http://wikitech.wikimedia.org/view/IPv6_deployment mentions either using another front-end proxy, or upgrading. - river. ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 1:21 PM, George Herbert george.herb...@gmail.com wrote: On Thu, Feb 3, 2011 at 1:11 PM, Jay Ashworth j...@baylink.com wrote: - Original Message - From: River Tarnell r.tarn...@ieee.org It doesn't matter if Apache supports IPv6, since the Internet-facing HTTP servers for wikis are reverse proxies, either Squid or Varnish. I believe the version of Squid that WMF is using doesn't support IPv6. Oh, of course. As long as the proxy supports IPv6, it can continue to talk to Apache via IPv4; since WMF's internal network uses RFC1918 addresses, it won't be affected by IPv4 exhaustion. It might; how would a 6to4NAT affect blocking? It's not really a 6to4 NAT per se - it's a 6to4 application level proxy. The question is, what does Squid hand off to Apache via a IPv4 back end connection if the front end connection is IPv6. Which, frankly, I have no idea (and am off investigating...). Q: Are we doing tproxy between the squids and apache servers? That's the obvious not-supported situation with Squid and IPv6 with IPv4 backends. (That would be solved by adding IPv6 addresses to the Apaches, however). -- -george william herbert george.herb...@gmail.com ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
In article AANLkTinQPPu_j=0emuaf2xojthqsxdluw0btggu8z...@mail.gmail.com, George Herbert george.herb...@gmail.com wrote: It's not really a 6to4 NAT per se - it's a 6to4 application level proxy. The question is, what does Squid hand off to Apache via a IPv4 back end connection if the front end connection is IPv6. I don't think it's useful to think of it in these terms (6to4 anything). All it is is an HTTP proxy; it receives one HTTP request from a client, then opens a new connection itself to a web server and sends the same request, then sends the reply back. Whether the client connection comes via IPv6 has no impact on the backend connection, and vice versa. Here's a diagram:

  request:   client ------> proxy      (IPv6)
  request:   proxy  ------> backend    (IPv4)
  response:  proxy  <------ backend    (IPv4)
  response:  client <------ proxy      (IPv6)

- river. ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
In article AANLkTi=OnSreaXMi3Gc+0==tzoq1jfix63xrkthv6...@mail.gmail.com, George Herbert george.herb...@gmail.com wrote: Q: Are we doing tproxy between the squids and apache servers? No. But since you mention it, LVS (Linux kernel-level load balancer) is used for load balancing, for both Squid and Apache. LVS supports IPv6, so that shouldn't be an issue. (That would be solved by adding IPv6 addresses to the Apaches, however). That would be another way to do it. I don't know what the plan is; my only point originally was that Apache doesn't actually need to know/care about IPv6. - river. ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
- Original Message - From: River Tarnell r.tarn...@ieee.org Jay Ashworth j...@baylink.com wrote: - Original Message - From: River Tarnell r.tarn...@ieee.org As long as the proxy supports IPv6, it can continue to talk to Apache via IPv4; since WMF's internal network uses RFC1918 addresses, it won't be affected by IPv4 exhaustion. It might No, it won't. The internal network IPs (which are used for communication between the proxy and the back-end Apache) are not publicly visible and are completely inconsequential to users. how would a 6to4NAT affect blocking? ISP NATs are a separate issue, and might be interesting; if nothing else, as one reason (however small) for ISPs to provide IPv6 to end users. (Help! I can't edit Wikipedia because my ISP's CGNAT pool was blocked!.) You misunderstood me. If we NAT between the squids and the apaches, will that adversely affect the ability of MW to *know* the outside site's IP address when that's v6? You're not just changing addresses, you're changing address *families*; is there a standard wrapper for the entire IPv4 address space into v6? (I should know that, but I don't.) His phrasing seemed a bit.. insufficiently clear, to me. That was me, attempting to clarify. Okay. I feel your clarification was not very clear ;-) ARIN didn't issue any /8s today, IANA did. ARIN was one of the *recipients* of those /8s. Acronym failure; sorry. Yes; Something-vaguely-resembling-IANA issued those last 5 blocks, in keeping with a long-standing sunset policy. Cheers, -- jra ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
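Jay's question about a standard wrapper for the IPv4 address space does have an answer: RFC 4291 reserves ::ffff:0:0/96 for "IPv4-mapped" IPv6 addresses. A quick sketch with Python's standard ipaddress module, purely illustrative; nothing here reflects what Squid or MediaWiki actually do with such addresses:

```python
import ipaddress

# RFC 4291 reserves ::ffff:0:0/96 for "IPv4-mapped" IPv6 addresses,
# which embed the entire IPv4 address space inside IPv6.
mapped = ipaddress.ip_address("::ffff:192.0.2.1")
print(mapped.version)      # 6 -- syntactically an IPv6 address
print(mapped.ipv4_mapped)  # the embedded IPv4 address, 192.0.2.1

# Going the other way: wrap an arbitrary IPv4 address into the mapped range.
v4 = ipaddress.IPv4Address("203.0.113.7")
wrapped = ipaddress.IPv6Address("::ffff:" + str(v4))
assert wrapped.ipv4_mapped == v4
```

Whether MediaWiki's blocking code treats mapped addresses as their IPv4 equivalents is a separate question from the addressing standard itself.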
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 1:35 PM, River Tarnell r.tarn...@ieee.org wrote: In article AANLkTi=OnSreaXMi3Gc+0==tzoq1jfix63xrkthv6...@mail.gmail.com, George Herbert george.herb...@gmail.com wrote: Q: Are we doing tproxy between the squids and apache servers? No. But since you mention it, LVS (Linux kernel-level load balancer) is used for load balancing, for both Squid and Apache. LVS supports IPv6, so that shouldn't be an issue. (That would be solved by adding IPv6 addresses to the Apaches, however). That would be another way to do it. I don't know what the plan is; my only point originally was that Apache doesn't actually need to know/care about IPv6. As Jay pointed out - handling of blocks (and logins) is an issue (at least, strongly potentially). But without knowing which shaped bricks are in use... -- -george william herbert george.herb...@gmail.com ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
- Original Message - From: George Herbert george.herb...@gmail.com It might; how would a 6to4NAT affect blocking? It's not really a 6to4 NAT per se - it's a 6to4 application level proxy. The question is, what does Squid hand off to Apache via a IPv4 back end connection if the front end connection is IPv6. Which, frankly, I have no idea (and am off investigating...). I rarely have answers, but I do try to ask good questions. And yes, NAT was a poor choice of terms. Cheers, -- jra ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] Skin specific logos
On Thu, 03 Feb 2011 21:19:58 +0000, Dan Nessett wrote: Our site has 4 skins that display the logo - 3 standard and 1 site-specific. The site-specific skin uses rounded edges for the individual page area frames, while the standard skins use square edges. This means a logo with square edges looks fine for the standard skins, but not for the site-specific skin. A logo with rounded edges has the opposite characteristic. The programmer who designed the site-specific skin solved this problem with a hack. The absolute url to a different logo with rounded edges is hardwired into the skin code. Therefore, if we want to reorganize where we keep the site logos (which we have done once already), we have to modify the site-specific skin code. While it is possible that no one else has this problem, I would imagine there are skins out there that would look better if they were able to use a skin specific logo (e.g., using a different color scheme or a different font). My question is: has this issue been addressed before? If so, and there is a good solution, I would appreciate hearing of it. Regards, I need to correct a mistake I made in this post (sorry for replying to my own question). The site-specific skin keeps its skin specific logo in the skin directory and the skin code uses <?php $this->text('stylepath') ?>/<?php $this->text('stylename') ?> to get to that directory. So, the url is not hardwired. However, we would like to keep all of the logos in one place, so I think the question is still pertinent. -- -- Dan Nessett ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 1:41 PM, Jay Ashworth j...@baylink.com wrote: If we NAT between the squids and the apaches, will that adversely affect the ability of MW to *know* the outside site's IP address when that's v6? You're not just changing addresses, you're changing address *families*; is there a standard wrapper for the entire IPv4 address space into v6? (I should know that, but I don't.) There's no reason to NAT between the squid proxies and apaches -- they share a private network, with a private IPv4 address space which is nowhere near being exhausted. Front-end proxies need to speak IPv6 to the outside world so they can accept connections from IPv6 clients, add the clients' IPv6 addresses to the HTTP X-Forwarded-For header which gets passed to the Apaches, and then return the response body back to the client. The actual backend Apache servers can happily hum along on IPv4 internally, with no impact on IPv6 accessibility of the site. -- brion ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
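The header manipulation Brion describes (the front-end proxy appending the connecting client's address to X-Forwarded-For before handing the request to the backend) can be sketched in a few lines. This is an illustrative helper, not Squid's actual code:

```python
def add_forwarded_for(headers: dict, client_ip: str) -> dict:
    """Append the connecting client's address to X-Forwarded-For, the
    way a front-end proxy would before passing the request upstream.
    By convention each proxy appends on the right, so the leftmost
    entry is the originating client."""
    out = dict(headers)
    prior = out.get("X-Forwarded-For")
    out["X-Forwarded-For"] = f"{prior}, {client_ip}" if prior else client_ip
    return out

# An IPv6 client hits the front-end proxy; the backend Apache only ever
# sees the proxy's IPv4 connection plus this header.
h = add_forwarded_for({}, "2001:db8::1")
print(h["X-Forwarded-For"])  # 2001:db8::1
```

A second proxy in the chain would call the same logic again, appending the first proxy's address after the client's.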
Re: [Wikitech-l] WMF and IPv6
In article 9259756.4629.1296769269783.javamail.r...@benjamin.baylink.com, Jay Ashworth j...@baylink.com wrote: - Original Message - From: River Tarnell r.tarn...@ieee.org Jay Ashworth j...@baylink.com wrote: how would a 6to4NAT affect blocking? ISP NATs are a separate issue, and might be interesting[...] You misunderstood me. If we NAT between the squids and the apaches, will that adversely affect the ability of MW to *know* the outside site's IP address when that's v6? No, since the client IP is passed via the XFF header. (In any case, putting NAT there doesn't seem very likely to me.) - river. ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] Skin specific logos
On Thu, Feb 3, 2011 at 1:19 PM, Dan Nessett dness...@yahoo.com wrote: Our site has 4 skins that display the logo - 3 standard and 1 site- specific. The site-specific skin uses rounded edges for the individual page area frames, while the standard skins use square edges. This means a logo with square edges looks fine for the standard skins, but not for the site-specific skin. A logo with rounded edges has the opposite characteristic. The programmer who designed the site-specific skin solved this problem with a hack. The absolute url to a different logo with rounded edges is hardwired into the skin code. Therefore, if we want to reorganize where we keep the site logos (which we have done once already), we have to modify the site-specific skin code. While it is possible that no one else has this problem, I would imagine there are skins out there that would look better if they were able to use a skin specific logo (e.g., using a different color scheme or a different font). My question is: has this issue been addressed before? If so, and there is a good solution, I would appreciate hearing of it. A couple ideas off the top of my head: * You could use CSS to apply rounded corners with border-radius and its -vendor-* variants. (May not work on all browsers, but requires no upkeep other than double-checking that the rounded variant still looks good. Doesn't help with related issues like an alternate color scheme for the logo in different skins.) * Your custom skin could use a custom configuration variable, say $wgAwesomeSkinLogo. Have it use this instead of the default logo, and make sure both settings get updated together. * You could use a fixed alternate path which can be determined by modifying the string in $wgLogo. Be sure to always store and update the second logo image correctly. 
* You could create a script that applies rounded corners or changes colors in an existing image file and saves a new one, then find some way to help automate your process of creating alternate logo images in the above. -- brion ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
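Brion's idea of a fixed alternate path determined by modifying the string in $wgLogo amounts to simple string manipulation. A sketch, in Python for brevity and with a purely hypothetical naming scheme; a real skin would do the equivalent in PHP:

```python
from os.path import splitext

def skin_variant_logo(logo_path: str, variant: str = "rounded") -> str:
    """Derive a skin-specific logo path from the main logo path by
    inserting a variant suffix before the file extension. The
    "-rounded" naming convention is invented for this example."""
    base, ext = splitext(logo_path)
    return f"{base}-{variant}{ext}"

print(skin_variant_logo("/images/common/site-logo.png"))
# /images/common/site-logo-rounded.png
```

The upkeep cost is the one Brion notes: whenever the main logo moves or changes, the variant file at the derived path must be stored and updated alongside it.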
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 3:50 PM, George Herbert george.herb...@gmail.com wrote: We have a few months, but by the end of 2012, any major site needs to be serving IPv6. Unlikely. ISPs are just going to start forcing users to use NAT more aggressively, use tunnelling, etc. No residential client is going to be given a connection that's incapable of accessing IPv4-only sites until virtually all sites have switched, which is probably at least a decade from now. They'd (rightfully) cancel their subscription on the grounds that the Internet doesn't work. Of course, it would be great if we could switch sooner, and I hope we will. But it's not like we'll *need* to. ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
In article aanlktikpg8sdnmgwkn2xmw2agqok1gdyuiopf7qbm...@mail.gmail.com, Brion Vibber br...@pobox.com wrote: On Thu, Feb 3, 2011 at 1:41 PM, Jay Ashworth j...@baylink.com wrote: If we NAT between the squids and the apaches, will that adversely affect the ability of MW to *know* the outside site's IP address when that's v6? You're not just changing addresses, you're changing address *families*; is there a standard wrapper for the entire IPv4 address space into v6? (I should know that, but I don't.) There's no reason to NAT between the squid proxies and apaches -- they share a private network, with a private IPv4 address space which is nowhere near being exhausted. I almost said this, but we do have Squids in esams, which has only a /24; and from what I've heard, probably won't be getting any more space, ever. So depending on how many Squids are added in the future, communication between esams and sdtpa could be fun. (The obvious fix there is to use IPv6 for that...) - river. ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
In article aanlktikgm845zovsgqpdvq81juhn8wm3rwzcxvbqn...@mail.gmail.com, Aryeh Gregor simetrical+wikil...@gmail.com wrote: ISPs are just going to start forcing users to use NAT more aggressively, use tunnelling, etc. ISPs will probably do this, but I don't think it's right to say they'll *just* do this. In the US, for example, Comcast has been running IPv6 trials for a while, and expects to start giving end-user IPv6 addresses this year. So IPv6 for end users is coming, it's just taking longer than it should have. - river. ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 1:45 PM, Brion Vibber br...@pobox.com wrote: On Thu, Feb 3, 2011 at 1:41 PM, Jay Ashworth j...@baylink.com wrote: If we NAT between the squids and the apaches, will that adversely affect the ability of MW to *know* the outside site's IP address when that's v6? You're not just changing addresses, you're changing address *families*; is there a standard wrapper for the entire IPv4 address space into v6? (I should know that, but I don't.) There's no reason to NAT between the squid proxies and apaches -- they share a private network, with a private IPv4 address space which is nowhere near being exhausted. Front-end proxies need to speak IPv6 to the outside world so they can accept connections from IPv6 clients, add the clients' IPv6 addresses to the HTTP X-Forwarded-For header which gets passed to the Apaches, and then return the response body back to the client. The actual backend Apache servers can happily hum along on IPv4 internally, with no impact on IPv6 accessibility of the site. XFF mode forwarding seems to make the problem pretty much go away, yes. Thanks for confirming that's what's in use. -- -george william herbert george.herb...@gmail.com ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 4:45 PM, Brion Vibber br...@pobox.com wrote: Front-end proxies need to speak IPv6 to the outside world so they can accept connections from IPv6 clients, add the clients' IPv6 addresses to the HTTP X-Forwarded-For header which gets passed to the Apaches, and then return the response body back to the client. Interesting. Is there a standard for using IPv6 inside X-Forwarded-For headers? I would think you'd need a new header altogether. (Yes, this is just used internally so it doesn't matter, but I'm still curious.) ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 1:53 PM, River Tarnell r.tarn...@ieee.org wrote: In article aanlktikpg8sdnmgwkn2xmw2agqok1gdyuiopf7qbm...@mail.gmail.com, Brion Vibber br...@pobox.com wrote: There's no reason to NAT between the squid proxies and apaches -- they share a private network, with a private IPv4 address space which is nowhere near being exhausted. I almost said this, but we do have Squids in esams, which has only a /24; and from what I've heard, probably won't be getting any more space, ever. So depending on how many Squids are added in the future, communication between esams and sdtpa could be fun. (The obvious fix there is to use IPv6 for that...) IIRC the Amsterdam proxies connect to the Tampa proxies on the internet, not directly to the back-end Tampa Apaches on the internal network. Something NAT-ish actually shouldn't hurt here, since the Amsterdam proxy addresses are whitelisted and thus skipped over for IP tracking -- we'd still have the original requestor's native IPv4 or IPv6 address in the X-Forwarded-For, and it won't matter if we see a particular proxy's IP or the proxy cluster's NAT IP on the other end. I'm not sure offhand how the cache-clearing signals are working these days, so not sure if that'd be affected (notifications from MediaWiki that particular pages have been updated and their URLs must be purged from cache need to be delivered to all our front-end proxies; this at least used to be done with local-network multicast and a proxy that rebroadcasted the multicast over in Amsterdam; if it still works like that then I don't think it'll be too affected). -- brion ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
In article AANLkTi=nsymtrlv7dwrpixj-wnrpjkvgwyixs+zjc...@mail.gmail.com, Anthony wikim...@inbox.org wrote: Is there a standard for using IPv6 inside X-Forwarded-For headers? There is no standard for X-Forwarded-For at all. I would think you'd need a new header altogether. Since there's nothing to say what can and can't be put in an XFF header, the existing header works fine: X-Forwarded-For: 2a01:348:56:0:214:4fff:fe4a:ae17, 77.75.105.169 (That would be for a request from an IPv6 client, through two proxies, the first of which connected to the second via IPv4.) - river. ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
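River's two-proxy example also shows how a consumer of the header recovers the real client: walk the chain from the right, skipping hops you trust (much as MediaWiki skips its whitelisted proxy addresses). A hypothetical sketch; the trusted-proxy set and function name are invented for illustration:

```python
import ipaddress

# Hypothetical set of our own front-end proxies (from River's example).
TRUSTED_PROXIES = {ipaddress.ip_address("77.75.105.169")}

def real_client(remote_addr: str, xff: str):
    """Walk the hop chain from right to left, skipping addresses we
    trust; the first untrusted address is taken as the real client."""
    hops = [ipaddress.ip_address(a.strip()) for a in xff.split(",")]
    hops.append(ipaddress.ip_address(remote_addr))  # direct peer is last
    for ip in reversed(hops):
        if ip not in TRUSTED_PROXIES:
            return ip
    return hops[0]  # everything trusted: fall back to leftmost entry

ip = real_client("77.75.105.169",
                 "2a01:348:56:0:214:4fff:fe4a:ae17, 77.75.105.169")
print(ip)  # 2a01:348:56:0:214:4fff:fe4a:ae17
```

If the direct peer is not a trusted proxy, the XFF contents are ignored as untrustworthy and the peer itself is treated as the client.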
Re: [Wikitech-l] WMF and IPv6
I'm glad this thread soon got to the point where we realise the problem is on the application layer level. So what are exactly the implications for blocking and related issues when we will start to see ISP level NATing? Am I right to assume that we will start seeing requests from say a global ISP NAT which may cover many clients, XFF 10.x.x.x? If so, do we need to be able to send both the ISP NAT IP, and the XFF IP to the servers, and amend the software so that we are able to block on the combination (so we can block, for example IP 9.10.11.12 XFF 10.45.68.15?) Will we be needing anon user- and user talk pages for a combination of ISP NAT IP and XFF IP? when ISP level NAT's show up? kind regards, Martijn. On Thu, Feb 3, 2011 at 11:01 PM, George Herbert george.herb...@gmail.com wrote: On Thu, Feb 3, 2011 at 1:53 PM, Aryeh Gregor simetrical+wikil...@gmail.com wrote: On Thu, Feb 3, 2011 at 3:50 PM, George Herbert george.herb...@gmail.com wrote: We have a few months, but by the end of 2012, any major site needs to be serving IPv6. Unlikely. ISPs are just going to start forcing users to use NAT more aggressively, use tunnelling, etc. No residential client is going to be given a connection that's incapable of accessing IPv4-only sites until virtually all sites have switched, which is probably at least a decade from now. They'd (rightfully) cancel their subscription on the grounds that the Internet doesn't work. Of course, it would be great if we could switch sooner, and I hope we will. But it's not like we'll *need* to. You're making assumptions here that the residential ISPs in the US and Asia have stated aren't true... -- -george william herbert george.herb...@gmail.com ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
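Martijn's proposed combination block can be made concrete with a tiny sketch. The data structure and addresses here are purely hypothetical and do not reflect MediaWiki's actual block table:

```python
# Hypothetical block store keyed on (outer NAT IP, XFF client IP),
# using the addresses from Martijn's example.
blocks = {("9.10.11.12", "10.45.68.15")}

def is_blocked(nat_ip: str, xff_ip: str) -> bool:
    # A combination block hits only when BOTH halves match, so other
    # customers behind the same carrier-grade NAT stay unaffected.
    return (nat_ip, xff_ip) in blocks

print(is_blocked("9.10.11.12", "10.45.68.15"))  # True
print(is_blocked("9.10.11.12", "10.45.68.99"))  # False
```

As River points out in his reply, the value of this depends on whether the ISP-side address pair is stable across requests at all.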
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 5:10 PM, River Tarnell r.tarn...@ieee.org wrote: In article AANLkTi=nsymtrlv7dwrpixj-wnrpjkvgwyixs+zjc...@mail.gmail.com, Anthony wikim...@inbox.org wrote: Is there a standard for using IPv6 inside X-Forwarded-For headers? There is no standard for X-Forwarded-For at all. Not even a de-facto one? ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 2:14 PM, Anthony wikim...@inbox.org wrote: On Thu, Feb 3, 2011 at 5:10 PM, River Tarnell r.tarn...@ieee.org wrote: In article AANLkTi=nsymtrlv7dwrpixj-wnrpjkvgwyixs+zjc...@mail.gmail.comnsymtrlv7dwrpixj-wnrpjkvgwyixs%2bzjc...@mail.gmail.com , Anthony wikim...@inbox.org wrote: Is there a standard for using IPv6 inside X-Forwarded-For headers? There is no standard for X-Forwarded-For at all. Not even a de-facto one? http://en.wikipedia.org/wiki/X-Forwarded-For -- brion ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] Planned 1.17 deployment on February 8
On Thu, Feb 3, 2011 at 1:07 PM, Chad innocentkil...@gmail.com wrote: On Thu, Feb 3, 2011 at 3:19 PM, Brion Vibber br...@pobox.com wrote: ... It might happen for the next quarterly release, but not this time around.] Haha, quarterly releases :p It's never too late to get back on track. :) -- brion ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
In article aanlktim3ht9hxau3sgwmfu9mph9gb2rx2misg3vmc...@mail.gmail.com, Martijn Hoekstra martijnhoeks...@gmail.com wrote: I'm glad this thread soon got to the point where we realise the problem is on the application layer level. If that was the only problem, this would be much simpler. So what are exactly the implications for blocking and related issues when we will start to see ISP level NATing? Users will either need to move to an ISP that supports IPv6, or accept that they will be frequently blocked on Wikipedia for no reason. Am I right to assume that we will start seeing requests from say a global ISP NAT which may cover many clients, XFF 10.x.x.x? NATs cannot send XFF headers. If ISPs deploy transparent proxies for HTTP (in conjunction with CGNAT for other traffic), then they might start sending XFF. At the moment I don't think it's clear how ISPs are going to handle this. If so, do we need to be able to send both the ISP NAT IP, and the XFF IP to the servers, and amend the software so that we are able to block on the combination (so we can block, for example IP 9.10.11.12 XFF 10.45.68.15?) Most NATs use a pool of addresses rather than a single address; this means that the ISP address could change on every request, even from the same user. So, I don't know if the RFC1918 address will have any value at all. - river. ___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
Jay Ashworth wrote: As long as the proxy supports IPv6, it can continue to talk to Apache via IPv4; since WMF's internal network uses RFC1918 addresses, it won't be affected by IPv4 exhaustion. It might; how would a 6to4NAT affect blocking? If the XFF header is right, from MediaWiki's POV an IPv4 internal NAT is no different than being a native IPv6 server. George Herbert wrote: On Thu, Feb 3, 2011 at 1:05 PM, River Tarnell r.tarn...@ieee.org wrote: Does any useful discussion still take place on that list? - river. I don't know; did any ever? 8-) It doesn't matter if Apache supports IPv6, since the Internet-facing HTTP servers for wikis are reverse proxies, either Squid or Varnish. I believe the version of Squid that WMF is using doesn't support IPv6. As long as the proxy supports IPv6, it can continue to talk to Apache via IPv4; since WMF's internal network uses RFC1918 addresses, it won't be affected by IPv4 exhaustion. Ah, yes. That problem. We're using that hacked up Squid 2.7, right? They are two different branches. Seems WMF will need to move from the squid package to squid3. It is running with 9 custom patches. I thought more of them would have been included upstream.

02-dfl-error-dir.dpatch: Trivial. But ./configure --datadir=/path should be used instead.
10-nozerobufs.dpatch: It's probably merged in, since we got it from upstream.
20-wikimedia-errors.dpatch: Easy. It uses language codes now.
21-nomangle-requestCC.dpatch: Simple to patch.
21-nomanglerequestheaders.dpatch: No longer needed. squid3 has a configuration option for this.
22-normalize-requestAE.dpatch: It's easy to strip the other encodings. I'd drop the first piece.
22-udplog.dpatch: Candidate for manual patching. parse_sockaddr_in_list is now parse_IpAddress_list_token; remember that parse_sockaddr is made from the piece taken from the previous function.
25-coss-remove-swap-log.dpatch: Not applicable. No COSS support in squid3.
23-variant-invalidation.dpatch, 26-vary_options.dpatch: Need to be reimplemented.
___ Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 5:20 PM, River Tarnell r.tarn...@ieee.org wrote: In article aanlktim3ht9hxau3sgwmfu9mph9gb2rx2misg3vmc...@mail.gmail.com, Martijn Hoekstra martijnhoeks...@gmail.com wrote: So what exactly are the implications for blocking and related issues when we start to see ISP-level NATing? Users will either need to move to an ISP that supports IPv6, or accept that they will be frequently blocked on Wikipedia for no reason.

But "supports IPv6" could be as simple as having an HTTP proxy server which sends (fake) IPv6 XFF headers. By fake, I mean that there's not even a need for the client to actually use that IPv6 address, so long as each user/session gets a different IP within a block controlled by that ISP.
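The "fake IPv6 XFF" scheme above can be sketched in a few lines: each internal IPv4 user/session is represented by a distinct, stable address inside an ISP-controlled IPv6 block, purely for the XFF header. This is a hypothetical illustration; the prefix and the bit-packing scheme are assumptions, not anything a real ISP is known to do:

```python
import ipaddress

# Hypothetical /64 controlled by the ISP (IPv6 documentation prefix).
ISP_PREFIX = ipaddress.IPv6Network("2001:db8:aaaa::/64")

def session_ipv6(client_v4: str, session_id: int) -> ipaddress.IPv6Address:
    """Map an internal IPv4 client plus a session counter to a unique
    address inside the ISP's IPv6 block. The client never needs to use
    this address; it only has to appear in the proxy's XFF header so
    that each user/session is individually identifiable (and blockable).
    """
    v4 = int(ipaddress.IPv4Address(client_v4))
    # Pack the 32-bit IPv4 address and a 16-bit session id into the
    # 64 host bits of the prefix.
    return ISP_PREFIX[(v4 << 16) | (session_id & 0xFFFF)]
```

Distinct users and sessions then map to distinct addresses that all fall inside a block attributable to the ISP, which is what matters for per-user blocking on the wiki side.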
Re: [Wikitech-l] Skin specific logos
On Thu, 03 Feb 2011 13:52:30 -0800, Brion Vibber wrote: On Thu, Feb 3, 2011 at 1:19 PM, Dan Nessett dness...@yahoo.com wrote: Our site has 4 skins that display the logo - 3 standard and 1 site- specific. The site-specific skin uses rounded edges for the individual page area frames, while the standard skins use square edges. This means a logo with square edges looks fine for the standard skins, but not for the site-specific skin. A logo with rounded edges has the opposite characteristic. The programmer who designed the site-specific skin solved this problem with a hack. The absolute url to a different logo with rounded edges is hardwired into the skin code. Therefore, if we want to reorganize where we keep the site logos (which we have done once already), we have to modify the site-specific skin code. While it is possible that no one else has this problem, I would imagine there are skins out there that would look better if they were able to use a skin specific logo (e.g., using a different color scheme or a different font). My question is: has this issue been addressed before? If so, and there is a good solution, I would appreciate hearing of it. A couple ideas off the top of my head: * You could use CSS to apply rounded corners with border-radius and its -vendor-* variants. (May not work on all browsers, but requires no upkeep other than double-checking that the rounded variant still looks good. Doesn't help with related issues like an alternate color scheme for the logo in different skins.) * Your custom skin could use a custom configuration variable, say $wgAwesomeSkinLogo. Have it use this instead of the default logo, and make sure both settings get updated together. * You could use a fixed alternate path which can be determined by modifying the string in $wgLogo. Be sure to always store and update the second logo image correctly. 
* You could create a script that applies rounded corners or changes colors in an existing image file and saves a new one, then find some way to help automate your process of creating alternate logo images in the above. -- brion

Thanks. I think the second idea works best for us. It also suggests the use of a global $wgSkinLogos that points to a directory where all of the skin logos are kept. Any reason why this is a bad idea?

-- Dan Nessett
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 5:29 PM, Anthony wikim...@inbox.org wrote: But, supports IPv6 could be as simple as having an http proxy server which sends (fake) IPv6 XFF headers. By fake, I mean that there's not even a need for the client to actually use that IPv6 address, so long as each user/session gets a different IP within a block controlled by that ISP. And as an added bonus by using these proxies they can be more easily tracked for corporate marketing and government surveillance purposes!
Re: [Wikitech-l] Skin specific logos
On 11-02-03 01:52 PM, Brion Vibber wrote: On Thu, Feb 3, 2011 at 1:19 PM, Dan Nessettdness...@yahoo.com wrote: Our site has 4 skins that display the logo - 3 standard and 1 site- specific. The site-specific skin uses rounded edges for the individual page area frames, while the standard skins use square edges. This means a logo with square edges looks fine for the standard skins, but not for the site-specific skin. A logo with rounded edges has the opposite characteristic. The programmer who designed the site-specific skin solved this problem with a hack. The absolute url to a different logo with rounded edges is hardwired into the skin code. Therefore, if we want to reorganize where we keep the site logos (which we have done once already), we have to modify the site-specific skin code. While it is possible that no one else has this problem, I would imagine there are skins out there that would look better if they were able to use a skin specific logo (e.g., using a different color scheme or a different font). My question is: has this issue been addressed before? If so, and there is a good solution, I would appreciate hearing of it. A couple ideas off the top of my head: * You could use CSS to apply rounded corners with border-radius and its -vendor-* variants. (May not work on all browsers, but requires no upkeep other than double-checking that the rounded variant still looks good. Doesn't help with related issues like an alternate color scheme for the logo in different skins.) * Your custom skin could use a custom configuration variable, say $wgAwesomeSkinLogo. Have it use this instead of the default logo, and make sure both settings get updated together. * You could use a fixed alternate path which can be determined by modifying the string in $wgLogo. Be sure to always store and update the second logo image correctly. 
* You could create a script that applies rounded corners or changes colors in an existing image file and saves a new one, then find some way to help automate your process of creating alternate logo images in the above. -- brion

;) "Not on all browsers" is basically old versions of Opera and IE before 9. Border radius is supported by everything from Firefox, to WebKit-based browsers, to Konqueror, to Opera, and even IE9 implements it... IF you use the vendor prefixes properly. And for reference: -moz-border-radius: 10px; -khtml-border-radius: 10px; -webkit-border-radius: 10px; border-radius: 10px;

-moz- works for Gecko. -khtml- works in Konqueror (Linux versions; iirc there were some notes that the Windows version [wait, there's a Windows version?] doesn't have it), and some early versions of WebKit use it too. -webkit- of course works in WebKit (Safari, Chrome, etc...), and Opera 10.50 and IE9 implement the standard border-radius. Recent versions of Gecko and WebKit are now starting to use the standard border-radius too (but that's real recent, i.e. Firefox 4, which isn't even released yet).

And be sure to keep that order. You can move the -moz- after the -webkit- if you want, but -khtml- should always be before -webkit-, and all vendor prefixes should always be before the standard properties. With CSS's cascading rules, if you say border-radius: 5px; -moz-border-radius: 5px; you're essentially saying "Hey, give me a border radius, but if you still have a buggy non-standard implementation lying around, use it instead of the correct standard behavior."

Logos are something that's been on my mind a bit with skin improvements. Not really skin-specific tweaks; in fact I don't quite like that idea. If every skin defines a different logo, we no longer have a standard logo and you can't rely on simply being able to upload a logo and have it work everywhere.
However, what has come to mind is varying sizes of logos. One use case I suppose you could point out: some of Wikia's skins have used a different logo than the standard one, since it has a different size. I've been contemplating something like `$wgLogo['150x150'] = '...';` and letting skins define what kind of logo region they have, so that SkinTemplate can make a best pick out of the different sizes that have been configured.

~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
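The best-pick step in the size-keyed idea above is the only non-trivial part. Here is a Python sketch of just that selection logic; the "WxH" key format and the fallback rule (smallest logo that still covers the requested region, else the largest available) are my assumptions, not a worked-out MediaWiki design:

```python
def pick_logo(configured: dict, want_w: int, want_h: int) -> str:
    """Pick the configured logo whose size best matches the skin's
    logo region. `configured` maps "WxH" keys to file paths, e.g.
    {"135x155": "/logo.png", "150x150": "/logo-150.png"}.
    Prefers the smallest logo at least as large as the requested
    region; if none fits, falls back to the largest available."""
    def parse(key):
        w, h = key.split("x")
        return int(w), int(h)

    fits, others = [], []
    for key, path in configured.items():
        w, h = parse(key)
        # Sort candidates by area so min()/max() pick sensibly.
        (fits if w >= want_w and h >= want_h else others).append((w * h, path))
    return min(fits)[1] if fits else max(others)[1]
```

A skin would then declare its logo region once, and the shared code would degrade gracefully when only one size is configured.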
Re: [Wikitech-l] Planned 1.17 deployment on February 8
On Thu, Feb 3, 2011 at 5:17 PM, Brion Vibber br...@pobox.com wrote: On Thu, Feb 3, 2011 at 1:07 PM, Chad innocentkil...@gmail.com wrote: On Thu, Feb 3, 2011 at 3:19 PM, Brion Vibber br...@pobox.com wrote: ... It might happen for the next quarterly release, but not this time around.] Haha, quarterly releases :p It's never too late to get back on track. :) Were we ever on track? -Chad
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 5:01 PM, George Herbert george.herb...@gmail.com wrote: You're making assumptions here that the residential ISPs in the US and Asia have stated aren't true...

I'm awfully sure the assumption "customers will not pay for an Internet connection that only connects to IPv6 addresses" is true, and will remain true for at least five to ten years. How ISPs deal with it is up to them, but it's not going to be anything that stops customers from accessing IPv4-only sites. Once they have too few IPv4 addresses to assign all customers unique IPv4 addresses, then they'll share IPv4 addresses, such as via NAT -- as well as possibly giving out unique, stable IPv6 addresses.

On Thu, Feb 3, 2011 at 5:02 PM, River Tarnell r.tarn...@ieee.org wrote: ISPs will probably do this, but I don't think it's right to say they'll *just* do this. In the US, for example, Comcast has been running IPv6 trials for a while, and expects to start giving end-user IPv6 addresses this year.

Yes, but they'll have IPv4 access as well. Comcast's trial is dual-stack, not IPv6-only: http://www.comcast6.net/ There's not going to be any market for IPv6-only residential connections for the foreseeable future.
Re: [Wikitech-l] [Selenium] Issue with importing a test database
Hi, I think it can be removed safely. Great to hear! Although in this case I would just run mysqldump with --skip-extended-insert so that it doesn't create such long lines. Yes, I tried that. But there are tables like l10ncache or objectcache that store serialized objects which produce long lines even in that case. Still, I think that should be a recommendation for creating the dumps. Cheers, Markus
Re: [Wikitech-l] Planned 1.17 deployment on February 8
I think it's accidentally happened that two MediaWiki versions were released less than 4 months apart, but I'd really like to see us get back to releasing three versions a year again instead of barely two. Siebrand

On 04-02-11 00:17, Chad innocentkil...@gmail.com wrote: On Thu, Feb 3, 2011 at 5:17 PM, Brion Vibber br...@pobox.com wrote: On Thu, Feb 3, 2011 at 1:07 PM, Chad innocentkil...@gmail.com wrote: Haha, quarterly releases :p It's never too late to get back on track. :) Were we ever on track? -Chad
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 3:21 PM, Aryeh Gregor simetrical+wikil...@gmail.com wrote: On Thu, Feb 3, 2011 at 5:01 PM, George Herbert george.herb...@gmail.com wrote: You're making assumptions here that the residential ISPs in the US and Asia have stated aren't true... I'm awfully sure the assumption customers will not pay for an Internet connection that only connects to IPv6 addresses is true, and will remain true for at least five to ten years. How ISPs deal with it is up to them, but it's not going to be anything that stops customers from accessing IPv4-only sites. Once they have too few IPv4 addresses to assign all customers unique IPv4 addresses, then they'll share IPv4 addresses, such as via NAT -- as well as possibly giving out unique, stable IPv6 addresses. On Thu, Feb 3, 2011 at 5:02 PM, River Tarnell r.tarn...@ieee.org wrote: ISPs will probably do this, but I don't think it's right to say they'll *just* do this. In the US, for example, Comcast has been running IPv6 trials for a while, and expects to start giving end-user IPv6 addresses this year. Yes, but they'll have IPv4 access as well. Comcast's trial is dual-stack, not IPv6-only: http://www.comcast6.net/ There's not going to be any market for IPv6-only residential connections for the foreseeable future. There won't be much choice when the ISPs run out of IPv4 space to allocate new users. As I said - we'll see it in Asia soon enough, and then the US down the road a bit longer. -- -george william herbert george.herb...@gmail.com
Re: [Wikitech-l] [Selenium] Issue with importing a test database
Although in this case I would just run mysqldump with --skip-extended-insert so that it doesn't create such long lines. Yes, I tried that. But there are tables like l10ncache or objectcache that store serialized objects which produce long lines even in that case. Still, I think that should be a recommendation for creating the dumps. Do you really want to dump those caching tables? Good question... I don't think it's generally necessary. But then again, I think there are two reasons why we should allow for dumps that contain these tables: * it's easy to use plain old mysql dumps. The easier we make it for developers to test their code, the better. An alternative would be to provide a dumper script that removes these tables or doesn't dump them in the first place. I could dive into that if people here think that's a better way to go. * maybe someone wants to write a regression test that reproduces some specific caching issue. I agree this is unlikely, most probably unit tests would be a better way here. But then again, the mechanism I am working on might be a possible basis for unit test resources as well. The point is (and I did not see that in my previous post) that even if I skip caching tables in the import, there is most certainly one other table which might exceed the 1024 byte limit and that is text. Cheers, Markus
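Given the 1024-byte limit discussed above, a quick pre-import sanity check is to scan the dump for over-long lines before feeding it to the importer. A small sketch; the limit and the table names (text, l10ncache, objectcache) come from this thread, while the function itself is illustrative:

```python
def overlong_lines(dump_path: str, limit: int = 1024):
    """Return (line_number, byte_length) for every line in a SQL dump
    longer than `limit` bytes (trailing newline excluded). Tables like
    text, l10ncache, or objectcache can produce such lines even when
    the dump was made with --skip-extended-insert."""
    hits = []
    with open(dump_path, "rb") as f:          # bytes: dumps may mix encodings
        for n, raw in enumerate(f, 1):
            length = len(raw.rstrip(b"\r\n"))
            if length > limit:
                hits.append((n, length))
    return hits
```

Running this over a dump tells you up front which lines (and hence which tables) would trip the import limit, instead of finding out mid-import.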
Re: [Wikitech-l] Planned 1.17 deployment on February 8
Platonides (2011-02-03 21:53): Maciej Jaros wrote: Can you set a different deploy date for different projects? E.g. 18.00 UTC for Poland. I will not be able to be there when hell breaks loose as I will be working, and I'm sure most of the Polish tech admins will be too. Note that we were able to test Vector with current scripts before the deployment, so this is a bit different. And I still remember the amount of complaints bouncing here and there when Vector came and broke Wikipedia and what not... Not that they were all valid and could have been avoided, but maybe some could. Not that I'm complaining ;-), but prototype is... well, it's empty for now, and it would be good if we could test scripts and do it as fast (and as soon) as possible. I've already asked Leinad, but maybe someone could import the current MediaWiki namespace to prototype quicker. For one thing - I think our script for moving the search bar to the left side panel will probably be broken (as you make it wider now) and this will probably have to be fixed right after deployment... Regards, Nux.

You could also test the scripts in advance. If you think a particular script will break/need to be disabled with the switchover, you can always wrap it in an if (wgVersion == '1.16wmf4') { ... } check. That kind of action shouldn't be needed, though.

Hm... Not a bad idea. I don't see that plwiki has its search bar in the left sidebar; I don't know what that script does. You can switch with a link at the top of the page (in p-personal) on the left. It stays that way thanks to the cookie setting and you don't have to be logged in... Just a little something I've done for those wanting to go back a bit to Monobook, but not all the way ;-). I can already see it's broken on prototype (we now have MediaWiki imported), but that's an easy fix with that wgVersion hack. Cheers, Nux.
Re: [Wikitech-l] WMF and IPv6
In article AANLkTi=1foHsEOh25Dr+Df2N4DFXj4iKU0SWXg1xXWP=@mail.gmail.com, Aryeh Gregor simetrical+wikil...@gmail.com wrote: On Thu, Feb 3, 2011 at 5:02 PM, River Tarnell r.tarn...@ieee.org wrote: ISPs will probably do this, but I don't think it's right to say they'll *just* do this. In the US, for example, Comcast has been running IPv6 trials for a while, and expects to start giving end-user IPv6 addresses this year. Yes, but they'll have IPv4 access as well. That's what I said. They'll do this -- meaning IPv4 with CGNAT -- as well as providing IPv6 access. - river.
Re: [Wikitech-l] WMF and IPv6
In article AANLkTi=enp2_sy+g2dt_sw0oq8-05_jjcojgxsdt0...@mail.gmail.com, George Herbert george.herb...@gmail.com wrote: On Thu, Feb 3, 2011 at 3:21 PM, Aryeh Gregor simetrical+wikil...@gmail.com wrote: Yes, but they'll have IPv4 access as well. There won't be much choice when the ISPs run out of IPv4 space to allocate new users. Don't underestimate the ability of ISPs to sell really bad service to users who don't know any better. 99% of home users already use NAT for their Internet access; they aren't going to know or care that their ISP is now using CGNAT. (At least until they get blocked on Wikipedia...) - river.
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 6:29 PM, George Herbert george.herb...@gmail.com wrote: There won't be much choice when the ISPs run out of IPv4 space to allocate new users. As I said - we'll see it in Asia soon enough, and then the US down the road a bit longer. You mean, when they have so little IPv4 space that they can't even fit all of their customers behind NAT? If they have enough IPv4 addresses at present to give out dedicated addresses to all users, they'll run out of addresses using NAT when they have maybe 10,000 to 100,000 times as many users as now, which seems unlikely to be anytime in the foreseeable future -- especially if traffic starts shifting to IPv6. NAT isn't a cure-all, but it works fine for browsing websites, which is all that directly concerns Wikipedia. The point remains, websites that are only accessible via IPv4 are not in any danger of becoming unreachable anytime soon by a large number of people. On Thu, Feb 3, 2011 at 7:02 PM, River Tarnell r.tarn...@ieee.org wrote: That's what I said. They'll do this -- meaning IPv4 with CGNAT -- as well as providing IPv6 access. Right.
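The arithmetic behind the "10,000 to 100,000 times as many users" estimate can be made explicit. The port count is standard; the per-user flow budget and the concurrency fraction are illustrative assumptions (the thread only gives the final range):

```python
# Back-of-the-envelope CGNAT capacity; all parameters are assumptions.
PORTS_PER_IP = 64512          # 65536 minus the first 1024 reserved ports
PORTS_PER_ACTIVE_USER = 64    # assumed budget of concurrent flows per user

# Users who can be actively browsing through one shared IPv4 address:
concurrent_users = PORTS_PER_IP // PORTS_PER_ACTIVE_USER  # about 1000

# Only a small fraction of subscribers are active at any instant.
# Assuming 1-10% concurrency, one address can front on the order of
# 10,000-100,000 subscribers, matching the estimate in the thread:
low = round(concurrent_users / 0.10)
high = round(concurrent_users / 0.01)
```

Changing the assumed flow budget or concurrency moves the answer, but by far less than the four-to-five orders of magnitude of headroom the estimate implies.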
Re: [Wikitech-l] Planned 1.17 deployment on February 8
On Thu, Feb 3, 2011 at 3:17 PM, Chad innocentkil...@gmail.com wrote: On Thu, Feb 3, 2011 at 5:17 PM, Brion Vibber br...@pobox.com wrote: On Thu, Feb 3, 2011 at 1:07 PM, Chad innocentkil...@gmail.com wrote: On Thu, Feb 3, 2011 at 3:19 PM, Brion Vibber br...@pobox.com wrote: ... It might happen for the next quarterly release, but not this time around.] Haha, quarterly releases :p It's never too late to get back on track. :) Were we ever on track?

1.5.0: 2005-10-05
1.6.0: 2006-04-05 (first scheduled quarterly release)
1.7.0: 2006-07-06 - 3 months
1.8.0: 2006-10-10 - 3 months
1.9.0: 2007-01-10 - 3 months
1.10.0: 2007-05-09 - 4 months (a little late)
1.11.0: 2007-09-10 - 4 months (a little late)
1.12.0: 2008-03-20 - ~6 months (missed one quarter)
1.13.0: 2008-08-14 - ~5 months (missed one quarter)
1.14.0: 2009-02-22 - ~6 months (missed one quarter)
1.15.0: 2009-06-10 - 3.5 months
1.16.0: 2010-07-28 - 13.5 months (missed 3 quarters)

So roughly: 2006 and 2007 were pretty well on track, with some slight slides due to extra beta release candidate testing on 1.10 and 1.11. Things then started to slide a bit, with 2008 and early 2009's updates slipping to semiannual, but we got back on quarterly track with summer 2009's 1.15 release. We then have a big empty spot where it took over a year to get 1.16 pushed through to stable release.

I get the impression that a large part of this delay was that there was no clear consensus on the js2 stuff; once that got re-imagined as 1.17's ResourceLoader, which got more buy-in, 1.16 was able to get a cleaner release without the next-gen JS code, and 1.17 has been able to concentrate more on that layer of things. 1.17 releasing soon should bring the schedule back to semi-annual, but there's no firm impediment other than our own self-organization to pushing 1.18 out 3 months later instead of 6 or 13.

-- brion
Re: [Wikitech-l] WMF and IPv6
On 04/02/11 08:13, George Herbert wrote: On Thu, Feb 3, 2011 at 1:05 PM, River Tarnell r.tarn...@ieee.org wrote: Does any useful discussion still take place on that list? - river. I don't know; did any ever? 8-) It doesn't matter if Apache supports IPv6, since the Internet-facing HTTP servers for wikis are reverse proxies, either Squid or Varnish. I believe the version of Squid that WMF is using doesn't support IPv6. As long as the proxy supports IPv6, it can continue to talk to Apache via IPv4; since WMF's internal network uses RFC1918 addresses, it won't be affected by IPv4 exhaustion. Ah, yes. That problem. We're using that hacked up Squid 2.7, right? I'm not as involved as I was a couple of years ago, but I was running a large Squid 3.0 and experimental 3.1 site for about 3 years. Squid wiki says we need any 3.1 release (the latest have some significant bugfixes): http://wiki.squid-cache.org/Features/IPv6

It's not necessary for the main Squid cluster to support IPv6 in order to serve the main website via IPv6. The amount of IPv6 traffic will presumably be very small in the short term. We can just set up a single proxy server in each location (Tampa and Amsterdam), and point all of the relevant records to it. All the proxy has to do is add an X-Forwarded-For header, and then forward the request on to the relevant IPv4 virtual IP. The request will then be routed by LVS to a frontend squid. MediaWiki already supports IPv6, so that's it, that's all you have to do. It would be trivial, except for the need to handle complaints from users and ISPs with broken IPv6 routing.

What will be more difficult is setting up IPv6 support for all our miscellaneous services: Bugzilla, OTRS, Subversion, mail, etc. Many of those will be harder to set up than the main website.

-- Tim Starling
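The front proxy's whole job, as described above, boils down to one header manipulation before forwarding to the IPv4 virtual IP. A minimal sketch of just that step (the function name is mine; a real deployment would use an existing proxy, not custom code):

```python
def forwarded_headers(headers: dict, client_addr: str) -> dict:
    """What the IPv6 front proxy must do before handing a request to
    the IPv4 VIP: append the client's (IPv6) address to X-Forwarded-For
    so MediaWiki still sees the real source address for blocking.
    Existing XFF entries from upstream proxies are preserved."""
    out = dict(headers)  # don't mutate the caller's headers
    prior = out.get("X-Forwarded-For")
    out["X-Forwarded-For"] = f"{prior}, {client_addr}" if prior else client_addr
    return out
```

Since MediaWiki already understands IPv6 addresses in XFF, nothing behind the proxy has to change; LVS and the frontend squids keep speaking IPv4.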
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 4:32 PM, Tim Starling tstarl...@wikimedia.org wrote: On 04/02/11 08:13, George Herbert wrote: [...] Ah, yes. That problem. We're using that hacked up Squid 2.7, right? I'm not as involved as I was a couple of years ago, but I was running a large Squid 3.0 and experimental 3.1 site for about 3 years. Squid wiki says we need any 3.1 release (latest have some significant bugfixes): http://wiki.squid-cache.org/Features/IPv6 It's not necessary for the main Squid cluster to support IPv6 in order to serve the main website via IPv6. The amount of IPv6 traffic will presumably be very small in the short term. We can just set up a single proxy server in each location (Tampa and Amsterdam), and point all of the relevant records to it. All the proxy has to do is add an X-Forwarded-For header, and then forward the request on to the relevant IPv4 virtual IP. The request will then be routed by LVS to a frontend squid. MediaWiki already supports IPv6, so that's it, that's all you have to do. It would be trivial, except for the need to handle complaints from users and ISPs with broken IPv6 routing. Broken IPv6 routing will be evident to the providers and users, because nothing will work. I would expect few complaints to us... (perhaps naively...) As a general question - is there any reason not to move to Squid 3.1 and just be done with it that way? What will be more difficult is setting up IPv6 support for all our miscellaneous services: Bugzilla, OTRS, Subversion, mail, etc. Many of those will be harder to set up than the main website. Yes. 80/20 rule... -- -george william herbert george.herb...@gmail.com
Re: [Wikitech-l] WMF and IPv6
In article AANLkTikS7Kcenbz94UjhfOYi6usRGSSf5VBrQCpK=v...@mail.gmail.com, George Herbert george.herb...@gmail.com wrote: Broken IPv6 routing will be evident to the providers and users, because nothing will work. I would expect few complaints to us... (perhaps naively...)

This is actually more of an issue than you might think... many users *already* have broken IPv6 connectivity[0], and it's only going to get worse with early adopters, since most (IPv4) users won't notice the problem. That might not be too bad, except most users tend not to report problems, and just assume the site is broken. Of course, in a couple of years when more sites support IPv6, broken connectivity will be much more obvious, and users will just complain to their ISP. - river.

[0] Several years back I gave en.wikipedia.org an AAAA record for testing. (In hindsight, that was probably a bad idea, but anyway.) One of the users who was unable to access the site, and couldn't work out why, was another Wikimedia sysadmin.
Re: [Wikitech-l] WMF and IPv6
On 04/02/11 11:39, George Herbert wrote: Broken IPv6 routing will be evident to the providers and users, because nothing will work. I would expect few complaints to us... (perhaps naively...) There will be complaints. That's what World IPv6 Day is for, besides raising awareness: it's a day when complaints can be handled in a streamlined way. Speaking of which, I don't see us on this list: http://isoc.org/wp/worldipv6day/participants/ As a general question - is there any reason not to move to Squid 3.1 and just be done with it that way? Upgrading our Squid cluster is complex and time-consuming. It would be a lot of trouble to go to just for IPv6 support. -- Tim Starling
Re: [Wikitech-l] WMF and IPv6
- Original Message - From: Tim Starling tstarl...@wikimedia.org It's not necessary for the main Squid cluster to support IPv6 in order to serve the main website via IPv6. The amount of IPv6 traffic will presumably be very small in the short term. We can just set up a single proxy server in each location (Tampa and Amsterdam), and point all of the relevant records to it. All the proxy has to do is add an X-Forwarded-For header, and then forward the request on to the relevant IPv4 virtual IP. The request will then be routed by LVS to a frontend squid.

That's so obvious I'm embarrassed I didn't think of it. Given how big we are, though, "very small" may be most websites' medium-traffic day. :-) Cheers, -- jra
Re: [Wikitech-l] WMF and IPv6
On Thu, Feb 3, 2011 at 6:29 PM, Tim Starling tstarl...@wikimedia.org wrote: On 04/02/11 11:39, George Herbert wrote: Broken IPv6 routing will be evident to the providers and users, because nothing will work. I would expect few complaints to us... (perhaps naively...) There will be complaints. That's what World IPv6 Day is for, besides raising awareness: it's a day when complaints can be handled in a streamlined way. Speaking of which, I don't see us on this list: http://isoc.org/wp/worldipv6day/participants/ As a general question - is there any reason not to move to Squid 3.1 and just be done with it that way? Upgrading our Squid cluster is complex and time-consuming. It would be a lot of trouble to go to just for IPv6 support.

I would recommend upgrading the Squid cluster because it's running on a very significantly old version of the software, lacks several years' worth of general patches and maintenance, and because it's not THAT big a deal.

As I mentioned earlier in the thread, I spent several years running Squid (at the time, various 3.0-STABLE releases and 3.1 beta tests) at a large site, and it didn't take that much time and effort, despite working actively with Amos and others on what turned out to be an uninitialized buffer problem for over a year, and having to compile, tune, and seriously test all the versions from 3.0-STABLE3 through ... 19, it looks like. It was perhaps 20% of my total work for about 3 years, and would have been far less had it not been for the one persistent bug (going from the prior 2.6 squids to 3.0 took about 3 months of me 1/4 time-ish). Performance was noticeably better with 3.0 vs 2.6 and 2.7.

Avoidance of obsolete-version software rot is a key operations technique. My current main commercial consulting customer has key enterprise infrastructure software 5 years past end-of-support that they don't even quite know how to upgrade, it's so old now. Don't let your versions get that old...
Yes, 2.7 is still getting necessary Squid project patches, latest to STABLE9 in March 2010, but still. It's old 8-) -- -george william herbert george.herb...@gmail.com
Re: [Wikitech-l] WMF and IPv6
I would recommend upgrading the Squid cluster because it's run on a very significantly old version of the software, lacks several years worth of general patches and maintenance, and because it's not THAT big a deal. As I mentioned earlier in thread, I spent several years running Squid (at the time, 3.0-stablevarious and 3.1 beta tests) at a large site, and it didn't take that much time and effort despite working actively with Amos and others on what turned out to be an uninitialized buffer problem for over a year and having to compile, tune, and seriously test all the versions from 3.0-STABLE3 through ... 19, it looks like. It was perhaps 20% of my total work for about 3 years, and would have been far less had it not been for the one persistent bug (going from the prior 2.6 squids to 3.0 took about 3 months of me 1/4 time-ish). Performance was noticeably better with 3.0 vs 2.6 and 2.7.

Well, by all means, add our patches in to the newer version, and provide a source package.

Avoidance of obsolete version software rot is a key operations technique. My current main commercial consulting customer has 5 years-past-end-of-support key enterprise infrastructure software that they don't even quite know how to upgrade, it's so old now. Don't let your versions get that old... Yes, 2.7 is still getting necessary Squid project patches, latest to STABLE9 in March 2010, but still. It's old 8-)

Our plan is to eventually move away from squid and to varnish. Upgrading squid really isn't amazingly high on our priority list right now. Respectfully, Ryan Lane