[Wikitech-l] The new Community Wishlist is accepting submissions

2024-07-18 Thread Sandister Tei
Hello everyone,

Community Wishlist <https://meta.wikimedia.org/wiki/Community_Wishlist> is
open and you can submit your wishes for technical improvements now. Visit
our new and simple wish form
<https://meta.wikimedia.org/wiki/Community_Wishlist/Intake> to submit your
ideas.


The first set of Focus Areas (groups of related wishes) will be announced
in August 2024.


Some submissions are already in; join in and let’s prioritize which products
and technical improvements we should focus on next.


If you have some feedback or questions, please leave them on the project
talk page <https://meta.wikimedia.org/wiki/Talk:Community_Wishlist>.

Best regards,
Sandister Tei <https://meta.wikimedia.org/wiki/User:STei_(WMF)> (she/her)
Movement Communications Specialist (Community Tech & Trust and Safety
Product)
Wikimedia Foundation
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Community Wishlist is re-opening July 15. Here's what to expect, and how to prepare.

2024-07-09 Thread Sandister Tei
Hello everyone, the new Community Wishlist (formerly Community Wishlist
Survey) opens on 15 July for piloting. We have an update on what to expect,
and how to prepare
<https://meta.wikimedia.org/wiki/Community_Wishlist_Survey/Future_Of_The_Wishlist/Preview_of_the_New_Wishlist#July_1,_2024:_The_Community_Wishlist_is_re-opening_Jul_15,_2024._Here's_what_to_expect,_and_how_to_prepare.>
.


In case you have missed earlier updates, here is an FAQ to help resolve
questions you may have:

*Q:* How long do I have to submit wishes?

*A:* As part of the changes, the Wishlist will remain open. There is no
deadline for wish submission.

*Q:* What is this ‘Focus Area’ thing?

*A:* The Foundation will identify patterns with Wishes that share a
collective problem and group them into areas known as ‘Focus Areas’
<https://meta.wikimedia.org/wiki/Community_Wishlist_Survey/Future_Of_The_Wishlist/Preview_of_the_New_Wishlist#Introducing_%E2%80%9CFocus_Areas%E2%80%9D>
. *The grouping of wishes will begin in August*.

*Q:* At what point do we vote? Are we even still voting?

*A:* Contributors are encouraged to discuss and vote on Focus Areas, to
highlight the areas they find most important.

*Q:* How will this new system move wishes forward to be addressed?

*A:* The Foundation, affiliates, and volunteer developers can adopt Focus
Areas. The Wikimedia Foundation is committed to integrating Focus Areas
into our Annual Planning for 2025-26.

Focus Areas align with hypotheses (specific projects, typically taking up to
one quarter) and/or Key Results (broader projects taking up to one year).

*Q:* How do I submit a wish? Has anything changed about submissions?

*A:* Yes, there are some changes. Please have a look at the guide
<https://meta.wikimedia.org/wiki/Community_Wishlist_Survey/Updates#July_1,_2024:_The_Community_Wishlist_is_re-opening_Jul_15,_2024._Here%E2%80%99s_what_to_expect,_and_how_to_prepare.>
.

I hope the FAQ helped.

You are encouraged to start drafting your wishes at your own pace. Please
consult the guide as you do so. Also, if you have an earlier unfulfilled
wish that you want to re-submit, we are happy to assist you.

Best regards,
Sandister Tei <https://meta.wikimedia.org/wiki/User:STei_(WMF)> (she/her)
Movement Communications Specialist (Community Tech & Trust and Safety
Product)
Wikimedia Foundation
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Renaming the Wishlist Survey: Please vote for your preferred name

2024-06-12 Thread Sandister Tei
Hello.

Thank you to everyone who has provided feedback on renaming the Community
Wishlist Survey
<https://meta.wikimedia.org/wiki/Community_Wishlist_Survey/Future_Of_The_Wishlist/Renaming>
so far. We now have 3 names for you to choose from:

1. Community Ideas Exchange
2. Community Feature Requests
3. Community Suggestions Portal

Please visit the voting page
<https://meta.wikimedia.org/wiki/Community_Wishlist_Survey/Future_Of_The_Wishlist/Renaming/Voting>
to select a name which resonates with you.

Best regards,
Sandister Tei <https://meta.wikimedia.org/wiki/User:STei_(WMF)> (she/her)
Movement Communications Specialist (Community Tech & Trust and Safety)
Wikimedia Foundation
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Renaming the Community Wishlist Survey

2024-06-06 Thread Sandister Tei
Hello everyone,

Community engagement around the Wishlist's redesign is still in progress to
help make decisions ahead of the relaunch of the survey in July 2024. The
revised survey will need a new name that reflects its new direction. You
are invited to help choose a name.

There are some early renaming ideas like Wikimedia Opportunities Registry,
Wikimedia Collaboration Hub and ImagineWiki. These names may not resonate
with everyone, so please join the discussions and suggest your own name
if need be.

Looking forward to hearing from you on the discussion page
<https://meta.wikimedia.org/wiki/Talk:Community_Wishlist_Survey/Future_Of_The_Wishlist#Renaming_the_Wishlist>
.

NB: In case you have missed previous discussions and announcements
regarding the future of the Wishlist, please see the bullets below:

   - Call to Dismantle the Wishlist
     <https://meta.wikimedia.org/wiki/Special:MyLanguage/Community_Wishlist_Survey_2023/Larger_suggestions/Dismantling_of_the_annual_Wishlist_Survey_system>
     in January 2023
   - The Wikimedia Foundation's response
     <https://meta.wikimedia.org/wiki/Community_Wishlist_Survey/Future_Of_The_Wishlist/January_4,_2024_Update>
     to dismantling the Wishlist
   - Introduction and timeline
     <https://meta.wikimedia.org/wiki/Special:MyLanguage/Community_Wishlist_Survey/Future_Of_The_Wishlist/Introduction>
     of the Future of the Wishlist project to redesign the Wishlist
   - The new Lead Community Tech Manager's conversations with the community
     <https://meta.wikimedia.org/wiki/Special:MyLanguage/Community_Wishlist_Survey/Future_Of_The_Wishlist/Redesigning_the_Wishlist>
     on the way forward for the Wishlist
   - Proposal
     <https://meta.wikimedia.org/wiki/Community_Wishlist_Survey/Future_Of_The_Wishlist/Preview_of_the_New_Wishlist>
     of a new wish submission form; a new method for grouping wishes for
     voting called Focus Areas; and a demonstration of Focus Areas with
     wishes around template use
   - Lastly, the renaming
     <https://meta.wikimedia.org/wiki/Community_Wishlist_Survey/Future_Of_The_Wishlist/Renaming>
     of the survey


Best regards,
Sandister Tei <https://meta.wikimedia.org/wiki/User:STei_(WMF)> (she/her)
Movement Communications Specialist (Community Tech & Trust and Safety)
Wikimedia Foundation
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Community Wishlist Survey: Upcoming Changes

2024-05-10 Thread Sandister Tei
Hello everyone,

We have an update on some changes coming to the Wishlist:

   1. The new edition of the survey will *open in July and remain open
   year-round*.
   2. *There will be a new intake form*. Volunteers can submit a wish in
   their preferred language and do not need to know Wikitext.
   3. Volunteers will be able to submit wishes, review wishes, edit
   existing wishes, and discuss wishes with one another and the Foundation
   staff.
   4. Wishes will be grouped into *"Focus Areas"*.
   5. Participants will *vote for the Focus Areas*.
   6. Wishes can be categorized by project(s) and by “type” (bug, feature
   request, optimization, and so on).
   7. We’ll eventually have a *dashboard* which will allow users to search
   for wishes and filter by project or wish type.

For the full update and screenshots, please read our announcement on Meta
<https://meta.wikimedia.org/wiki/Community_Wishlist_Survey/Future_Of_The_Wishlist/Preview_of_the_New_Wishlist>
.
Best regards,
Sandister Tei <https://meta.wikimedia.org/wiki/User:STei_(WMF)> (she/her)
Movement Communications Specialist (Community Tech & Trust and Safety)
Wikimedia Foundation
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Redesigning the Community Wishlist Survey

2024-03-05 Thread Sandister Tei
Hello everyone, how are you? I'm writing because of your interest in
Wikimedia technical development.


Community Tech, the team that organizes the Community Wishlist Survey
<https://meta.wikimedia.org/wiki/Community_Wishlist_Survey>, has been
tasked with designing a new Wishlist that is continuous, better integrated
into annual planning, and able to efficiently handle the intake of
community-submitted technical problems.


Our goal is to pilot this new Wishlist in July 2024.


We invite you to have a chat with Jack Wheeler
<https://en.wikipedia.org/wiki/User:JWheeler-WMF>, who has recently joined
the Wikimedia Foundation as the Lead Community Tech Manager and is
responsible for the Future of the Wishlist
<https://meta.wikimedia.org/wiki/Community_Wishlist_Survey/Future_Of_The_Wishlist>
.


Jack would like to have a conversation with you, to get input for the
design of the new survey, starting with how to define a "Wish."


Community Tech would appreciate you chatting with him; your input will be
invaluable.


You can check out Jack's first message to the community
<https://meta.wikimedia.org/wiki/Community_Wishlist_Survey/Future_Of_The_Wishlist/Conversations>,
where you can find a link to book time to talk with him, or you can share
your ideas with him directly on the talk page.

Best regards,
Sandister Tei <https://meta.wikimedia.org/wiki/User:STei_(WMF)> (she/her)
Community Relations Specialist (Community Tech & Trust and Safety)
Wikimedia Foundation
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

[Wikitech-l] Noteworthy update on IP masking project

2022-03-29 Thread Sandister Tei
In case you have missed it, the Wikimedia Foundation (WMF) has been laying
the groundwork for IP Masking for some time now. We have had to do this due
to changing norms and regulations on the internet
<https://meta.wikimedia.org/wiki/IP_Editing:_Privacy_Enhancement_and_Abuse_Mitigation#What_is_IP_Masking_and_why_is_the_Wikimedia_Foundation_masking_IPs?>
.


Once IPs are masked, the addresses of editors who don't log in on Wikimedia
projects will be fully or partially hidden. Those who need IP access
<https://meta.wikimedia.org/wiki/IP_Editing:_Privacy_Enhancement_and_Abuse_Mitigation#Q:_Following_implementation_of_IP_Masking,_who_will_be_able_to_see_IP_addresses?>
to fight spam, vandalism, harassment and disinformation will still be able
to view them.


We have recently announced the implementation strategy
<https://meta.wikimedia.org/wiki/IP_Editing:_Privacy_Enhancement_and_Abuse_Mitigation#Implementation_Strategy_and_next_steps_(25_February_2022)>
and next steps for the project.


If you want more information about the changes, please read and watch the IP
Masking project page
<https://meta.wikimedia.org/wiki/IP_Editing:_Privacy_Enhancement_and_Abuse_Mitigation>
.
––
Sandister Tei (she/her)
Community Relations Specialist (Anti-Harassment & Trust and Safety)
meta.wikimedia.org/wiki/User:STei_(WMF)
___
Wikitech-l mailing list -- wikitech-l@lists.wikimedia.org
To unsubscribe send an email to wikitech-l-le...@lists.wikimedia.org
https://lists.wikimedia.org/postorius/lists/wikitech-l.lists.wikimedia.org/

Re: [Wikitech-l] Historical use of latin1 fields in MySQL

2017-05-02 Thread Tei
On 2 May 2017 at 19:10, Mark Clements (HappyDog) wrote:

> Hi all,
>
> I seem to recall that a long, long time ago MediaWiki was using UTF-8
> internally but storing the data in 'latin1' fields in MySQL.
>

I remember an old thread from 2009.

https://lists.gt.net/wiki/wikitech/160875




-- 
--
ℱin del ℳensaje.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Brion's role change within WMF

2015-01-20 Thread Tei
Haha.. great.

Welcome back, Brion.

I am just a lurker on this mailing list, but your posts are always so full of
energy. I love it if that energy is the energy of MediaWiki.

I hope that energy inspires others toward not just the day-to-day process but
the crazy things you mention. I remember when the idea of a wiki (a
website any visitor can edit?!) was totally freakish, alien to everyone. I
hope projects like MediaWiki still maintain an open door to things that can
freak us out again.


-- 
--
ℱin del ℳensaje.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Long term strategy for math on wikipedia

2013-07-26 Thread tei''
some musings,

Why is the exact size needed? Can't the formula be put inside a box
big enough, so that 90% of the time the browser doesn't have to re-layout
the whole page?
Is there another re-layout happening here? Maybe MathJax builds the
formula incrementally and the browser tries to render every iteration?
If that were the case, it would be solvable with visibility: hidden;
<slow render magic happens here> visibility: visible;
What DOM is required? All of it? .cloneNode is very fast at cloning
DOM trees. Code can operate on a clone, then copy the result back. If
the clone is not attached to the page, maybe nothing will be rendered
until you put your new tree back in.

.cloneNode is faster than WeepingAngels :D
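
A sketch of that clone-then-swap idea; the 'formula' container id is made
up for illustration:

var live = document.getElementById( 'formula' );   // hypothetical container
var clone = live.cloneNode( true );                // deep copy, detached from the page
// ...expensive incremental DOM building happens on `clone` here,
// without triggering layout on every step...
live.parentNode.replaceChild( clone, live );       // swapped in with a single reflow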

On 26 July 2013 04:04, Peter Krautzberger
peter.krautzber...@mathjax.org wrote:
 Ok this is getting off-topic -- sorry -- but glad you like it :)
 Unfortunately, webworker isn't an option, we need the DOM. Using the PNG
 for size is a nice idea, but only saves one measurement; all others occur
 within the equation. IIRC, the basic problem is that browsers are not
 reliable enough when it comes to em-to-pixel conversion; the only way to
 get those correctly is to layout & measure -- recursively, of course,
 building the equation bottom up. But you should talk to our devs if you
 need more information on MathJax internals.

 Peter.


 On Thu, Jul 25, 2013 at 7:40 AM, tei'' oscar.vi...@gmail.com wrote:

 On 24 July 2013 21:12, Peter Krautzberger
 peter.krautzber...@mathjax.org wrote:
 ..
  @Oscar that's the idea of bug
  48036 <https://bugzilla.wikimedia.org/show_bug.cgi?id=48036>. To
  test the user experience try this
  bookmarklet <https://gist.github.com/pkra/5500316>
 

 :-O

 This is pretty. And if it still affects the browser (small freezes when
 the user is scrolling), maybe the javascript can be moved to an iframe
 or a web worker, so it doesn't run on the main javascript thread.
 About re-layouts, can't smart use of min-width / min-height avoid
 that? You already have the size of the png as a reference.




-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Long term strategy for math on wikipedia

2013-07-25 Thread tei''
On 24 July 2013 21:12, Peter Krautzberger
peter.krautzber...@mathjax.org wrote:
..
 @Oscar that's the idea of bug
 48036 <https://bugzilla.wikimedia.org/show_bug.cgi?id=48036>. To
 test the user experience try this
 bookmarklet <https://gist.github.com/pkra/5500316>


:-O

This is pretty. And if it still affects the browser (small freezes when
the user is scrolling), maybe the javascript can be moved to an iframe
or a web worker, so it doesn't run on the main javascript thread.
About re-layouts, can't smart use of min-width / min-height avoid
that? You already have the size of the png as a reference.

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Long term strategy for math on wikipedia

2013-07-24 Thread tei''
On 23 July 2013 11:20, Derk-Jan Hartman d.j.hartman+wmf...@gmail.com wrote:
 I'm wondering if the lack of reactions so far is positive or negative.

 It's negative, it shows that few people have the confidence to think they
 have something worthwhile to contribute on this niche area. :(


I read this as an invitation for more random feedback, even if it is not
100% worthwhile :P

So here's something, a plan:

Two styles of rendering. Formulas that are simple and embedded in a
paragraph are rendered as HTML with a magical MathML-to-HTML converter.
Complex formulas are rendered as a PNG image, and a script autoloads
something better if the user clicks on the image.
The user can opt in to render complex formulas as MathML, or to render
them to canvas with js automatically.


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] scaled media (thumbs) as *temporary* files, not stored forever

2012-09-03 Thread Tei
On 31 August 2012 19:45, Isarra Yos zhoris...@gmail.com wrote:
 On 31/08/2012 08:57, Brion Vibber wrote:

 Heck yes. Generate some standard sizes at upload time and let the browser
 scale if a funny size is demanded. Modern browsers scale photos nicely, not
 like the nearest-neighbor ugliness from 2002.


 As a graphist, I must say this does not seem like a good idea. Only
 rendering certain sizes and having the browser then scale the weird ones
 will still result in fuzzy images, because no matter how good the renderer,
 every time a bitmap image is scaled down, sharpness is lost. This is part of
 why there is so much emphasis placed on using vectors even in a static
 environment - with those, the first scale down is also avoided, and there is
 a very visible difference in clarity even there. But while only rendering
 certain sizes and then having the browser scale those would defeat that
 purpose, having to scale down bitmaps twice would look even worse,
 regardless of subject.

 --
 -— Isarra

A possible scenario where human intervention is always needed is
generating icons from svg files that draw flags or... icons.

A 16x16-pixel USA flag rendered from an SVG by some naive rescaling
(mipmapping?) will look worse than wrong. Perhaps you still want to
have this icon generated from, or inspired by, the SVG file.
In videogames this sort of problem is sometimes solved by having all
the scaled versions precalculated in a single file: mipmaps.

http://en.wikipedia.org/wiki/Mipmap

Modern videogames use other, more advanced techniques, but the beauty
of mipmaps is that they can be artist-edited (perhaps the artist can edit
the 16x16-pixel version so it still makes sense).



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] $( '<div>' ) vs. $( '<div />' ) in coding conventions (and methods for building DOMs with jQuery)

2012-08-29 Thread Tei
I will rescue two facts listed in this thread about using jquery to
create tags.

[quote][1]
Basically $( '<span class="foo">' ) will break completely in IE7/IE8.

[quote][2]
It's important to note however that IE requires that input and button tags
are created with a type (if they are going to have a specific one):

$( '<input type="password">', { 'class': 'example' } );






-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] $( '<div>' ) vs. $( '<div />' ) in coding conventions

2012-08-28 Thread Tei
On 28 August 2012 09:57, Tim Starling tstarl...@wikimedia.org wrote:
 On 28/08/12 13:04, Daniel Friesen wrote:
  I still can't believe the high-level jQuery answer after all these
  years to "Select a div with an id provided by the user" is "Use `$(
  'div#' + userInput )` and hope there are no special characters. Or
  find some way to escape it yourself." when low-level dom can just
  query by ID and there is no reason for jQuery to force people to
  express everything in queries they parse when they could actually
  declare portions of a query with object notations.

 I share your reservations about jQuery, I voiced them at the time it
 was introduced to MediaWiki. I trolled the proponents by musing about
 how awesome jQuery would be if the selector engine (Sizzle) were removed.

 Personally, I would use document.getElementById() to do that. It's
 standard, and it's faster and more secure. More complex selectors
 derived from user input can be replaced with jQuery.filter() etc. with
 no loss of performance.


The selector thing is a query language, and very powerful / abusable.
Pretty much like SQL or any other 4th-generation programming language.

It is high level, so you always have the risk of people doing something
weird, but normally it allows jQuery programs to do in 3 lines of code
what would otherwise take 30 or 50 lines. These 3 lines are more likely
to be bug free, and show intention better than the equivalent low-level
javascript code. The Javascript language is hard in a non-obvious way,
and this help is necessary.
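
A minimal sketch of the difference being discussed; `userInput` here is a
hypothetical id coming from the user, not from the original thread:

var userInput = 'foo.bar';                    // '.' changes the selector's meaning

var literal = document.getElementById( userInput );  // literal lookup, or null

var risky = $( 'div#' + userInput );          // parsed as div#foo.bar (id + class!)

// escaping CSS metacharacters before interpolating restores the literal meaning
var escaped = userInput.replace( /([^\w-])/g, '\\$1' );
var safe = $( 'div#' + escaped );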




-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] best way to clean up the wiki from 3Gb of spam

2012-08-24 Thread Tei
On 24 August 2012 09:27, Yury Katkov katkov.ju...@gmail.com wrote:
 Hi everyone!

 I have found myself in the following situation several times: I
 created a wiki for some event or small project, everything works fine
 and after the event or project was done - nobody have seen this wiki
 for several months and does nothing on it. After several months
 somebody needs the wiki once again and realizes that the wiki database
 now have 3 Gb of text spam. Suppose that there is no back-up or
 rollback option in a wiki hosting. So here is the question: how to


No backups, no way to roll back to a date? That's bad.
You could start a wiki from scratch and manually copy over whatever
was good from the old one. Maybe share this task with a few selected
volunteers.
Start the new one without anonymous edits, with a sexy theme and a huge
campaign to attract people: "Not like the old wiki! This one is actually
good and maintained!"
Maybe the lack of maintenance contributed to the decay. I wonder if a
wiki without enough contributors is worth existing, like a garden
without anyone to cut the grass.



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Lua deployed to www.mediawiki.org

2012-08-23 Thread Tei
On 23 August 2012 08:18, Siebrand Mazeland (WMF)
smazel...@wikimedia.org wrote:
 On Thu, Aug 23, 2012 at 4:00 AM, Tim Starling tstarl...@wikimedia.org wrote:
 But Lua is so fast compared to
 wikitext that our Lua developers will have to exercise a lot of
 creativity to find applications that will exceed the performance limits.

 Famous words, kept in the archives forever :).

How will they know? Is there some way to get feedback about this, like
"rendering time: 3.8 seconds"?

...

I read this mailing list for pure entertainment. I am trying to imagine
what cool things lua will allow, but it seems more an improvement in
speed. Speed will allow cool things to happen, so it is more of an
indirect improvement (and probably a huge one). Speed is sexy but not
very entertaining at first; it seems an enabler.

So.. This is great news!, and a herald of other awesome things to come :DDD



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikimedians are rightfully wary

2012-08-23 Thread Tei
Well. duh.

A community will always give incremental features. This is the bazaar
thing, where you can find everything, and it is not a bad thing if the
architecture supports a bazaar (like a command line).
When you are actually building a cathedral, you need a central entity
that takes all the input, and then proceeds to do whatever it damn
pleases.

PR has a role here, as you can tell people "we are taking all the
input, studying it, and designing a system with the best ideas that
make the most sense", actually reserving for yourself the role of
design, not acting as a proxy for others.

http://theoatmeal.com/comics/design_hell

The bazaar is not always the solution.

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Captcha for non-English speakers II

2012-07-31 Thread Tei
Sounds like captchas are something you want to make plug and play,
using some external project that is evolving quickly, to stay on the
winning side of an arms race.
It also sounds like captchas are something you want handled by locals,
to avoid the situation of a chinese wiki with an english captcha.

It is pretty much proven that small self-made captchas won't do for
something like mediawiki, because attackers target it and it is a huge,
delicious target.


As experience with AI and computing power grows, perhaps this will
become a lost battle*. The other options are: anons can't edit
articles, ...anon edits are invisible and waiting for moderation,
...anon changes are sanitized in some way (perhaps not allowing new
external links / modifying links).


* I can imagine the ability of bots to understand captchas will grow,
but not the ability of humans.


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Responsive web design

2012-07-27 Thread Tei
On 27 July 2012 12:53, Peter Coombe thewub.w...@googlemail.com wrote:
 This is one of the aims of the planned 'Athena' skin:
 https://www.mediawiki.org/wiki/Athena

 Pete / the wub

Very interesting. Looks very good already.


On 27 July 2012 12:01, John Elliot j...@jj5.net wrote:
 Are there any initiatives in the MediaWiki community for a MediaWiki
 theme that supports 'responsive design' [1] -- where content is properly
 laid out in an accessible form on all manner of devices including
 desktops and smart phones?

 [1] http://www.alistapart.com/articles/responsive-web-design/


Ouch. This website is aligned to the left, and designed for a fixed
width of 1024px.

*reads content*

Yet again, we remember that HTML is liquid. It is supposed to be; when it
is made fixed, it is because of compromises.


On 27 July 2012 12:08, David Gerard dger...@gmail.com wrote:
 On 27 July 2012 11:01, John Elliot j...@jj5.net wrote:

 Are there any initiatives in the MediaWiki community for a MediaWiki
 theme that supports 'responsive design' [1] -- where content is properly
 laid out in an accessible form on all manner of devices including
 desktops and smart phones?
 [1] http://www.alistapart.com/articles/responsive-web-design/


 http://blog.tommorris.org/post/21073443312/introducing-awfulness-js

 HTML 3.2 is looking better every day ...

Infinite scrolling is not always evil.

Suppose you need to show a PDF document as a list of 200 high-resolution
JPG files. You can make the page height the height it would have if all
the jpgs were downloaded, but only download the JPG the user is
looking at.
If you try the naive approach and create html that links all the 700KB
jpg files with img tags, the page will choke for most users, because it
will ask for too much bandwidth too quickly. And maybe the users only
need to look at the first page, to confirm it is interesting (maybe they
are books, and it is the wrong book, or in the wrong language). See the
sketch after the link below.

http://es.scribd.com/doc/6457786/Godel-Escher-Bach-by-Douglas-R-Hofstadter-
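
A minimal lazy-loading sketch of that idea, assuming placeholder divs
(class 'lazy-page', made up here) sized to the known jpg dimensions and
carrying the real URL in a data-src attribute:

// download a page's jpg only when its placeholder scrolls near the viewport
$( window ).on( 'scroll', function () {
    $( '.lazy-page' ).each( function () {
        var top = this.getBoundingClientRect().top;
        if ( top < window.innerHeight + 500 && !this.firstChild ) {
            $( '<img>', { src: $( this ).data( 'src' ) } ).appendTo( this );
        }
    } );
} );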

By making a document become a computer program, we probably lose the
ability to guarantee it will finish rendering before the end of the
existence of the universe. But it is often a good tradeoff.


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Sorry if inconvenient : SVG-to-png rendering on Wikimedia

2012-06-28 Thread Tei
On 27 June 2012 14:38, Petr Kadlec petr.kad...@gmail.com wrote:
 On 27 June 2012 13:33, Achim Flammenkamp ac...@uni-bielefeld.de wrote:
 2) I wonder why the SVG-graphic devolpers use such an improper(?) 
 rendering-
 philosophy. All these articfacts on the Iran-flag would have been avoided, if
 the rendering is divided up logical into two steps: Firstly render the 
 SVG-code
 to the size given in this SVG-code (or an integer multiple for large final 
 sizes) to a pixel (discret) map.

I can imagine how this can be a problem. Thin lines that in your
original render have 1-pixel width will be removed, or will suffer strong
antialiasing, when scaled down.
Re-rendering an image at a smaller resolution will result in a
pixel-perfect image (except thick eyebrows), where a 1-pixel width is
still 1 pixel.

(not related)
http://www.imagemagick.org/script/command-line-options.php?#liquid-rescale


 Technical remark: The width/height specified in the SVG file is a
 generic “length” value [1], which can be in other units than pixels
 [2] and also can take non-integral values. [3]


...and the situation is even more complex than that. So, more things
can go wrong.



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Inline styles trouble on the mobile site

2012-06-28 Thread Tei
You can always have a line at the bottom of a mobile page with "Does
the page render correctly?", and somehow use it to flag pages that
render incorrectly. Wooot, perhaps this flagging may even save the
user agent of the visitor using the link.
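
A hypothetical sketch of such a link; the feedback endpoint and the footer
container are made up for illustration:

// post the page name and user agent when a reader flags a rendering problem
$( '<a href="#">Report a rendering problem</a>' )
    .appendTo( '#footer' )                      // assumed footer container
    .on( 'click', function ( e ) {
        e.preventDefault();
        $.post( '/w/render-feedback', {         // hypothetical endpoint
            page: mw.config.get( 'wgPageName' ),
            ua: navigator.userAgent
        } );
    } );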



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Inline styles trouble on the mobile site

2012-06-28 Thread Tei
On 28 June 2012 16:37, Martijn Hoekstra martijnhoeks...@gmail.com wrote:
 On Thu, Jun 28, 2012 at 4:32 PM, Tei oscar.vi...@gmail.com wrote:
  You can always have a line at the bottom of a mobile page with "Does
  the page render correctly?", and somehow use it to flag pages that
  render incorrectly. Wooot, perhaps this flagging may even save the
  user agent of the visitor using the link.


 You and what privacy policy/Access to nonpublic data policy are going
 to process that user agent?

Oops... :-O

I have no idea whatsoever. (Note: I will not play the 'I was just
making a suggestion' card here.)

Maybe you can store information this way:
path_page  |  browser |  browser version |  number of reports

So if two persons with the exact same user agent report on page Y, the
result may look like this (not actually a log, but 4 fields in a
database):
page/Y  |  FooBrowser | 3.21 |   2

Will this make the law gods angry?

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] So what's up with video ingestion?

2012-06-20 Thread Tei
semi-offtopic comment:


ffmpeg is awesome,  somebody should send a cake to these people, or something.

*sends love his way*

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Inline styles trouble on the mobile site

2012-05-11 Thread Tei
http://www.stuffandnonsense.co.uk/archives/images/specificitywars-05v2.jpg

I think there's a limitation to that: .nomobile .darthvader
.darthvader will not work as expected (I think).


On 11 May 2012 10:24, Ryan Kaldari rkald...@wikimedia.org wrote:
 What about this idea: We could introduce a new CSS class called 'nomobile'
 that functioned similarly to 'noprint' — any element set to this class would
 be hidden completely on mobile devices. If someone noticed a problem with a
 specific template on mobile devices, they could either fix the template or
 set it to 'nomobile'. This would make template creators aware of the problem
 and give them an incentive to fix their inline styles.

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] What's the best place to do post-upload processing on a file? Etc.

2012-05-04 Thread Tei
I am just a random user lurking on the mailing list, not a mw dev, but I
wonder why you don't look at the SVG handling already in mediawiki.
What you are doing is rendering, with one added dimension :D
http://www.mediawiki.org/wiki/Manual:Configuration_settings#SVG
This stuff has to do things in an established way that you can just
extend, or whose strategy you can replicate.

This is a super-casual comment. Wait for what the mw devs say.

On 4 May 2012 13:58, emw emw.w...@gmail.com wrote:
 Hi all,

 For a MediaWiki extension I'm working on (see
 http://lists.wikimedia.org/pipermail/wikitech-l/2012-April/060254.html), an
 effectively plain-text file will need to be converted into a static image.
 I've got a set of scripts that does that, but it takes my medium-grade
 consumer laptop about 30 seconds to convert the plain-text file into a
 ray-traced static image.



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Lua: return versus print

2012-04-13 Thread Tei
On 13 April 2012 13:45, Tim Starling tstarl...@wikimedia.org wrote:

 At the moment, in the Lua support extension we have been developing,
 wikitext is output to the wiki via the return value of a function. For
 example in wikitext you would have:

 {{#invoke:MyModule|myFunction}}

 Then in [[Module:MyModule]]:

 local p = {}
 function p.myFunction()
   return 'Hello, world!'
 end
 return p

..


 Does anyone have any thoughts on return versus print generally? Are
 there other reasons we would choose one over the other?

 -- Tim Starling


Functions that return a value are chainable. I suppose this is true in
Lua too.

$int = function($txt){
  return intval($txt, 10);
};

$hats = function($numHats){
  return "We have $numHats excellent hats!";
};

echo $hats( $int("4123,234") );

Perhaps this makes functions that return a string slightly better.

-- 
--
ℱin del ℳensaje.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Interwiki talk page posting

2012-04-12 Thread Tei
On 12 April 2012 00:00, Platonides platoni...@gmail.com wrote:
 On 10/04/12 23:52, MZMcBride wrote:
 Now, whether for your purposes using the API is the best option, I don't
 know. But for my purposes, the API has been wonderful. The only major hiccup
 I hit was a few weeks ago when database lag spiked to a crazy level and the
 script couldn't get past it.

 MZMcBride

 Well, that's actually a *feature*. If the db lag is so high, it makes
 sense that the bots defer editing until things get better. :)


<small>
A possible small optimization: how is that API? Something like this
could work for appending text to a page. Appending text is much faster
than retrieving a page, appending locally, then sending the new text
again for saving. A hedged sketch below.
</small>
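
A hedged sketch of such a call; the MediaWiki action API does have an
appendtext parameter on action=edit, while the page title here is
illustrative and `editToken` is assumed to have been fetched beforehand:

// append a new section in one request, without fetching the page first
$.post( '/w/api.php', {
    action: 'edit',
    title: 'Talk:Example',                      // hypothetical target page
    appendtext: '\n\n== Hello ==\nA comment. ~~~~',
    token: editToken,
    format: 'json'
}, function ( res ) {
    // res.edit.result is 'Success' when the append went through
} );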








-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Save to userspace

2012-04-10 Thread Tei
On 10 April 2012 10:57, Petr Bena benap...@gmail.com wrote:
 Hi, this is a proposal for a new feature to mediawiki core or a new
 extension (I would myself prefer an extension, but given that the
 development process seems to be broken, per my previous email, it's
 likely not possible for a non-wmf dev to have it deployed)

 I think many editors had the problem that they were editing a page,
 but had to leave computer for some reason before they finished and
 needed to save the work, the only way to handle this is to copy the
 source code and save it somewhere, but what if you aren't going to use
 the same computer?
..
 Special:MyPage/Draft_$ArticleName_$Date

 Users could change it to some other name of course before saving. Or
 even they could create a template for name in their advanced
 preferences. This could make it simpler to save work in progress for
 newbies.


Or you could have a magic reserved space for users that works
exactly like localStorage, but is serverside. And of limited size.

Then implement the thing clientside, so the only serverside thing is
the ability to "save key Foo" and "get key Foo" for the currently
logged-in user.

This is exploitable for more than just drafts. It could be used by
extensions to store user-controlled config. Perhaps to store some
browsing history. Or things you can't imagine yet.

The evil twin of this is that if you implement drafts, it can be abused
to get a serverside localStorage area.
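
A minimal sketch of the idea, assuming a hypothetical serverside
key/value endpoint (userstore.php) that stores values per logged-in user;
the wrapper mirrors the localStorage getItem/setItem shape:

var serverStorage = {
    setItem: function ( key, value, done ) {
        $.post( 'userstore.php', { op: 'set', key: key, value: value }, done );
    },
    getItem: function ( key, done ) {
        $.getJSON( 'userstore.php', { op: 'get', key: key }, function ( res ) {
            done( res.value );   // null when the key was never set
        } );
    }
};

// usage: save a draft under a predictable key
serverStorage.setItem( 'draft:Example_article', 'work-in-progress text...', function () {} );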



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Time to redirect to https by default?

2012-04-02 Thread Tei
Perhaps have a blacklist of countries that are known to break the
privacy of communications, then make https the default for logged-in
users in these countries.

This may help because:

 - It only affects a subgroup of users (the ones from these countries)
 - It only affects a subgroup of that subgroup, the logged-in users (not all)
 - It creates a blacklist of bad countries where citizens are under
surveillance by the government

This perhaps is not feasible if there's no easy way to detect the
country based on the ip.

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikipedia Url Shortner Service

2012-03-26 Thread Tei
On 26 March 2012 08:38, Ariel T. Glenn ar...@wikimedia.org wrote:
..
 As one of those non latin script users, it irks me no end when I see a
 url that is opaque to me soley because it's been url-encoded.  I would
 love a smarter url shortener; there's no reason projects with a latin1
 script should produce human readable urls while the rest of us get to
 guess where links on our projects lead.  Even somewhat weird
 romanization is better than what we have now.

 Ariel

Perhaps this is one of those problems that can't be solved just with computers.

Anyway, it seems there's a system to convert unicode to ascii and back
to the original unicode.
http://en.wikipedia.org/wiki/Punycode

This http://xn--caon-hqa.es.wikipedia.org/  and
http://cañon.es.wikipedia.org/  are the same url.

The ugly face of the problem shows with something like this:  मुखपृष्ठ
 turns into xn--21bu3ao1c3cq5f, and I don't think any human is helped by
reading or writing xn--21bu3ao1c3cq5f. A sketch follows the links below.

http://hi.wikipedia.org/wiki/%E0%A4%AE%E0%A5%81%E0%A4%96%E0%A4%AA%E0%A5%83%E0%A4%B7%E0%A5%8D%E0%A4%A0

http://hi.wikipedia.org/wiki/xn--21bu3ao1c3cq5f

:P
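
A small sketch of the round trip, assuming a punycode implementation such
as the punycode.js library is loaded:

punycode.toASCII( 'cañon.es.wikipedia.org' );   // 'xn--caon-hqa.es.wikipedia.org'
punycode.toUnicode( 'xn--21bu3ao1c3cq5f' );     // 'मुखपृष्ठ'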

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Video codecs and mobile

2012-03-20 Thread Tei
On 20 March 2012 02:24, Brion Vibber br...@pobox.com wrote:
..
 In theory we can produce a configuration with TimedMediaHandler to produce
 both H.264 and Theora/WebM transcodes, bringing Commons media to life for
 mobile users and Apple and Microsoft browser users.

 What do we think about this? What are the pros and cons?

 -- brion

H.264 is a proprietary format, and the owners can start asking for a
tax from encoders, decoders and users.

Wikipedia would not be "the free encyclopedia" if you start asking
people for money to watch videos :D

What perhaps can be done, without hurting the cause of freedom much,
is to do the encoding. So if you upload a proprietary h.264 video,
it is encoded into a free format. You still help H.264 to spread,
so it is not that cool. If the owners of h.264 start asking encoders
for money, you can drop support for the format. Nobody is damaged
(people would have to change their habits about what video format to upload).

The problem can be output. What free format does an iPhone support? If
the reply is "none", then you have to choose: no service at all, or output
video in h.264. Not serving people is bad, and serving h.264 is bad
because you help h.264 gain more ground, hurting the cause of open
formats. There's no good option. If you can serve a video that an
iPhone can watch, even if using some crappy javascript or java format,
you avoid pushing h.264 (so you help the cause of free formats). If
this solution is slow, you create an incentive for the iPhone to support
an open format (which is good again), and everyone can watch all videos
(which is good again).



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Video codecs and mobile

2012-03-20 Thread Tei
On 20 March 2012 16:26, Stephen Bain stephen.b...@gmail.com wrote:
...
 It would seem possible to bake Theora or WebM support into the iOS app
 and direct users there if browsing from mobile Safari. Performance
 would not be so great given it would only be software decoding (this
 is why I was asking if anyone is aware of OpenCL decoders).

This sounds sweet.

Youtube also works somewhat like that: when you want to see a youtube
video link, the youtube app opens. It may make sense for an iOS user to
have a wikipedia video open the wikipedia app. I have heard that
the wikipedia app is rather good, too. :DD


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Video codecs and mobile

2012-03-20 Thread Tei
On 20 March 2012 15:03, Lars Aronsson l...@aronsson.se wrote:
..
 Now, if we were to take this path, how do we flood Wikipedia with
 videos? Live interviews in all biographies of living people?
 If this turns out to be completely unrealistic, because we can't
 produce videos in sufficient quantity, then maybe the time is not
 yet mature for video in Wikipedia.

Perhaps just allow uploading video to articles. All these "Small
City" Wikipedia pages will get a short clip of the main street made
with a mobile phone. The 'technical' quality of the video will not be
very high... until the wiki effect kicks in, and a new, better video
replaces it. Everybody has a camera in their pocket, in 2012.


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] JS Iteration (was: Using prototypical inheritance in dynamically loaded ResourceLoader modules)

2012-03-19 Thread Tei
On 19 March 2012 16:29, Krinkle krinklem...@gmail.com wrote:
 ontopic :) 
..

 If you need non-linear keys, don't create an array!

 <code>
 var myObj = {}; // not []
 myObj.property = value;

 var customProp = getPropName();
 myObj[customProp] = value;

 for ( var key in myObj ) {
  // myObj[key]
 }
 </code>

 -- Krinkle

Suppose you want to use indexOf on an array.
http://www.cjboco.com/blog.cfm/post/indexof-problems-in-internet-explorer/

So, let's use the power of JS to fix JS; let's add indexOf to the prototype:
http://stackoverflow.com/questions/948358/array-prototype-problem

Ooops, Array.prototype is global:

var a = [1,2,3,4,5];
for (var x in a){
// Now indexOf is a part of EVERY array and
// will show up here as a value of 'x'
}
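
A sketch of the usual defenses, for contrast (same array as above):

var a = [1,2,3,4,5];
for (var x in a){
    if ( a.hasOwnProperty( x ) ) {
        // only own indices get here, even if Array.prototype was extended
    }
}

// or avoid for-in on arrays entirely; a counting loop (or $.each / _.each)
// never sees prototype additions
for (var i = 0; i < a.length; i++){
    // a[i]
}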

So what do you do? You use a library for that.
http://documentcloud.github.com/underscore/
http://api.jquery.com/jQuery.inArray/

$.each, _.each
$.inArray, _.include
and a lot of other nice tools that make you happy.

People seem to think functional programming is not something to
avoid, but something that can be useful.

-- 
--
ℱin del ℳensaje.







postdata:
//TODO comment something about this
 var i=0; while(i<10) { print "hello"; i++ }
 for(var i=0; i<10; i++){ print "hello" }
 10.times(function(){ print "hello" });

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] PDF Download

2012-03-15 Thread Tei
For everyone working on this type of thing: Thanks!, you guys and
gals are my personal heroes :D

Countless PHP apps need to create PDF files one way or another. It is a
serious pain in the ... Or usually is. I never tried TCPDF.

Will these PDF files in UTF-8 urdu/other be readable? It will be
funny if the people with computers configured for urdu/other don't
have a unicode font with urdu glyphs, and use normal fonts (not
unicode aware) to write and read texts.


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Opengrok for Mediawiki code

2012-03-01 Thread Tei
Here's a tool to explore a code base, desktop only (no nice server here).

http://sourcenav.berlios.de/

It's based on the old red-hat navigator, ...so it is very old and looks
like something from 1985, but it works (I just tested: download,
unzip, ./configure && make && make install).

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Language templates missing from Mediawiki dumps

2012-02-07 Thread Tei
I can see that you asked the same question 2 years ago and nobody helped
you :( on that point.

If I remember correctly, recreating the database is uncharted waters just
now, and there's a call for anyone trying to do it to document the
problems and solutions, to help people trying it in the future.

Hope somebody can help you :D, or gives you some workaround, patch or idea.

On 21 May 2010 21:35, Nathan Day nathanryan...@gmail.com wrote:

 

 2. For the articles that do show up, the templates are not transcluding.
 For
 an article that has Template:Hat for example, where I can see that the page
 exists in the db, mediawiki is acting like the template doesn't exist.

 If anyone has any experience with these kinds of problems or importing
 database dumps in general, your help would be much appreciated. Thank you!

 Nathan Day


On 7 February 2012 17:51, Nathan Day nathanryan...@gmail.com wrote:

 Hi Mediawiki users,

 I have experienced a repeatable problem in which after importing a
 mediawiki database dump, I need to Export the language templates from
 the live site and import them into my local instance. I use MWDumper
 to build an SQL file which is read into my local MySql instance. The
 problem is that the language templates articles are not included in
 the dump.

 I am referring to all templates articles that have the following form:

 Template:EN

 EN in this could be any language code.

 This is not limited any particular Mediawiki dump, they all seem to
 have this problem. That being said, it is not much to simply import
 the missing templates manually, I was wondering if anyone had
 experienced this problem or has a quicker solution than the manual
 import/export.

 Best Regards,
 Nathan Day



-- 
--
ℱin del ℳensaje.
___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] JavaScript profiling

2012-02-02 Thread Tei
On 31 January 2012 18:09, Daniel Friesen li...@nadir-seen-fire.com wrote:
 On Tue, 31 Jan 2012 08:36:02 -0800, Tei oscar.vi...@gmail.com wrote:

 *Cough* some random article about making js that is not abusing the
 slower parts of the language
 http://www.bcherry.net/talks/js-better-faster  *Cough*

 Bah, that frankly looks worthless.

 It goes all over extremely micro optimizations. The kind of thing that'll
 have almost no effect and even less effect as JS engines get better.

This is wishful thinking. I mostly agree.

*remembers something* Opera seems to be working on a profiler for CSS.
Chrome seems to be working on that too. It will be super sweet to press
F12 in Chrome and see what the slowest selectors / most expensive
rendering things are. Maybe discover that border radius is the most
expensive, about 4 times more than opacity (??).



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] JavaScript profiling

2012-01-31 Thread Tei
*Cough* some random article about making js that is not abusing the
slower parts of the language
http://www.bcherry.net/talks/js-better-faster  *Cough*

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Escaping messages

2012-01-25 Thread Tei
On 24 January 2012 15:57, Daniel Friesen li...@nadir-seen-fire.com wrote:
 On Tue, 24 Jan 2012 06:16:48 -0800, Tei oscar.vi...@gmail.com wrote:

 On 24 January 2012 06:59, Daniel Friesen li...@nadir-seen-fire.com
 wrote:
 ..

 Don't delude yourself into thinking that you can easily blacklist the
 elements that would run a script.
 http://ha.ckers.org/xss.html


 What about using textNodes?

 http://stackoverflow.com/questions/476821/is-a-dom-text-node-guaranteed-to-not-be-interpreted-as-html


 Then it's just text.
 That's about as safe as throwing everything through htmlspecialchars, it's
 fine.

 I'm saying that you can't blacklist things. ie: You can't run a message
 through a jquery message filter, try to strip out script tags from the dom
 and then insert it thinking that you've removed all the XSS vectors.


People on the internet suggest something like $( '<div/>' ).text(
"<script>alert('lets do evil!')</script>" ).html();


postdata:
Some random code I just wrote.

var Stringbuilder = (function(){
  var text = [];
  return {
    add: function(txt){ text.push(txt); return this; },
    encode: function(txt){ text.push( $( '<div/>' ).text( txt ).html() ); return this; },
    toString: function() { return text.join(''); } // join('') -- the default separator is ','
  };
});

var str = Stringbuilder();

str.add('<table>')
 . add('<tr>')
 . add('<td>')
 . encode("<script>alert('lets do evil!')</script>")
 . add('</td>')
 . add('</tr>')
 . add('</table>');

str.toString();















-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Decentralized data center

2012-01-25 Thread Tei
If I remember correctly, a backup of the database is in a public folder,
so anyone can download it. Some people do, to run analysis and do
some interesting science/tech projects. If the FBI and Interpol
put the wikimedia people in jail (that's not going to happen), anybody
in a country without an extradition treaty with the USA can recreate it.

So, the wikimedia organization can be killed, but the data is public
and mirrored enough that it probably can't be killed as easily.
Random link (not related):
https://tahoe-lafs.org/trac/tahoe-lafs

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Escaping messages

2012-01-24 Thread Tei
On 24 January 2012 06:59, Daniel Friesen li...@nadir-seen-fire.com wrote:
..
 Don't delude yourself into thinking that you can easily blacklist the
 elements that would run a script.
 http://ha.ckers.org/xss.html


What about using textNodes?
http://stackoverflow.com/questions/476821/is-a-dom-text-node-guaranteed-to-not-be-interpreted-as-html
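
For illustration, a minimal sketch (the 'target' element is hypothetical):

// a text node renders markup as literal characters, so the payload below
// shows up on screen as text and never executes
var payload = "<script>alert('lets do evil!')</script>";
var node = document.createTextNode( payload );
document.getElementById( 'target' ).appendChild( node );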


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: wikicaptcha on GitHub

2012-01-19 Thread Tei
On 19 January 2012 11:19, Cristian Consonni kikkocrist...@gmail.com wrote:
 2012/1/15 Nikola Smolenski smole...@eunet.rs:
 Дана Wednesday 11 January 2012 18:19:14 Cristian Consonni написа:
 However, to my knowledge there is not a single OCR that exports this data, 
 nor
 is there a standard format for it. If an open source OCR could be modified to
 do this, then it would be easy to inject data retreieved from captchas back
 into OCR-ed text. And it could be used for so much more :)

 I know (but I am not proficient in their use) at least two open source
 OCR softwares:
 * OCRopus[1a][1b], by the German Research Center for Artificial
 Intelligence, sponsored by Google
 * Tesseract[2a][2b], started by HP in far 1995, now Google-sponsored
 (yeah, this one too!) [note: as far as I know OCRopus used tesserect
 as an engine for OCR]
 * GOCR/JOCR

 I think much can be done.

 Cristian

More related tools, the documentcloud project.

Raw Engine  = Tools
http://documentcloud.github.com/docsplit/

Tools = Human Documents
https://github.com/documentcloud/document-viewer

Human Documents = Beautiful viewers
http://www.pbs.org/newshour/rundown/documents/mark-twain-concerning-the-interview.html
http://www.commercialappeal.com/withers-exposed/pages-from-foia-reveal-withers-as-informant/#document/p2/a2431

Using tesseract alone is too much work. Tesseract wants tiff files in
a particular format and DPI. Humans want stuff in an easy-to-use
format; perhaps click on an image and get the text directly behind the
mouse arrow, as text that can be copied and pasted.

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] SOPA banner implementation

2012-01-17 Thread Tei
On 16 January 2012 02:31, Erik Moeller e...@wikimedia.org wrote:
 On Sat, Jan 14, 2012 at 11:33 AM, MZMcBride z...@mzmcbride.com wrote:
 The question becomes: how will this be implemented? I assume some kind of
 CentralNotice banner with some CSS absolute positioning or something? Is
 that right? Or will it be part of a separate extension?

 What's currently under primary consideration is a CN implementation
 geo-located to US visitors. First early prototype here:

 http://test.wikipedia.org/?banner=blackout


*cough*

The USA can take over .com hostnames from other countries.

Then black out the front page of these websites with this image:
http://rojadirecta.com/IPRC_Seized_2011_02_NY.gif
http://rojadirecta.org/IPRC_Seized_2011_02_NY.gif

So SOPA is not just a US visitors' concern, but a worldwide one.

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] proposed tech conference anti-harassment policy

2012-01-12 Thread Tei
On 12 January 2012 17:09, Chad innocentkil...@gmail.com wrote:
 On Thu, Jan 12, 2012 at 11:04 AM, David Gerard dger...@gmail.com wrote:
 On 12 January 2012 16:00, Sumana Harihareswara suma...@wikimedia.org wrote:

 The Wikimedia Foundation is dedicated to a harassment-free conference
 experience for everyone.  I'm proposing a fairly short and standard
 anti-harassment policy of the type that's becoming best practice for
 tech conferences and hackathons.
 Draft: https://www.mediawiki.org/wiki/User:Sumanah/AHP


 Nice one :-) A candidate for WMF and wider Wikimedia events in general, too.


 I don't see anything preventing harassment over choice of DBMS ;-)

 All kidding aside, this looks great. Agree with David wholeheartedly
 here.


It is amazing how this whole wikimedia thing has changed since 2002.
I don't understand half the messages on this mailing list. This is
really a complex and professional organization that has advanced a
lot in the past years. I lurk on the mailing list, trying to find a way
to help, but it is really hard, because there are top-notch
professionals here doing things the best way possible. You guys
rock.



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Mediawiki 2.0

2012-01-04 Thread Tei
On 7 December 2011 10:33, Dmitriy Sintsov ques...@rambler.ru wrote:
...
 Is Javascript really that good? Some people dislike prototypical
 inheritance, it seems that jQuery prefers to use wrappers instead
 (that's a kind of suboptimal architecture). Also, Google had some
 complains about Javascript flaws (for example primitive types don't
 allow high performance available in Java / C#), suggesting to replace it
 with something else.. Although having common clientside / serverside
 codebase is nice thing, for sure.

Vanilla JavaScript is not good, and anything complex built on top of
vanilla JavaScript will sink if it gets too large or was built
without strong guidelines.
People have started writing frameworks for JavaScript, so nobody has
to write vanilla JavaScript.  Stuff like
http://documentcloud.github.com/backbone/

But I have not seen anything really big written in JavaScript (except
perhaps the obfuscated version of the Gmail JS).

For its fans, JavaScript is Batman, and the Internet is Gotham.  In
the last 5 years JavaScript has promised to solve all the world's
problems, forever, and delivered on it.
The thing with JavaScript is that it has not changed; what is evolving
is how people use it, and that is becoming better and better and better
at an impressive speed.

The true test of JavaScript's power will come in the next few years,
when people start building complex things (using frameworks like
Backbone) and succeed... or not. I think the future of JS is not
written yet, and to be honest I am very excited, because the latest
developments are scary awesome.






-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikimedia Based Open Source Project - Help On Licencing

2011-12-30 Thread Tei
If you take an open source project and develop another open source
project based on it, you are forking.  You must continue with the same
license the original authors used. This is a must, unless you can
convince all the original authors to allow you to use a different
license (hard if there are a lot of them, easy if there is only one).

Forking is fun, but here's a caveat: the original project may move faster
and develop features you may need, but can't have in your fork.
Forking is a good idea if you want to follow different guidelines; not
a good idea if you have the same ideas as the original authors.
Forking is sometimes obligatory when the original authors are stubborn
and too slow to update a project. Sometimes forking is a good idea if
the original project is poisoned by bloat, and you want to take an axe
to all the complexity (perhaps that was the case with Firefox?).
Normally it is not a good idea, and most open source projects have
smartened up and put most features in plugins, so the core stays
small, not bloated, and flexible for everyone's needs.

On 29 December 2011 07:31, Sajith Vimukthi sajith@gmail.com wrote:
 Hello All,

 I am planning to develop a new Open Source project keeping mediawiki as the
 baseline. I wonder how the  liciening  policy of mediawiki will affect my
 intention. Could somebody help me on whether it is possible to develop my
 own app using mediawiki and distribute it as an opensource project?

 Thanks,
 Regards,
 Sajith Vimukthi Weerakoon,
 T .P No : ++94-716102392
              ++94-727102392
 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Standards for communication between antivandalism tools

2011-12-28 Thread Tei
(complete off-topic)

I have been following this interesting thread, and some links connected
to it. Doing that I found this:

http://code.google.com/p/php-console/

It is a Chrome extension plus a PHP class that creates an out-of-band
communication channel from the PHP error generation to the desktop,
opening notifications for the programmer.  It is really cool: if you
have Chrome open, and you refresh a Firefox window that generates
errors, you still get the same notification popups.
I don't know how it works; I suppose the PHP class opens a socket and
does some sort of machine-readable push notification to Chrome.
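
One common way to build such a channel (a guess and a sketch only; NOT
necessarily how php-console actually does it) is to serialize log
records into a custom HTTP response header that a browser extension
then decodes and shows as notifications. The header name is made up:

    <?php
    // Accumulate log records and expose them out-of-band in a response
    // header, leaving the page body untouched.
    function headerLog( $message ) {
        static $records = array();
        $records[] = array( 'time' => date( 'c' ), 'msg' => $message );
        header( 'X-Debug-Log: ' . base64_encode( json_encode( $records ) ) );
    }

    headerLog( 'template rendered in 12 ms' );
    echo 'normal page output';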

HTML5 even has a notification API:
http://www.html5rocks.com/en/tutorials/notifications/quick/

The future is going to be fun :D

(end off-topic)

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] IE6

2011-06-03 Thread Tei
This site is best viewed with Netscape Navigator 2.0 or higher.
Download Netscape Now!
http://web.archive.org/web/19961226001115/www.cae.wisc.edu/~agnew/sp/luna.html

It seems that these messages miss the point of the web: that it should
let everyone browse with whatever is available to them.



On 3 June 2011 20:30, Mark Dilley markwdil...@gmail.com wrote:
 aside from main conversation

 Would it be a good community gesture to join Microsoft in trying to eradicate 
 IE6?

 http://TheIE6Countdown.com

 or to not join them and put up a more general banner

 http://IE6NoMore.com

 and move on?

 /aside from main conversation



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Ratio of interwiki bots' edits

2011-01-14 Thread Tei
On 13 January 2011 13:23, Amir E. Aharoni amir.ahar...@mail.huji.ac.il wrote:
...
 This is a major reason to have the Interlanguage extension finally
 enabled. Besides a MAJOR cleaning-up in Recent Changes in all
 Wikipedias, it will give a somewhat clearer picture of the activity in
 the ones.

The easiest way to make a log look cool and easy to read is to add
colours and icons.

It is somewhat like how adding colour to source code makes the
structure apparent, while monochrome code is harder to read.

Postdata:
I always feel like a horrible person for posting here or making
suggestions. There are procedures, and other mailing lists... this is
the tail of the beast; the head is on the other side. Sorry.



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] How would you disrupt Wikipedia?

2011-01-04 Thread Tei
On 4 January 2011 16:00, Alex Brollo alex.bro...@gmail.com wrote:
 2011/1/4 Roan Kattouw roan.katt...@gmail.com
...

 What a creative use of #lst allows, if it is really an efficient, light
 routine, is to build named variables and arrays of named variables into one
 page; I can't imagine what a good programmer could do with such a powerful
 tool. I'm, as you can imagine, far from a good programmer, nevertheless I
 built easily routines for unbeliavable results. Perhaps, coming back to the
 topic.  a good programmer would disrupt wikipedia using #lst? :-)


Don't use the words good programmers; it sounds like mythic creatures
that never add bugs and can work 24 hours without getting tired.
Haha...

What you seem to need is a special type of person, maybe in academia,
or a student, or someone already working on something that asks for a
lot of performance.  One interested in the intricate details of
optimizing.
The last time I tried to search for something special about PHP (how to
force a garbage collection in old versions of PHP) there were very
few hits on Google, or none.



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] How would you disrupt Wikipedia?

2011-01-01 Thread Tei
On 1 January 2011 03:03, Ryan Kaldari rkald...@wikimedia.org wrote:
 On this note, MTV Networks (my previous job) switched from using
 Mediawiki to Confluence a couple years ago. They mainly cited ease of
 use and Microsoft Office integration as the reasons. Personally I hated
 it, except for the dashboard interface, which was pretty slick. Some
 Wikipedia power-users have similar dashboard style interfaces that they
 have custom built on their User Pages, but I think it would be cool if
 we let people add these sort of interfaces without having to be a
 template-hacker.

 The sort of interface I'm talking about would include stuff like
 community and WikiProject notices and various real-time stats. If you
 were a vandal fighter, you would get a vandalism thermometer, streaming
 incident notices, a recent changes feed, etc. If you were a content
 reviewer, you would get lists of the latest Featured Article and Good
 Article candidates, as well as the latest images nominated for Featured
 Picture Status, and announcements from the Guild of Copyeditors. The
 possibilities are endless.

 Ryan Kaldari


So, what stops people from writing a dashboard wizard that lets people
select a predefined one?



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] How would you disrupt Wikipedia?

2010-12-30 Thread Tei
With open source software, there are people who think “that’s dumb,” there are 
people who think “I want to see it fixed” and there are people who think “I 
can do something about it.” The people at the intersection of all three  
power open source.


A lot of people in open source project Y will not see a problem
with X, X being a huge usability problem that stops a lot of people
from using Y.

So what you have is a lot of people saying I don't see the problem with
that (realistically, a lot of people who will talk about a lot of
things, and not about X), and maybe some of the people who do have
problems with X but don't know how to communicate their problem, or
don't care enough.

Any open source project works like a club.  The club works for the
people who are part of the club, and does the things that the people
of the club enjoy.  If you like chess, you will not join the basketball
club, and the basketball club will probably never run a chess
competition. Or the chess club a basketball competition.

If anything, the problem with open source is that any change is
incremental, and there's a lot of endogamy.

User suggestions are not much better, either. Users often ask for things
that are too hard, or for incremental enhancements that will result in
bloat in the long term.

So really, what you may need is one person who can see the problems
of the newbies, of the devs, and of the people with a huge investment in
the project; who can make long-term decisions; and who has a lot of
influence on the people, while working in the shadows towards that goal.


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] Considering mirroring mediawiki-1.16.0.tar.gz

2010-11-10 Thread Tei
Hi,

Just a suggestion.

Downloading the latest version of MediaWiki seems to take ages at the
moment. Maybe the servers are overloaded. And no mirror is offered.

$  wget http://download.wikimedia.org/mediawiki/1.16/mediawiki-1.16.0.tar.gz
(this takes ages)

I have managed to download it with this:
$ svn checkout http://svn.wikimedia.org/svnroot/mediawiki/branches/REL1_16/phase3

But I suppose that is not the preferred method.

It may be a good idea to provide mirrors for the mediawiki-1.16.0.tar.gz file.

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Collaboration between staff and volunteers: a two-way street

2010-10-15 Thread Tei
What is this?

I have never read anything like this before... (since 28-2-2002).

Maybe some people need to pick up a phone and talk with other people.
There are some things that can't be conducted by email.



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikisource books and web 1.0 pages (was: pas de sujet)

2010-08-13 Thread Tei
On 13 August 2010 10:27, Lars Aronsson l...@aronsson.se wrote:
...
 If we applied this web 2.0 principle to Wikibooks and Wikisource,
 we wouldn't need to have pages with previous/next links. We could
 just have smooth, continuous scrolling in one long sequence. Readers
 could still arrive at a given coordinate (chapter or page), but
 continue from there in any direction.

 Examples of such user interfaces for books are Google Books and the
 Internet Archive online reader. You can link to page 14 like this:
 http://books.google.com/books?id=Z_ZLMAAJ&pg=PA14
 and then scroll up (to page 13) or down (to page 15). The whole
 book is never in your browser. New pages are AJAX loaded as they
 are needed.

You are not thinking web here.

The web way to solve a problem like easy access to the next page or to
different chapters is to have a next page link, or to have all the
chapters as tabs, or something like that.  Make the wiki aware of the
structure of a book, and make it render these next-page links / chapter
tabs.

Web 2.0 is obsolete now; the future is Web 3.5 (CSS3, HTML5) (-:


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] wikipedia is one of the slower sites on the web

2010-08-13 Thread Tei
On 12 August 2010 00:01, Domas Mituzas midom.li...@gmail.com wrote:
...

 I'm sorry to disappoint you but none of the issues you wrote down here are 
 any new.
 If after reading any books or posts you think we have deficiencies, mostly it 
 is because of one of two reasons, either because we're lazy and didn't 
 implement, or because it is something we need to maintain wiki model.


I am not disappointed.  The wiki model makes it hard, because
everything can be modified, because the whole thing is gigantic and
has inertia, and because of the need to support a list of languages so
gigantic it makes the United Nations look timid.  And I know you guys
are an awesome bunch, and lots of eyes have been put on the problems.

This makes MediaWiki an ideal scenario for thinking about techniques
to make the web faster.

Here's a cookie, a really nice plugin for Firebug to check speed:
http://code.google.com/p/page-speed/


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Sentence-level editing

2010-08-13 Thread Tei
On 10 August 2010 00:55, Jan Paul Posma jp.po...@gmail.com wrote:
...
 The last few weeks I've worked on some prototypes to illustrate this idea.
 You can find the most advanced prototype here: 
 http://janpaulposma.nl/sle/prototype/prototype3.html
 The full project proposal and prototypes can be found here: 
 http://www.mediawiki.org/wiki/User:JanPaul123/Sentence-level_editing


This is the best thing I have seen all week.

But suppose I want to be in this mode always: I can't click on links.
Also, maybe the user doesn't really need to see the lines highlighted...
here in Firefox, when I double-click on a word, the whole phrase is
highlighted... it is as if users are already accustomed to how line
selection works on double click. But IANUE (I am not a usability
expert).    Also, there need to be stronger options,  commit changes,
cancel,  or something; maybe there's a better way to do this.
It may pay to ask some real usability experts about this, to see if they
have some positive feedback to give, before continuing.


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] wikipedia is one of the slower sites on the web

2010-08-11 Thread Tei
On 2 August 2010 15:24, Roan Kattouw roan.katt...@gmail.com wrote:
 2010/8/2 Tei oscar.vi...@gmail.com:
 Maybe a theme can get the individual icons that the theme use, and
 combine it all in a single png file.

 This technique is called spriting, and the single combined image file
 is called a sprite. We've done this with e.g. the enhanced toolbar
 buttons, but it doesn't work in all cases.

 Maybe the idea than resource=file must die in 2011 internet :-/

 The resourceloader branch contains work in progress on aggressively
 combining and minifying JavaScript and CSS. The mapping of one
 resource = one file will be preserved, but the mapping of one resource
 = one REQUEST will die: it'll be possible, and encouraged, to obtain
 multiple resources in one request.




A friend recommended to me an excellent book (yes, books are still
useful in this digital age).  It is called Even Faster Websites.
Everyone should make their company buy this book. It is excellent.

Reading this book has scarred me for life.  There are things that are
worse than I thought.  JS forcing everything single-threaded (even
stopping the download of new resources!)... while it downloads... and
while it executes.   How about this: 90% of the code is not needed at
onload, but is loaded before onload anyway. It is probably a much better
idea to read that book than my post (that's a good line, I will end my
email with it).

Some comments on Wikipedia speed:


1)
This is not a website: http://en.wikipedia.org is a redirection to this:
http://en.wikipedia.org/wiki/Main_Page
Can't http://en.wikipedia.org/wiki/Main_Page be served from
http://en.wikipedia.org?

Wait... this would break relative links on the front page, but... these
are absolute!  <a href="/wiki/Wikipedia" title="Wikipedia">Wikipedia</a>

2)
The CSS loads fine.  \o/
The combining effort will probably save speed anyway.

3)
Probably the CSS rules can be optimized for speed )-:
Probably not.

4)
A bunch of JS files!, loaded one after another, sequentially. This is
worse than a C program reading a file from disk byte by byte!!
Combining will probably save a lot. Or using a strategy to force the
browser to download these files concurrently and execute them in order.

5)
There are a lot of img files. Does the page really need that many? Spriting?

Total: 13.63 seconds.


You guys want to make this faster with cache optimization. But maybe
the problem is not bandwidth, but latency. Latency accumulates, even
with HEAD requests that result in 302s.   All the 302s in the world will
not make the page feel smooth if they already accumulate into 3+ second
territory.   ...Or am I wrong?

It is probably a much better idea to read that book than my post.

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] wikipedia is one of the slower sites on the web

2010-08-02 Thread Tei
On 28 July 2010 21:13,  jida...@jidanni.org wrote:
 Seems to me playing the role of the average dumb user, that
 en.wikipedia.org is one of the rather slow websites of the many websites
 I browse.

 No matter what browser, it takes more seconds from the time I click on a
 link to the time when the first bytes of the HTTP response start flowing
 back to me.

 Seems facebook is more zippy.

It seems fast here: 130 ms.

The first load of the homepage can be slow:
http://zerror.com/unorganized/wika/lader1.png
http://en.wikipedia.org/wiki/Main_Page
(I need a bigger monitor; the escalator doesn't fit on my screen)


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] wikipedia is one of the slower sites on the web

2010-08-02 Thread Tei
On 2 August 2010 15:24, Roan Kattouw roan.katt...@gmail.com wrote:
...
 Maybe the idea than resource=file must die in 2011 internet :-/

 The resourceloader branch contains work in progress on aggressively
 combining and minifying JavaScript and CSS. The mapping of one
 resource = one file will be preserved, but the mapping of one resource
 = one REQUEST will die: it'll be possible, and encouraged, to obtain
 multiple resources in one request.


:-O

That is an awesome solution, considering the complexity of the
real-world problems. Elegant, and as a side effect it will probably
remove some bloat.

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Architectural revisions to improve category sorting

2010-07-23 Thread Tei
This looks like a very important feature, and a hard one to get right.
You guys are real-world heroes :-)

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] developer

2010-07-16 Thread Tei
On 15 July 2010 18:35, Domas Mituzas midom.li...@gmail.com wrote:
 Hi!

 It's for Wikimedia operations as well as MediaWiki development.  The
 latter tends to take up much more of the list traffic in practice,
 though.

 Indeed, staff-ization of WMF made more and more of communications internal, 
 for better or worse.

 Domas

*puts on robe and El Santo mask*  the WMF sounds a lot like a wrestling
org. Or a Linux desktop.
http://fmwwrestling.us/WMFArmyNews2007.html

Is MediaWiki appropriate for these uses? It sounds like something that
asks for a very structured organization, and an easy-to-use interface.
MediaWiki can be lots of things, but it doesn't look like an XML
database. (The semantic web has been namedropped here. But, anyway...)

You guys should implement sudo in MediaWiki, so the command sudo make
me a sandwich works with MediaWiki :-)  (I know MediaWiki has not
been designed for different levels of user access.)

Maybe Wikipedia should use a different font for the text added in the
last revision of a page, like... font-family: script.  Obviously, wikis
are handwriting, not typewriting. Wikis are not books; they are notes on
a book. You do *not* need a canvas renderer for MediaWiki pages.

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: [Full-disclosure] Someone using Wikipedia to infect others

2010-07-02 Thread Tei
On 1 July 2010 21:58, OQ overlo...@gmail.com wrote:
 On Thu, Jul 1, 2010 at 7:09 AM, Christopher Grant
 chrisgrantm...@gmail.com wrote:
 -- Forwarded message --
 From: Henri Salo he...@nerv.fi
 Date: Thu, 1 Jul 2010 14:36:40 +0300
 Subject: [Full-disclosure] Someone using Wikipedia to infect others
 To: full-disclos...@lists.grok.org.uk, m...@wikimedia.org

 And another person who doesn't understand that the From address isn't
 authoritative.

It is an obscure point.  To know it you have to learn SMTP, probably
by reading the RFC.


 When RFC 822 format [7, 32] is being used, the mail data include the
   memo header items such as Date, Subject, To, Cc, From.  Server SMTP
   systems SHOULD NOT reject messages based on perceived defects in the
   RFC 822 or MIME [12] message header or message body.


You seem an informed person. Should we ignore this message? It looks
somewhat odd and out of context (mostly because the sender never added
context).    I can see how, if Wikipedia hosts PDF files, some of these
could act as a vector for malware.  If Wikipedia serves the files
unmodified, I can see how it would be possible to write a renderer to
memory that rebuilds the whole file, without any scripting. But such a
thing may take lots of programmer hours, and MediaWiki seems very
limited by that factor  (and not by epicness; there are lots of epic
things in the MediaWiki projects... BRAVO!).



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Fwd: [Full-disclosure] Someone using Wikipedia to infect others

2010-07-02 Thread Tei
On 2 July 2010 11:13, Q overlo...@gmail.com wrote:
 On 7/2/2010 3:46 AM, Tei wrote:
 On 1 July 2010 21:58, OQ overlo...@gmail.com wrote:
 On Thu, Jul 1, 2010 at 7:09 AM, Christopher Grant
 chrisgrantm...@gmail.com wrote:
 -- Forwarded message --
 From: Henri Salo he...@nerv.fi
 Date: Thu, 1 Jul 2010 14:36:40 +0300
 Subject: [Full-disclosure] Someone using Wikipedia to infect others
 To: full-disclos...@lists.grok.org.uk, m...@wikimedia.org

 And another person who doesn't understand that the From address isn't
 authoritative.

 Is a obscure point.  To know it you have to learn SMTP, probably
 reading the RFC.


 Well I take my statement back, he posted a followup in which he knows it
 didn't come from wikipedia, but still chose to say using Wikipedia to
 infect others instead of using Wikipedia's name to infect others


Somewhat unrelated:

Google has this service to view PDFs online:
http://docs.google.com/viewer?url=http://noscope.com/photostream/albums/various/no.pdf

Since it runs in the browser, it is safer than running any Adobe
monoculture-ware.


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] a wysiwyg editor for wikipedia?

2010-06-01 Thread Tei
On 31 May 2010 23:53, William Le Ferrand will...@corefarm.com wrote:
 Dear all,

 I've started to develop a simple wysiwyg editor that could be useful to
 wikipedia. Basically the editor gets the wiki code from wikipedai and builds
 the html on client side. Then you can edit the html code as you can imagine
 and when you are done another script converts the html back to wiki code.

 There is a simple demo here :
 http://www.corefarm.com:8080/wysiwyg?article=Open_innovation .

This is a nice approach.  It will be really helpful for non-geek
people, and there are millions (literally) of non-geek people you may
want to be able to edit a MediaWiki-based wiki.

Beyond this place, there be dragons!

I like your implementation, because it feels almost fullscreen. Too bad
there are two scrollbars. Are both scrollbars needed? Maybe the
whole page could scroll, and the toolbar be stuck to the top of the
screen.
I suppose your implementation still pretends to be a textarea, because
that way the toolbar is always visible. But this is not needed if
there is some CSS magic that can keep the toolbar visible... floating.

You could even have different toolbars in different regions, like a
Close / Save  toolbar at the top right that only shows if the mouse
is near that location.

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Reasonably efficient interwiki transclusion

2010-05-25 Thread Tei
On 25 May 2010 15:30, Platonides platoni...@gmail.com wrote:
 church.of.emacs.ml wrote:
 However, you'd have to worry that each distant wiki uses only a fair
 amount of the home wiki server's resources. E.g. set a limit of
 inclusions (that limit would have to be on the home-wiki-server-side)
 and disallow infinite loops (they're always fun).

 Infinite loops could only happen if both wikis can fetch from the other
 one. A simple solution would be to pass with the query who requested it
 originally. If the home wiki calls a different wiki, it would blame the
 one who asked for it (or maybe building a wiki + template path).


Or the request can carry something like a depth counter, to stop
requests that need more than N iterations.  So if you get a request with
depth > 20, you can ignore that request.  This doesn't stop an evil wiki
from passing a false depth level, but the idea of interwiki is a network
built on top of the www of wikis you trust, so you would not add an evil
wiki.
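
A minimal sketch of that depth-counter idea (the parameter name and the
limit are made up for illustration):

    <?php
    // Every transclusion request carries a depth parameter; a wiki
    // serving the request refuses anything past the limit, and passes
    // the incremented counter along, so loops die out eventually.
    define( 'MAX_TRANSCLUSION_DEPTH', 20 );

    function fetchRemoteTemplate( $url, $depth ) {
        if ( $depth > MAX_TRANSCLUSION_DEPTH ) {
            return '(transclusion depth limit reached)';
        }
        return file_get_contents( $url . '&transcludedepth=' . ( $depth + 1 ) );
    }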



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Vector skin not working on BlackBerry?

2010-05-24 Thread Tei
Maybe it could be useful to have a special URL that automatically
disables Vector, and makes that setting persist for the session (is
that even possible?).

Like
http://en.wikipedia.com/classicview

So people designing smartphones who want their smartphone to use the
old interface can make the link point there, so an old-style theme is
in use even before the user has the opportunity to log in.

On 14 May 2010 17:55, Huib Laurens sterke...@gmail.com wrote:
 When visiting a vector site with a Nokia N-series (tested on N90,95and
 96) viewing it on Opera Mini there are problems also.

 Its impossible to click the buttons userpage, talkpage, watchlist ..
 they are all in the same spot, trying to press will give the
 prefences.

 The Buttons with Edit Talk History are all in the same place and
 pressing one isn't working at all.

 Best,


 2010/5/14, David Gerard dger...@gmail.com:
 BTW, Vector is also breaking the PS3 browser, according to a couple of
 comments on the blog post.

 We seem not to be quite achieving that graceful degradation thing
 immaculately ...


 - d.

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l



 --
 Huib Abigor Laurens

 Tech team
 www.wikiweet.nl - www.llamadawiki.nl - www.forgotten-beauty.com

 ___
 Wikitech-l mailing list
 Wikitech-l@lists.wikimedia.org
 https://lists.wikimedia.org/mailman/listinfo/wikitech-l




-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Vector skin not working on BlackBerry?

2010-05-24 Thread Tei
On 24 May 2010 17:25, Aryeh Gregor simetrical+wikil...@gmail.com wrote:
 On Mon, May 24, 2010 at 7:48 AM, Tei oscar.vi...@gmail.com wrote:
 Maybe could be usefull to have a special url that automatically
 disable vector, and make this setting continue with the session (is
 that even possible?).

 http://en.wikipedia.org/wiki/?useskin=monobook

 You have to re-add ?useskin=monobook or &useskin=monobook on every page
 view, though, since it's not added to links.


Neato.

I have made this awesome URL:
http://en.wikipedia.org/w/index.php?title=Special:UserLogin&returnto=Special:Preferences&returntoquery=useskin%3Dmonobook

It renders the login page in Monobook, and once you log in, it puts you
in preferences (again in Monobook), so from there you can disable
Monobook. (Note: I have not tested that myself, I love Vector ;-)  )




-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Vector skin not working on BlackBerry?

2010-05-14 Thread Tei
On 14 May 2010 16:34, Platonides platoni...@gmail.com wrote:
 vyznev wrote:
 I'd suspect some script is trying to do an API query using a very long URL,
 and the fact that this only happens when JS is enabled lends support to
 this.

 I don't see any long url requested on enwiki.


 I would guess that the Javascript associated with Vector is using more
 memory than the Monobook version did and this is causing an error for
 people that used to have no trouble browsing Wikipedia before.

 jQuery is by itself much more complex than the scripts used before,
 plugins.combined.min.js and Vector.combined.min.js aren't small either.

 Does Blackberry also fail on other webs using jQuery?



Maybe a BlackBerry emulator could be useful (one that is accurate). Any
idea where one can be downloaded?


-- 
--
ℱin del ℳensaje.

postdata:
And an ASCII-art editor, to draw quick mockups in Wikipedia. That would be fun.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Automatic links to man pages

2010-04-19 Thread Tei
On 19 April 2010 12:13, masti mast...@gmail.com wrote:
 On 04/19/2010 11:52 AM, Strainu wrote:
 On Mon, Apr 19, 2010 at 12:46 PM, mastimast...@gmail.com  wrote:
 why not use http://linuxmanpages.com/ as external link?

 The site is unimportant, what is important is to have the link
 automatically created, hence shortening the wikitext.

 then create a template {{man|command}} for example



Maybe it would be a good idea to expand the interwiki idea.

Make MediaWiki download a list of the _titles_ of the 200 most
important articles from interwiki-connected sites,

and run searches on these article _titles_ too.

That way, specialized wikis (like an Australian specialized Math wiki)
may still show up if you search for a Math topic that is so specialized
it is not notable enough for Wikipedia.
specialized that is not notable enough for Wikipedia.










-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Mediawiki extensions and Windows 2003 server

2010-04-06 Thread Tei
On 6 April 2010 13:52, Chad innocentkil...@gmail.com wrote:

 On Sun, Apr 4, 2010 at 8:29 PM, Makelesi Kora-Gonelevu
 makele...@gmail.com wrote:
  The wiki wont come on it says server cannot be found.
 

 To confirm: the wiki works normally, but when you add the require() lines
 to LocalSettings, the server starts returning that it cannot be found?


Does the user account that IIS uses to read files in the htdocs area
have enough rights to open all the files?


--
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] GSoC project advice: port texvc to Python?

2010-03-31 Thread Tei
On 30 March 2010 16:34, Victor bob...@ua.fm wrote:


 Getting it off Ocaml is an excellent first step. I have tried and
 failed to get texvc working properly in MediaWiki myself more than a
 few times, because of Ocaml not wanting to play nice ...


 Actually I completely disagree. Since I've got some experience with both
 OCaml and PHP the idea to convert Maths processing to PHP looks
 like a not so good idea at all.


Doing math in any programming language or digital computer is a bad
idea anyway.

It could be worse. It could be math in JavaScript:


v = (011 + 1 + 0.1)/3;

3.3666666666666667   (011 is an octal literal, i.e. 9, so you don't get 4.03)




-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] serious questions about UsabilityInitiative going live

2010-03-26 Thread Tei
The day this goes live, the Internet will become a bit crazy.
Wikipedia is an important part of the 2010 Internet.

I am sure lots of people will want to put this skin on their suddenly
old-looking wikis on the Internet and on LANs. You guys are doing nice
work.

note: I must report that the Add media wizard: Edit resource window
doesn't trim the content of the Caption textarea prior to use, so
if you put lots of   \n there, they end up on the page.

note: Is it worth putting up a link If you have problems with the new
theme, please report them here?

note: it is maybe a matter of personal preference, but I think the main
edit textarea could have a bit of padding, so the text doesn't touch the
border of the textarea.

note: why do textareas have vertical scrollbars? Maybe there's a way to
dynamically change the height by counting the number of \n characters
inside the textarea. Or is that evil?


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] GSoC project advice: port texvc to Python?

2010-03-24 Thread Tei
My impression is that this is a problem where a PHP implementation
could be better. Who cares if it is slow? The result can be cached
forever, it is something you will run only once, and the heavyweight
work (drawing) will be done by C-compiled code like the GD library.

You need speed in stuff that runs inside loops (runs N times), or in
stuff that delays other stuff (can't be made async), or in stuff that
is CPU-intensive and runs every time (the computer gets angry), or
stuff that is very IO-intensive (mechanical stuff is slow), or stuff
that needs gigawatts of memory (the memory gets angry if you touch lots
of pages).

Stuff that is parallelizable, async, memory-light, and not IO-intensive
doesn't need any optimization at all. Write the better code and have a
big smiley :-)
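
The cache-forever idea in miniature (a sketch; renderFormula() is a
hypothetical stand-in for whatever slow renderer is actually used):

    <?php
    // Key the rendered image by a hash of the TeX source, so each
    // formula is rendered at most once; later requests are file hits.
    function cachedRender( $tex, $cacheDir = '/tmp/texcache' ) {
        $file = $cacheDir . '/' . md5( $tex ) . '.png';
        if ( !file_exists( $file ) ) {
            if ( !is_dir( $cacheDir ) ) {
                mkdir( $cacheDir, 0755, true );
            }
            file_put_contents( $file, renderFormula( $tex ) ); // slow, runs once
        }
        return $file;
    }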

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Broken videos

2010-03-16 Thread Tei
On 16 March 2010 10:29, Lars Aronsson l...@aronsson.se wrote:
 This past weekend, at the SXSW conference, a new initiative
 was launched to get video on Wikipedia,
 http://videoonwikipedia.org/

 That sounds like a great idea.


Uh... binary content in a wiki.

Well, the other options are YouTube, which can't be trusted, as videos
are removed often; or archive.org [1], which seems to consider this its
mission.

[1] http://www.archive.org/details/movies


 But among the first videos to be uploaded since the
 announcement are two that show some construction
 equipment and both break my browser every time I try
 to watch them. How can this be possible with a fully
 updated Mozilla Firefox 3.5.8 on Ubuntu Linux?

Uh... buffer overflow errors, complex file-format loaders in
programming languages like C... Or false assumptions about memory
management, with poor error detection and fatal consequences.  Maybe
even bad inter-program communication.  ...
The Internet was built on text-based protocols to avoid these problems,
or to help debug them.



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Broken videos

2010-03-16 Thread Tei
On 16 March 2010 15:04, Gregory Maxwell gmaxw...@gmail.com wrote:
..
 On Tue, Mar 16, 2010 at 6:52 AM, Tei oscar.vi...@gmail.com wrote:
 Uh..  buffer overflow errors, complex file format loaders  in
 programming languages like C Or false assumptions about memory
 management with poor detection error and fatal consecuences.  Maybe
 even bad program intercomunication.  ...
 The internet was built on text based protocols to avoid these problems
 or help debug then.

 Ironic that you say that... the variable length null terminated string
 is probably the worst thing to ever happen to computer security.
 Text does imply a degree of transparency, but it's not security
 cure-all.
Nothing is, but transparency is the next best thing.


 In any case, video and audio are in the same boat as Jpeg/png, +/-
 some differences in software maturity.  There aren't any known or
 expected malware vectors for them.
Agreed. But it seems possible to generate video streams that crash the
browser.  So... autoplay is probably evil. (It is already evil because
it is NSFW, since it distracts coworkers.)


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] modernizing mediawiki

2010-03-03 Thread Tei
On 3 March 2010 11:05, Domas Mituzas midom.li...@gmail.com wrote:
...
 , why can't the money be put into making a modern product instead of in 
 pockets of the people who run it? I know Wordpress and Mediawiki serve two 
 different purposes, but that's not the point. The point is, one is modern 
 and user friendly (Wordpress), and the other (Mediawiki) is not. Other 
 complaints:

 MediaWiki is very modern product, just not on the visible side (though maybe 
 usability initiative will change that). It has lots of fascinating modern 
 things internally :)
 Though of course, by in pockets of people who run it, you're definitely 
 trolling here. :-(


I have read this very thread before in a different context: Quake
engines.  Most Quake engines fall short on the usability side, because
they are evolved by technical people, and some of the users ask for
more... technical features. You have (in the Quake scene) sysadmins who
want sysadmin stuff and are more than happy to edit text files and
access the server with ssh, and QuakeWorld veterans who ask for some
competitive fairness and features that smooth the engine but don't
exactly make the game look better, only cleaner... and who would greet
any new console command :-)  (Quake has a console to change settings).

There (in Quake engines) usability is always a nice thing to have, but
the priorities seem to lie elsewhere, and anything else gets into the
engines before usability.    The distance in usability from Quake to
any 2010 game is gigantic. It is something I would love to fix... but I
have tons of other ideas.

I feel it takes an enormous effort to move a project managed by
programmers and sysadmins, for programmers and sysadmins, to be
palatable to mere desktop users.  The good news is that sysadmins and
programmers are desktop users too, so they will love a sexier interface
and more usability.


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] hiphop progress

2010-03-01 Thread Tei
Looks like a lot of fun :-)

On 1 March 2010 11:10, Domas Mituzas midom.li...@gmail.com wrote:
...
 Even if it wasn't hotspots like the parser could still be compiled
 with hiphop and turned into a PECL extension.

 hiphop provides major boost for actual mediawiki initialization too - while 
 Zend has to reinitialize objects and data all the time, having all that in 
 core process image is quite efficient.

 One other nice thing about hiphop is that the compiler output is
 relatively readable compared to most compilers. Meaning that if you

 That especially helps with debugging :)

 need to optimize some particular function it's easy to take the
 generated .cpp output and replace the generated code with something
 more native to C++ that doesn't lose speed because it needs to
 manipulate everything as a php object.

 Well, that is not entirely true - if it manipulated everything as PHP object 
 (zval), it would be as slow and inefficient as PHP. The major cost benefit 
 here is that it does strict type inference, and falls back to Variant only 
 when it cannot come up with decent type.
 And yes, one can find offending code that causes the expensive paths. I don't 
 see manual C++ code optimizations as way to go though - because they'd be 
 overwritten by next code build.


This smells like something that could benefit from metadata:

    /* [return integer] */
    function getApparatusId( $obj ) {
        // body
    }

 - - -

User question follows:

What can we expect?  Will future versions of MediaWiki be
HipHop-compatible? Will there be a compatible fork or snapshot?  The
whole experiment looks like it will help profile and enhance the
engine; will it generate a MediaWiki.tar.gz file we (the users) will be
able to install on our intranets??

Maybe a blog article about your findings would be nice. It may help
people write fast PHP code. And it will scare little children and PHP
programmers with a C++ background.



--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikigalore

2010-02-26 Thread Tei
On 27 February 2010 00:48, Peter Kaminski kamin...@istori.com wrote:
 Gerard Meijssen writes,

 As far as leeches go, this takes the price as far as I am concerned. It
 sells scripts to leech any Wikimedia project and all this to have the rating
 of a website go up.


 The best long-term solution would be to work with Google and other
 search engines to encourage them to efficiently recognize Wikimedia
 content and reduce rankings of sites mirroring it just for SEO.

 It wouldn't make much sense to leech Wikimedia if it adversely affected
 site rankings.

*cough* But... these sites do a service to Wikipedia and the public
in general by creating multiple mirrors of articles. If Wikipedia is
down, you can still read the articles elsewhere; if the articles are
deleted, you can still read them on these sites. *cough*



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Google phases out support for IE6

2010-02-20 Thread Tei
On 20 February 2010 23:00, Ævar Arnfjörð Bjarmason ava...@gmail.com wrote:
 On Thu, Feb 4, 2010 at 14:37, Aryeh Gregor
 simetrical+wikil...@gmail.com wrote:
 On Wed, Feb 3, 2010 at 5:11 PM, Trevor Parscal tpars...@wikimedia.org 
 wrote:
 Are the stats setup to differentiate between real ie6 users and bing
 autosurfing?

 I'd be pretty surprised if Bing is generating enough traffic to
 noticeably affect the percentage, even if it does get counted as IE6.

 Bing can hit you pretty hard:
 http://blogs.perl.org/users/cpan_testers/2010/01/msnbot-must-die.html


Well... it is not a crawler, ... it seems a
cracracracracracracracracrawler, from the way it repeats the same
request N times.  It acts not like a single crawler, but like a list of
multiple crawlers with no intercommunication, all from the same range
of IPs.  A simple optimization would be for all these crawlers to share
the robots.txt file (is that not obvious?). Since that request is not
shared, you see all instances making separate requests.  There is also
no synchronization, so all the crawlers can hit your site at the same
time, say... 15 asking for robots.txt at once... or spread over 2
hours; it is just luck.

It seems a... simplistic and brute-force approach to Internet indexing... :-/
It seems Microsoft is throwing money at the problem, but not brains.


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] enwiki dump problems

2010-02-19 Thread Tei
On 19 February 2010 14:54, Jamie Morken jmor...@shaw.ca wrote:
 I hope you guys are planning on adding some way to download the wikimedia 
 commons images too at some point

Something that could be fun is git,

plus something like a ticket system, where you ask for permission
to download a tree inside git, and a temporary password is given to
you.

But I don't know; maybe server-side git is a CPU killer, or maybe the
client side is too complex for normal users... (?)

 What about www.wikipirate.org or wikitorrent.org for a list of wikimedia 
 torrents? :)
 both are available!

Nah... that's not cool enough... what about pir...@home?, a
screensaver that seeds the torrent, with a tiny animation of pirates,
and the sea, and the wiki logo :-)... but you don't really want to be
associated with that!

The idea of using torrents to download snapshots of {insert large file
here} has been discussed before on this mailing list. The agreement
seems to be that it is a bad idea for files that change often (you
don't want to have 10 different versions to download; that means fewer
seeders). There are other problems (like ISPs messing with the torrent
protocol, network overhead, capped downloads, etc.), but that was
the main gripe against torrents.



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

[Wikitech-l] [Idea] Using MediaWiki has a engine.

2010-02-18 Thread Tei
I can imagine this URL:

http://someserver.com/wiki/api.php?wikicode=code here
urlencoded&format=json&device=handheld

as a way to ask the MediaWiki installed on someserver.com to render the
provided wikicode as HTML inside JSON.

Does MediaWiki already support something like this?

It is somewhat interesting, as it makes the engine portable, so any
device can render wikicode as HTML, even client-side JavaScript (with a
single AJAX call).




-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] [Idea] Using MediaWiki has a engine.

2010-02-18 Thread Tei
On 18 February 2010 15:06, Conrad Irwin conrad.ir...@googlemail.com wrote:

 On 02/18/2010 01:42 PM, Tei wrote:
 I can imagine this url:

 http://someserver.com/wiki/api.php?wikicode=code here
 urlencoded&format=json&device=handheld

 as a way to ask MediaWiki installed in someserver.com  to render has
 html inside json the wikicode provided.

 Do mediawiki already support something like this?

 Is somewhat interesting, as make the engine portable, so any device
 can render wikicode in html, even clientside javascript (with a single
 ajax call).


 Yes, the API ( http://www.mediawiki.org/wiki/API ) has a parse method:

 http://en.wiktionary.org/w/api.php?action=parse&text=[[hello]]&format=jsonfm

 Yours

 Conrad

:-O

Thanks, Mr. Conrad. This is really awesome!
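
A minimal sketch of calling it from PHP and pulling the HTML out of the
JSON envelope (same endpoint as Conrad's example, but with format=json;
jsonfm is the human-readable variant):

    <?php
    $url = 'http://en.wiktionary.org/w/api.php?action=parse'
         . '&text=' . urlencode( '[[hello]]' )
         . '&format=json';
    $data = json_decode( file_get_contents( $url ), true );
    echo $data['parse']['text']['*'];   // the rendered HTML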





-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] User-Agent:

2010-02-16 Thread Tei
On 16 February 2010 02:54, Domas Mituzas midom.li...@gmail.com wrote:
 Hi!

 from now on specific per-bot/per-software/per-client User-Agent header is 
 mandatory for contacting Wikimedia sites.

 Domas

Looks OK to me.  But this is the type of decision that often breaks
existing stuff somewhere on the Internet.  With User-Agent, I can
imagine some overzealous firewall or anonymization service removing it.
But you can always use the X strategy: break something and wait 6
years; if no one is angry, it seems no one was using the feature.


t...@localhost:~$ telnet en.wikipedia.org 80
Trying 91.198.174.2...
Connected to rr.esams.wikimedia.org.
Escape character is '^]'.
GET /
Please provide a User-Agent header
Connection closed by foreign host.



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] New phpunit tests eat ~1GB of memory

2010-02-06 Thread Tei
(off-topic-ish)

There's also a function to explicitly call the collector (sorry, I
forgot the name).

It seems PHP only flags things for collecting (when you unset($stuff)),
but never really collects them. The documentation says that the
collector will run when there is no work to do, but this seems a very
rare event (maybe it is never triggered).

Huh...
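
For the record, the function whose name escaped me is presumably
gc_collect_cycles(), available since PHP 5.3; it forces a run of the
cycle collector instead of waiting for PHP to pick its own moment.  A
minimal sketch:

    <?php
    gc_enable();                   // turn the cycle collector on
    $freed = gc_collect_cycles();  // force a collection right now
    echo "collected $freed cycles\n";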

On 6 February 2010 03:37, Jared Williams jared.willia...@ntlworld.com wrote:

 A guess would be to try PHP 5.3, and enable the garbage collector.

 http://www.php.net/manual/en/function.gc-enable.php

 Jared

 -Original Message-
 From: wikitech-l-boun...@lists.wikimedia.org
 [mailto:wikitech-l-boun...@lists.wikimedia.org] On Behalf Of
 Ævar Arnfjörð Bjarmason
 Sent: 06 February 2010 01:05
 To: wikitech-l@lists.wikimedia.org
 Cc: mediawiki-...@lists.wikimedia.org
 Subject: [Wikitech-l] New phpunit tests eat ~1GB of memory

 Since the tests were ported from t/ to phpunit's
 phase3/maintenance/tests/ in r61938 and other commits running
 the tests on my machine takes up to 1GB of memory and grows
 as it runs more tests. It seems that phpunit uses the same
 instance of the php interpreter for running all the tests.

 Is there some way around this? Perhaps phpunit.xml could be
 tweaked so that it runs a new php for each test?

 Furthermore when I run `make test' I get this:

     Time: 03:35, Memory: 1849.25Mb

     There were 2 failures:

     1) LanguageConverterTest::testGetPreferredVariantUserOption
     Failed asserting that two strings are equal.
     --- Expected
     +++ Actual
     @@ @@
     -tg-latn
     +tg


 /home/avar/src/mw/trunk/phase3/maintenance/tests/LanguageConve
 rterTest.php:82

     2) Warning
     No tests found in class ParserUnitTest.

     FAILURES!
     Tests: 686, Assertions: 3431, Failures: 2, Incomplete: 34

 But when I run phpunit manually on the test then all tests pass:

     $ phpunit LanguageConverterTest.php
     PHPUnit 3.4.5 by Sebastian Bergmann.

     .

     Time: 23 seconds, Memory: 23.75Mb

     OK (9 tests, 34 assertions)

 Also after I get Tests: 686, Assertions: 3431, Failures: 2,
 Incomplete: 34 in the first output phpunit doesn't exit and
 continues hugging my memory. Why is it still running? It has
 already run all the tests.

 On Wed, Feb 3, 2010 at 17:35,  ia...@svn.wikimedia.org wrote:
  http://www.mediawiki.org/wiki/Special:Code/MediaWiki/61938
 
  Revision: 61938
  Author:   ialex
  Date:     2010-02-03 17:35:59 + (Wed, 03 Feb 2010)
 
  Log Message:
  ---
  * Port tests from t/inc/
  * Added new tests to XmlTest
 
  Added Paths:
  ---
     trunk/phase3/tests/LicensesTest.php
     trunk/phase3/tests/SanitizerTest.php
     trunk/phase3/tests/TimeAdjustTest.php
     trunk/phase3/tests/TitleTest.php
     trunk/phase3/tests/XmlTest.php
 
  Added: trunk/phase3/tests/LicensesTest.php
 
 ===
  --- trunk/phase3/tests/LicensesTest.php
     (rev
  0)
  +++ trunk/phase3/tests/LicensesTest.php 2010-02-03 17:35:59
 UTC (rev
  +++ 61938)
  @@ -0,0 +1,17 @@
  +?php
  +
  +/**
  + * @group Broken
  + */
  +class LicensesTest extends PHPUnit_Framework_TestCase {
  +
  +       function testLicenses() {
  +               $str = 
  +* Free licenses:
  +** GFLD|Debian disagrees
  +;
  +
  +               $lc = new Licenses( $str );
  +               $this-assertTrue( is_a( $lc, 'Licenses' ),
 'Correct
  +class' );
  +       }
  +}
  \ No newline at end of file
 
 
  Property changes on: trunk/phase3/tests/LicensesTest.php
 
 ___
  Added: svn:eol-style
    + native
 
  Added: trunk/phase3/tests/SanitizerTest.php
 
 ===
  --- trunk/phase3/tests/SanitizerTest.php

  (rev 0)
  +++ trunk/phase3/tests/SanitizerTest.php        2010-02-03
 17:35:59
  +++ UTC (rev 61938)
  @@ -0,0 +1,71 @@
  +?php
  +
  +global $IP;
  +require_once( $IP/includes/Sanitizer.php );
  +
  +class SanitizerTest extends PHPUnit_Framework_TestCase {
  +
  +       function testDecodeNamedEntities() {
  +               $this-assertEquals(
  +                       \xc3\xa9cole,
  +                       Sanitizer::decodeCharReferences(
  + 'eacute;cole' ),
  +                       'decode named entities'
  +               );
  +       }
  +
  +       function testDecodeNumericEntities() {
  +               $this-assertEquals(
  +                       \xc4\x88io bonas dans l'\xc3\xa9cole!,
  +                       Sanitizer::decodeCharReferences(
 #x108;io
  + bonas dans l'#233;cole! ),
  +                       'decode numeric entities'
  +               );
  +       }
  +
  +       function testDecodeMixedEntities() {
  +               $this-assertEquals(
  +                       \xc4\x88io bonas dans l'\xc3\xa9cole!,
  +                       Sanitizer::decodeCharReferences(
 #x108;io
  + bonas dans l'eacute;cole! ),
  +                       

Re: [Wikitech-l] Flattening a wikimedia category

2010-02-05 Thread Tei
On 5 February 2010 20:17, Aryeh Gregor simetrical+wikil...@gmail.com wrote:
 On Fri, Feb 5, 2010 at 3:57 AM, Daniel Kinzler dan...@brightbyte.de wrote:
 Or,
 to put it differently: let people use flat tagging, but let's keep the 
 notion
 of one tag implying another, i.e. math implying science and texas implying 
 america.

 And as for [[Category:People executed for heresy]] -> [[Category:Joan
 of Arc]] -> [[English claims to the French throne]]?  That's only two
 steps, and it already doesn't make sense.  You could argue that
 [[Category:Joan of Arc]] really means [[Category:Stuff related to Joan
 of Arc]] and shouldn't be in [[Category:People executed for heresy]],
 but that sounds like it would take as much recategorization work as
 just using atomic categories -- and much subtler.



(off-topic)

All these -> arrows make me salivate for a good plot graph
(http://www.graphviz.org/ ?)

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] remove me from your mailing list

2010-02-03 Thread Tei
Quick!

Send an email here:
wikitech-l-requ...@lists.wikimedia.org
with the subject unsubscribe (without quotes).

Our majordomo will take care of it  ;-)


--
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Facebook introducing PHP compiler?

2010-02-02 Thread Tei
On 2 February 2010 18:53, Domas Mituzas midom.li...@gmail.com wrote:
 However, that article is just rumour. I think it's more likely they made
 some apc-like cache/optimizer than a compiler.

 http://www.facebook.com/note.php?note_id=280583813919&id=9445547199&ref=nf


He says it is a rewriter: it reads PHP and outputs C++ (probably they
have some custom classes like 'PHPString' to act like the PHP
counterparts).

 Facebook 
One common way to address these inefficiencies is to rewrite the more
complex parts of your PHP application directly in C++ as PHP
Extensions. This largely transforms PHP into a glue language between
your front end HTML and application logic in C++. From a technical
perspective this works well, but drastically reduces the number of
engineers who are able to work on your entire application.


I was thinking about that the other day; I understand why MediaWiki
doesn't follow that route.
Is there any profile of MediaWiki somewhere, to see which parts of
MediaWiki consume the most CPU cycles?  I bet it is some regex.




-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] (no subject)

2010-01-28 Thread Tei
On 28 January 2010 15:06, 李琴 q...@ica.stc.sh.cn wrote:
 Hi all,
  I have  built a LocalWiki.   Now I want the data of it to keep consistent
 with the
 Wikipedia and one work I should do is to get the data of update from
 Wikipedia.
 I get the URLs through analyzing the RSS
 (http://zh.wikipedia.org/w/index.php?title=Special:%E6%9C%80%E8%BF%91%E6%9B%B4%E6%94%B9&feed=rss)
 and get all HTML content of the edit box by analyzing
 these URLs after opening an URL and clicking the ’edit this page’.

 That’s because I visit it too frequently and my IP address is prohibited
 or the network is too slow?

李琴 well... that's webscraping, which is a poor technique, one with
lots of errors that generates lots of traffic.

One thing a robot must do is read and follow the
http://zh.wikipedia.org/robots.txt file (you should probably read it
too).
As a general rule of the Internet, a rude robot will be banned by the
site admins.

It would be a good idea to announce your bot as a bot in the User-Agent
string. Good bot behavior is to read a website like a human would: I
don't know, maybe 10 requests a minute? I don't know this Wikipedia
site's rules about it.

What you are suffering could be automatic or manual throttling, since
an abusive number of requests has been detected from your IP.

Wikipedia seems to provide full dumps of its wikis, but they are
probably unusable for you, since they are gigantic :-/; trying to
rebuild Wikipedia on your PC from a snapshot would be like summoning
Cthulhu in a teapot. But... I don't know, maybe the zh version is
smaller, or your resources are powerful enough. One feels that what you
have built has a severe overhead (a waste of resources) and that there
must be better ways to do it...
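
To make that concrete, a minimal sketch of a politer fetcher in PHP,
under some assumptions of mine: it uses the MediaWiki API (api.php)
instead of scraping edit pages, identifies itself in the User-Agent,
and throttles itself. The bot name, contact address, and page titles
are placeholders.

<?php
$endpoint = 'http://zh.wikipedia.org/w/api.php';
$titles = array( 'Title one', 'Title two' );  // pages you want to sync

// Identify the bot and give site admins a way to contact you.
$context = stream_context_create( array( 'http' => array(
    'user_agent' => 'LocalWikiSyncBot/0.1 (contact: you@example.org)',
) ) );

foreach ( $titles as $title ) {
    $url = $endpoint . '?action=query&prop=revisions&rvprop=content'
        . '&format=xml&titles=' . urlencode( $title );
    $xml = file_get_contents( $url, false, $context );
    // ... parse $xml and update the local wiki here ...
    sleep( 6 );  // roughly 10 requests a minute
}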



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Log of failed searches

2010-01-14 Thread Tei
2010/1/14 Gregory Maxwell gmaxw...@gmail.com:
 On Thu, Jan 14, 2010 at 11:15 AM, Gregory Maxwell gmaxw...@gmail.com wrote:
 Here is what I would suggest disclosing:
 #start_datetime end_datetime hits search_string
 2010-01-01-0:0:4 2010-01-13-23-59-50 39284 naked people
 2010-01-01-0:0:4 2010-01-13-23-59-50 23950 hot grits
 ...
 2010-01-01-0:0:4 2010-01-13-23-59-50 5 autoerotic quantum chromodynamics

 The logs are probably combined across wikis, so I'd change that to

 #start_datetime end_datetime projectcode hits search_string
 2010-01-01-0:0:4 2010-01-13-23-59-50 en.wikipedia 39284 naked people
 2010-01-01-0:0:4 2010-01-13-23-59-50 en.wikipedia 23950 hot grits
 ...
 2010-01-01-0:0:4 2010-01-13-23-59-50 en.wikipedia 5 autoerotic quantum
 chromodynamics
 2010-01-01-0:0:4 2010-01-13-23-59-50 de.wikipedia 25093 Bondage &
 Disziplin Pokémon

my $0.02

I expect some fun here, since encoding errors will hit things like ñ
and ó:

2010-01-01-0:0:4 2010-01-13-23-59-50 de.wikipedia 25093 Bondage &amp;
Disziplin Pokémon
2010-01-01-0:0:4 2010-01-13-23-59-50 de.wikipedia 25093 Bondage %33&amp;
Disziplin Pokémon

On the other hand, all these errors will be browser/proxy bugs, and not
MediaWiki bugs, I think.
Anyway, if the special characters are replaced by spaces, there will
be less weird shit, and more mysterious space holes.
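
A sketch of how one might flag such encoding-damaged entries before
publishing the log (my own toy filter, nothing official; it assumes
one query string at a time):

<?php
// Flag search strings that look encoding-damaged before aggregating.
function isSuspicious( $query ) {
    if ( !mb_check_encoding( $query, 'UTF-8' ) ) {
        return true;  // invalid byte sequence, e.g. latin-1 leakage
    }
    if ( preg_match( '/&(amp|lt|gt|quot);/', $query ) ) {
        return true;  // looks double-encoded
    }
    return false;
}

var_dump( isSuspicious( "Bondage & Disziplin Pok\xc3\xa9mon" ) );  // false
var_dump( isSuspicious( 'Bondage &amp; Disziplin Pokémon' ) );     // true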


-- 
--
Fin del Mensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Some suggestions about the edit page.

2010-01-13 Thread Tei
2010/1/13 Robert Leverington rob...@rhl.me.uk:
 On 2010-01-13, Tei wrote:
 %% The Death of Wiki %%
...
 you can't stop that, but you can code something so the resulting dead
 body of the wiki is not pure shit.  A possible idea could be to
 auto-protect pages without an edit in N years (4 years),
...

 The AbsenteeLandlord extension may fulfil this to a certain extent. [1]

 [1] http://www.mediawiki.org/wiki/Extension:AbsenteeLandlord


I am happy; I am not the first one to think about these issues.


-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Bugzilla Vs other trackers.

2010-01-08 Thread Tei
On Thu, Jan 7, 2010 at 8:39 AM, Peter Gervai grin...@gmail.com wrote:
..
 Wouldn't be nice. First, it's an attitude thing: we want (and have to)
 promote open stuff.
 Second, it isn't nice to show something to the users they cannot use
 themselves. It's kind of against our basic principle of "you can do
 what we do, you're free to do it, we just do it better" :-)


It would be a good idea to pass the memo to the guys that design the
notability rules.

http://ioquake3.org/2009/02/20/ioquake3-entry-deleted-from-wikipedia/

Since most (all?) open source projects are web-only and don't get in
the press, they live in some obscure area of the web where something
can be wildly popular for those in the know, and invisible for those
who edit and delete articles.

I mean, I could write a bot to nominate *all* open source project
articles on Wikipedia for speedy deletion, and few of them (maybe 6)
would survive that.

http://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Ioquake3


"Keep. No matter how loud people and guidelines scream for reliable
sources, many, many people use it and work on it and that makes it
notable. If the press is not able to reliably represent this reality
it's not a fault of the project, and reality is a higher standard than
reliable press. What do you need press for an Open Source project?
Just looking at the SVN log proves more than any article could ever
do." -- ioquake3 maintainer for the FreeBSD project







-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Webzzle

2009-10-08 Thread Tei
On Thu, Oct 8, 2009 at 7:16 PM, Andrew Garrett agarr...@wikimedia.org wrote:

 On 08/10/2009, at 5:17 PM, Florence Devouard wrote:
 There is a presentation of the concept here:
 http://www.webzzle.com/intl/en/help.html


 I don't know why they bothered using a demo to explain their
 technology; clear explanations like "organize the real-time knowledge
 web" and "explore the knowledge web in 1-click" just speak for
 themselves.


The idea of mixing Google and Wikipedia search in a mashup is not new...

The Google FX extension already did it:
http://userscripts.org/scripts/show/31950
http://img78.imageshack.us/img78/4999/googlefxv205mk1.jpg

A greasemonkey script can change the Google page because it runs as the
user; it's like watching the Mona Lisa with sunglasses on. But maybe a
mashup service has to ask permission...

About Webzzle: they seem ( insane || cool || naive ) people, but not
(smart && evil) people. Most spammers are naive and insane, so I will
classify Webzzle there, but it's not a perfect fit.

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikipedia Google Earth layer

2009-10-02 Thread Tei
On Fri, Oct 2, 2009 at 3:37 PM, Strainu strain...@gmail.com wrote:
...
 I'm not sure if Wikimedia has anything to do with it, but I think I
 have a better chance of getting an answer here than by asking Google
 (the company) directly. Google (the search engine) was not really
 helpful on the matter.

you could always install Ethereal and spy on the traffic from your
computer to the network. It probably talks to some HTTP servers, with
GET / POST requests you can read.



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Wikipedia Google Earth layer

2009-10-02 Thread Tei
On Fri, Oct 2, 2009 at 6:15 PM, Roan Kattouw roan.katt...@gmail.com wrote:
 2009/10/2 Tei oscar.vi...@gmail.com:
 On Fri, Oct 2, 2009 at 3:37 PM, Strainu strain...@gmail.com wrote:
 ...
 I'm not sure if Wikimedia has anything to do with it, but I think I
 have a better chance of getting an answer here than by asking Google
 (the company) directly. Google (the search engine) was not really
 helpful on the matter.

 you could always install Ethereal, and spy the trafic from your
 computer to the network. It probably include some HTTP servers, and
 GET / POST request you can read.

 The LiveHTTPHeaders extension for Firefox will also do this job for
 you, and is a bit easier to install and use.


Nah, really. It's Google Earth we are talking about here. Since it's a
standalone app, it talks directly through the network.

--
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Disambiguation while editing

2009-10-01 Thread Tei
On Thu, Oct 1, 2009 at 12:37 AM, Lars Aronsson l...@aronsson.se wrote:

 In the edit box, when I type [[John Doe]], I want some chance to
 verify that I'm linking to the right article,

Humm?

I don't know Wikipedia, but on other wikis it's like this:

Fire and forget. You link [[Mr John Doe]]. Once it's published you
notice it's not a live link, so you click it and make a redirect from
[[Mr. John Doe]] to the existing article [[Doc. John Doe]].

You can also make links that don't exist, like the English expression
'[[a pocket full of horses]]'; you don't need to link only to articles
that exist. On a wiki (I don't know about Wikipedia) you don't have to
post correct or complete stuff. And having unpopulated links is an
invitation to others to create more articles or redirects.

I know Wikipedia has always been a strange wiki, so maybe that's not
how it works. I hate Wikipedia a bit.

Also, the creation of a redirect (a page whose whole content is
#REDIRECT [[Dr John Doe]]) from [[Doc John Doe]] and [[John Doe]] to
[[Dr John Doe]] is content. Maybe a dude could be googling "Doc John"
and find the redirect. It's a happy accident that these redirects get
created.



-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] js2 coding style for html output

2009-09-28 Thread Tei
On Mon, Sep 28, 2009 at 6:44 PM, Michael Dale md...@wikimedia.org wrote:
..
 I think both are useful and I like jquery style building of html since
 it gives you direct syntax errors rather than html parse errors which
 are not as predictable across browsers. But sometimes, performance-wise
 or from a quick "get it working" perspective, it's easier to write out
 an html string. Also I think tabbed html is a bit easier on the eyes
 for someone that has dealt a lot with html.

probably not the intent of your message, but your first and second
examples can be mixed:

function dojBuild2() {
    var box = document.createElement("div");
    for (var i = 0; i < repetCount; i++) {
        var thing = document.createElement("span");
        // Build the inner markup as a string, as in the first example
        thing.innerHTML = '<span id="' + escape(i) + '" class="fish">' +
            '<p class="dog" rel="foo">' +
            escape(i) +
            '</p>' +
            '</span>';
        box.appendChild(thing);
    }
    // One DOM lookup, outside the loop
    document.getElementById("cat").appendChild(box);
}

What I think we have here is that $('#cat') is expensive, and runs
inside a loop in dojBuild.

Since your post is about coding style, and not performance (and not
about the particular speed of this style), feel free to ignore this
post.

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l

Re: [Wikitech-l] Proposal for editing template calls within pages

2009-09-26 Thread Tei
On Sat, Sep 26, 2009 at 10:14 AM, Dmitriy Sintsov ques...@rambler.ru wrote:
 * Tei oscar.vi...@gmail.com [Sat, 26 Sep 2009 02:40:06 +0200]:
 Hello.

 Here's a screenshot of me editing Wikipedia:

 http://zerror.com/unorganized/crap/nogoodenough.png

 All the webmasters on this mailing list will spot the problem with
 this text in 1 second: it's unreadable. The space between lines, the
 line lengths, the complexity of the text... it's really hard to read.
 An HTML textarea can serve for writing emails and simple text, but in
 this image it falls short. Textareas are not designed for this, or are
 not good enough.

 How can a webmaster make that text better? Well... you need to stop
 using the HTML textarea widget, and emulate it with divs, CSS and
 JavaScript. You need to colorize the code. Nowadays *ALL* good code
 editors colorize code. If our code editor doesn't colorize the wiki
 syntax, or doesn't even try, our editor is bad. I could be wrong, but
 maybe [[links]] and {{templates}} can be detected and colorized. And
 since you are emulating an editor, you can add a bit of useful
 behavior: make some areas read-only, so the cursor skips them.
 Oh... and you can make the whole thing AJAXified, so when you click
 [Edit section] that section becomes editable, and when you save, the
 edit view is sent off and replaced by the result. Why would you want
 people to bounce here and there to post stuff in 2009?

 He... our computers support 24-bit color, and we are showing text
 with 2 colors? pfff

 I am very much supporting you! Both code colorizing and AJAX editing
 preview. And maybe link code completion: when you press [[ it would
 open a JS-generated dialog with a drop-down title search list. It's
 not that wikitext is too hard (with the huge exception of templates),
 but the editor is very much restricted. Though templates surely aren't
 nice, and it's probably better to keep them separate and XML-ize them.
 Dmitriy


For templates you can use a code beautifier that de-obfuscates the
code. Templates can be hard to write, but there's no reason to let
them be hard to read. Maybe MW already does that...

Here is an example using another template language (bbcode):

[uRL]lalala[/URL]  =>  [url]lalala[/url]

[quote=Dan]blabla  bla bla[/img]  =>

[quote=Dan]
   bla bla bla
[/quote]

I know that maybe this is a bad idea, that it may cause other
problems, and that there are a million other things that are worth our
time :-I

A server-side code beautifier can also help a client-side colorizer:
it can massage the template code first and be smarter than the
colorizer, preventing problems before they hit the colorizer. A code
beautifier can be implemented in an incremental way; the first version
can just lowercase all letters. The colorizer can also be implemented
in an incremental way, starting by colorizing simple stuff. If a
colorizer or a beautifier becomes a problem, it can be deactivated,
and things will continue smoothly.
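
To show how small version one could be, here's a sketch (my own toy
code, not an existing MW feature) of a bbcode beautifier that only
lowercases tag names:

<?php
// Version 1 of an incremental beautifier: lowercase bbcode tag names,
// leave everything else untouched.
function beautifyBbcode( $text ) {
    return preg_replace_callback(
        '/\[(\/?)([a-zA-Z]+)((?:=[^\]]*)?)\]/',
        function ( $m ) {
            return '[' . $m[1] . strtolower( $m[2] ) . $m[3] . ']';
        },
        $text
    );
}

echo beautifyBbcode( '[uRL]lalala[/URL]' );  // prints: [url]lalala[/url]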

-- 
--
ℱin del ℳensaje.

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l
