Re: [Wikitech-l] helping in WYSIWYG editor efforts

2011-01-22 Thread Panos Louridas
Picking up on the usability testing,

Since we do not have a dedicated usability lab (but we do have people that have 
carried out usability studies), we will have to create something ourselves.

I understand that the basic requirements are the ability to capture both the 
screen and the user; and to capture them in a synchronized way.

From some research that we did, we came up with the following setups:

(1) Telestream ScreenFlow (http://www.telestream.net/screen-flow/overview.htm)

(2) Mac OS X 10.6 Podcast Capture (but this requires a podcast server, i.e., Mac 
OS X 10.6 Server, and I am not sure it's worth it).

(3) Matterhorn capture (http://www.opencastproject.org/matterhorn_capture)

We are leaning towards solution (1). We welcome any comments, any alternative 
solutions that we may explore, or any arguments for (2) and (3).

Cheers,

Panos.

On Jan 21, 2011, at 4:07 PM, Jan Paul Posma wrote:

 So a few minutes ago we had a conversation about this. Panos will set up a 
 public collaboration space within GRNET. A few developers will be (part-time) 
 working on this from February for a (so far) unspecified amount of time. The 
 consensus was that it would be good to start off with some basic usability 
 testing, to see how well the different tools work for novice users. It'll be 
 very basic testing, with about 10 subjects from within GRNET (so with a bit 
 of technical bias) but only those who haven't edited before.
 
 Both Magnus' and my tools will be implemented on a clone of the Greek 
 Wikipedia and we will set up a fabricated article that works well with both 
 of our editors. It's only about the usability, not about technical aspects 
 for now. Both editing tools will have to be adapted and localised, perhaps 
 this can even be done by GRNET developers. We'll use my usability script that 
 I used before with the Sentence-Level Editing usability research.
 
 Once this usability testing has been done, we'll decide how to distribute the 
 efforts, and what will be done. We'll work closely with the GRNET developers 
 to assist them in working on these projects. Once we have more information, 
 it will be posted to this list.
 
 Cheers,
 Jan Paul
 
 On 19-Jan-2011, at 23:34, Magnus Manske wrote:
 
 I have added Panos to Skype; yes, we should probably exchange Skype
 handles off-list.
 
 I am in Cambridge (London time), so that should work.
 
 Cheers,
 Magnus
 
 
 On Wed, Jan 19, 2011 at 8:34 PM, Jan Paul Posma jp.po...@gmail.com wrote:
 Skype sounds great! Also, I heard you work with Ariel, which is great 
 because that way you have a more local person to contact with MediaWiki 
 questions. Perhaps we can get off-list with those interested to schedule an 
 introductory meeting? (You, me, Magnus, Ariel, others?) I am located in the 
 Netherlands, so our hours will be similar.
 
 Cheers,
 Jan Paul
 
 On 19-Jan-2011, at 19:47, Panos Louridas wrote:
 
 Thanks to both Jan Paul and Magnus for taking up the offer!
 
 Based on your input I will look into our developer pool for people with 
 expertise in the following:
 
 * Advanced JS, preferably with experience in optimisation issues etc.
 
 * UI design, usability testing, etc.
 
 * Text processing (of sorts) for the needs of SLE
 
 (if you believe I am missing something, say so)
 
 I expect to have the people in place in February; I will let you know. I 
 will be following the list.
 
 Jan Paul indicated that we might talk in more detail. I do not follow IRC 
 because of my tight schedule; I do use Skype, however (ID: louridas). 
 Please, Jan Paul, Magnus, and others, let me know if that suits you. As I 
 am located in Athens, my waking hours are around Eastern European Time.
 
 Cheers,
 
 Panos.
 
 On Jan 19, 2011, at 3:54 PM, Jan Paul Posma wrote:
 
 A very generous offer indeed!
 
 My own SLE and Magnus' WYSIFTW are indeed the most active projects, so 
 that would be a good bet. Actually, for me the timing is just right, as 
 I'll be working on a paper about this editor for a while, so it'd be cool 
 to have someone(s) continue the project. If one of your researchers has a 
 brilliant idea on how to do this right, that would obviously be really 
 valuable too.
 
 A lot of things Magnus mentioned apply to my project too:
 * Improving detection algorithms, i.e. better sentence-level editing 
 (perhaps using an external language recognition library), better 
 detection of other elements. Keep in mind that the editor excludes 
 anything it doesn't 'understand', so this is a nice fallback: you don't 
 have to write a complex parser that detects a lot of stuff at once.
 * Cross-browser/platform/device compatibility (think mobile, 
 touchscreens, etc.)
 * Usability testing (the more the merrier!)
 * Verifying detection coverage (which % of the wikitext is editable) and 
 quality (wikitext → adding markers → MediaWiki parser → removing 
 markers → wikitext?). Checking this on a large number of pages.
 * Test suites (again, the more the merrier, but only for 

Re: [Wikitech-l] helping in WYSIWYG editor efforts

2011-01-22 Thread Jan Paul Posma
I used the open source Camstudio, but (1) seems to be better as you can 
simultaneously capture the camera image. One question I didn't think of during 
the meeting: are you planning on releasing the videos online (e.g. on Wikimedia 
Commons)?

Cheers,
Jan Paul

On 22-Jan-2011, at 14:29, Panos Louridas wrote:


Re: [Wikitech-l] File licensing information support

2011-01-22 Thread Bryan Tong Minh
On Fri, Jan 21, 2011 at 3:36 AM, Michael Dale md...@wikimedia.org wrote:
 On 01/20/2011 05:00 PM, Platonides wrote:
 I would have probably gone by the page_props route, passing the metadata
 from the wikitext to the tables via a parser function.

 I would also say it's probably best to pass metadata from the wikitext to
 the tables via a parser function, similar to categories and all other
 user-edited metadata. This has the disadvantage that it's not 'as easy' to
 edit via a structured API entry point, but has the advantage of
 working well with all the existing tools, templates and versioning.

This is actually the biggest decision that has been made; the rest is
mostly implementation details. (Please note that I'm not presenting
you with a fait accompli; it is of course still possible to change
this.)

Handling metadata separately from wikitext provides two main
advantages: it is much more user friendly, and it allows us to
properly validate and parse data.

Having a clear, separate input text field "Author: ___" is much more
user friendly than {{#fileauthor:}}, which is, so to say, a kind of
obscure MediaWiki jargon. I know that we could probably hide it behind a
template, but that is still not as friendly as a separate field. I
keep hearing that, especially for newbies, a big blob of wikitext is
plain scary. We regulars may be able to quickly parse the structure in
{{Information}}, but for newbies this is certainly not so clear.
We actually see that from the community there is a demand for
separating the metadata from the wikitext -- this is, after all, why
they implemented the uselang= hacked upload form with a separate text
box for every meta field.

Also, a separate field allows MediaWiki to understand what a certain
input really means. {{#fileauthor:[[User:Bryan]]}} means nothing to
MediaWiki or re-users, but "Author: Bryan___ [checkbox] This is a
Commons username" can be parsed by MediaWiki to mean something. It
also allows us to mass-change, for example, the author. If I want to
change my attribution from "Bryan" to "Bryan Tong Minh", I would need
to edit the wikitext of every single upload, whereas in the new system
I go to Special:AuthorManager and change the attribution.
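The mass-change argument can be sketched abstractly. This is a hypothetical Python illustration, not MediaWiki's actual schema: the dictionaries, the `author_id` field, and the `attribution()` helper are all invented, but they show why an author *reference* makes renaming a one-record update instead of an edit per page.

```python
# Referenced approach: each file page points at an author record by id;
# something like Special:AuthorManager would edit that one record.
authors = {4: "Bryan"}
pages = {
    "File:Sunset.jpg": {"author_id": 4},
    "File:Bridge.jpg": {"author_id": 4},
}

def attribution(title):
    """Resolve the display attribution for a file page via its author record."""
    return authors[pages[title]["author_id"]]

# Changing attribution is a single update, not one wikitext edit per upload:
authors[4] = "Bryan Tong Minh"
print(attribution("File:Sunset.jpg"))  # Bryan Tong Minh
```

With the attribution baked into every page's wikitext instead, the same change would require touching every upload.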

 Similar to categories, and all other user edited metadata.
Categories are a good example of why metadata does not belong in the
wikitext. If you have ever tried renaming a category... you need to
edit every page in the category and rename it in the wikitext. Commons
is running multiple bots to handle category rename requests.

All these advantages outweigh the pain of migration (which could
presumably be handled by bots), in my opinion.


Best regards,
Bryan

___
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l


Re: [Wikitech-l] File licensing information support

2011-01-22 Thread Platonides
An internally handled parser function doesn't conflict with showing it
as a textbox.

We could for instance store it as a hidden page prefix.

Data stored in the text blob:
Author: [[Author:Bryan]]
License: GPL
---
{{Information| This is a nice picture I took }}
{{Deletion request|Copyvio from http://www.example.org}}


Data shown when clicking edit:

Author: <input type="text" value="Bryan" />
License: <select>GPL</select>

<textarea name="textbox1">
{{Information| This is a nice picture I took }}
{{Deletion request|Copyvio from http://www.example.org}}
</textarea>

Why do I like such approach?
* You don't need to create a new way for storing the history of such
metadata.
* Old versions are equally viewable.
* Things like edit conflicts are already handled.
* Diffing could be done directly with the blobs.
* Import/export automatically works.
* Extendable for more metadata.
* Readable for tools/wikis unaware of the new format.

On the other hand:
* It breaks the concept of "everything is in the source".
* Parsing is different based on the namespace. A naive rendering of
"License: GPL" as plain text, instead of showing an image and a GPL
excerpt, would be acceptable, but if incomplete markup is stored there,
the renderings would be completely different. This could be avoided by
placing the metadata inside a tag. But what happens if the tag is
inserted elsewhere in the page? MediaWiki doesn't have run-once tags.


PS: The field author would be just a pointer to the author page, so you
wouldn't need to edit everything in any case.
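The proposed blob layout (metadata lines before a "---" separator, ordinary wikitext after it) can be illustrated with a short sketch. Python is used here only as neutral pseudocode; the real implementation would live in PHP core, and the function name and line format are assumptions.

```python
import re

def split_blob(blob):
    """Split a stored text blob into (metadata dict, wikitext body)."""
    head, sep, body = blob.partition("\n---\n")
    if not sep:                       # no metadata prefix present
        return {}, blob
    meta = {}
    for line in head.splitlines():
        m = re.match(r"(\w+):\s*(.*)", line)
        if m:
            meta[m.group(1)] = m.group(2)
    return meta, body

blob = ("Author: [[Author:Bryan]]\n"
        "License: GPL\n"
        "---\n"
        "{{Information| This is a nice picture I took }}")
meta, text = split_blob(blob)
# meta == {'Author': '[[Author:Bryan]]', 'License': 'GPL'}
```

Old revisions, diffs and exports then keep working unchanged, because the metadata travels inside the same text blob.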




Re: [Wikitech-l] Announcing OpenStackManager extension

2011-01-22 Thread Platonides
Ryan Lane wrote:
 For the past month or so I've been working on an extension to manage
 OpenStack (Nova), for use on the Wikimedia Foundation's upcoming
 virtualization cluster:
 
 http://ryandlane.com/blog/2011/01/02/building-a-test-and-development-infrastructure-using-openstack/
 
 I've gotten to a point where I believe the extension is ready for an
 initial release.

Congratulations, Ryan!




Re: [Wikitech-l] File licensing information support

2011-01-22 Thread Krinkle
On Jan 22, 2011 at 21:04 Platonides wrote:
 An internally handled parser function doesn't conflict with showing it
 as a textbox.

 We could for instance store it as a hidden page prefix.

 Data stored in the text blob:
 Author: [[Author:Bryan]]
 License: GPL
 ---
 {{Information| This is a nice picture I took }}
 {{Deletion request|Copyvio from http://www.example.org}}
 

 Data shown when clicking edit:

 Author: input type=text value=Bryan /
 License: selectGPL/select

 textarea name=textbox1
 {{Information| This is a nice picture I took }}
 {{Deletion request|Copyvio from http://www.example.org}}
 /textarea

So PHP would extract {{#author:4}} and {{#license:12}} from the
text blob when showing the edit page, show the remaining wikitext in
the textarea and the author/license as separate form elements, and
upon saving generate {{#author:4}} {{#license:12}}\n again and
prepend it to the text blob.

Duplicate instances of these would be ignored (i.e. stripped
automatically, since they're not re-inserted into the text blob upon
saving). One small downside is that if someone edited the textarea
manually to do stuff with author and license, the next edit would
re-arrange them, since they're extracted and re-inserted, thus showing
messy diffs. (Not a major point as long as it's done independently of
JavaScript, which it can be if done from core/PHP.)

If that's what you meant, I think it is an interesting concept that
should not be ignored; however, personally I am not yet convinced this
is the way to go. But looking at the complete picture of up- and
downsides, this could be something to consider.
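The extract-and-prepend cycle described here might look like the following minimal sketch. The marker syntax is taken from the message; everything else (function names, the regex, Python rather than PHP) is an invented illustration.

```python
import re

MARKER = re.compile(r"\{\{#(author|license):(\d+)\}\}\s*")

def extract(blob):
    """Pull author/license markers out of the blob when showing the edit page.
    The first occurrence of each field wins; duplicates are dropped simply
    because they are never re-inserted."""
    fields = {}
    def grab(m):
        fields.setdefault(m.group(1), m.group(2))
        return ""
    body = MARKER.sub(grab, blob)
    return fields, body

def save(fields, body):
    """Re-prepend the canonical markers before storing the blob."""
    head = "".join("{{#%s:%s}}" % (k, v) for k, v in sorted(fields.items()))
    return head + "\n" + body

fields, body = extract("{{#author:4}} {{#license:12}}\nSome text {{#author:9}}")
# the stray duplicate {{#author:9}} is stripped; author stays '4'
new_blob = save(fields, body)
```

Because `save()` always emits the markers in one canonical order at the top, a manual edit that scattered them through the textarea would indeed produce the rearranged diff Krinkle mentions.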

--
Krinkle



Re: [Wikitech-l] File licensing information support

2011-01-22 Thread Platonides
Krinkle wrote:
 So PHP would extract {{#author:4}} and {{#license:12}} from the  
 textblob when showing the editpage.

That's an alternative approach. I was thinking of accepting them only at
the beginning of the page, but extracting them from everywhere is also an
alternative.




Re: [Wikitech-l] WYSIFTW status

2011-01-22 Thread Magnus Manske
On Wed, Jan 19, 2011 at 10:57 PM, Magnus Manske
magnusman...@googlemail.com wrote:
 On Wed, Jan 19, 2011 at 8:25 PM, Platonides platoni...@gmail.com wrote:
 Magnus Manske wrote:
 On my usual test article [[Paris]], the slowest section (History)
 parses in ~5 sec (Firefox 3.6.13, MacBook Pro). Chrome 10 takes 2
 seconds. I believe these will already be acceptable to average users;
 optimisation should improve that further.

 Cheers,
 Magnus

 What about long tables?

 Worst-case-scenario I could find:
 http://en.wikipedia.org/wiki/Table_of_nuclides_(sorted_by_half-life)#Nuclides_with_no_experimentally_observed_decays

 4.7 sec in Chrome 10 on my iMac.
 6.2 sec in Firefox 4 beta 9.
 10.7 sec in Firefox 3.6.

 Could be worse, I guess...



Another update that might be of interest (if not, tell me :-)

I just went through my first round of code optimisation. Parsing speed
has improved considerably, especially for older browsers: Firefox
3.6 now parses [[Paris]] in 10 sec instead of 32 sec (YMMV).

Also, it is now loading the wikitext and the image information from
the API in parallel, which reduces pre-parsing time.

For small and medium-size articles, editing in WYSIFTW mode now often
loads (and parses) faster than the normal edit page takes to load
(using Chrome 10).
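The parallel loading idea can be sketched as follows. This is a Python illustration with stand-in fetch functions; WYSIFTW itself is JavaScript issuing MediaWiki API requests, so only the shape of the idea carries over: both requests are in flight at once, and total waiting time is roughly that of the slower one.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_wikitext(title):
    # stand-in for an API call like action=query&prop=revisions&rvprop=content
    return "== History ==\nParis is ..."

def fetch_image_info(title):
    # stand-in for an API call like action=query&prop=imageinfo
    return {"File:Paris.jpg": {"width": 800}}

def load_article(title):
    """Issue both requests concurrently instead of one after the other."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        text_future = pool.submit(fetch_wikitext, title)
        images_future = pool.submit(fetch_image_info, title)
        return text_future.result(), images_future.result()

text, images = load_article("Paris")
```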

Cheers,
Magnus



Re: [Wikitech-l] Farewell JSMin, Hello JavaScriptDistiller!

2011-01-22 Thread Maciej Jaros
Michael Dale (2011-01-21 16:04):
 On 01/21/2011 08:21 AM, Chad wrote:
 While I happen to think the licensing issue is rather bogus and
 doesn't really affect us, I'm glad to see it resolved. It outperforms
 our current solution and keeps the same behavior. Plus as a bonus,
 the vertical line smushing is configurable so if we want to argue
 about \n a year from now, we can :)
 Ideally we will be using Closure by then, and since it rewrites
 functions, variable names and sometimes collapses multi-line
 functionality, newline preservation will be a moot point. Furthermore,
 Google even has a nice add-on to Firebug [1] for source code mapping.
 Making the dead horse even more dead.

 I feel like we are stuck back in time, arguing about optimising code that
 came out eons ago in net time (more than 7 years ago). There are more
 modern solutions that take into consideration these concerns and do a
 better job at it (i.e. not just a readable line but a pointer back to
 the line of source code that is of concern).

 [1] http://code.google.com/closure/compiler/docs/inspector.html

Great. Now I only need to tell the user to install Firefox, install 
Firebug and some other addon, open the page in Firefox... Oh, wait. This 
error does not occur in Firefox...

Please, I can live with folding new lines (though I don't believe those 
few bytes are worth it), but actually compiling the code (or packing, as 
some say) would be just evil for MediaWiki, or Wikimedia to be more exact.

Just remember that people all over the world are hacking on MediaWiki 
all the time. Making it harder won't help a bit.

Regards,
Nux.



Re: [Wikitech-l] Farewell JSMin, Hello JavaScriptDistiller!

2011-01-22 Thread Brion Vibber
On Sat, Jan 22, 2011 at 5:40 PM, Maciej Jaros e...@wp.pl wrote:

 Great. Now I only need to tell the user to install Firefox, install
 Firebug and some other addon, open the page in Firefox... Oh, wait. This
 error does not occur in Firefox...

 Please, I can live with folding new lines (though I don't believe those
 few bytes are worth it), but actually compiling the code (or packing, as
 some say) would be just evil for MediaWiki, or Wikimedia to be more exact.

 Just remember that people all over the world are hacking on MediaWiki
 all the time. Making it harder won't help a bit.


Making it more powerful and more self-aware may, however, help a **lot**.

ResourceLoader provides a relatively sane module-loading system for managing
what bits of code and data are flying around. Consider supplementing this
better encapsulation with in-browser development tool helpers -- say, to
provide a code editor with syntax highlighting and flagging of errors, and
to let you load your custom site/user/gadget JS modules in context while
you're working on them.

Common tasks like adding custom tabs or grabbing some part of the existing
UI to modify can be made much easier by a script editor that knows these
things exist, and can help you identify which bits of the UI you want to
work with by clicking on them.


Even coming back down to earth, consider that current versions of every
major browser now have at least a basic Firebug-like debugging console and a
DOM inspector available. Debugging JavaScript on IE used to consist of
getting a dialog box with a file name and line number that were usually
*entirely wrong*... other browsers gave you better error messages but not
much more.

Debugging JavaScript in today's browsers can at least pop up the live code
in context for you, and by sticking '?debug=false' on your URL, all our
minification will be conveniently gone from view, and the individual modules
easier to identify by hand. (This too could probably be given a nice UI
helper in a script debugging tool or even in a global error handler... which
could also do things like report JS errors upstream to MediaWiki
developers.) Whether you have all the magic of a real debugger or not,
that's *hugely* useful in debugging, and has made a world of difference in
my experience.

-- brion


Re: [Wikitech-l] Farewell JSMin, Hello JavaScriptDistiller!

2011-01-22 Thread Roan Kattouw
2011/1/23 Brion Vibber br...@pobox.com:
 Debugging JavaScript in today's browsers can at least pop up the live code
 in context for you, and by sticking '?debug=false' on your URL, all our
 minification will be conveniently gone from view, and the individual modules
 easier to identify by hand.
I recommend using ?debug=true for that ;)

Roan Kattouw (Catrope)
