[Wikitech-l] Making usability part of the development process

2010-12-02 Thread Neil Kandalgaonkar
Hi there -- I don't post much here, but I was the programmer on the 
Multimedia Usability Project, which primarily focused on making uploads 
easier. The outside funding for that project just ended, so I think it's 
a good time to talk about what (if anything) we will do in the future 
along these lines.

Going forward, we ought not to think about usability as the 
responsibility of a few people in San Francisco. I have been asking myself 
how we could end the need for usability projects, and instead make that 
part of everyone's practices.

What makes you a usability engineer? My personal belief is that it isn't 
(primarily) a matter of having special knowledge.

You become a usability software engineer when you see five average users 
utterly fail to accomplish the task you wanted them to be able to 
accomplish.

Programming is a hubristic enterprise, but for UI, these negative 
feelings are essential: watching ordinary users get angry and frustrated 
dealing with what you've created, even feeling a certain shame and 
embarrassment that you got it so wrong. Only then do you see how large 
the conceptual gap is between you and the average user -- but you also 
usually come out of the experience with an immediate understanding of 
how to fix things.

So is there a way to have *everybody* who develops software for end 
users in our community have that experience? Maybe.

At the WMF, for these Usability Projects, we had to do formal studies 
with expert consultants, because these were grant-funded projects and we 
needed to present scientific data. Doing those studies is expensive and 
time-consuming.

But as a developer, it was more valuable to do hallway usability 
testing in an informal way. There are lots of startups these days that 
try to deliver such lightweight user testing over the web; could we do 
the same? Or, possibly we don't even need software; maybe what we need 
is a tradition of doing this for everything we release.

So how about that? If there were an easy way to integrate usability 
testing into your process as a developer, would you be interested? And 
what should that look like?

-- 
Neil Kandalgaonkar (   ne...@wikimedia.org



[Wikitech-l] null revisions

2010-12-02 Thread Dmitriy Sintsov
Hi!
From looking at the DB schema I cannot find an efficient way of getting 
the list of null revisions, or the opposite (the list of non-null 
revisions), with LIMIT paging (for a custom API). When I GROUP, then 
ORDER and LIMIT, the query is extremely slow.
It seems I would have to use a very inefficient GROUP BY rev_text_id 
(MySQL also offers no FIRST / LAST aggregate functions), and there is 
no index on rev_text_id by default :-( I wish there were a field like 
rev_minor_edit but for detecting null revisions, such as those 
generated by XML import / export; they confuse the logic of my wiki 
synchronization script. However, even if I could persuade people to add 
such a field to the schema, 1.15, which my customers use, was already 
released some time ago anyway :-( So probably the core patch is the 
only efficient way to solve my problem?
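
For reference, this is roughly the query I am trying to avoid (an 
untested sketch using raw SQL through the database wrapper, with the 
API paging simplified):

// Sketch only: list text records shared by more than one revision,
// i.e. null-revision candidates. On a large revision table this
// GROUP BY is very slow, and rev_text_id has no index by default.
$dbr = wfGetDB( DB_SLAVE );
$res = $dbr->query(
    "SELECT rev_text_id, COUNT(*) AS cnt, MIN(rev_id) AS first_rev
     FROM " . $dbr->tableName( 'revision' ) . "
     GROUP BY rev_text_id
     HAVING cnt > 1
     ORDER BY first_rev
     LIMIT 50",
    __METHOD__
);
foreach ( $res as $row ) {
    // Every revision sharing $row->rev_text_id except $row->first_rev
    // is a null-revision candidate; a second query per group fetches them.
}
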
Dmitriy



Re: [Wikitech-l] Making usability part of the development process

2010-12-02 Thread Trevor Parscal
+1

Cheap hallway testing is so incredibly useful that I dedicated my time 
in Berlin last year to giving a crash course in it. I am not sure it was 
effective in inspiring or educating people on how to do this, but 
everyone is welcome to revisit the slides here:

http://wikitech.wikimedia.org/index.php?title=File:Trevor_Parscal_-_Wikimedia_Developers_Workshop_-_Berlin_2010.pdf

Yesterday we had the first in our own series of user tests, conducted 
by Parul Vora. While she is trained in the kung-fu of user testing, she 
personally helped me put this set of slides together. I also pulled 
from my own experiences being involved in this kind of testing earlier 
in my career.

My general pitch is that we should all be in the habit of doing 
whatever it takes to view users as they interact with our creations. I 
often use my wife, and now sometimes my 3-year-old daughter, to help 
me. Usually just showing someone a picture of a screen and asking "how 
would you do X?" is amazingly revealing. Higher-fidelity testing is 
great, but it's 
designed to squeeze the last bit of juice out of the lemon. In my 
experience the majority of it comes out quite easily in even the most 
casual of circumstances.

My secondary pitch, which is not captured in these slides but was 
verbalized in Berlin, was my view that we should user-test APIs with 
developers. This would be especially useful for our public HTTP API, 
but even PHP and JavaScript APIs could benefit from this. This differs 
from posting to the list and saying "does anyone have any better 
ideas?" 
Instead we would design APIs around use-cases, and then observe users in 
those use-cases succeeding or failing.

Bottom line - I know from experience that if we can even subtly 
introduce user testing as a factor in our decision making process, the 
impact will be amazing.

- Trevor





Re: [Wikitech-l] null revisions

2010-12-02 Thread Bryan Tong Minh
On Thu, Dec 2, 2010 at 6:23 PM, Dmitriy Sintsov ques...@rambler.ru wrote:
 So probably the core patch is
 the only efficient way to solve my problem?

You can always supply a database patch with your extension to add
indices you need to core tables.


Bryan



Re: [Wikitech-l] null revisions

2010-12-02 Thread Dmitriy Sintsov
* Bryan Tong Minh bryan.tongm...@gmail.com [Thu, 2 Dec 2010 19:38:47 
+0100]:
 On Thu, Dec 2, 2010 at 6:23 PM, Dmitriy Sintsov ques...@rambler.ru
 wrote:
  So probably the core patch is
  the only efficient way to solve my problem?
 
 You can always supply a database patch with your extension to add
 indices you need to core tables.

Indices are not hard to add, that's true. However, even with indexes 
the GROUP BY rev_text_id query is slow on a large revision set. I will 
probably have to patch Revision::newNullRevision to set a new field 
value there (for existing revisions the new field could be filled with 
an UPDATE, but new null revisions will keep being created).
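
Roughly what I have in mind, assuming a hypothetical rev_null flag 
column were added to the revision table (untested sketch; rev_null is 
not part of the stock schema):

// Backfill sketch for an imaginary rev_null column: flag every revision
// whose text record is also used by some earlier revision (a rough
// approximation of a null revision).
$dbw = wfGetDB( DB_MASTER );
$dbw->query(
    "UPDATE " . $dbw->tableName( 'revision' ) . " AS r1" .
    " JOIN " . $dbw->tableName( 'revision' ) . " AS r2" .
    " ON r1.rev_text_id = r2.rev_text_id AND r2.rev_id < r1.rev_id" .
    " SET r1.rev_null = 1",
    __METHOD__
);
// New null revisions would get the flag inside a patched
// Revision::newNullRevision().
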
Dmitriy



Re: [Wikitech-l] Making usability part of the development process

2010-12-02 Thread Arthur Richards
I'd like to +1 all of this, particularly Trevor's thoughts on 
developer-testing APIs - or software that is otherwise meant to be 
extended/used by other developers.

As a predominantly back-end developer, it's become clear to me over the 
last few years that usability is not just a front-end concern.  'Users' 
don't necessarily only interact with the 'front end'.  Particularly in 
an open-source paradigm, 'users' need to be thought of much more broadly 
than just the person who clicks links on your website.  If we ignore 
principles of usability at the back-end code level, we run the risk of 
writing software that is utterly useless to anyone other than 
ourselves.  This not only defeats some of the purpose of OSS, it makes 
OSS that much more inaccessible to all users - developers included.



Re: [Wikitech-l] No more syntax errors!

2010-12-02 Thread Chad
On Wed, Dec 1, 2010 at 10:32 PM,  jida...@jidanni.org wrote:
 Glad you are finally checking.
 I still will check that I can at least see my Main Page after updating though.


Of course. And it's why people should always test their code.
This check doesn't catch fatals, warnings, notices or strict
issues. Only syntax errors.

-Chad



Re: [Wikitech-l] Making usability part of the development process

2010-12-02 Thread Neil Kandalgaonkar
On 12/2/10 11:05 AM, Arthur Richards wrote:

 As a predominantly back-end developer, it's become clear to me over the
 last few years that usability is not just a front-end concern.  'Users'
 don't necessarily only interact with the 'front end'.  Particularly in
 an open-source paradigm, 'users' need to be thought of much more broadly
 than just the person who clicks links on your website.

This is especially true of our projects.

Like, there's a thread over on Commons Village Pump right now about how 
to change certain strings that are hardcoded into the upload process. 
Apparently this kind of bug lasts for years rather than five minutes, 
just because the people who are most exercised about it don't have PHP 
skills or don't have commit access.

We have all kinds of contributors who know how to hack one layer of the 
system but not others.

That said, I think the deeper we go into the code, the more it becomes 
about good documentation. At the outer layers, things have to be more 
discoverable.

-- 
Neil Kandalgaonkar (   ne...@wikimedia.org



Re: [Wikitech-l] null revisions

2010-12-02 Thread Brion Vibber
On Thu, Dec 2, 2010 at 10:43 AM, Dmitriy Sintsov ques...@rambler.ru wrote:

 Indices are not hard to add, that's true. However, even with indexes
 the GROUP BY rev_text_id query is slow on a large revision set. I will
 probably have to patch Revision::newNullRevision to set a new field
 value there (for existing revisions the new field could be filled with
 an UPDATE, but new null revisions will keep being created).


What is it that your system actually needs to be able to do this for? Is
there an issue with loading up the previous text items, or are you trying to
optimize storage on your end by not storing text twice when it happened to
use the same text blob on the origin site?

Beware that there's not anything that really distinguishes null revisions
from their predecessors, other than that they come later than the previous
ones. Note that it's also possible for the earlier revision to get deleted
while a later revision using the same text blob still remains.

The previously referenced text blob might also have originally come in in a
much older revision, not the immediately preceding one; this may be legit
for certain kinds of reverts, for instance.

-- brion


[Wikitech-l] Status update on test framework deployment

2010-12-02 Thread Rob Lanphier
Hi everyone,

It's been a while since I've updated the notes from our test framework
meetings, so I just did so today:
http://www.mediawiki.org/wiki/Meetings/Test_framework

The meeting earlier today is here:
http://www.mediawiki.org/wiki/Meetings/Test_framework/2010-12-02

Not a lot of context in those, so I'll provide a summary.  Markus
Glaser has been doing a lot of work over the past month getting the
Selenium framework in shape for adding new tests.  He also documented
what parameters are necessary to run Selenium tests on our test grid,
which gives everyone access to many different browsers to test
against:
http://www.mediawiki.org/wiki/Selenium_Configuration#Sample_configuration_to_run_against_a_selenium_grid

Concurrently with that, Nadeesha and Jinseh at Calcey Technologies
have been ramping up on the framework, and have a number of tests that
Nadeesha will be committing in trunk soon.

Our conversation today was brief, and mainly a mundane run-through of
action items.  One topic we did drift into was installer testing, after
figuring out that it is a weak spot in our coverage right now (many
people refreshing their installs from trunk don't run the installer
every time they refresh, and the installer is one of the big features
of the next MediaWiki release).  The framework currently isn't well
suited to testing before the database and everything else is set up, so
the folks at Calcey are going to spend some time thinking about that.

Since we don't have a manual testing plan for the installer, I've put
a stub here:
http://www.mediawiki.org/wiki/New_installer/Test_plan

...and I've asked Calcey to flesh it out.  The idea is that once we
all agree on what makes sense to test at all (manually, automated, or
otherwise), then we can talk about what makes sense to automate.

The installer testing is a plan we've cooked up today, so we haven't
even run it past Chad yet, for example (/me waves at Chad).

If you'd like to participate in the meetings, let me know.  Our IRC
meetings obviously require no RSVP (next one is next week, December 9
at 8am PST on #mediawiki), but our voice meetings we'd like you to
RSVP for, since they're still kind of a pain to get going (next one is
the week after next, December 16 at 8am PST).

Rob



Re: [Wikitech-l] Making usability part of the development process

2010-12-02 Thread Platonides
Neil Kandalgaonkar wrote:
 
 Like, there's a thread over on Commons Village Pump right now about how 
 to change certain strings that are hardcoded into the upload process. 
 Apparently this kind of bug lasts for years rather than five minutes, 
 just because the people who are most exercised about it don't have PHP 
 skills or don't have commit access.

Not really. I had thought about it before without arriving at a Right
Way to fix it. However, today I was inspired.

http://www.mediawiki.org/wiki/Special:Code/MediaWiki/77623





Re: [Wikitech-l] null revisions

2010-12-02 Thread Dmitriy Sintsov
* Brion Vibber br...@pobox.com [Thu, 2 Dec 2010 12:15:18 -0800]:
 What is it that your system actually needs to be able to do this for?
 Is there an issue with loading up the previous text items, or are you
 trying to optimize storage on your end by not storing text twice when
 it happened to use the same text blob on the origin site?

I try to synchronize recent changes between two wiki sites via XML 
chunks (consecutive groups of 10 revisions) created by WikiExporter. It 
mostly works (although I still haven't checked everything thoroughly -- 
for example, what happens when a revision with an earlier timestamp is 
imported on top of a revision with a later timestamp?). However, 
ImportReporter::reportPage also creates an extra null revision for 
every imported page, for informational purposes ("Imported by WikiSync" 
in my case). Unfortunately, at the next run of synchronization such a 
revision becomes a difference between the sites, and synchronization 
reports that the sites are not equal (even though there really were no 
changes except the informational null revision).

 Beware that there's not anything that really distinguishes null
 revisions from their predecessors, other than that they come later
 than the previous ones. Note that it's also possible for the earlier
 revision to get deleted while a later revision using the same text
 blob still remains.

That's really bad for me - I would probably have to patch deletion as 
well, to clear the rev_null flag on a null revision row when the 
earlier revision sharing its rev_text_id is deleted :-( That is too 
many core patches, and I am not even sure I could intercept every kind 
of revision deletion (I should check that).

With GROUP BY on a large set being slow and FIRST / LAST aggregates 
unavailable, it would probably be easier for me simply not to call 
ImportReporter from my derived WikiImporter class? Informational null 
revisions would then simply not be created. They are nice for the end 
user, that's why I have tried to keep them.
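
If I go that route, the setup would be roughly the following (from 
memory of how Special:Import wires things up, so only a sketch):

// Sketch: build the importer but never attach ImportReporter::reportPage
// as the page-out callback, so no informational null revision is created.
// $source is an ImportStreamSource, as elsewhere in the import code.
$importer = new WikiImporter( $source );
// Special:Import effectively does:
//     $importer->setPageOutCallback( array( $reporter, 'reportPage' ) );
// Skipping that (or passing a lightweight callback of our own) avoids
// the extra revisions while still performing the import itself.
$importer->doImport();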

 The previously referenced text blob might also have originally come
 in in a much older revision, not the immediately preceding one; this
 may be legit for certain kinds of reverts, for instance.

Thanks for explanation.
Dmitriy



Re: [Wikitech-l] null revisions

2010-12-02 Thread Brion Vibber


It sounds to me like what you need to do is recognize and skip your tool's
edits, not null edits generally.

If these are all created by a particular user account, for instance, that
should be pretty straightforward: compare the user ID value and skip the
revision.

-- brion


Re: [Wikitech-l] null revisions

2010-12-02 Thread Dmitriy Sintsov
* Brion Vibber br...@pobox.com [Thu, 2 Dec 2010 20:45:16 -0800]:
 It sounds to me like what you need to do is recognize and skip your
 tool's edits, not null edits generally.

 If these are all created by a particular user account, for instance,
 that should be pretty straightforward: compare the user ID value and
 skip the revision.

A good idea - I'll make a mandatory account name for synchronization. 
That should probably work; however, is there a way to disable 
interactive edits for a particular account while still allowing it to 
use Import / Export and the API in general? I'll check whether denying 
the 'edit' action for the synchronization account would still allow 
automatic import to run. Otherwise, one can only hope that the 
synchronization bot account will not be misused for ordinary edits 
(which should not be skipped during synchronization). At the very 
least, I can put such a warning on the extension page.
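
Something like this in LocalSettings.php is what I would try first (an 
untested sketch; the group and account names are placeholders, and 
$wgRevokePermissions only exists as of 1.16):

// Hypothetical 'wikisync' group: allowed to import, interactive editing
// revoked.
$wgGroupPermissions['wikisync']['import']       = true;
$wgGroupPermissions['wikisync']['importupload'] = true;
// Rights are OR-ed across groups, so merely setting 'edit' to false here
// would not override the default 'user' group; 1.16+ has
// $wgRevokePermissions for that.
$wgRevokePermissions['wikisync']['edit'] = true;
// The synchronization account would then be added to the 'wikisync'
// group via Special:UserRights.
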
Dmitriy
