On 29 Jun 2009 at 12:55, John Howell wrote:

> At 10:50 AM -0400 6/29/09, David W. Fenton wrote:
> >
> >The results of a statistical
> >study of people's behavior are not PRESCRIPTIVE, but DESCRIPTIVE.
> >That is, they don't say what people SHOULD do, but describe how
> >people behave.
> 
> All very true, but let's not forget the dirty little secret of 
> statistical studies.  The selection of subjects is crucial to the 
> final result, and it's VERY easy to skew the results in advance by 
> careful selection of subjects.

That's a problem statistics knows how to handle. You present your 
results with a significance level. A result at p = 0.05 means there 
is only a 5% chance you'd see a difference that large if there were 
no real difference at all; in other words, you can be 95% confident 
the finding isn't a fluke of that particular sample. That's a fairly 
high standard for research like this, and not likely to be what was 
used.
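For concreteness, here's a small sketch in Python of what that 5% 
threshold means. Every number in it is invented for illustration; 
none of it comes from the actual Apple or Microsoft studies.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Imaginary task-completion times (seconds) for two groups of subjects.
# These values are made up purely to illustrate the test.
mouse_times = rng.normal(loc=5.2, scale=1.0, size=40)
keyboard_times = rng.normal(loc=5.9, scale=1.0, size=40)

# Two-sample t-test: how likely is a gap this large if the two
# methods were really equally fast?
t_stat, p_value = stats.ttest_ind(mouse_times, keyboard_times)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Significant at the conventional 5% level.")
else:
    print("Not significant at the 5% level.")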

I don't know the statistical specifics. I do know the studies were 
conducted by professionals who, at least after the triumph of the 
GUI, didn't really have any need to promote mouse usage any more than 
was already the general practice. That is, there's no reason why 
those conducting the study would need to nudge the results in any 
particular direction.

>  (Similarly, the medieval Church 
> required courses in logic as a part of education (and probably still 
> does), because they understood perfectly that logic can be used to 
> prove anything you want as long as you control all the initial 
> assumptions!!)  Select subjects at random and you get the bell-shaped 
> curve.  Select any other way and you have skewing built in.

Do you really think that the people at Apple and Microsoft were 
statistically naïve enough to not account for these factors?

> Statistical analysis loses validity to the extent that subject 
> selection is not completely random (Statistics 101!), because the 
> statistical tools ASSUME random selection and therefore project their 
> results back to the specific population sampled and NOT to the 
> general population.  (Also Statistics 101:  the results of most 
> statistical studies map accurately only to college Sophomores, 
> because the subjects of most statistical studies, especially in the 
> early days, were college Sophomores!!)

I think the paragraph above is very sloppily worded. Not all 
statistical tools assume a normal (bell-shaped) distribution, because 
not all natural distributions are normal.
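To make the selection question concrete, here's a little made-up 
sketch of how a self-selected sample can skew an estimate. The 
population and the "fast volunteers" rule are pure invention, just to 
show the mechanism.

import numpy as np

rng = np.random.default_rng(1)

# Imaginary "general population" of task times, mean about 8 seconds.
population = rng.normal(loc=8.0, scale=2.0, size=100_000)

# Random selection: every member is equally likely to be chosen.
random_sample = rng.choice(population, size=200, replace=False)

# Self-selected sample: suppose only fast, experienced users volunteer,
# modeled crudely here as the fastest quarter of the population.
volunteers = np.sort(population)[: len(population) // 4]
biased_sample = rng.choice(volunteers, size=200, replace=False)

print(f"population mean    : {population.mean():.2f}")
print(f"random-sample mean : {random_sample.mean():.2f}")  # close to 8
print(f"biased-sample mean : {biased_sample.mean():.2f}")  # noticeably lower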

> So did MS and Apple sample the entire population at random?  I doubt 
> it. 

Statistics tells us that doing so would not have improved the accuracy 
of the survey once they had surveyed an appropriately large sample.
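Again a made-up sketch, this time of why a big enough sample is what 
matters rather than sampling everyone: the wobble in the estimate 
shrinks roughly like 1 over the square root of the sample size, no 
matter how large the population is.

import numpy as np

rng = np.random.default_rng(2)

# Imaginary population of a million task times (invented numbers).
population = rng.normal(loc=8.0, scale=2.0, size=1_000_000)

for n in (25, 100, 400, 1600, 6400):
    # Repeat the survey many times to see how much the estimate wobbles.
    means = [rng.choice(population, size=n).mean() for _ in range(1_000)]
    print(f"sample size {n:5d}: spread of estimates = {np.std(means):.3f}")

# The spread roughly halves each time the sample quadruples, so past a
# certain size, surveying more people buys almost nothing.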

> That would have given them results with validity but very low 
> applicability because a vast majority of the general population are, 
> in fact, computer illiterate (or "naive," which is a much nicer 
> word!), even though WE might not think so.  No, they probably would 
> have sought out experienced computer users, although not necessarily 
> users of specific programs if the experimental design followed 
> acceptable guidelines. 

Rather than speculating about what did or didn't happen so that you 
can dismiss the results of the studies, why not look them up and find 
out?

Remember, I'm not promoting the validity of the studies, just 
pointing out that the conclusions fly in the face of conventional 
wisdom.

> So right there, that limits the universe to 
> which the results apply AND it skews the results unless they were 
> clever enough to control for the ways their subjects already used 
> their keyboards and mice.  And I'd also bet that they allowed the 
> subjects to self-select themselves as well (by putting up notices or 
> recruiting in locations where they could expect to find users), and 
> that's definitely a no-no but it's the easy way out.

Bet all you want. You might be able to find out what actually 
happened in the studies instead of just speculating about it.

> Several years ago there was a European study of perfect pitch which 
> claimed to find that certain areas of the brain were active in people 
> who had perfect pitch and not in people who did not.  Sounds valid, 
> right?  But it wasn't, because they recruited their subjects by going 
> to a music conservatory and asking for people who claimed to have 
> perfect pitch!  In other words, the experimental team were studying a 
> human attribute that they themselves could not define, did not 
> understand, and did not use preliminary screening tests for, and 
> their subjects were self-selected!  The fact that they DID get 
> results makes it an interesting study, but only a preliminary one 
> since they had no control subjects.
> 
> So yes, I can question the validity of the studies you cite, BUT on 
> the basis of their probable subject selection rather than on any 
> other aspect, and I can question it without studying the full 
> experimental reports (which I did for the perfect pitch article), but 
> questioning it does not mean rejecting it.  And questioning is a 
> basic component of scientific method.

You've questioned an imaginary study, not any of the actual studies. 
You may or may not be right, but there's no way to know until someone 
actually looks it up. I don't have a dog in this fight in regard to 
whether or not the studies were correctly done, but I have strong 
doubts that companies like Apple and Microsoft would subsidize shoddy 
research. This would tend to suggest to me that, at least pro forma, 
the studies were properly conducted, particularly because failure to 
do so would have elicited ridicule from the computing community.

Perhaps it did? 

I don't know.

Nor do I really care.

My concern here is the absolutely astonishing vehemence with which 
the mere suggestion of mousing being faster (based on certain 
studies) is met with all sorts of speculation about why the studies 
had to be flawed, or could have been wrongly designed. What is 
completely absent in the "arguments" against the studies is even one 
factual assertion about the studies themselves. Nobody seems 
interested in investigating what the content or design of the studies 
was -- those in opposition seem to want to just speculate endlessly 
on bad motives or improper study design.

> I fully understand and appreciate your comments, David, but also 
> those who raise questions for their own reasons.  For myself, I 
> studied keyboard (it was called "typing" in those days and used real 
> typewriters!) in high school and taught myself to use the mouse in 
> middle age, and I do whatever is comfortable for ME, as I imagine we 
> all do.

And? Individuals' actions are never going to be predicted by 
statistical analysis. This should surprise no one.

That doesn't mean that the behavior of groups cannot be described in 
the aggregate, after careful study.

It seems to me that it is this latter assertion that is upsetting 
people, who seem to read it as a claim that statistics can predict 
individual behavior.
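One more invented sketch of that distinction: with a reasonable 
sample, the group average can be pinned down quite tightly even 
though no individual's value is predictable from it.

import numpy as np

rng = np.random.default_rng(3)

# Imaginary per-person task times for a group of 500 people.
group = rng.normal(loc=6.0, scale=2.0, size=500)

mean = group.mean()
sem = group.std(ddof=1) / np.sqrt(len(group))  # uncertainty of the group mean

print(f"group mean             : {mean:.2f} +/- {sem:.2f}")  # narrow
print(f"one particular person  : {group[0]:.2f}")            # can sit far away
print(f"person-to-person spread: {group.std(ddof=1):.2f}")   # stays wide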

-- 
David W. Fenton                    http://dfenton.com
David Fenton Associates       http://dfenton.com/DFA/

