A friend of mine (whom I will credit if I get permission to attribute
the quote to them) recently said to me, on the subject of why most
students in programming classes have a hard time:

    ... programming is not well-explained, and ... a bunch of dudes in
    their prime of life laid down a whole lot of pseudo-zen language
    about it to intimidate their would-be successors.

I agree that these are real problems, but I've also seen a lot of
people have a lot of difficulty with programming --- even in
environments, like spreadsheets and JavaScript, where I have seen no
pseudo-zen language flying around.

I also think there are other reasons that people have a hard time
programming, rooted in premature optimization, proprietary software
business models, cognitive science, user interface design, and
computability theory.  Some of these are remediable technologically.

First, cognitive science and CHI.  In Don Norman's language, there are
large gulfs of execution (it's hard to figure out what to do to make
your program do a certain thing) and large gulfs of evaluation (it's
hard to know whether your program is doing what you want it to).
Programming environments are generally not very responsive, so it
takes a long time to see whether your program did the right thing, and
by the time you find out, you've lost your mental state.

Additionally, traditional programming environments require that you be
good at imagining things you can't see, like the states of your
variables in the middle of your program.  Debugging requires reasoning
backwards from effects to possible causes, and then *testing and
rejecting your incorrect hypotheses*, which is a *very* difficult
skill to learn.  Furthermore, traditional debuggers (for sequential
languages) overwhelm the user with too much information and are very
unforgiving of human error: when you try to use one to see what one of
those values is at a particular point, it's easy to accidentally
single-step past it, and then you have to start over.
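
To make that concrete, here is a minimal sketch in Python (the
function and the bug are invented for illustration, not taken from
anything above): the intermediate values of the variables are
invisible, so finding the bug means guessing at that hidden state and
then testing the guess.

    def average_positive(numbers):
        """Average of the positive entries (deliberately buggy)."""
        total = 0
        count = 0
        for n in numbers:
            if n > 0:
                total += n
            count += 1   # bug: counts every element, not just the positives
        return total / count

    print(average_positive([3, -1, 3, -2]))   # prints 1.5; we expected 3.0

    # One way to test the hypothesis "count is too big" is to expose the
    # hidden state at the suspect point, instead of single-stepping to it:
    def average_positive_traced(numbers):
        total = 0
        count = 0
        for n in numbers:
            if n > 0:
                total += n
            count += 1
            print("n:", n, "total:", total, "count:", count)
        return total / count

    average_positive_traced([3, -1, 3, -2])   # count grows on negatives too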

For some systematic psychological research on the cognitive
difficulties of programming, see "Six Learning Barriers in End-User
Programming Systems", by Andrew J. Ko et al.:
http://www.cs.cmu.edu/~ajko/papers/Ko2004LearningBarriers.pdf

Next, premature optimization.  Traditionally, for the sake of
execution efficiency, rapid startup, and small program size, software
is compiled into some kind of binary blob that is hard to inspect,
stripped not only of its source code but even of its debugging
symbols, so it's hard to look at someone else's program to see how
they did something, even if you use that feature every day.  This is
beginning to change.
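
A small illustration of the difference, as a sketch in Python (my
example, not one from the text): library code that ships as source can
be read directly, while code compiled into the interpreter binary
cannot.

    import inspect
    import json

    # json.dumps is shipped as Python source, so you can read how it works:
    print(inspect.getsource(json.dumps))

    # len is compiled into the interpreter binary; there is no source to show:
    try:
        print(inspect.getsource(len))
    except TypeError as e:
        print("no source available:", e)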

Next, some proprietary software business models demand that users be
prevented from modifying their software, so that the software can act
against their interests: for example, by refusing to function, sending their
private data to the software maker's privacy-invasion customers, or
wasting their attention and screen space on advertisements.  This also
makes it harder to look at someone else's program to see how they did
something.

Finally, in any Turing-complete language, fundamental results in
computability theory (the halting problem, Rice's theorem) make it
much easier to write code that does something surprising than to
understand why it does what it does.
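
A standard illustration, sketched in Python (the example is the
Collatz conjecture, not anything claimed above): the loop takes three
lines to write, but nobody has been able to prove that it halts for
every positive n.

    def collatz_steps(n):
        """Count the steps until n reaches 1 (if it ever does)."""
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps

    print(collatz_steps(27))   # 111, but whether it halts for all n is open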

Dijkstra would contend that writing programs that work requires
symbolic reasoning, which is also rather more difficult than concrete
reasoning.  I think that kind of programming is important, but it's
not really the majority of what I do when I program.
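
Here is a rough sketch of the contrast, in Python rather than
Dijkstra's guarded commands (the function and its invariant are my
illustration): concrete reasoning checks a few inputs; symbolic
reasoning argues from a loop invariant that the claim holds for all of
them.

    def sum_to(n):
        """Return 0 + 1 + ... + n, for n >= 0."""
        total = 0
        i = 0
        # Invariant: total == 0 + 1 + ... + i == i * (i + 1) // 2
        while i < n:
            i += 1
            total += i
            assert total == i * (i + 1) // 2   # spot-checks the symbolic claim
        return total

    # Concrete reasoning: try a few cases and hope they generalize.
    print(sum_to(5))    # 15
    # Symbolic reasoning: the invariant plus the exit condition (i == n)
    # guarantees that sum_to(n) == n * (n + 1) // 2 for every n >= 0.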
