Yes, your description of the "thought clearing process" is very accurate;
the main point is the two interesting issues this process raises:

A. Considering that the pseudo-code could have contributed more to my
understanding of the problem than the final code, how much is lost by not
having such "intermediate" artefacts somehow be part of the final code?
What would be the best way to incorporate them? There is a partial analogy
in mathematics, where a mathematician's job is pretty much to produce
proofs, yet the finished proofs carry much less information about how they
were discovered.

B. Can programming languages be "brought" closer to the way we think yet
still stay executable by the computer... ? (So that A is not relevant
anymore)

B1. ... via a particular programming style?

B2. ... by an appropriate design?

Such a division helps to systematise some of the approaches you already
mentioned.

For example, with literate programming, which offers a solution to A), I
could keep the pseudo-code as part of the program in an interesting way,
but with the amount of cross-referencing that goes on, I wonder how easy it
is to remember where each part of a large program is located. Also, I have
a hard time imagining multiple people working concurrently on a large
literate program... But I don't have much experience here.

For B1), I learned lots of little ways of writing "readable" code from
Martin Fowler's "Refactoring", tricks like these:

http://martinfowler.com/refactoring/catalog/introduceExplainingVariable.html
http://martinfowler.com/refactoring/catalog/decomposeConditional.html

But this can only go so far - it works in business programming, but for
really complicated things it isn't enough to make an "ordinary" programming
language a good "tool for thought".
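To make the two tricks concrete, here is a small sketch in Ruby (a made-up
pricing example of my own, not something from Fowler's book - all the names
are invented):

```ruby
# Hypothetical domain objects, just enough to run the example.
Customer = Struct.new(:years_active)
Order    = Struct.new(:quantity, :base_price, :customer)

# Before: a dense conditional the reader has to decode in their head.
def price_before(order)
  if order.quantity > 100 && order.customer.years_active >= 2
    order.base_price * 0.9
  else
    order.base_price
  end
end

# After "introduce explaining variable" + "decompose conditional":
# named intermediates spell out what each part of the condition means.
def price_after(order)
  bulk_order       = order.quantity > 100
  loyal_customer   = order.customer.years_active >= 2
  discount_applies = bulk_order && loyal_customer

  discount_applies ? order.base_price * 0.9 : order.base_price
end
```

The behaviour is identical; the only change is that the reader no longer
has to reverse-engineer the intent of the boolean expression.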

B2) is more serious, with the aforementioned Lisp macros, POLs etc. Again,
I unfortunately do not have experience programming in this style.
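In Ruby terms, perhaps the closest thing to "building a language bottom-up"
is an internal DSL. A toy sketch of my own (Machine, state, on, fire are
all invented names, not an existing library):

```ruby
# A toy internal DSL for describing state machines, built bottom-up:
# the "language" (state/on) is just Ruby methods plus instance_eval.
class Machine
  attr_reader :current

  def initialize(&spec)
    @transitions = {}
    instance_eval(&spec)
  end

  def state(name, &events)
    @defining = name
    @current ||= name  # first declared state is the initial one
    instance_eval(&events) if events
  end

  def on(event, goto:)
    @transitions[[@defining, event]] = goto
  end

  def fire(event)
    @current = @transitions.fetch([@current, event], @current)
  end
end

# The program then reads almost like a problem-oriented notation:
door = Machine.new do
  state(:closed) { on :open,  goto: :opened }
  state(:opened) { on :close, goto: :closed }
end
```

The point being that the description of the door no longer mentions hashes
or lookups at all - only the vocabulary of the problem.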

I will be very vague here, but maybe one could somehow invert B): instead
of finding a way to program that is similar to the way we currently think,
find a programming language that would be convenient to reason in. I have
things like program derivation in mind here; an interesting modern example
is this paper:

http://www.cs.tufts.edu/~nr/comp150fp/archive/richard-bird/sudoku.pdf
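To give a tiny flavour of what I mean by derivation (my own toy example,
nothing to do with the Sudoku paper, and in Ruby rather than Haskell): you
start from an obviously correct specification and transform it, step by
step, into a more efficient program, with each step preserving meaning.

```ruby
# Specification: sum of squares, written for clarity, not speed.
def sum_squares_spec(xs)
  xs.map { |x| x * x }.sum
end

# Derived version: fuse the map into the sum, so the list is traversed
# once and no intermediate array is built. The fusion step is justified
# by a general law, not by testing.
def sum_squares_fused(xs)
  xs.reduce(0) { |acc, x| acc + x * x }
end
```

The interesting part is not the final program but that each rewriting step
is small enough to be checked by equational reasoning.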

Maybe a good idea for an advanced programming course would be to take two
weeks off from ordinary work, pick a couple of non-trivial algorithmic
problems and, say, try to solve two of them via literate programming, two
of them via constructing a language bottom-up in Lisp or Smalltalk, two of
them via derivation, etc. On the other hand, it seems quite reasonable to
ask whether any of this matters at all during problem solving; maybe the
same, mostly unconscious mental faculties do the work anyway, and then this
would matter only for communicating the solution to other people. Anyway,
maybe that's what I will try to do during my vacation this year :)

Cheers,
Jarek

2012/5/8 Julian Leviston <jul...@leviston.net>

> Isn't this simply a description of your "thought clearing process"?
>
> You think in English... not Ruby.
>
> I'd actually hazard a guess and say that really, you think in a
> semi-verbal, semi-physical pattern language, and not a very well formed
> one, either. This is the case for most people. This is why you have to
> write hard problems down... you have to bake them into physical form so
> you can process them again and again, slowly developing what you mean
> into a shape.
>
> Julian
>
> On 09/05/2012, at 2:20 AM, Jarek Rzeszótko wrote:
>
> Example: I have been programming in Ruby for 7 years now, for 5 years
> professionally, and yet when I face a really difficult problem the best way
> still turns out to be to write out a basic outline of the overall algorithm
> in pseudo-code. It might be a personal thing, but for me there are just too
> many irrelevant details to keep in mind when trying to solve a complex
> problem using a programming language right from the start. I cannot think
> of classes, method names, arguments etc. until I get a basic idea of how
> the given computation should work on a very high level (and with the
> low-level details staying "fuzzy"). I know there are people who feel the
> same way, there was an interesting essay from Paul Graham followed by a
> very interesting comment on MetaFilter about this:
>
>
>
> _______________________________________________
> fonc mailing list
> fonc@vpri.org
> http://vpri.org/mailman/listinfo/fonc
>
>