On 7/01/19 9:09 AM, Avi Gross wrote:
[Can we ever change the subject line?]
{REAL SUBJECT: degrees of compilation.}
Peter wrote:

"...
However, this is the Python list and one of the advantages of Python is that we
don't have to compile our code. So we need a different excuse for fencing on 
office chairs ;-).
..."

You had time for that? We had to move to working on program-B whilst awaiting the compile-print-return of program-A - sometimes even system/application-B!

In some respects I think the imposed compartmentalisation of one's work was a positive aspect. We concentrated on ensuring that solid progress was made, run by run - whereas the ease and simplicity of making small changes (one after the other) can easily consume tracts of time. At some risk, we would try to foresee problems, and ensure that the code would 'continue to work' rather than falling at the first hurdle. On the other hand, once the batch was "submitted", we could completely drop that program's concerns from our minds. At times this was a massive stress reduction (hence the sword play?), and a good way to approach the next task with a constructive (?and less-cluttered) mind.

YMMV!


I won't share my own war stories of how to compile in what now seem primitive 
environments on overloaded machines where you often figured out your mistakes 
on paper before the compiler quit and showed you officially 😊

Ah yes, the ?good old days when computers were powered by dinosaurs running on treadmills...

It wasn't so much that the machine was overloaded, more that the Ops staff were charged with 'making efficient use of the machine' - that is to say, the 40~45% of CPU time that IBM didn't take to run the OpSys. Accordingly, our batch-compile jobs went into a long queue, just so that the machine could attend to them when it had some 'spare' capacity.

Since then (Moore's law, etc.) the economics have changed, and the cost of programmer time is prioritised over computing costs. This was a paradigm-shift for me, because my (?junior) colleagues enacted a philosophy of 'throw it at the machine and let the real-time/time-sharing mini-computer/PC test it for you' (if only). Initially this seemed a scandalous waste of resources, but it was also 'the way of the future'.

However, with large programs and poor test methods, this 'machine as tester' idea soon exposes its economic limits: the costs of running a series of 'long' tests with only minor changes between runs mount quickly! So, I've never quite lost that habit of <<<figured out your mistakes on paper before the compiler quit and showed you officially>>>. That said, the tenets of Test-Driven Development continue to erode my conservatism/appeal to my inherent laziness...
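
As a minimal sketch of that TDD rhythm (my own toy example - the function and test names are invented for illustration, using only the stdlib's unittest): write the tests first, run them to watch them fail, then write code until they pass:

import unittest

def mean(values):
    # The (hypothetical) function under test: average of a non-empty sequence.
    if not values:
        raise ValueError("mean() of an empty sequence")
    return sum(values) / len(values)

class MeanTests(unittest.TestCase):
    # These were written before mean() existed, and failed until it did.
    def test_simple_average(self):
        self.assertEqual(mean([1, 2, 3]), 2)

    def test_empty_sequence_rejected(self):
        with self.assertRaises(ValueError):
            mean([])

if __name__ == "__main__":
    unittest.main()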

Conversely, I have been (admittedly, rather quickly) auditing a Coursera MOOC series (Python 3 Programming Specialisation = five courses, out of U.Mich). Without commenting on their pedagogical success or otherwise, I was struck by their presenting the idea of a 'paper computer'* - even having an 'intelligent text book' tool to demonstrate step-wise execution of code (without having to introduce the complexities of a debugger). It is a good way to help newcomers understand the relationships between a variable's name, its value, its type, its mutability, etc, etc - which (IIRC) is where they introduced the idea. However, later lessons have included exhortations to review that new work using the 'paper computer' technique - a toy illustration follows the footnote below.

* apologies: this is the term I learned (all those years ago, mumble, mumble) and I'm quite sure they have a different name. The essence is the same.
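
By way of that toy illustration (my own, not the course's), 'running' a snippet through the paper computer means recording, line by line, each name's value, its type, and whether the object was mutated or the name merely re-bound:

words = ["a", "b"]     # words -> ['a', 'b']       (list, mutable)
n = len(words)         # n     -> 2                (int, immutable)
words.append("c")      # words -> ['a', 'b', 'c']  same list object, mutated in place
n = n + 1              # n     -> 3                name re-bound to a new int object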

I'm only about 40% through the courses, and will be interested to see if they continue to refer to the technique as code-complexity builds...


--
Regards =dn