On 13/08/2016 08:37, Steven D'Aprano wrote:
> On Fri, 12 Aug 2016 09:55 pm, BartC wrote:

>> I know because my intention was to create a RECORD, with a specific set
>> of members or fields. With records, you usually don't just create
>> arbitrary named members as you go along.

> Unfortunately your intention counts for nothing. You know what they say
> about computers: the damned things don't do what you want, only what you
> tell them to do.

> The equivalent of a record in Python is not an arbitrary object, but a
> tuple, or perhaps a namedtuple if you want to access fields by name.

OK, yet another thing I wasn't aware of (Python seems to be bristling with these things!). But when I tried it, it seems even namedtuples are immutable, so in most instances they can't do the same job.

The __slots__ thing, while crude, seems to be a better bet.
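To illustrate what I ran into - a quick Python 3 experiment (the names are my own):

```python
from collections import namedtuple

# A namedtuple gives field access by name, but it is still a tuple:
Date = namedtuple("Date", "day month year")
d = Date(25, 12, 2015)

try:
    d.year = 1999               # fields are read-only
except AttributeError:
    print("immutable")          # -> immutable

# The workaround is to build a modified copy rather than mutate in place:
d2 = d._replace(year=1999)
print(d2)                       # -> Date(day=25, month=12, year=1999)
```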

(The way I deal with records elsewhere is so ridiculously simple that it almost makes me want to cry when I see what most dynamic languages have to do. Example:

record date=                # define the record
   var day, month, year
end

d := date(25,12,2015)       # create an instance

d.year := 1999              # change a field

println d                   # show "(25,12,1999)"

Typing .yaer by mistake is usually picked up at compile-time - unless there is an actual field 'yaer' defined elsewhere and it's in scope. Then it's detected at runtime, as with __slots__.)
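The nearest Python equivalent I can see, using __slots__ - my own sketch, mirroring the record above:

```python
class Date:
    __slots__ = ("day", "month", "year")    # the fixed set of fields

    def __init__(self, day, month, year):
        self.day, self.month, self.year = day, month, year

    def __repr__(self):
        return "({},{},{})".format(self.day, self.month, self.year)

d = Date(25, 12, 2015)      # create an instance
d.year = 1999               # change a field
print(d)                    # -> (25,12,1999)

try:
    d.yaer = 2000           # the typo is caught, but only at run time
except AttributeError:
    print("caught")         # -> caught
```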

>>> (1) Editing is cheap, but compiling and testing code is expensive
>>
>> No, compiling is cheap too.
>
> Perhaps now, but not always. And not even always now.

> The Macintosh SE was built on a Motorola 68000 CPU with a clock speed of 7.8
> MHz.
>
> Of course Open Office is a little bigger and more complex than that old text
> editor: 30K files versus a dozen or so, nine million LOC versus,

Ridiculously bloated project. And an average of 300 lines per file? Why not just have one function per file, then?

But even given that, I can't see that a 9Mloc project, as a monolithic file, would take (today) more than a minute or so with a snappy compiler. What kills it is probably two things: being fragmented among 30,000 files; and, assuming this is C code, those things called 'headers', where the same junk has to be compiled over and over again (perhaps 30,000 times in this case).

A static size of 9Mloc could mean a /considerably/ higher line-count needing to be compiled when headers are taken into account.

So a poor language, poor project structure, and probably slow compilers all contributing.

> By memory, my Mac SE had all of 1MB of RAM, although unlike earlier Macs at
> least it had dedicated VRAM for the display. You know how software can
> trade off space for time? Yeah, well when you've got less than 1MB
> available, you're often trading off time for space.

The first working compiler I did was for an 8-bit 4-MHz Z80 with 64KB of memory. It was designed to give instant results (commercial compilers, such as those for C, spent most of their time grinding floppy disks, as I understood it).

The programs were small, but there was in any case an upper limit on how big they could be, since they had to fit into a small memory. I don't remember /ever/ having to wait unduly for any re-compile, even as machines and projects got bigger.

I saw it as part of my job to continually ensure my tools were fit for purpose. Eventually I introduced dynamic, scripting languages into my apps to keep the edit-compile-run cycle close to minimal most of the time.

> So I completely reject your assertion that compilation is and always has
> been cheap.

You're right that, with conventional tools, it probably wasn't. It was for me, however.

>> (Of course the design of Python makes that impractical because it would
>> require the byte-code compiler to see inside imported modules before
>> execution is commenced.)
>
> But the fundamental problem is that Python has an exec command.

Also, 'import' statements can be conditional, and there can be intervening code between them that redefines names from an earlier import.
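A sketch of what I mean (the module choices here are just for illustration):

```python
import os

# Which module 'platform_mod' names cannot be known until run time:
if os.name == "nt":
    import ntpath as platform_mod
else:
    import posixpath as platform_mod

# And intervening code can rebind an imported name entirely:
import math
math = "no longer the module"       # perfectly legal: 'math' is just a name
print(type(math).__name__)          # -> str
```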

> Now, a linter, editor or other external tool is perfectly capable of using
> heuristics to recognise what is *likely* to be a variable. It doesn't
> matter if your IDE's code completion fails to work here:
>
> exec("value = 1")
> x = val[press tab for code completion]
>
> but it would be completely unacceptable for the compiler to flag this as an
> error:
>
> exec("value = 1")
> x = value + 1
>     ^
> SyntaxError: no variable called 'value'

I don't know how exec works. Is the 'value' name a local if this is in a function, or a global if outside a function?

If your example was in a function, wouldn't 'value' be assumed to be a global anyway, if it wasn't assigned to anywhere else in the function?
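Trying it myself in Python 3 - the function name is my own, and this is only a quick experiment:

```python
def inside_a_function():
    exec("value = 1")       # lands in a throwaway locals mapping (Python 3)
    try:
        # With no visible assignment, the compiler treats 'value' as a
        # global lookup here - just as suspected above:
        return value
    except NameError:
        return "NameError"

result = inside_a_function()
print(result)               # -> NameError (no global 'value' exists yet)

exec("value = 1")           # at module level, exec() writes into globals()
print(value)                # -> 1
```

So the answer seems to be: at module level the name becomes a real global, but inside a function the assignment made by exec() is invisible to the compiled code.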

I can't see an issue with requiring that names inside exec() which are intended for use outside be 'declared' outside the exec - that is, created as they normally are in Python, by assignment. But then I expect any Python code, inside exec or not, could also deliberately mess with dictionaries and the like to create and remove variable names at will.
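And indeed, messing with the dictionaries directly does work - my own demonstration, with a made-up name:

```python
# Create a variable that no static analyser could have predicted:
globals()["made_up_name"] = 42
print(made_up_name)             # -> 42

# ...and remove it again at will:
del globals()["made_up_name"]
try:
    made_up_name
except NameError:
    print("gone again")         # -> gone again
```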

The language doesn't make it easy for itself!

--
Bartc
--
https://mail.python.org/mailman/listinfo/python-list
