Re: Strong typing vs. strong testing

2010-09-29 Thread George Neuner
On Wed, 29 Sep 2010 12:40:58 +0200, p...@informatimago.com (Pascal J.
Bourguignon) wrote:

George Neuner gneun...@comcast.net writes:

 On Tue, 28 Sep 2010 12:15:07 -0700, Keith Thompson ks...@mib.org
 wrote:

George Neuner gneun...@comcast.net writes:
 On 28 Sep 2010 12:42:40 GMT, Albert van der Horst
 alb...@spenarnc.xs4all.nl wrote:
I would say the dimensional checking is underrated. It must be
complemented with a hard and fast rule about only using standard
(SI) units internally.

Oil output internal : m^3/sec
Oil output printed:  kbarrels/day

 barrel is not an SI unit.

He didn't say it was.  Internal calculations are done in SI units (in
this case, m^3/sec); on output, the internal units can be converted to
whatever is convenient.

 That's true.  But it is a situation where the conversion to SI units
 loses precision and therefore probably shouldn't be done.


  And when speaking about oil there isn't
 even a simple conversion.

   42 US gallons  ≈  34.9723 imp gal  ≈  158.9873 L

 [In case those marks don't render, they are meant to be the
 double-tilda sign meaning approximately equal.]

There are multiple different kinds of barrels, but barrels of oil
are (consistently, as far as I know) defined as 42 US liquid gallons.
A US liquid gallon is, by definition, 231 cubic inches; an inch
is, by definition, 0.0254 meter.  So a barrel of oil is *exactly*
0.158987294928 m^3, and 1 m^3/sec is exactly 13.7365022817792
kbarrels/day.  (Please feel free to check my math.)  That's
admittedly a lot of digits, but there's no need for approximations
(unless they're imposed by the numeric representation you're using).
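
(As a quick check, here is a small Python sketch that just restates the
definitions quoted above using exact rational arithmetic; the printed
values are simply whatever those definitions imply.)

  from fractions import Fraction

  inch   = Fraction(254, 10000)          # metres per inch, exact by definition
  gallon = 231 * inch**3                 # US liquid gallon in m^3
  barrel = 42 * gallon                   # barrel of oil in m^3

  print(float(barrel))                     # exact barrel volume in m^3
  print(float(86400 / (1000 * barrel)))    # kbarrels/day for a flow of 1 m^3/sec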

 I don't care to check it ... the fact that the SI unit involves 12
 decimal places whereas the imperial unit involves 3 tells me the
 conversion probably shouldn't be done in a program that wants
 accuracy.


Because perhaps you're thinking that oil is sent over the oceans, and
sold retail in barrels of 42 gallons?

Actually, when I buy oil, it's from a pump that's graduated in liters!

It comes from trucks with cisterns containing 24 m³.

And these trucks get it from reservoirs of 23,850 m³.

Tankers move approximately 2,000,000,000 metric tons says the English
Wikipedia page...



Now perhaps it all depends on whether you buy your oil from Total or
from Texaco, but in my opinion, you're forgetting something: the last
drop.  You never get exactly 42 gallons of oil, there's always a little
drop more or less, so what you get is perhaps 158.987 liter or
41.221 US gallons, or even 158.98 liter = 41.9980729 US gallons,
where you need more significant digits.


No.  I'm just reacting to the significant figures issue.   Real
world issues like US vs Eurozone units and measurement error aside - and
without implying anyone here - many people seem to forget that
multiplication doesn't add significant figures, and results to 12
decimal places are not necessarily any more accurate than results to 2
decimal places.
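
To put a number on that, suppose (purely for illustration - the
uncertainty figure below is an assumption, not something from the
thread) a quantity is known only to the nearest barrel:

  BARREL_M3 = 0.158987294928                 # exact m^3 per barrel, as above

  measured = 42.0                            # barrels, assumed known to +/- 0.5
  low  = (measured - 0.5) * BARREL_M3
  high = (measured + 0.5) * BARREL_M3

  print(measured * BARREL_M3)    # 6.677466386976 -- looks precise to 12 decimals
  print(low, high)               # ~6.598 .. 6.757 -- only ~3 figures are meaningful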

It makes sense to break a macro unit like the barrel into micro units
only when necessary.  When a refinery purchases 500,000 barrels, it is
charged a per-barrel price, not some multiple of a gallon or liter price,
regardless of any drop over/under.  The refinery's process is continuous
and it needs a delivery if it has less than 20,000 barrels - so the
current reserve figure of 174,092 barrels is as accurate as is needed
(they need to order by tomorrow because delivery will take 10 days).
OTOH, because the refinery sells product to commercial vendors of
gasoline/petrol and heating oil in gallons or liters, it does make
sense to track inventory and sales in (large multiples of) those
units.

Similarly, converting everything to m³ simply because you can does not
make sense.  When talking about the natural gas reserve of the United
States, the figures are given in km³ - a few thousand m³ either way is
irrelevant.

George
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Strong typing vs. strong testing

2010-09-28 Thread George Neuner
On 28 Sep 2010 12:42:40 GMT, Albert van der Horst
alb...@spenarnc.xs4all.nl wrote:

I would say the dimensional checking is underrated. It must be
complemented with a hard and fast rule about only using standard
(SI) units internally.

Oil output internal : m^3/sec
Oil output printed:  kbarrels/day

barrel is not an SI unit.  And when speaking about oil there isn't
even a simple conversion.

  42 US gallons  ≈  34.9723 imp gal  ≈  158.9873 L

[In case those marks don't render, they are meant to be the
double-tilda sign meaning approximately equal.]

George
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Strong typing vs. strong testing

2010-09-28 Thread George Neuner
On Tue, 28 Sep 2010 12:15:07 -0700, Keith Thompson ks...@mib.org
wrote:

George Neuner gneun...@comcast.net writes:
 On 28 Sep 2010 12:42:40 GMT, Albert van der Horst
 alb...@spenarnc.xs4all.nl wrote:
I would say the dimensional checking is underrated. It must be
complemented with a hard and fast rule about only using standard
(SI) units internally.

Oil output internal : m^3/sec
Oil output printed:  kbarrels/day

 barrel is not an SI unit.

He didn't say it was.  Internal calculations are done in SI units (in
this case, m^3/sec); on output, the internal units can be converted to
whatever is convenient.

That's true.  But it is a situation where the conversion to SI units
loses precision and therefore probably shouldn't be done.


  And when speaking about oil there isn't
 even a simple conversion.

   42 US gallons  ≈  34.9723 imp gal  ≈  158.9873 L

 [In case those marks don't render, they are meant to be the
 double-tilda sign meaning approximately equal.]

There are multiple different kinds of barrels, but barrels of oil
are (consistently, as far as I know) defined as 42 US liquid gallons.
A US liquid gallon is, by definition, 231 cubic inches; an inch
is, by definition, 0.0254 meter.  So a barrel of oil is *exactly*
0.158987294928 m^3, and 1 m^3/sec is exactly 13.7365022817792
kbarrels/day.  (Please feel free to check my math.)  That's
admittedly a lot of digits, but there's no need for approximations
(unless they're imposed by the numeric representation you're using).

I don't care to check it ... the fact that the SI unit involves 12
decimal places whereas the imperial unit involves 3 tells me the
conversion probably shouldn't be done in a program that wants
accuracy.

George
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: C interpreter in Lisp/scheme/python

2010-07-23 Thread George Neuner
On Fri, 23 Jul 2010 15:10:16 +0200, francogrex fra...@grex.org
wrote:

Unfortunately many so-called experts in the field look down 
on newbies and mistreat them (in any programming language forum),
forgetting in the process that they were also at a certain time
newbies until some gentle and nice enough teachers took the 
trouble to educate them. 

I don't think it's accurate to say that [some] experts really scorn
newbies, but I do agree that newbies are occasionally mistreated.  

One thing newbies have to realize is that on Usenet you are quite
likely to be talking to people who were there at the beginning and, of
necessity, are largely self educated in whatever the subject matter
might be.  Many - I'd even say most - are happy to clarify
understanding and help with complicated problems, but there is a
general expectation that newbies have some basic research skills and
that they have tried to solve their problem before asking for help.

Unfortunately, there is a small percentage of people who think Usenet
and other online forums are for answering homework questions or for
digging out of a jam at work.  Getting help depends a lot on how the
question is asked: strident pleas for quick help or demands for an
answer are immediate red flags, but so are questions that begin with
"X is crap, why can't I do ..." and even seemingly polite questions
that are vague or unfocused (possibly indicating little or no thought
behind them) or posts which are directed to a large number of groups
(such as this thread we're in now).  

And, of course, in the language forums, drawing comparisons to
non-subject languages is generally considered rude except when done to
illustrate a relevant discussion point.  Introducing irrelevant
comparisons, deliberately proselytizing X in a Y group or doing a lot
of complaining about the subject language is bound to attract disdain.

As the Internet has grown, the absolute number of people in that
small percentage has grown as well.  A newbie can simply be unlucky
enough to ask a question at the wrong time.  If there has been a
recent rash of problem posts then experts may accidentally respond
negatively to a legitimate question.

Of course, there are cross-cultural issues too.  Many of the technical
groups are English-language.  English, even when polite, can seem
harsh and/or abrupt to non-native speakers.

On the whole, moderated groups are more conducive to helping newbies
because the moderator(s) filter obvious red flag posts.

And, finally, newbies themselves should realize that experts are
donating time to answer questions and do get frustrated answering the
same questions over and over.  They should not be offended by cold
responses that direct them to FAQs or that just give links to study
material.  Newbies who need hand-holding or warm welcoming responses
filled with detail should go find a tutor.


 ... you have the bad professors who are freaks 
(probably they have a lot of problems at home, their wives 
screwing all the males on the block, daughters drug addicts etc) 
and want to take their hatred out on you,

Unquestionably, there are experts who need their dosages adjusted. But
the same can be said for some percentage of other users too.

OTOH, newbies often aren't in the position to know who is an expert
... obviously, anyone able to correctly answer their question knows
more about that specific issue.  That doesn't necessarily qualify the
responder as an expert.  Some people get defensive at the edges of
their comfort zones.


Just some thoughts. YMMV.
George
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: C interpreter in Lisp/scheme/python

2010-07-08 Thread George Neuner
On Thu, 08 Jul 2010 10:39:45 +0200, p...@informatimago.com (Pascal J.
Bourguignon) wrote:

Nick Keighley nick_keighley_nos...@hotmail.com writes:
 Nick Keighley nick_keighley_nos...@hotmail.com wrote:
 Rivka Miller rivkaumil...@gmail.com wrote:

 Anyone know what the first initial of L. Peter Deutsch stand for ?

 Laurence according to wikipedia (search time 2s)

 oops! He was born Laurence but changed it legally to L. including
 the dot

Too bad, Laurence is a nice name.

He probably hates the nickname Larry.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Jewish Pirates of the Caribbean

2010-06-16 Thread George Neuner
On Wed, 16 Jun 2010 17:23:35 +0200, p...@informatimago.com (Pascal J.
Bourguignon) wrote:

Kryno Bosman kryno.bos...@gmail.com writes:
 Would you, please, be so nice to share *your* truth somewhere else?

He was kill-filed by everybody a long time ago.   

Your quoting of his message puts you at risk of being kill-filed too.

The only way to deal with this kind of post is the kill file and
foremost not quoting them!

IMO, the best way to deal with that kind of trash is to hunt down the
poster and put a bullet in him.  I'm not Jewish, but I'm offended on
their behalf.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Which is the best implementation of LISP family of languages for real world programming ?

2010-06-12 Thread George Neuner
On Sat, 12 Jun 2010 18:57:08 +0300, Antti "Andy" Ylikoski
antti.yliko...@gmail.com wrote:

OT:  (very Off Topic.)
I would not trust dolphins to take care of my investments.

Why not?  Remember the chimpanzee that picked stocks and beat many
professional fund managers?
http://www.marketwatch.com/story/dart-throwing-chimp-still-making-monkey-of-internet-funds?pagenumber=2


The average dolphin's brain is bigger than the average human's (and
far bigger than a chimpanzee's).  Dolphin investment strategies might
look fishy to us but dolphins have a unique point of view on important
industries such as transportation, telecommunications, construction,
tourism, energy exploration, food production, etc.  

I'd trust a dolphin over a Wall Street fund manager any day.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: NoSQL Movement?

2010-03-04 Thread George Neuner
On Thu, 04 Mar 2010 18:51:21 +0200, Juan Pedro Bolivar Puente
magnic...@gmail.com wrote:

On 04/03/10 16:21, ccc31807 wrote:
 On Mar 3, 4:55 pm, toby t...@telegraphics.com.au wrote:
  where you have to store data and

 relational data
 
 Data is neither relational nor unrelational. Data is data.
 Relationships are an artifact, something we impose on the data.
 Relations are for human convenience, not something inherent in the
 data itself.
 

No, relations are data. "Data is data" says nothing. Data is
information. Actually, all data are relations: relating /values/ to
/properties/ of /entities/. Relations as understood by the relational
model are nothing else but the assumption that properties and entities
are first class values of the data system and that they can also be related.

Well ... sort of.  Information is not data but rather the
understanding of something represented by the data.  The term
"information overload" is counter-intuitive ... it really means an
excess of data for which there is little understanding.

Similarly, at the level to which you are referring, a relation is not
data but simply a theoretical construct.  At this level testable
properties or instances of the relation are data, but the relation
itself is not.  The relation may be data at a higher level.

George
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-09 Thread George Neuner
On Tue, 9 Jun 2009 10:47:11 -0700 (PDT), toby
t...@telegraphics.com.au wrote:

On Jun 7, 2:41 pm, Jon Harrop j...@ffconsultancy.com wrote:
 Arved Sandstrom wrote:
  Jon Harrop wrote:

 performance means mutable state.

Hm, not sure Erlangers would wholly agree.

Erlang uses quite a bit of mutable state behind the scenes ... the
programmers just don't see it.

George
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: multi-core software

2009-06-06 Thread George Neuner
On Fri, 05 Jun 2009 16:26:37 -0700, Roedy Green
see_webs...@mindprod.com.invalid wrote:

On Fri, 5 Jun 2009 18:15:00 + (UTC), Kaz Kylheku
kkylh...@gmail.com wrote, quoted or indirectly quoted someone who
said :

Even for problems where it appears trivial, there can be hidden
issues, like false cache coherency communication where no actual
sharing is taking place. Or locks that appear to have low contention and
negligible performance impact on ``only'' 8 processors suddenly turn into
bottlenecks. Then there is NUMA. A given address in memory may be
RAM attached to the processor accessing it, or to another processor,
with very different access costs.

Could what you are saying be summed up by saying, "The more threads
you have the more important it is to keep your threads independent,
sharing as little data as possible."

And therein lies the problem of leveraging many cores.  There is a lot
of potential parallelism in programs (even in Java :) that is lost
because it is too fine a grain for threads.  Even the lightest weight
user space (green) threads need a few hundred instructions, minimum,
to amortize the cost of context switching.

Add to that the fact that programmers have shown themselves, on
average, to be remarkably bad at figuring out what _should_ be done in
parallel - as opposed to what _can_ be done - and you've got a clear
indicator that threads, as we know them, are not scalable except under
a limited set of conditions. 

George
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Mathematica 7 compares to other languages

2008-12-12 Thread George Neuner
On Wed, 10 Dec 2008 21:37:34 + (UTC), Kaz Kylheku
kkylh...@gmail.com wrote:

Now try writing a device driver for your wireless LAN adapter in Mathematica.

Notice how Xah chose not to respond to this.

George
--
http://mail.python.org/mailman/listinfo/python-list


Re: Mathematica 7 compares to other languages

2008-12-12 Thread George Neuner
On Thu, 11 Dec 2008 10:41:59 -0800 (PST), Xah Lee xah...@gmail.com
wrote:

On Dec 10, 2:47 pm, John W Kennedy jwke...@attglobal.net wrote:
 Xah Lee wrote:
  In lisp, python, perl, etc, you'll have 10 or so lines. In C or Java,
  you'll have 50 or hundreds lines.

 C:

 #include <stdlib.h>
 #include <math.h>

 void normal(int dim, float* x, float* a) {
     float sum = 0.0f;
     int i;
     float divisor;
     for (i = 0; i < dim; ++i) sum += x[i] * x[i];
     divisor = sqrt(sum);
     for (i = 0; i < dim; ++i) a[i] = x[i]/divisor;
 }

i don't have experience coding C. 

Then why do you talk about it as if you know something?

The code above doesn't seem to satisfy the spec.

It does.

The input should be just a vector, array, list, or
whatever the lang supports. The output is the same 
datatype of the same dimension.

C's native arrays are stored contiguously.  Multidimensional arrays
can be accessed as a vector of length (dim1 * dim2 * ... * dimN).

This code handles arrays of any dimensionality.  The poorly named
argument 'dim' specifies the total number of elements in the array.
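
(For comparison with the line-count claim quoted above, a rough Python
equivalent of the C routine - just a sketch, not anyone's reference
code - is about this size:)

  import math

  def normal(x):
      # return a new list: x scaled to unit Euclidean length
      divisor = math.sqrt(sum(v * v for v in x))
      return [v / divisor for v in x]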

George
--
http://mail.python.org/mailman/listinfo/python-list


Re: Mathematica 7 compares to other languages

2008-12-12 Thread George Neuner
On Mon, 8 Dec 2008 15:14:18 -0800 (PST), Xah Lee xah...@gmail.com
wrote:

Dear George Neuner,

Xah Lee wrote:
 For example,
 the level or power of lang can be roughly order as
 this:

 assembly langs
 C, pascal
 C++, java, c#
 unix shells
 perl, python, ruby, php
 lisp
 Mathematica

George wrote:
 According to what power estimation?  Assembly, C/C++, C#, Pascal,
 Java, Python, Ruby and Lisp are all Turing Complete.  I don't know
 offhand whether Mathematica is also TC, but if it is then it is at
 most equally powerful.

it's amazing that every tech geeker (aka idiot) wants to quote
“Turing Complete” at every chance. Even a simple cellular automaton,
such as Conway's game of life or rule 110, is complete.

http://en.wikipedia.org/wiki/Conway's_Game_of_Life
http://en.wikipedia.org/wiki/Rule_110

in fact, according to Stephen Wolfram's controversial thesis by the
name of “Principle of computational equivalence”, every goddamn thing
in nature is just about turing complete. (just imagine, when you take
a piss, the stream of yellow fluid is actually doing Turing complete
computations!)

Wolfram's thesis does not make the case that everything is somehow
doing computation.  

for a change, it'd be far more interesting and effective knowledge
showoff to cite langs that are not so-called fuck of the turing
complete.

We geek idiots cite Turing because it is an important measure of a
language.  There are plenty of languages which are not complete.  That
you completely disregard a fundamental truth of computing is
disturbing.

the rest of your message went on stupidly about the turing complete point
of view on language power, mixed with lisp fanaticism, and personal
gripes about the merits and applicability of assembly vs higher level langs.

You don't seem to understand the difference between leverage and power
and that disturbs all the geeks here who do.  We worry that newbies
might actually listen to your ridiculous ramblings and be led away
from the truth.

It's fine to go on with your gripes, but be careful in using me as a
stepping stone.

Xah, if I wanted to step on you I would do it with combat boots.  You
should be thankful that you live 3000 miles away and I don't care
enough about your petty name calling to come looking for you.  If you
insult people in person like you do on usenet then I'm amazed that
you've lived this long.

George
--
http://mail.python.org/mailman/listinfo/python-list


Re: Mathematica 7 compares to other languages

2008-12-08 Thread George Neuner
On Sun, 7 Dec 2008 14:53:49 -0800 (PST), Xah Lee [EMAIL PROTECTED]
wrote:

The phenomenon of creating code that is inefficient is proportional
to the high-levelness or power of the lang. In general, the higher the
level of the lang, the less possible it is actually to produce code
that is as efficient as a lower level lang. 

This depends on whether someone has taken the time to create a high
quality optimizing compiler.


For example, the level or power of lang can be roughly order as 
this:

assembly langs
C, pascal
C++, java, c#
unix shells
perl, python, ruby, php
lisp
Mathematica

According to what power estimation?  Assembly, C/C++, C#, Pascal,
Java, Python, Ruby and Lisp are all Turing Complete.  I don't know
offhand whether Mathematica is also TC, but if it is then it is at
most equally powerful.

Grammatical complexity is not exactly orthogonal to expressive power,
but it is mostly so.  Lisp's SEXPRs are an existence proof that a
Turing powerful language can have a very simple grammar.  And while a
2D symbolic equation editor may be easier to use than spelling out the
elements of an equation in a linear textual form, it is not in any
real sense more powerful.


the lower level the lang, the more of the programmer's time it consumes,
but the faster the code runs. Higher level langs may or may not be crafted
to be as efficient.  For example, code written in langs at the level of
perl, python, ruby, will never run as fast as C, regardless of how
expert a perler is. 

There is no language level reason that Perl could not run as fast as C
... it's just that no one has cared to implement it.


C code will never run as fast as assembler langs.

For a large function with many variables and/or subcalls, a good C
compiler will almost always beat an assembler programmer by sheer
brute force - no matter how good the programmer is.  I suspect the
same is true for most HLLs that have good optimizing compilers.

I've spent years doing hard real time programming and I am an expert
in C and a number of assembly languages.  It is (and has been for a
long time) impractical to try to beat a good C compiler for a popular
chip by writing from scratch in assembly.  It's not just that it takes
too long ... it's that most chips are simply too complex for a
programmer to keep all the instruction interaction details straight in
his/her head.  Obviously results vary by programmer, but once a
function grows beyond 100 or so instructions, the compiler starts to
win consistently.  By the time you've got 500 instructions (just a
medium sized C function) it's virtually impossible to beat the
compiler.

In functional languages where individual functions tend to be much
smaller, you'll still find very complex functions in the disassembly
that arose from composition, aggressive inlining, generic
specialization, inlined pattern matching, etc.  Here an assembly
programmer can quite often match the compiler for a particular
function (because it is short), but overall will fail to match the
compiler in composition.

When maximum speed is necessary it's almost always best to start with
an HLL and then hand optimize your optimizing compiler's output.
Humans are quite often able to find additional optimizations in
assembly code that they could not have written as well overall in the
first place.

George
--
http://mail.python.org/mailman/listinfo/python-list


Re: The Importance of Terminology's Quality

2008-09-01 Thread George Neuner
On Mon, 1 Sep 2008 21:03:44 + (UTC), Martin Gregorie
[EMAIL PROTECTED] wrote:

On Mon, 01 Sep 2008 12:04:05 -0700, Robert Maas, http://tinyurl.com/uh3t
wrote:

 From: George Neuner [EMAIL PROTECTED] A friend of mine had an
  early 8080 micro that was programmed through the front panel using
 knife switches
 
 When you say knife switches, do you mean the kind that are shaped like
 flat paddles? 

Pedantic correction:

Knife switch is the wrong term. These are high current switches, 
typically used in the sort of heavy duty circuit where the wiring hums 
when power is on or in school electrical circuits so even the back of the 
class can see whether the switch is open or closed. In these a copper 
'blade' closes the contact by being pushed down into a 
narrow, sprung U terminal that makes a close contact with both sides of 
the blade. Like this: http://www.science-city.com/knifeswitch.html

What you're talking about is a flat handle on an SPST or DPST toggle switch. It 
is often called a paddle switch and mounted with the flats on the handle 
horizontal. Like this, but often with a longer handle: 
http://www.pixmania.co.uk/uk/uk/1382717/art/radioshack/spdt-panel-mount-paddle-s.html

I don't know the correct term, but what I was talking about was a tiny
switch with a 1/2 inch metal handle that looks like a longish grain of
rice.  We used to call them knife switches because after hours of
flipping them they would feel like they were cutting into your
fingers.

George
--
http://mail.python.org/mailman/listinfo/python-list


Re: The Importance of Terminology's Quality

2008-08-22 Thread George Neuner
On Thu, 21 Aug 2008 02:30:27 GMT, [EMAIL PROTECTED] wrote:

On Wed, 20 Aug 2008 21:18:22 -0500, [EMAIL PROTECTED] (Rob Warnock) wrote:

Martin Gregorie  [EMAIL PROTECTED] wrote:
+---
| I was fascinated, though, by the designs of early assemblers: I first 
| learnt Elliott assembler, which required the op codes to be typed in 
| octal but used symbolic labels and variable names. Meanwhile a colleague 
| had started on a KDF6 which was the opposite - op codes were mnemonics 
| but all addresses were absolute and entered in octal. I always wondered 
| about the rationale of the KDF6 assembler writers in tackling only the 
| easy part of the job.
+---

In the LGP-30, they used hex addresses, sort of[1], but the opcodes
(all 16 of them) had single-letter mnemonics chosen so that the
low 4 bits of the character codes *were* the correct nibble for
the opcode!  ;-}

[Or you could type in the actual hex digits, since the low 4 bits
of *their* character codes were also their corresponding binary
nibble values... but that would have been wrong.]


-Rob

[1] The LGP-30 character code was defined before the industry had
yet standardized on a common hex character set, so instead of
0123456789abcdef they used 0123456789fgjkqw. [The fgjkqw
were some random characters on the Flexowriter keyboard whose low
4 bits just happened to be what we now call 0xa-0xf]. Even worse,
the sector addresses of instructions were *not* right-justified
in the machine word (off by one bit), plus because of the shift-
register nature of the accumulator you lost the low bit of each
machine word when you typed in instructions (or read them from
tape), so the address values you used in coding went up by *4*!
That is, machine locations were counted [*and* coded, in both
absolute machine code  assembler] as 0, 4, 8, j, 10,
14, 18, 1j (pronounced J-teen!!), etc.

-
Rob Warnock   [EMAIL PROTECTED]
627 26th Avenue   URL:http://rpw3.org/
San Mateo, CA 94403   (650)572-2607


What's so interesting about all this hullabaloo is that nobody has
coded machine code here, and knows squat about it.

A friend of mine had an early 8080 micro that was programmed through
the front panel using knife switches ... toggle in the binary address,
latch it, toggle in the binary data, latch it, repeat ad nauseam.  It
had no storage device initially ... to use it you had to input your
program by hand every time you turned it on.

I did a little bit of programming on it, but I tired of it quickly.
As did my friend - once he got the tape storage working (a new prom)
and got hold of a shareware assembler, we hardly ever touched the
switch panel again.  Then came CP/M and all the initial pain was
forgotten (replaced by CP/M pain 8-).


I'm not talking assembly language. Don't you know that there are routines
that program machine code? Yes, burned in, bitwise encodings that enable
machine instructions? Nothing below that.

There is nobody here who ever visited/replied with any thought relevance
that can be brought forward to any degree, meaning anything, nobody

What are you looking for?  An emulator you can play with?  

Machine coding is not relevant anymore - it's completely infeasible to
input all but the smallest program.  My friend had a BASIC interpreter
for his 8080 - about 2KB which took hours to input by hand and heaven
help you if you screwed up or the computer crashed.

sln

George
--
http://mail.python.org/mailman/listinfo/python-list


Re: The Importance of Terminology's Quality

2008-05-10 Thread George Neuner
On Fri, 09 May 2008 22:45:26 -0500, [EMAIL PROTECTED] (Rob Warnock) wrote:

George Neuner gneuner2/@/comcast.net wrote:

On Wed, 7 May 2008 16:13:36 -0700 (PDT), [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:

• Functions [in Mathematica] that take elements out of a list
are variously named First, Rest, Last, Extract, Part, Take, 
Select, Cases, DeleteCases... as opposed to “car”, “cdr”, 
“filter”, “filter”, “pop”, “shift”, “unshift”, in lisps and
perl and other langs.

| Common Lisp doesn't have filter.

Of course it does! It just spells it REMOVE-IF-NOT!!  ;-}  ;-}

I know.  You snipped the text I replied to.  

Xah carelessly conflated functions snatched from various languages in
an attempt to make some point about intuitive naming.  If he objects
to naming a function filter, you can just imagine what he'd have to
say about remove-if[-not].

George
--
for email reply remove / from address
--
http://mail.python.org/mailman/listinfo/python-list


Re: implementation for Parsing Expression Grammar?

2008-05-10 Thread George Neuner
On Fri, 9 May 2008 22:52:30 -0700 (PDT), [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:

In the past weeks i've been thinking over the problem on the practical
problems of regex in its matching power. For example, often it can't
be used to match anything of nested nature, even the most simple
nesting. It can't be used to match any simple grammar expressed by
BNF. Some rather very regular and simple languages such as XML, or
even url, email address, are not specified as a regex. (there exist
regex that are pages long that tried to match email address though)

What's your point?  The limitations of regular expressions are well
known.

After days of researching this problem, looking into parsers and its
theories etc, today i found the answer!!

What i was looking for is called Parsing Expression Grammar (PEG).

PEG has its own problems - it's very easy with PEG to create subtly
ambiguous grammars for which quite legal looking input is rejected.
And there are no good tools to analyze a PEG and warn you of subtle
problems.
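
One classic illustration of the kind of trap meant here is PEG's
committed ordered choice: once an alternative succeeds, the parser never
comes back to try another, so a rule like A <- 'a' / 'ab' can quietly
hide legal-looking input.  A tiny hand-rolled sketch in Python (not any
particular PEG library):

  # Each matcher takes (text, pos) and returns the new position or None.
  def lit(s):
      return lambda text, i: i + len(s) if text.startswith(s, i) else None

  def choice(*alts):            # PEG ordered choice: commit to first success
      def m(text, i):
          for alt in alts:
              j = alt(text, i)
              if j is not None:
                  return j
          return None
      return m

  def seq(*parts):              # sequence: earlier parts are never retried
      def m(text, i):
          for p in parts:
              i = p(text, i)
              if i is None:
                  return None
          return i
      return m

  # S <- ('a' / 'ab') 'c'
  S = seq(choice(lit("a"), lit("ab")), lit("c"))
  print(S("ac", 0))    # 2    -> accepted
  print(S("abc", 0))   # None -> rejected, though "ab" "c" looks perfectly legal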

Chris Clark (YACC++) has posted at length about the merits, problems
and limitations of various parse techniques - including PEG - in
comp.compilers.  Before you consider doing anything with PEG I suggest
you look up his posts and read the related threads.


It seems to me it's already in Perl6, and there's also an
implementation in Haskell. Is the perl6 PEG in a usable state?

Thanks.

  Xah
  [EMAIL PROTECTED]
? http://xahlee.org/

George
--
for email reply remove / from address
--
http://mail.python.org/mailman/listinfo/python-list


Re: The Importance of Terminology's Quality

2008-05-09 Thread George Neuner
On Thu, 8 May 2008 22:38:44 -0700, Waylen Gumbal [EMAIL PROTECTED]
wrote:

Sherman Pendley wrote:
 [EMAIL PROTECTED] writes:
 
   PLEASE DO NOT | :.:\:\:/:/:.:
   FEED THE TROLLS | :=.' - - '.=:
 
  I don't think Xah is trolling here (contrary to his/her habit)
  but posing an interesting matter of discussion.

 It might be interesting in the abstract, but any such discussion, when
 cross-posted to multiple language groups on usenet, will inevitably
 devolve into a flamewar as proponents of the various languages argue
 about which language better expresses the ideas being talked about.
 It's like a law of usenet or something.

 If Xah wanted an interesting discussion, he could have posted this to
 one language-neutral group such as comp.programming. He doesn't want
 that - he wants the multi-group flamefest.

Not everyone follows language-neutral groups (such as comp.programming 
as you pointed out), so you actually reach more people by cross posting. 
This is what I don't understand - everyone seems to assume that by cross 
posting, one intends to start a flamefest, when in fact most such 
flamefests are started by those who cannot bring themselves to 
skip over the topic that they so dislike.

The problem is that many initial posts have topics that are misleading
or simplistic.  Often an interesting discussion can start on some
point the initial poster never considered or meant to raise.

George
--
for email reply remove / from address
--
http://mail.python.org/mailman/listinfo/python-list


Re: The Importance of Terminology's Quality

2008-05-08 Thread George Neuner
On Thu, 08 May 2008 03:25:54 -0700,
[EMAIL PROTECTED] (Robert Maas,
http://tinyurl.com/uh3t) wrote:

 From: [EMAIL PROTECTED] [EMAIL PROTECTED]
 the importance of naming of functions.

 ... [take] a keyword and ask a wide
 audience, who doesn't know about the language or is even unfamiliar with
 computer programming, to guess what it means.

This is a dumb idea ...

Better would be to reverse the question: Ask
random people on the street what they would like to call these
concepts:

... and this one is even dumber.

Terms don't exist in a vacuum - they exist to facilitate communication
within a particular knowledge or skill domain.  For example, English
is only meaningful to those who speak English.  The opinions of random
people who have no relevant domain knowledge are worthless.

Such a survey could only be meaningful if the survey population
already possessed some knowledge of programming, but were not already
aware of the particular terminology being surveyed.

George
--
for email reply remove / from address
--
http://mail.python.org/mailman/listinfo/python-list


Re: The Importance of Terminology's Quality

2008-05-07 Thread George Neuner
On Wed, 7 May 2008 16:13:36 -0700 (PDT), [EMAIL PROTECTED]
[EMAIL PROTECTED] wrote:

I'd like to introduce a blog post by Stephen Wolfram, on the design
process of Mathematica. In particular, he touches on the importance of
naming of functions.

• Ten Thousand Hours of Design Reviews (2008 Jan 10) by Stephen
Wolfram
 http://blog.wolfram.com/2008/01/10/ten-thousand-hours-of-design-reviews/

The issue is fitting here today, in our discussion of “closure”
terminology recently, as well the jargons “lisp 1 vs lisp2” (multi-
meaning space vs single-meaning space), “tail recursion”, “currying”,
“lambda”, that perennially crop up here and elsewhere in computer
language forums in wild misunderstanding and brouhaha.

The functions in Mathematica are usually very well-named, in contrast
to most other computing languages. In particular, the naming in
Mathematica, as Stephen Wolfram implied in his blog above, takes the
perspective of naming by capturing the essense, or mathematical
essence, of the keyword in question. (as opposed to, naming it
according to convention, which often came from historical happenings)
When a thing is well-named from the perspective of what it actually
“mathematically” is, as opposed to historical developments, it avoids
vast amount of potential confusion.

Let me give a few example.

• “lambda”, widely used as a keyword in functional languages, is named
just “Function” in Mathematica. That “lambda” happened to be called so
in the field of symbolic logic is due to use of the greek letter
lambda “λ” by happenstance. The word does not convey what it means.
While, the name “Function”, stands for the mathematical concept of
“function” as is.

Lambda is not a function - it is a function constructor.   A better
name for it might be MAKE-FUNCTION.  

I (and probably anyone else you might ask) will agree that the term
lambda is not indicative of its meaning, but its meaning is not
synonymous with function as you suggest.
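
In Python terms, for example, the point is simply that lambda builds a
function value and hands it back:

  add42 = lambda x: x + 42     # constructs a new function object
  print(add42(1))              # 43
  print(type(add42))           # <class 'function'> -- the result *is* a function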

I suspect Mathematica of just following historical convention itself.
Mathematica uses the term inappropriately just as it was (ab)used in
Pascal (circa 1970).  I'm not aware of earlier (ab)uses but there
probably were some.


• Module, Block, in Mathematica correspond to lisp's various “let*”. The
lisp keyword “let” is based on the English word “let”. That word
is one of the English words with multitudes of meanings. If you look up
its definition in a dictionary, you'll see that it means many
disparate things. One of them, as in “let's go”, has the meaning of
“permit; to cause to; allow”. This meaning is rather vague from a
mathematical sense. Mathematica's choice of Module, Block, is based on
the idea that it builds a self-contained segment of code. (however,
the choice of Block as keyword here isn't perfect, since the word also
has meanings like “obstruct; jam”)

Let is the preferred mathematical term for introducing a variable.
Lisp uses it in that meaning.  What a word means or doesn't in English
is not particularly relevant to its use in another language.  There
are many instances of two human languages using identical looking
words with very different meanings.  Why should computer languages be
immune?


• Functions that takes elements out of list are variously named First,
Rest, Last, Extract, Part, Take, Select, Cases, DeleteCases... as
opposed to “car”, “cdr”, “filter”, “filter”, “pop”, “shift”,
“unshift”, in lisps and perl and other langs.

Lisp has first and rest - which are just synonyms for car and
cdr.  Older programmers typically prefer car and cdr for historical
reasons, but few object to the use of first and rest except for
semantic reasons - Lisp does not have a list data type, lists are
aggregates constructed from a primitive pair data type.  Pairs can be
used to construct trees as well as lists and rest has little meaning
for a tree.  When used with lists, first and rest are meaningful terms
and no one will object to them.

Besides which, you easily create synonyms for car and cdr (and
virtually any other Lisp function) with no more burden on the reader
of your code than using a C macro.  You can call them first and
rest, or first and second, or left and right, or red and black
or whatever else makes sense for your data.

People coming to Lisp from other languages often complain of macros
that they have to learn a new language every time they read a
program.  But in fact, the same is true in all languages - the reader
always has to learn the user-defined functions and how they are used
to make sense of the code.  In that sense Lisp is no different from
any other language.

Common Lisp doesn't have filter.  Even so, with respect to the
merits of calling a function extract or select versus filter, I
think that's just a matter of familiarity.  The term filter conveys
a more general idea than the others and can, by parameterization,
perform either function.
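
(In Python terms, for instance, selecting and its complement are both
just parameterizations of the same primitive:)

  xs = [1, -2, 3, -4]
  selected = list(filter(lambda n: n > 0, xs))        # keep matches: [1, 3]
  removed  = list(filter(lambda n: not n > 0, xs))    # keep the rest: [-2, -4]
  print(selected, removed)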


The above are some examples. The thing to note is that, Mathematica's
choices are often such that the word 

Re: Python's doc problems: sort

2008-05-02 Thread George Neuner
On Wed, 30 Apr 2008 12:35:10 +0200, John Thingstad
[EMAIL PROTECTED] wrote:

On Wed, 30 Apr 2008 06:26:31 +0200, George Sakkis  
[EMAIL PROTECTED] wrote:


\|||/
  (o o)
 ,ooO--(_)---.
 | Please|
 |   don't feed the  |
 | TROLL's ! |
 '--Ooo--'
 |__|__|
  || ||
 ooO Ooo

Doesn't copying Rainer Joswig's troll warning constitute a copyright  
infringement :)

It's not an exact copy of Rainer's so it may be arguable whether it
violates his copyright.  Might have more luck with a trademark
argument - distorted marks may still infringe.

George
--
for email reply remove / from address
--
http://mail.python.org/mailman/listinfo/python-list


Re: DEK's birthday

2008-01-12 Thread George Neuner
On Sat, 12 Jan 2008 12:03:49 -0800 (PST), [EMAIL PROTECTED] wrote:

On Jan 10, 9:57 am, Jim [EMAIL PROTECTED] wrote:
 DEK celebrates 70 today.

 I doubt he'll read this but I'll say it anyway: Happy Birthday!

 Jim Hefferon

Donald Knuth is a son of a bitch who made a lot of money from tax
payer's grants. The computers he began with were built with tax
payer's money.

His mother lived in the same New York where the yank bastards staged
that fake drama

letsroll911.org
stj911.org
scholarsfor911truth.org
nkusa.org
countercurrents.org
counterpunch.org

and the bastard did not lift a finger, did not lift a pen in the
defense of the truth.

may such bastard scholars burn in hell forever and never know the
kingdom of heaven.

Amen


Why don't you play Hide And Go Fuck Yourself.

-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Choosing a new language

2007-12-30 Thread George Neuner
On Fri, 28 Dec 2007 22:07:12 -0800 (PST), [EMAIL PROTECTED] wrote:

Ada is airline/dod blessed.

Airline blessed, maybe.  The DOD revoked its Ada-only edict because
they couldn't find enough Ada programmers.  AFAIK, Ada is still the
preferred language, but it is not required.

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Choosing a new language

2007-12-28 Thread George Neuner
On Fri, 28 Dec 2007 12:54:57 -0800, John Nagle [EMAIL PROTECTED]
wrote:

Actually, the ability to fix a running program [in Lisp] isn't
that useful in real life.  It's more cool than useful.  Editing a 
program from a break was more important back when computers were slower
and just rerunning from the beginning was expensive.

Speak for yourself.

The ability to patch a running program is very useful for certain
types of embedded applications.  Not every program having high
availability requirements can be restarted quickly, or can be
implemented reasonably using multiple servers or processes to allow
rolling restarts.

I worked with real time programs that required external machinery to
operate and several minutes to reinitialize and recover from a cold
restart.  Debugging non-trivial code changes could take hours or days
without the ability to hot patch and continue.  I know not everyone
works in RT, but I can't possibly be alone in developing applications
that are hard to restart effectively.

That all said, online compilation such as in Lisp is only one of
several ways of replacing running code.  Whether it is the best way is
open for debate.

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: TeX pestilence (was Distributed RVS, Darcs, tech love)

2007-10-22 Thread George Neuner
On Mon, 22 Oct 2007 05:50:30 -0700, Xah Lee [EMAIL PROTECTED] wrote:

TeX, in my opinion, has done massive damage to the computing world.

i have written on this variously in emails. No coherent argument, but
the basic thoughts are here:
http://xahlee.org/cmaci/notation/TeX_pestilence.html

Knuth did a whole lot more for computing than you have or, probably,
ever will.  Your arrogance is truly amazing.


1. A typesetting system per se, not a mathematical expressions
representation system.

So?

2. The free nature, like cigarettes given to children, contaminated the
entire field of math knowledge representation into 2 decades of
stagnation.

What the frac are you talking about?

3. Being a typesetting system, it brainwashed an entire generation of
mathematicians into micro-spacing doodling.

Like they wouldn't be doodling anyway.  At least the TeX doodling is
likely to be readable (as if anyone cared).

4. Inaugurated a massive collection of documents that are invalid
HTML. (due to the programming morons' ignorance and need to idolize a
leader, and TeX's inherent problem of being a typesetting system that
is unsuitable for representing any structure or semantics)

HTML is unsuitable for representing most structure and semantics.  And
legions of fumbling idiots compose brand new invalid HTML every day.

5. This is arguable and trivial, but i think TeX judged as a computer
language, in particular its syntax, on esthetical grounds, sucks in
major ways.

No one except you thinks TeX is a computer language.

Btw, an example of item 4 above is Python's documentation. Fucking
asses and holes.

Watch your language, there are children present.

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Distributed RVS, Darcs, tech love

2007-10-20 Thread George Neuner
On Sun, 21 Oct 2007 01:20:47 -, Daniel Pitts
[EMAIL PROTECTED] wrote:

On Oct 20, 2:04 pm, llothar [EMAIL PROTECTED] wrote:
  I love math. I respect Math. I'm nothing but a menial servant to
  Mathematics.

 Programming and use cases are not maths. Many mathematicians are
 the worst programmers i've seen because they want to solve things and
 much more often you just need heuristics. Once they are into the exact
 world they lose their capability to see the factor of relevance in
 algorithms.

 And they almost never match the mental model that the average
 user has about a problem.

I read somewhere that for large primes, using Fermat's Little Theorem
test is *good enough* for engineers because the chances of it being
wrong are less likely than a cosmic particle hitting your CPU at the
exact instant to cause a failure of the same sort.  This is the
primary difference between engineers and mathematicians.
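
(For the curious, the test being alluded to is easy to sketch in Python;
this is only an illustration of the "good enough" idea - real code would
reach for a stronger test such as Miller-Rabin:)

  import random

  def probably_prime(n, rounds=20):
      # Fermat test: if n is prime then pow(a, n-1, n) == 1 for any a in [2, n-2].
      # Most composites fail some round; Carmichael numbers are the known blind spot.
      if n < 4:
          return n in (2, 3)
      for _ in range(rounds):
          a = random.randrange(2, n - 1)
          if pow(a, n - 1, n) != 1:
              return False
      return True

  print(probably_prime(2**127 - 1))                 # True: a Mersenne prime
  print(probably_prime((2**61 - 1) * (2**89 - 1)))  # almost surely False: composite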

An attractive person of the opposite sex stands on the other side of
the room.  You are told that your approach must be made in a series of
discrete steps during which you may close half the remaining distance
between yourself and the other person.

Mathematician: But I'll never get there!

Engineer: I'll get close enough.


--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The fundamental concept of continuations

2007-10-15 Thread George Neuner
On Mon, 15 Oct 2007 11:56:39 +1300, Lawrence D'Oliveiro
[EMAIL PROTECTED] wrote:

In message [EMAIL PROTECTED], Barb Knox wrote:

 Instead of function A returning to its caller, the
 caller provides an additional argument (the continuation) which is a
 function B to be called by A with A's result(s).

That's just a callback. I've been doing that in C code (and other
similar-level languages) for years.

Callbacks are a form of continuation.  However, general continuations,
such as those in Scheme, carry with them their execution context.
This allows them to be used directly for things like user-space
threading.
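
One way to get a feel for the difference in Python is with generators:
a generator is not a full first-class continuation (you can only resume
it at its most recent suspension point), but it does carry its execution
context with it, which is already enough for a toy round-robin
scheduler.  A sketch:

  from collections import deque

  def worker(name, steps):
      for i in range(steps):
          print(name, "step", i)
          yield                    # suspend; local state (name, steps, i) is retained

  def run(tasks):
      ready = deque(tasks)
      while ready:
          task = ready.popleft()
          try:
              next(task)           # resume the task where it left off
              ready.append(task)   # still alive: put it back in the queue
          except StopIteration:
              pass                 # task finished

  run([worker("A", 3), worker("B", 2)])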

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The fundamental concept of continuations

2007-10-11 Thread George Neuner
On Wed, 10 Oct 2007 12:49:58 +0200, David Kastrup [EMAIL PROTECTED] wrote:

[EMAIL PROTECTED] writes:

 Again I am depressed to encounter a fundamentally new concept that I
 was unaware of all along. It's not even in paul graham's book where i
 learnt part of Lisp. It's in Marc Feeley's video.

 Can anyone explain:

 (1) its origin
 (2) its syntax and semantics in emacs lisp, common lisp, scheme
 (3) Is it present in python and java ?
 (4) Its implementation in assembly. for example in the manner that
 pointer fundamentally arises from indirect addressing and nothing new.
 So how do you juggle PC to do it.
 (5) how does it compare to and superior to a function or subroutine
 call. how does it differ.

Basically, there is no difference to function/subroutine call.  The
difference is just that there is no call stack: the dynamic context
for a call is created on the heap and is garbage-collected when it is
no longer accessible.  A continuation is just a reference to the state
of the current dynamic context.  As long as a continuation remains
accessible, you can return to it as often as you like.

Yes and no.  General continuations, as you describe, are not the only
form continuations take.  Nor are they the most common form used.  The
most common continuations are function calls and returns.  Upward
one-shot continuations (exceptions or non-local returns) are the next
most common form used, even in Scheme.

Upward continuations can be stack implemented.  On many CPU's, using
the hardware stack (where possible) is faster than using heap
allocated structures.  For performance, some Scheme compilers go to
great lengths to identify upward continuations and nested functions
that can be stack implemented.
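
In Python, the upward one-shot case is what exceptions give you; a
small sketch of a non-local exit:

  class Found(Exception):
      def __init__(self, value):
          self.value = value

  def find_first(tree, pred):
      # escape upward as soon as a match is found, unwinding the recursion
      def walk(node):
          if isinstance(node, list):
              for child in node:
                  walk(child)
          elif pred(node):
              raise Found(node)
      try:
          walk(tree)
      except Found as hit:
          return hit.value
      return None

  print(find_first([1, [2, [3, 4]], 5], lambda n: n > 2))   # 3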

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The fundamental concept of continuations

2007-10-09 Thread George Neuner
On Tue, 09 Oct 2007 05:15:49 -, [EMAIL PROTECTED] wrote:

Again I am depressed to encounter a fundamentally new concept that I
was unaware of all along. It's not even in paul graham's book where i
learnt part of Lisp. It's in Marc Feeley's video.

Can anyone explain:

(1) its origin

Lambda calculus.  Continuation is just a formal term for what the
code does next.  It manifests, literally, as the next instruction(s)
to be executed.


(2) its syntax and semantics in emacs lisp, common lisp, scheme

Lisp does not have explicit continuations so there is no syntax for
them.  Continuations in Lisp mainly take the form of function calls,
function returns, exceptions, conditions, etc.  Sometimes code is
written in continuation passing style (CPS) in which each function
has one or more additional function parameters (the continuations) -
the function terminates by passing its result as an argument to one of
those continuation functions.
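
A minimal CPS fragment in Python, purely for illustration:

  def add_cps(a, b, k):
      k(a + b)                  # "return" by calling the continuation

  def square_cps(x, k):
      k(x * x)

  # compute (2 + 3)**2, passing "the rest of the computation" along explicitly
  add_cps(2, 3, lambda s: square_cps(s, print))    # prints 25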

Scheme has explicit continuations based on closures.  Closure
continuations are created using CALL-WITH-CURRENT-CONTINUATION
(usually abbreviated as CALL/CC).  Some Schemes also recognize a
LET/CC form used mainly for escape continuations (exceptions).
Scheme's closure continuations can be stored in data structures and
used for complex control forms such as multitasking.  Like Lisp,
Scheme code also is sometimes written using CPS.


(3) Is it present in python and java ?

It is present in all languages.  It generally takes the form of
procedure or function calls, returns, exceptions, etc.


(4) Its implementation in assembly. for example in the manner that
pointer fundamentally arises from indirect addressing and nothing new.
So how do you juggle PC to do it.

As I stated above, every function call or return _is_ a continuation
... their implementation is obvious.

For the closure form used in Scheme, the implementation is to create a
closure, a data structure containing the function address and some
method of accessing the function's free variables, and to call the
function.  How you do this depends greatly on the instruction set.
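
Python happens to expose a peek at the same idea - a function object
carries its free variables along with it (a sketch using standard
introspection attributes):

  def make_counter():
      n = 0
      def bump():
          nonlocal n
          n += 1
          return n
      return bump

  c = make_counter()
  print(c(), c())                          # 1 2
  print(c.__code__.co_freevars)            # ('n',)  -- names of the free variables
  print(c.__closure__[0].cell_contents)    # 2       -- their captured values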


(5) how does it compare to and superior to a function or subroutine
call. how does it differ.

Calling closure continuations is a little more complicated and a bit
slower than calling a normal function.  Creating the closure in the
first place may be simple or complicated depending on the complexity
of the source code and the processor's instruction set.


Thanks a lot.

(6) any good readable references that explain it lucidly ?

Get yourself a good textbook on compilers.  Most of the techniques are
applicable to all languages - even for seemingly very different
languages, the differences in their compilers are simply in how the
basic compilation techniques are combined.


My favorite intermediate-level books are 

Aho, Sethi & Ullman. Compilers: Principles, Techniques and Tools.
2nd Ed. 2006. ISBN 0-321-48681-1.
The first edition from 1986, ISBN 0-201-10088-6, is also worth having
if you can still find it.  The 1st edition is mainly about procedural
languages, the 2nd gives more time to functional languages and modern
runtime issues like GC and virtual machines.

Cooper & Torczon, Engineering a Compiler, 2004.  
ISBN 1-55860-698-X (hardcover), 1-55860-699-8 (paperback).
Also available as a restricted 90-day ebook from
http://rapidshare.com/files/24382311/155860698X.Morgan_20Kaufmann.Engineering_20a_20Compiler.pdf


There are also some decent intro books available online.  They don't
go into excruciating detail but they do cover the basics of code
shaping which is what you are interested in.

Torben Mogensen. Basics of Compiler Design
http://www.diku.dk/~torbenm/Basics/

Engineering a Compiler.  I don't have this author's name, nor can
Google find it at the moment.  I have a copy though (~2MB) - if you
are interested, contact me by email and I'll send it to you.


Also Google for free CS books.  Many older books (including some
classics) that have gone out of print have been released
electronically for free download.

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The Modernization of Emacs: terminology buffer and keybinding

2007-10-03 Thread George Neuner
On Wed, 3 Oct 2007 09:36:40 + (UTC), [EMAIL PROTECTED] (Bent C
Dalager) wrote:

In article [EMAIL PROTECTED], David Kastrup  [EMAIL PROTECTED] wrote:
[EMAIL PROTECTED] (Bent C Dalager) writes:

 In article [EMAIL PROTECTED],
 Frank Goenninger  [EMAIL PROTECTED] wrote:

Well, I didn't start the discussion. So you should ask the OP about the 
why. I jumped in when I came across the so often mentioned "hey, it's 
all well defined" statement being brought in. I simply said that if that 
well-definedness is against common understanding then I don't give 
a damn about those clever definitions. Because I have to know that there 
are such definitions - always also knowing that free is not really 
free.

 Liberated is a valid meaning of the word free.

No.  It is a valid meaning of the word freed.

Only if you're being exceedingly pedantic and probably not even
then. Webster 1913 lists, among other meanings,

Free
(...)
Liberated, by arriving at a certain age, from the control
of parents, guardian, or master.

The point presumably being that having been liberated, you are now
free.

I don't think knowing the meaning of a word is being pedantic.
Freed is derived from free but has a different, though associated,
meaning.  Words have meaning despite the many attempts by Generation X
to assert otherwise.  Symbolism over substance has become the mantra
of the young.

The English language has degenerated significantly in the last 30
years.  People (marketers in particular) routinely coin ridiculous new
words and hope they will catch on.  I remember seeing a documentary
(circa 1990?) about changes in the English language.  One part of the
program was about the BBC news and one of its editors, whom the staff
called the protector of language, who checked the pronunciation of
words by the news anchors.  The thing that struck me about this story
was the number of BBC newspeople who publicly admitted that they could
hardly wait for this man to retire so they could write and speak the
way they wanted rather than having to be correct.

Dictionaries used to be the arbiters of the language - any word or
meaning of a word not found in the dictionary was considered a
colloquial (slang) use.  Since the 1980's, an entry in the dictionary
has become little more than evidence of popularity as the major
dictionaries (OED, Webster, Cambridge, etc.) will now consider any
word they can find used in print.

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The Modernization of Emacs: terminology buffer and keybinding

2007-10-03 Thread George Neuner
On Wed, 3 Oct 2007 18:20:38 + (UTC), [EMAIL PROTECTED] (Bent C
Dalager) wrote:

In article [EMAIL PROTECTED],
George Neuner  gneuner2/@comcast.net wrote:
On Wed, 3 Oct 2007 09:36:40 + (UTC), [EMAIL PROTECTED] (Bent C
Dalager) wrote:


Only if you're being exceedingly pedantic and probably not even
then. Webster 1913 lists, among other meanings,

Free
(...)
Liberated, by arriving at a certain age, from the control
of parents, guardian, or master.

The point presumably being that having been liberated, you are now
free.

 (...)

The English language has degenerated significantly in the last 30
years.
 (...)

Dictionaries used to be the arbiters of the language - any word or
meaning of a word not found in the dictionary was considered a
colloquial (slang) use.  Since the 1980's, an entry in the dictionary
has become little more than evidence of popularity as the major
dictionaries (OED, Webster, Cambridge, etc.) will now consider any
word they can find used in print.

Apparently, you missed the part where I referred to the 1913 edition
of Webster. I have kept it in the quoted text above for your
convenience. I can assure you that 1913 is both more than 30 years ago
/and/ it is before 1980, in case that was in doubt.

Cheers
   Bent D

I didn't miss it.  Your post was just an opportunity to rant.

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The Modernization of Emacs: terminology buffer and keybinding

2007-10-03 Thread George Neuner
On Wed, 03 Oct 2007 23:07:32 +0100, [EMAIL PROTECTED] wrote:

George Neuner wrote:
 Symbolism over substance has become the mantra
 of the young.

Symbolism: The practice of representing things by means of symbols or 
of attributing symbolic meanings or significance to objects, events, or 
relationships.

One might even suggest that all written language is based on the use of 
words as symbols.

Substance: (2)
a. Essential nature; essence.
b. Gist; heart.

Mantra: A sacred verbal formula repeated in prayer, meditation, or 
incantation, such as an invocation of a god, a magic spell, or a 
syllable or portion of scripture containing mystical potentialities.

Perhaps the young people you're referring to are not the same young 
people that I know, because I've never even heard of a religion whose 
object of reverence is meta-level analysis of language.

The Christian Bible says "In the beginning was the Word, and the Word
was with God, and the Word was God" (John 1:1).  Theologians and
philosophers have been writing about it for quite a few centuries.


Or, how about politics?  Another example from the Judeo-Christian
Bible (that is, from the Old Testament), politicking was the sin that
resulted in Lucifer's fall from God's grace.
[Yeah, I know the official story is that Lucifer's sin was envy.
Trust me ... I was there.  God didn't have a clue until Lucifer went
and organized the rally to protest God's policy on human souls (back
then God trusted his angels and was not in the habit of reading their
minds).  He didn't find out that Lucifer was behind the protests until
after Michael's police units had put down the riots.  When it was all
over, God didn't care that Lucifer had been envious or prideful or
lustful ... He was simply pissed that Lucifer had protested His
policies.

Shortly after He outlawed beer in Heaven because many of the rioters
had been drunk.  Then He started a program of wire-tapping without
warrants to spy on innocent angels.  I was ready to leave when He
closed the pubs, the illegal wire-taps just clinched it.]


Tell me, do you know what hyperbole means?

Yes I do.


George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The Modernization of Emacs: terminology buffer and keybinding

2007-10-01 Thread George Neuner
On Tue, 02 Oct 2007 03:38:08 GMT, Roedy Green
[EMAIL PROTECTED] wrote:

On Fri, 28 Sep 2007 18:27:04 -0500, Damien Kick [EMAIL PROTECTED]
wrote, quoted or indirectly quoted someone who said :

"free as in beer." 
 
but does not free beer nearly always come with a catch or implied
obligation?

It means you have to bring the chips.

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: The Modernization of Emacs: terminology buffer and keybinding

2007-10-01 Thread George Neuner
On Tue, 02 Oct 2007 01:16:25 +0100, [EMAIL PROTECTED] wrote:

Ken Tilton wrote:
 Kenny happened to solve the traveling 
 salesman problem and protein-folding and passed the fricking Turing test 
 by using add-42 wherever he needed 42 added to a number, and  RMS wants 
 credit and ownership and control of it all. 

That might be what RMS wants (or not, I've never asked him), but it 
doesn't follow from the licence.  What follows from the licence is that 
you have to distribute the derived work as GPL _or not at all_.  In 
practice - if not in marketing terms - that's no more a land grab than a 
proprietary licence saying "you can't use this to add your own numbers 
to 42 at all, and if you do we'll eat your brains".

The other consideration is that, notwithstanding any text to the 
contrary in the GPL, it's not actually up to the copyright holder to 
define what "derived work" means: it's for the court to decide that. 
Now, I don't want to imply that courts are rational animals that can be 
relied on to understand all the issues in technical cases like this (ha, 
I slay myself) but really, if there's a reasonable concern that an 
implementation of the major advances in computer science you describe 
is legally derivative of someone's function that adds 42 to its 
argument, your legal system is fucked.  Redo from start.

Our [US] legal system is fucked ... more so with respect to patents,
but copyrights aren't far behind.  The US Congress just revisited
patent law to make it less of a land grab - we'll have to wait and see
how the USPTO interprets the new rules - but copyright law has been
trending the other way (more grabbing) for a couple of decades now.

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: is laziness a programer's virtue?

2007-04-17 Thread George Neuner
On 17 Apr 2007 08:20:24 -0700, Ingo Menger
[EMAIL PROTECTED] wrote:

On 17 Apr., 12:33, Markus E Leypold
[EMAIL PROTECTED] wrote:

 What makes Xah a troll is neither off-topic posts nor being
 incoherent -- it's the attitude. He's broadcasting his drivel to a
 number of groups not with the intention to discuss (he hardly ever
 answers to answers to his posts), but solely with the intention to
 inform the world at large about his own magnificent thoughts.

This hits the nail on the head.
Perhaps one could understand this behaviour on cultural grounds. In
Chinese culture it may not be uncommon to write something that merely
sounds like great wisdom, and it is nevertheless appreciated because
it's a piece of calligraphic art.

 Trying to correct Xah's behaviour is probably impossible.

Perhaps somebody could ask the Chinese government to put him in jail
for hurting international society :)

That's going to be tough because, according to his web page, he's
living in a Honda Civic somewhere in Illinois, USA.

http://xahlee.org/PageTwo_dir/Personal_dir/xah.html

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is a type error?

2006-07-14 Thread George Neuner
On 13 Jul 2006 08:45:49 -0700, Marshall [EMAIL PROTECTED]
wrote:

On the other hand, there is no problem domain for which pointers
are a requirement. I agree they are deucedly convenient, though.


I would argue that pointers/references _are_ a requirement for I/O.  I
know of no workable method for interpreting raw bits as meaningful
data other than to overlay a typed template upon them.
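
A minimal C sketch of what I mean by overlaying a template - the struct
layout here is made up, just a stand-in for whatever a device actually
delivers:

  #include <stdio.h>
  #include <stdint.h>
  #include <string.h>

  struct sample_header {            /* hypothetical on-the-wire layout */
      uint16_t channel;
      uint16_t flags;
      uint32_t timestamp;
  };

  int main(void)
  {
      /* stand-in for 8 bytes read from a device or file */
      unsigned char raw[8] = { 0x01, 0x00, 0x02, 0x00,
                               0x78, 0x56, 0x34, 0x12 };
      struct sample_header hdr;

      /* overlay the typed template on the raw bits */
      memcpy(&hdr, raw, sizeof hdr);

      /* the printed values depend on the machine's byte order */
      printf("channel=%u flags=%u timestamp=%lu\n",
             (unsigned)hdr.channel, (unsigned)hdr.flags,
             (unsigned long)hdr.timestamp);
      return 0;
  }

The memcpy is there only to sidestep alignment and strict-aliasing
problems of a direct pointer cast; the principle - raw bits viewed
through a type - is the same.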

Categorically disallowing address manipulation functionally cripples
the language because an important class of programs (system programs)
cannot be written.

Of course, languages can go overboard the other way too.  IMO, C did
not need to provide address arithmetic at the language level;
reinterpretable references and array indexing would have sufficed for
any use.  Modula-3's type-safe view is an example of getting it right.

It is quite reasonable to say "I don't write _, so I don't need
[whatever language feature enables writing it]".  It is important,
however, to be aware of the limitation and make your choice
deliberately.


George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is a type error?

2006-07-11 Thread George Neuner
On Tue, 11 Jul 2006 14:59:46 GMT, David Hopwood
[EMAIL PROTECTED] wrote:

What matters is that, over the range
of typical programs written in the language, the value of the increased
confidence in program correctness outweighs the effort involved in both
adding annotations, and understanding whether any remaining run-time checks
are guaranteed to succeed.

Agreed, but ...


Look at it this way: suppose that I *need* to verify that a program has
no range errors. Doing that entirely manually would be extremely tedious.
If the compiler can do, say, 90% of the work, and point out the places that
need to be verified manually, then that would be both less tedious, and
less error-prone.

All of this presupposes that you have a high level of confidence in
the compiler.  I've been in software development for going on 20 years
now and worked 10 years on high performance, high availability
systems.  In all that time I have yet to meet a compiler ... or
significant program of any kind ... that is without bugs, noticeable
or not.

I'm a fan of static typing but the major problem I have with complex
inferencing (in general) is the black box aspect of it.  That is, when
the compiler rejects my code, is it really because a) I was stupid, b)
the types are too complex, or c) the compiler itself has a bug?  It's
certainly true that the vast majority of my problems are because I'm
stupid, but I've run into actual compiler bugs far too often for my
liking (high performance coding has a tendency to uncover them).

I think I understand how to implement HM inferencing ... I haven't
actually done it yet, but I've studied it and I'm working on a toy
language that will eventually use it.  But HM itself is a toy compared
to an inferencing system that could realistically handle some of the
problems that were discussed in this and Xah's expressiveness thread
(my own beef is with *static* checking of range narrowing assignments
which I still don't believe can be done regardless of Chris Smith's
assertions to the contrary).

It seems to me that the code complexity of such a super-duper
inferencing system would make its bug free implementation quite
difficult and I personally would be less inclined to trust a compiler
that used it than one having a less capable (but easier to implement)
system.

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is a type error?

2006-07-09 Thread George Neuner
On Mon, 26 Jun 2006 11:49:51 -0600, Chris Smith [EMAIL PROTECTED]
wrote:

Pascal Costanza [EMAIL PROTECTED] wrote:

 This is maybe better understandable in user-level code. Consider the 
 following class definition:
 
 class Person {
String name;
int age;
 
void buyPorn() {
         if (this.age < 18) throw new AgeRestrictionException();
 ...
}
 }
 
 The message p.buyPorn() is a perfectly valid message send and will pass 
 both static and dynamic type tests (given p is of type Person in the 
 static case).

It appears you've written the code above to assume that the type system 
can't certify that age >= 18... otherwise, the if statement would not 
make sense.  It also looks like Java, in which the type system is indeed 
not powerful enough to do that check statically.  However, it sounds as 
if you're claiming that it wouldn't be possible for the type system to 
do this?  If so, that's not correct.  If such a thing were checked at 
compile-time by a static type check, then failing to actually provide 
that guarantee would be a type error, and the compiler would tell you 
so.

Now this is getting ridiculous.  Regardless of implementation
language, Pascal's example is of a data dependent, runtime constraint.
Is the compiler to forbid a person object from aging?  If not, then
how does this person object suddenly become a different type when it
becomes 18?  Is this conversion to be automatic or manual?  

The programmer could forget to make a manual conversion, in which case
the program's behavior is wrong.  If your marvelous static type
checker does the conversion automatically, then obviously types are
not static and can be changed at runtime.  

Either way you've failed to prevent a runtime problem using a purely
static analysis.


George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-27 Thread George Neuner
On Mon, 26 Jun 2006 13:02:33 -0600, Chris Smith [EMAIL PROTECTED]
wrote:

George Neuner gneuner2/@comcast.net wrote:

 I worked in signal and image processing for many years and those are
 places where narrowing conversions are used all the time - in the form
 of floating point calculations reduced to integer to value samples or
 pixels, or to value or index lookup tables.  Often the same
 calculation needs to be done for several different purposes.

These are value conversions, not type conversions.  Basically, when you 
convert a floating point number to an integer, you are not simply 
promising the compiler something about the type; you are actually asking 
the compiler to convert one value to another -- i.e., see to it that 
whatever this is now, it /becomes/ an integer by the time we're done.  
This also results in a type conversion, but you've just converted the 
value to the appropriate form.  There is a narrowing value conversion, 
but the type conversion is perfectly safe.

We're talking at cross purposes.  I'm questioning whether a strong
type system can be completely static as some people here seem to
think.  I maintain that it is simply not possible to make compile time
guarantees about *all* runtime behavior and that, in particular,
narrowing conversions will _always_ require runtime checking.
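
To make that concrete, here is a small C sketch (mine, not code from any
earlier post): the compiler has no way to bound the double, so the
narrowing to an 8-bit pixel value is necessarily checked at runtime.

  #include <stdio.h>
  #include <stdint.h>
  #include <assert.h>

  /* narrow a floating point result to an 8-bit pixel value; the
     compiler cannot bound the double, so the range check has to
     happen at runtime */
  uint8_t to_pixel(double x)
  {
      long v = (long)(x + 0.5);       /* round to nearest */
      assert(v >= 0 && v <= 255);     /* the check no static analysis discharges here */
      return (uint8_t)v;
  }

  int main(void)
  {
      printf("%u\n", (unsigned)to_pixel(212.7));   /* prints 213 */
      return 0;
  }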


 I can know that my conversion of floating point to integer is going to
 produce a value within a certain range ... but, in general, the
 compiler can't determine what that range will be.

If you mean "my compiler can't", then this is probably the case.  If you 
mean "no possible compiler could", then I'm not sure this is really very 
likely at all.

Again, the discussion is about narrowing the result.  It doesn't
matter how much the compiler knows about the ranges.  When the
computation mixes types, the range of the result can only widen as the
compiler determines the types involved.

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-26 Thread George Neuner
On Sun, 25 Jun 2006 14:28:22 -0600, Chris Smith [EMAIL PROTECTED]
wrote:

George Neuner gneuner2/@comcast.net wrote:
 Undecidability can always be avoided by adding annotations, but of 
 course that would be gross overkill in the case of index type widening.
 
 Just what sort of type annotation will convince a compiler that a
 narrowing conversion which could produce an illegal value will, in
 fact, never produce an illegal value?

The annotation doesn't make the narrowing conversion safe; it prevents 
the narrowing conversion from happening. 

That was my point ... you get a program that won't compile.


If, for example, I need to 
subtract two numbers and all I know is that they are both between 2 and 
40, then I only know that the result is between -38 and 38, which may 
contain invalid array indices.  However, if the numbers were part of a 
pair, and I knew that the type of the pair was "pair of numbers, 2 
through 40, where the first number is greater than the second", then I 
would know that the difference is between 0 and 38, and that may be a 
valid index.

Of course, the restrictions on code that would allow me to retain 
knowledge of the form [pair of numbers, 2 through 40, a > b] may be 
prohibitive.  That is an open question in type theory, as a matter of 
fact; whether types of this level of power may be inferred by any 
tractable procedure so that safe code like this may be written without 
giving the development process undue difficulty by requiring ten 
times as many type annotations as actual code.  There are attempts that 
have been made, and they don't look too awfully bad.
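
For concreteness, here is a rough C rendering of the pair idea you
describe (my own sketch, not your code): the invariant 2 <= b <= a <= 40
is established once by a constructor, after which the difference needs
no per-use check.

  #include <stdio.h>
  #include <stdbool.h>

  /* invariant: 2 <= b <= a <= 40, so a - b always lies in 0 .. 38 */
  struct ordered_pair { int a, b; };

  bool make_pair(int a, int b, struct ordered_pair *out)
  {
      if (b < 2 || a > 40 || b > a)
          return false;               /* the single runtime check */
      out->a = a;
      out->b = b;
      return true;
  }

  int index_from(const struct ordered_pair *p)
  {
      return p->a - p->b;             /* no per-use check needed */
  }

  int main(void)
  {
      struct ordered_pair p;
      if (make_pair(17, 5, &p))
          printf("index = %d\n", index_from(&p));   /* prints 12 */
      return 0;
  }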

I worked in signal and image processing for many years and those are
places where narrowing conversions are used all the time - in the form
of floating point calculations reduced to integer to value samples or
pixels, or to value or index lookup tables.  Often the same
calculation needs to be done for several different purposes.

I can know that my conversion of floating point to integer is going to
produce a value within a certain range ... but, in general, the
compiler can't determine what that range will be.  All it knows is
that a floating point value is being truncated and the stupid
programmer wants to stick the result into some type too narrow to
represent the range of possible values.

Like I said to Ben, I haven't seen any _practical_ static type system
that can deal with things like this.  Writing a generic function is
impossible under the circumstances, and writing a separate function
for each narrow type is ridiculous and a maintenance nightmare even if
they can share the bulk of the code.
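
For contrast, here is the kind of single routine I would rather write -
a C sketch with hypothetical names, my own code: one conversion
parameterized by the target range, with the bound enforced, unavoidably,
at runtime.

  #include <stdio.h>

  /* hypothetical generic narrowing routine: round, then clamp to the
     caller-supplied range; the bound is enforced at runtime either way */
  long quantize(double x, long lo, long hi)
  {
      long v = (long)(x + 0.5);
      if (v < lo) v = lo;
      if (v > hi) v = hi;
      return v;
  }

  int main(void)
  {
      printf("%ld\n", quantize(3.7, 0, 255));      /* pixel value: 4 */
      printf("%ld\n", quantize(1023.9, 0, 255));   /* clamped to 255 */
      printf("%ld\n", quantize(3.7, 0, 1023));     /* table index: 4 */
      return 0;
  }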

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-25 Thread George Neuner
On 22 Jun 2006 08:42:09 -0700, [EMAIL PROTECTED] wrote:

Darren New schrieb:
 I'm pretty sure in Pascal you could say

 Type Apple = Integer; Orange = Integer;
 and then vars of type apple and orange were not interchangeable.

No, the following compiles perfectly fine (using GNU Pascal):

  program bla;
  type
apple = integer;
orange = integer;
  var
a : apple;
o : orange;
  begin
a := o
  end.

You are both correct.  

The original Pascal specification failed to mention whether user
defined types should be compatible by name or by structure.  Though
Wirth's own 1974 reference implementation used name compatibility,
implementations were free to use structure compatibility instead and
many did.  There was a time when typing differences made Pascal code
completely non-portable[1].

When Pascal was finally standardized in 1983, the committees followed
C's (dubious) example and chose to use structure compatibility for
simple types and name compatibility for records.


[1] Wirth also failed to specify whether boolean expression evaluation
should be short-circuit or complete.  Again, implementations went in
both directions.  Some allowed either method by switch, but the type
compatibility issue continued to plague Pascal until standard
conforming compilers emerged in the mid 80's.
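
The same name-versus-structure split is easy to demonstrate in C (my own
illustration, not code from the thread): typedefs behave like structure
compatibility, while distinct single-field structs give the
name-compatibility behaviour.

  #include <stdio.h>

  typedef int apple;               /* aliases: structure-compatible */
  typedef int orange;

  struct Apple  { int v; };        /* distinct types: name-compatible */
  struct Orange { int v; };

  int main(void)
  {
      apple a = 1;
      orange o = 2;
      a = o;                       /* accepted: typedefs are mere aliases */

      struct Apple  A = { 1 };
      struct Orange O = { 2 };
      /* A = O; */                 /* rejected if uncommented: distinct struct types */

      printf("a=%d A.v=%d O.v=%d\n", a, A.v, O.v);
      return 0;
  }

Wrapping the integer in a struct is the usual C idiom for getting the
name-style protection back.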

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-25 Thread George Neuner
On Sun, 25 Jun 2006 13:42:45 +0200, Joachim Durchholz
[EMAIL PROTECTED] wrote:

George Neuner schrieb:
 The point is really that the checks that prevent these things must be
 performed at runtime and can't be prevented by any practical type
 analysis performed at compile time.  I'm not a type theorist but my
 opinion is that a static type system that could, a priori, prevent the
 problem is impossible.

No type theory is needed.
Assume that the wide index type goes into a function and the result is 
assigned to a variable of the narrow type, and it's instantly clear that 
the problem is undecidable.

Yes ... the problem is undecidable and that can be statically checked.
But the result is that your program won't compile even if it can be
proved at runtime that an illegal value would never be possible.


Undecidability can always be avoided by adding annotations, but of 
course that would be gross overkill in the case of index type widening.

Just what sort of type annotation will convince a compiler that a
narrowing conversion which could produce an illegal value will, in
fact, never produce an illegal value?

[Other than "don't check this", of course.]

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-21 Thread George Neuner
On Wed, 21 Jun 2006 16:12:48 + (UTC), Dimitri Maziuk
[EMAIL PROTECTED] wrote:

George Neuner sez:
 On Mon, 19 Jun 2006 22:02:55 + (UTC), Dimitri Maziuk
[EMAIL PROTECTED] wrote:

Yet Another Dan sez:

... Requiring an array index to be an integer is considered a typing 
 problem because it can be checked based on only the variable itself, 
 whereas checking whether it's in bounds requires knowledge about the array.

You mean like
 subtype MyArrayIndexType is INTEGER range 7 .. 11;
 type MyArrayType is array (MyArrayIndexType) of MyElementType;


 If the index computation involves wider types it can still produce
 illegal index values.  The runtime computation of an illegal index
 value is not prevented by narrowing subtypes and cannot be statically
 checked.

My vague recollection is that no, it won't unless _you_ explicitly code an
unchecked type conversion. But it's been a while.


You can't totally prevent it ... if the index computation involves
types having a wider range, frequently the solution is to compute a
wide index value and then narrow it.  But if the wider value is out of
range for the narrow type you have a problem.  

Using the illegal wide value in a checked narrowing conversion will
throw a CONSTRAINT_ERROR exception - it doesn't matter whether you
access the array directly using the wide value or try to assign the
value to a variable of the narrow index type.  Using the wide value
unchecked will access memory beyond the array which is not what you
wanted and may cause a crash. 
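
In C terms (my sketch - C has no CONSTRAINT_ERROR, so the check has to
be written out by hand), the checked narrowing of a wide, computed index
looks something like this:

  #include <stdio.h>
  #include <stdlib.h>

  #define LO 7
  #define HI 11

  static int table[HI - LO + 1];       /* plays the role of the Ada array */

  /* narrow a wide, computed index into 7 .. 11 before using it; this is
     the check Ada performs for you, raising CONSTRAINT_ERROR on failure */
  int checked_get(long wide_index)
  {
      if (wide_index < LO || wide_index > HI) {
          fprintf(stderr, "index %ld out of range\n", wide_index);
          exit(EXIT_FAILURE);
      }
      return table[wide_index - LO];
  }

  int main(void)
  {
      printf("%d\n", checked_get(9));  /* fine */
      return checked_get(42);          /* aborts with the range error */
  }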

The point is really that the checks that prevent these things must be
performed at runtime and can't be prevented by any practical type
analysis performed at compile time.  I'm not a type theorist but my
opinion is that a static type system that could, a priori, prevent the
problem is impossible.

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-21 Thread George Neuner
On 21 Jun 2006 15:04:23 -0700, Greg Buchholz
[EMAIL PROTECTED] wrote:

I haven't been following this thread too closely, but I thought the
following article might be of interest...

Eliminating Array Bound Checking through Non-dependent types.
http://okmij.org/ftp/Haskell/types.html#branding


That was interesting, but the authors' method still involves runtime
checking of the array bounds.  IMO, all they really succeeded in doing
was turning the original recursion into CPS and making the code a
little bit clearer.

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-20 Thread George Neuner
On Mon, 19 Jun 2006 22:02:55 + (UTC), Dimitri Maziuk
[EMAIL PROTECTED] wrote:

Yet Another Dan sez:

... Requiring an array index to be an integer is considered a typing 
 problem because it can be checked based on only the variable itself, 
 whereas checking whether it's in bounds requires knowledge about the array.

You mean like
 subtype MyArrayIndexType is INTEGER range 7 .. 11;
 type MyArrayType is array (MyArrayIndexType) of MyElementType;


If the index computation involves wider types it can still produce
illegal index values.  The runtime computation of an illegal index
value is not prevented by narrowing subtypes and cannot be statically
checked.

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-19 Thread George Neuner
On 19 Jun 2006 10:19:05 +0200, [EMAIL PROTECTED] (Torben Ægidius
Mogensen) wrote:


If you don't have definitions (stubs or complete) of the functions you
use in your code, you can only run it up to the point where you call
an undefined function.  So you can't really do much exploration until
you have some definitions.

Well in Lisp that just drops you into the debugger where you can
supply the needed return data and continue.  I agree that it is not
something often needed.


I expect a lot of the exploration you do with incomplete programs
amount to the feedback you get from type inference.

The ability to write functions and test them immediately without
writing a lot of supporting code is _far_ more useful to me than type
inference.  

I'm not going to weigh in on the static v dynamic argument ... I think
both approaches have their place.  I am, however, going to ask what
information you think type inference can provide that substitutes for
algorithm or data structure exploration.

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: What is Expressiveness in a Computer Language

2006-06-19 Thread George Neuner
On 19 Jun 2006 13:53:01 +0200, [EMAIL PROTECTED] (Torben Ægidius
Mogensen) wrote:

George Neuner gneuner2/@comcast.net writes:

 On 19 Jun 2006 10:19:05 +0200, [EMAIL PROTECTED] (Torben Ægidius
 Mogensen) wrote:

 I expect a lot of the exploration you do with incomplete programs
 amount to the feedback you get from type inference.
 
 The ability to write functions and test them immediately without
 writing a lot of supporting code is _far_ more useful to me than type
 inference.  

I can't see what this has to do with static/dynamic typing.  You can
test individual functions in isolation in statically typed languages
too.

It has nothing to do with static/dynamic typing and that was the point
... that support for exploratory programming is orthogonal to the
language's typing scheme.

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Xah Lee network abuse

2006-06-11 Thread George Neuner
On Sun, 11 Jun 2006 06:05:22 GMT, Mike Schilling
[EMAIL PROTECTED] wrote:


Philippa Cowderoy [EMAIL PROTECTED] wrote in message 
news:[EMAIL PROTECTED]
 On Sun, 11 Jun 2006, Mike Schilling wrote:

 I'm not aware of any definition of libel that includes making statements
 that are not provably true.

 I believe UK law uses one that's close to it.

If I were to write, say, that Tony Blair's tax policy will lead to higher 
deficits, I could be convicted of libel?  Even if that's true, it's not a 
priori provable. 

DISCLAIMER - I AM NOT A LAWYER

In the US, the defense against a libel claim is to prove the statement
or accusation is true.

In the US, libel involves damage to someone's reputation by means of
deliberately false statements or accusations.  Expert opinion is
explicitly protected from libel claims unless it is malicious.
Non-expert opinion is generally judged on the intent of the author.
Unprovable supposition is generally held to be non-libelous; however,
unprovable accusation is not allowed.

Moreover, in the US, political figures are explicitly denied some (but
not all) libel protections because it is expected that their actions
will cause some measure of public dissent.

I don't know UK defamation law but I suspect it is quite similar to US
law.  In your polite example, your opinion of Tony Blair's policy
would be unprovable supposition at the time of the writing (as would
Blair's own) and would therefore not be libelous.  However, if your
opinion took an accusatory tone saying, for example, that he was
increasing the public deficit to line his pockets, then you had better
be right.

George
--
for email reply remove / from address
-- 
http://mail.python.org/mailman/listinfo/python-list