On 04/26/2018 08:03 PM, H. S. Teoh wrote:
> On Thu, Apr 26, 2018 at 07:14:17PM -0400, Nick Sabalausky (Abscissa) via
> Digitalmars-d wrote:
>> On 04/26/2018 06:47 PM, H. S. Teoh wrote:

If "less is more" were universally true, we'd be programming in BF
instead of D.  :-O  (Since, after all, it's Turing-complete, which
is all anybody really needs. :-P)


>> Yea. Speaking of which, I wish more CS students were taught the
>> inherent limitations of "Turing-complete" vs (for example) "Big-O".
>> There's faaaar too many people being taught "Turing-complete means it
>> can do anything", which, of course, is complete and total bunk in more
>> (important) ways than one.

> Actually, Turing-complete *does* mean it can do anything... well,
> anything that can be done by a machine, that is.  There are inherently
> unsolvable problems that no amount of Turing-completeness will help you
> with.


It's directly analogous to the LR vs LL matter, and LR's "Well, I can tell you this input does/doesn't satisfy the grammar, but I can't help ya with much more than that":

Turing-completeness only tells you whether a given Turing-complete system (i.e., "language", machine, etc.) *can* compute XYZ if given infinite time and memory resources. That's it; that's all it says. (Granted, that *is* still a useful thing to know...)

However, Turing-completeness says nothing about whether the given language can accomplish said task *in the same time complexity* as another Turing-complete language. Or any other resource complexity, for that matter.

And Turing-completeness also says nothing about what inputs/outputs a language/system has access to.

Ex: VBScript is Turing-complete, but it can't do direct memory access or invoke hardware interrupts, period (at least not without interop to another language, at which point it's not really VBScript doing the direct memory access, etc.). This means there are things that simply cannot be done in VBScript.
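
Just to make that concrete with a quick D-side sketch (purely illustrative; the address involved is whatever malloc happens to hand back, not some hardware register):

import core.stdc.stdlib : malloc, free;
import std.stdio : writefln;

void main()
{
    // Grab a raw block of memory and poke at it byte-by-byte through a
    // pointer -- the sort of direct memory access VBScript simply has no
    // vocabulary for.
    enum size_t len = 16;
    ubyte* buf = cast(ubyte*) malloc(len);
    scope (exit) free(buf);

    foreach (i; 0 .. len)
        buf[i] = cast(ubyte) (i * i);   // write straight through the pointer

    writefln("byte at address %s holds %s", cast(void*) (buf + 5), buf[5]);
}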

Another example:

Turing's original machine (as well as the BF language) is incapable of O(1) random access. Accessing an arbitrary memory cell is an O(n) operation, where n is the distance between the target cell and the current cell. But many other languages/machines, like D, ARE capable of O(1) random access. Therefore, any algorithm which relies on O(1) random access (of which there are many) CANNOT be implemented with the same algorithmic complexity in BF as it can be in D. And yet, both BF and D are Turing-complete.
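
Here's a crude sketch of that difference (the Tape type below is just my own stand-in for a BF-style memory model, not taken from any real BF implementation):

import std.stdio : writeln;

// A BF-style tape: the head can only move one cell at a time, so reaching
// cell `target` from wherever the head currently sits costs a number of
// steps proportional to the distance -- O(n) random access.
struct Tape
{
    ubyte[] cells;
    size_t pos;

    ubyte readAt(size_t target)
    {
        size_t steps;
        while (pos < target) { ++pos; ++steps; }
        while (pos > target) { --pos; ++steps; }
        writeln("tape needed ", steps, " head moves");
        return cells[pos];
    }
}

void main()
{
    auto data = new ubyte[](1_000_000);
    data[750_000] = 42;

    // D (on ordinary hardware) gives O(1) random access: one pointer
    // offset, no matter which index you ask for.
    assert(data[750_000] == 42);

    // The tape has to walk there cell by cell from position 0.
    auto tape = Tape(data, 0);
    assert(tape.readAt(750_000) == 42);   // prints: tape needed 750000 head moves
}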

Therefore, we have Turing-complete languages (BF, VBScript) which are INCAPABLE of doing something another Turing-complete language (D) can do. Thus, Turing-completeness does not imply a language/machine "can do anything" another language/machine can do.

And then, of course, *in addition* to all that, there's the separate matter you brought up of how convenient or masochistic it is to do ABC in language XYZ. ;)


> And actually, speaking of Big-O, one thing that bugs me all the time is
> that the constant factor in front of the Big-O term is rarely
> considered.
> [...]
> And that's not even beginning to consider practical factors like the
> hardware you're running on, and why the theoretically-superior O(1) hash
> is in practice inferior to supposedly inferior algorithms that
> nevertheless run faster because they are cache-coherent, whereas hashing
> essentially throws caching out the window.  Thankfully, recently there's
> been a slew of papers on cache-aware and cache-oblivious algorithms that
> reflect reality more closely than ivory-tower Big-O analyses that
> disregard reality.


Certainly good points.
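
Just to put a toy example to the cache point (this is my own throwaway micro-benchmark, not something from those papers, and the numbers will obviously swing around with hardware and compiler flags): for a handful of keys, the "inferior" O(n) scan over one contiguous little array can easily come out ahead of the "superior" O(1) hash lookup, because the scan sits in a single cache line and has a near-zero constant factor.

import std.datetime.stopwatch : benchmark;
import std.stdio : writeln;

void main()
{
    // Ten key -> value pairs, stored two different ways.
    static immutable int[] keys = [3, 14, 15, 92, 65, 35, 89, 79, 32, 38];

    int[int] hashed;                 // built-in hash table: O(1) lookup on paper
    foreach (i, k; keys)
        hashed[k] = cast(int) i;

    // The "inferior" O(n) alternative: scan a small contiguous array.
    int linearLookup(int key)
    {
        foreach (i, k; keys)
            if (k == key)
                return cast(int) i;
        return -1;
    }

    int sink;   // keep the optimizer from discarding the lookups
    auto times = benchmark!(
        () { sink += hashed[79]; },       // hash the key, chase the bucket
        () { sink += linearLookup(79); }  // walk ~10 ints in one cache line
    )(1_000_000);

    writeln("hash table : ", times[0]);
    writeln("linear scan: ", times[1]);
    writeln("(checksum ", sink, ")");
}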

> Well, LR parsing is useful for writing compilers that tell you
> "congratulations, you have successfully written a program without syntax
> errors!".  What's that?  Where's the executable?  Sorry, I don't know
> what that word means.  And what?  Which line did the syntax error occur
> in?  Who knows!  That's your problem, my job is just to approve or
> reject the program in its entirety! :-P

Exactly! :)
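
For anyone following along, the punchline is really the recognizer-vs-parser gap. Here's a toy sketch of what I mean (my own strawman one-line grammar, walked with hand-rolled recursive descent rather than anything LR):

import std.ascii : isDigit;
import std.stdio : writeln;

// Toy grammar (sums of single digits):  expr := digit ('+' digit)*

// The punchline version: a recognizer.  All it can ever tell you is yes or no.
bool recognizes(string s)
{
    size_t i;
    if (i >= s.length || !isDigit(s[i])) return false;
    ++i;
    while (i < s.length && s[i] == '+')
    {
        ++i;
        if (i >= s.length || !isDigit(s[i])) return false;
        ++i;
    }
    return i == s.length;
}

// What you actually want from a compiler front end: the parsed structure,
// plus a position you can point an error message at.
struct ParseResult
{
    int[] digits;      // the "tree" (flat here, since the grammar is trivial)
    size_t errorAt;    // offset of the first syntax error, when !ok
    bool ok;
}

ParseResult parse(string s)
{
    ParseResult r;
    size_t i;
    if (i >= s.length || !isDigit(s[i])) { r.errorAt = i; return r; }
    r.digits ~= s[i++] - '0';
    while (i < s.length && s[i] == '+')
    {
        ++i;
        if (i >= s.length || !isDigit(s[i])) { r.errorAt = i; return r; }
        r.digits ~= s[i++] - '0';
    }
    if (i != s.length) { r.errorAt = i; return r; }
    r.ok = true;
    return r;
}

void main()
{
    writeln(recognizes("1+2+3"));   // true -- and that's *all* you get

    auto p = parse("1+2+x");
    if (p.ok) writeln("parsed digits: ", p.digits);
    else      writeln("syntax error at column ", p.errorAt + 1);   // column 5
}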

(Ugh, I would've saved myself sooooo much bother if ANY of that had been even remotely clear from any of the parsing books I had read. But nope! One of the items on my bucket list is to write a "CS Theory for Programmers" book that actually fills in all this stuff, along with going easy on the math-theory syntax that you can't realistically expect programmers to be fluent in. The average CS book is the equivalent of marketing a "How to speak German" book in the US...but writing it in French. Sure, *some* Americans will be able to read it, but...)

> (And don't get me started on computability theory courses where the sole
> purpose is to explore the structure of the hierarchy of unsolvable
> problems.  I mean, OK, it's kinda useful to know when something is
> unsolvable (i.e., when not to waste your time trying to do something
> that's impossible), but seriously, what is even the point of the tons of
> research that has gone into discerning entire *hierarchies* of
> unsolvability?!  I recall taking a course where the entire term
> consisted of proving things about the length of proofs. No, we weren't
> actually *writing* proofs. We were proving things *about* proofs, and I
> might add, the majority of the "proofs" we worked with were of infinite
> length.  I'm sure that it will prove (ha!) to be extremely important in
> the next iPhone release, which will also cure world hunger and solve
> world peace, but I'm a bit fuzzy on the details, so don't quote me on
> that.  :-P)

Well, I think a big part of it is the general acceptance that we can never really be certain what knowledge will/won't be useful. So science is all about learning whatever we can about reality on the presumption that the pursuit of scientific knowledge is a virtue in and of itself. Maybe it's all the Star Trek I've watched, but I can get onboard with that[1].

The part where things really start to bug me, though, is when the scientific knowledge *does* cross outside the realm of pure science, but in the process gets misrepresented as implying something it really doesn't imply, or critically important details get ignored or under-emphasised[2].

[1] Although I would certainly prefer to see priority given to scientific inquiries that ARE already known to have practical, real-world impact (like: "Can this LR parser (which we can theoretically create to validate the same grammar as some LL parser) be constructed to emit the same parse tree as the LL parser? (Hint: Probably not.) If so, what costs are involved?")

[2] It also bugs me how they say "abstract" when they don't actually mean "abstract" at all, but really mean "summary" or "overview", but...well...that's considerably less important ;)
