Re: Why exceptions for error handling is so important

2015-01-15 Thread Tobias M via Digitalmars-d
On Thursday, 15 January 2015 at 21:28:59 UTC, H. S. Teoh via Digitalmars-d wrote:

Unless, of course, the *purpose* of the program is specifically to deal with problem situations -- in which case, you wouldn't be using exceptions to indicate those situations, you'd treat them as "normal" input instead. You wouldn't be using try/throw/catch, but normal flow-control constructs.


But for almost every environmental error, there's a use case where it is normal or at least expected. That means you have to have two versions of every function: one that throws and one that uses "normal" flow control.


Example:
Creating a file: throws if it already exists.
Creating a unique filename:
    Create file.1.exp
    Create file.2.exp
    Create file.3.exp
    ... until one succeeds.
Here failure is expected and normal, yet trial and error is the only option, because querying for the file first is not safe: another process could create the file between the check and the create.
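
A minimal C# sketch of that loop (names hypothetical; FileMode.CreateNew fails atomically if the file already exists, so the exception here signals the *expected* case):

using System.IO;

static class UniqueFile
{
    // Sketch: find a free name by trial and error. CreateNew is atomic,
    // so catching the exception is the only race-free way to probe --
    // even though the "failure" it reports is perfectly normal here.
    public static FileStream Create(out string name)
    {
        for (int i = 1; ; i++)
        {
            name = $"file.{i}.exp";
            try
            {
                return new FileStream(name, FileMode.CreateNew);
            }
            catch (IOException)
            {
                // Expected: the name is taken, try the next one.
            }
        }
    }
}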


How can I (as the implementer of the "Create file" function) 
decide if a failure is actually expected or not?

I can't, because I cannot foresee every use case of the function.

*Especially* with environmental errors, the caller has to decide what's an error and what's not.
You cannot just require certain environmental preconditions, 
because they can change unexpectedly.


Re: Always false float comparisons

2016-05-20 Thread Tobias M via Digitalmars-d

On Friday, 20 May 2016 at 12:32:40 UTC, Tobias Müller wrote:

Let me cite Prof. John L Gustafson


Not "Prof." but "Dr.", sorry about that. Still an authority, 
though.


Re: Always false float comparisons

2016-05-21 Thread Tobias M via Digitalmars-d

On Friday, 20 May 2016 at 22:22:57 UTC, Walter Bright wrote:

On 5/20/2016 5:36 AM, Tobias M wrote:

Still an authority, though.


If we're going to use the fallacy of appeal to authority, may I 
present Kahan who concurrently designed the IEEE 754 spec and 
the x87.


Since I'm just in the mood of appealing to authorities, let me 
cite Wikipedia:


The argument from authority (Latin: argumentum ad verecundiam), also appeal to authority, is a common argument form which can be fallacious, *such as when an authority is cited on a topic outside their area of expertise, when the authority cited is not a true expert, or if the cited authority is stating a contentious or controversial position.*


(Emphasis is mine)


Re: Always false float comparisons

2016-05-21 Thread Tobias M via Digitalmars-d

On Saturday, 21 May 2016 at 17:58:49 UTC, Walter Bright wrote:

On 5/21/2016 2:26 AM, Tobias Müller wrote:

On Friday, 20 May 2016 at 22:22:57 UTC, Walter Bright wrote:

On 5/20/2016 5:36 AM, Tobias M wrote:

Still an authority, though.


If we're going to use the fallacy of appeal to authority, may I present Kahan who concurrently designed the IEEE 754 spec and the x87.


Actually cited this *because* of you mentioning Kahan several times. And because you said "The people who design these things are not fools, and there are good reasons for the way things are."


I meant two things by this:

1. Do the homework before disagreeing with someone who 
literally wrote the book and designed the hardware for it.


2. Complaining that the x87 is not IEEE compliant, when the guy who designed the x87 wrote the spec at the same time, suggests a misunderstanding of the spec. I.e. again, gotta do the homework first.


Sorry but this is a misrepresentation. I never claimed that the 
x87 doesn't conform to the IEEE standard. That's completely 
missing the point. Again.


Dismissing several decades of FP designs, and every programming 
language, as being "obviously wrong" and "insane" is an 
extraordinary claim, and so requires extraordinary evidence.


After all, what would your first thought be when a sophomore 
physics student tells you that Feynman got it all wrong? It's 
good to revisit existing dogma now and then, and challenge the 
underlying assumptions of it, but again, you gotta understand 
the existing dogma all the way down first.


If you don't, you're very likely to miss something fundamental 
and produce a design that is less usable.


The point is that it IS possible to provide fairly reasonable and consistent semantics within the existing standards (C, C++, IEEE, ...). They provide a certain degree of freedom to accommodate different hardware, but this doesn't mean that software should use this freedom to do arbitrary things.


Regarding the decades of FP design, the initial edition of K&R C contained the following clause:
"Notice that all floats in an expression are converted to double; all floating point arithmetic in C is done in double precision."
That passage was removed quite quickly because users complained about it.


Re: Always false float comparisons

2016-05-21 Thread Tobias M via Digitalmars-d

On Saturday, 21 May 2016 at 21:56:02 UTC, Walter Bright wrote:

On 5/21/2016 11:36 AM, Tobias M wrote:
Sorry but this is a misrepresentation. I never claimed that the x87 doesn't conform to the IEEE standard.

My point was directed to more than just you. Sorry I didn't 
make that clear.



The point is that it IS possible to provide fairly reasonable and consistent semantics within the existing standards (C, C++, IEEE, ...).


That implies what I propose, which is what many C/C++ compilers 
do, is unreasonable, inconsistent, not Standard compliant, and 
not IEEE. I.e. that the x87 is not conformant :-)


I'm trying to understand what you want to say here, but I just 
don't get it. Can you maybe formulate it differently?


Read the documentation on the FP switches for VC++, g++, clang, 
etc. You'll see there are tradeoffs. There is no "obvious, 
sane" way to do it.


There just isn't.


As I see it, the only real trade-off is speed/optimization vs. correctness.



They provide a certain degree of freedom to accommodate different hardware, but this doesn't mean that software should use this freedom to do arbitrary things.


Nobody is suggesting doing arbitrary things, but to write 
portable fp, take into account what the Standard says rather 
than what your version of the compiler does with various 
default and semi-documented switches.


https://gcc.gnu.org/wiki/FloatingPointMath

Seems relatively straightforward and well documented to me...
Dangerous optimizations like reordering expressions are all opt-in.
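
For illustration, a minimal C# sketch of why reordering is in the "dangerous" category: floating point addition is not associative, so an optimizer that reassociates expressions can change results.

using System;

class Assoc
{
    static void Main()
    {
        double a = 1e16, b = -1e16, c = 1.0;
        Console.WriteLine((a + b) + c); // 1: a + b is exactly 0
        Console.WriteLine(a + (b + c)); // 0: b + c rounds back to -1e16
    }
}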


Sure, it's probably not 100% consistent across implementations/platforms, but it's also not *that* bad. And it's certainly not an excuse to make it even worse.


And yes, I think that in an underspecified domain like FP, you cannot just rely on the standard but have to take the individual implementations into account.

Again, this is not ideal, but let's not make it even worse.

Regarding the decades of FP design, the initial edition of K&R C contained the following clause:
"Notice that all floats in an expression are converted to double; all floating point arithmetic in C is done in double precision."
That passage was removed quite quickly because users complained about it.


It was changed to allow floats to be computed as floats, not 
require it. And the reason at the time, as I recall, was to get 
faster floating point ops, not because anyone desired reduced 
precision.


I don't think that anyone has argued that lower precision is better. But a compiler should just do what it is told, not try to be too clever.


Re: The Case Against Autodecode

2016-05-29 Thread Tobias M via Digitalmars-d

On Sunday, 29 May 2016 at 12:08:52 UTC, default0 wrote:
I am pretty sure that a single grapheme in unicode does not 
correspond to your notion of "character". I am pretty sure that 
what you think of as a "character" is officially called 
"Grapheme Cluster" not "Grapheme".


Grapheme is a linguistic term. AFAIUI, a grapheme cluster is a cluster of code points representing a grapheme. It's called "cluster" in the Unicode spec because there is no dedicated grapheme unit.


I put "character" into quotes because the term is not really well defined. I just used it for a short and concise answer. I'm sure there's a better/more correct definition of grapheme/phoneme, but it's probably also much longer and more complicated.


Re: The Case Against Autodecode

2016-05-29 Thread Tobias M via Digitalmars-d

On Sunday, 29 May 2016 at 12:41:50 UTC, Chris wrote:
Ok, you have a point there; to be precise, <th> is a multigraph (a digraph) (cf. [1]). In French you can have multigraphs consisting of three or more characters, e.g. <eau> /o/, as in Irish <aoi> => /i:/. However, a phoneme is not necessarily a spoken "character", as <th> represents one phoneme but consists of two "characters" or graphemes. <th> can represent two different phonemes (voiced and unvoiced "th", as in `this` vs. `thorough`).


What I meant was, a phoneme is the "character" (smallest unit) in 
a spoken language, not that it corresponds to a character 
(whatever that means).


My point was that we have to be _very_ careful not to mix our 
cultural experience with written text with machine 
representations. There's bound to be confusion. That's why we 
should always make clear what we refer to when we use the words 
grapheme, character, code point etc.


I used 'character' in quotes because it's not a well-defined term. Code point, grapheme and phoneme are well defined.


Re: The Case Against Autodecode

2016-05-29 Thread Tobias M via Digitalmars-d

On Friday, 27 May 2016 at 19:43:16 UTC, H. S. Teoh wrote:
On Fri, May 27, 2016 at 03:30:53PM -0400, Andrei Alexandrescu 
via Digitalmars-d wrote:

On 5/27/16 3:10 PM, ag0aep6g wrote:
> I don't think there is value in distinguishing by language. 
> The point of Unicode is that you shouldn't need to do that.


It seems code points are kind of useless because they don't 
really mean anything, would that be accurate? -- Andrei


That's what we've been trying to say all along! :-P  They're a 
kind of low-level Unicode construct used for building "real" 
characters, i.e., what a layperson would consider to be a 
"character".


Code points are *the fundamental unit* of Unicode. AFAIK most (all?) algorithms in the Unicode spec are defined in terms of code points.
Sure, some algorithms also work on the code unit level. That can be used as an optimization, but they are still defined on code points.


Code points also abstract over the different representations (UTF-...), providing a uniform "interface".
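
A small C# sketch of that uniform interface, using U+1F600 as an example code point:

using System;
using System.Text;

class CodePoints
{
    static void Main()
    {
        string s = "\U0001F600"; // one code point
        Console.WriteLine(s.Length);                      // 2 UTF-16 code units
        Console.WriteLine(Encoding.UTF8.GetByteCount(s)); // 4 UTF-8 code units
        // An algorithm defined on code points sees a single unit either way:
        Console.WriteLine(char.ConvertToUtf32(s, 0).ToString("X")); // 1F600
    }
}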


Re: ADL

2016-09-03 Thread Tobias M via Digitalmars-d
On Saturday, 3 September 2016 at 10:33:22 UTC, Walter Bright 
wrote:
I don't think it is a template issue. It's a name lookup issue. 
There's LINQ in C#, for example.


I think it is.

The problem is lookup of dependent symbols (see C++ two-phase lookup). Without real templates, all lookup can be done at definition time.
I'm not very familiar with LINQ, but generally C# uses interfaces as constraints on generics, similar to traits/type classes. Lookup is then done once, considering only the interfaces, not for each concrete type.


Re: ADL

2016-09-03 Thread Tobias M via Digitalmars-d

On Saturday, 3 September 2016 at 12:40:26 UTC, ZombineDev wrote:
No, LINQ doesn't work because of interfaces, but because of extension methods (C#'s variant of UFCS). The IEnumerable<T> interface defines only a single method. All the useful functionality is implemented as extension methods, which are only available if the user specifically imports the namespace in which they're defined (just like D's ranges and range primitive implementations for arrays). Those extension methods are used as a fallback, similarly to UFCS in D: every type can override the extension methods by implementing the method itself. Also, inner namespaces (closer to the method invocation) override outer ones.


I know extension methods; that's not the point.
The point is that you cannot have a generic method like this in C#; it won't compile:


class Bar
{
    void GenericMethod<T>(T arg)
    {
        arg.Foo(); // error: 'T' contains no definition for 'Foo'
    }
}

Instead you need a constraint like this:

interface IFoo
{
    void Foo();
}

class Bar
{
    void GenericMethod<T>(T arg) where T : IFoo
    {
        arg.Foo(); // OK: the constraint makes Foo available on T
    }
}

Similarly for LINQ, you cannot just implement a generic "Sum" extension method for IEnumerable<T> that works for all T, because you cannot just use the + operator in that method. It is not defined on T if there are no respective constraints.


Look at how it is implemented separately for every type T that 
supports +:

https://msdn.microsoft.com/de-de/library/system.linq.enumerable.sum(v=vs.110).aspx
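
A sketch of what a single generic Sum could look like if such a constraint existed (IAddable is hypothetical, not part of .NET; its absence is exactly why Sum is overloaded per type):

using System.Collections.Generic;

// Hypothetical interface -- nothing like this exists in the BCL.
interface IAddable<T>
{
    T Add(T other);
}

static class GenericSum
{
    // One method would cover every T that knows how to add itself.
    public static T Sum<T>(this IEnumerable<T> xs, T seed)
        where T : IAddable<T>
    {
        T total = seed;
        foreach (var x in xs)
            total = total.Add(x);
        return total;
    }
}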


Re: ADL

2016-09-03 Thread Tobias M via Digitalmars-d
On Saturday, 3 September 2016 at 12:25:11 UTC, Andrei 
Alexandrescu wrote:

What problems are you referring to? -- Andrei


The problems discussed here in the thread related to name lookup 
at template instantiation time.
And also similar problems related to visibility (public/private) 
that were discussed in a different thread recently.


Re: ADL

2016-09-03 Thread Tobias M via Digitalmars-d
On Saturday, 3 September 2016 at 16:33:07 UTC, Andrei 
Alexandrescu wrote:
I see. This is a matter orthogonal to DbI - introspection 
should be able to figure out whether a member can be found, or 
a nonmember if the design asks for it. I wouldn't like 
"tricking" DbI into thinking a member is there when there 
isn't. -- Andrei


The problem I see with DbI is rather that the user of a function 
thinks that an optional constraint is satisfied, while in reality 
it isn't, due to a non-obvious lookup/visibility problem.


Re: ADL

2016-09-03 Thread Tobias M via Digitalmars-d

On Saturday, 3 September 2016 at 16:32:16 UTC, ZombineDev wrote:
No you're wrong. There's no need for interfaces or for generic 
constraints. It's not static vs duck typing. It's just a method 
lookup issue. See for yourself: http://rextester.com/GFKNSK99121


Ok: interfaces, and other generic methods with compatible constraints.
But in the end you cannot do much without any interface constraints, except writing to the console as you do in the example.


But the main point still holds: name lookup is done only at definition time, not at instantiation time. That's why you can only call generic methods; overloads don't work.
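
A minimal C# sketch of that definition-time binding:

using System;

class LookupDemo
{
    static void Show(object o) { Console.WriteLine("object"); }
    static void Show(int i)    { Console.WriteLine("int"); }

    // Overload resolution happens once, at definition time: with no
    // constraint on T, Show(x) can only bind to the object overload.
    static void Generic<T>(T x) { Show(x); }

    static void Main()
    {
        Show(42);    // prints "int"    -- a normal call picks the best overload
        Generic(42); // prints "object" -- bound when the generic was defined
    }
}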


Sum is implemented in that stupid way, because unlike C++, in 
C# operators need to be implemented as static methods, so you 
can't abstract them with an interface. If they were instance 
methods, you could implement them outside of the class as 
extension methods and there would be no need to write a 
distinct method for each type. Here's an example: 
http://rextester.com/PQFPC46087
The only thing missing is syntax sugar to forward the '+' 
operator to 'Add' in my example.


With runtime reflection you can do almost anything... That's 
circumventing the type system and doesn't disprove anything.
I mean, it even "works" for types that cannot be added at all, by 
just returning a default value...