Re: Final by default?

2014-03-19 Thread Marc Schütz

On Monday, 17 March 2014 at 01:05:09 UTC, Manu wrote:
Whole program optimisation can't do anything to improve the 
situation; it is possible that DLLs may be loaded at runtime, 
so there's nothing the optimiser can do, even at link time.


With everything being exported by default, this is true. But 
should DIP45 be implemented, LTO/WPO will be able to achieve a 
lot more, at least if the classes in question are not (or not 
fully) exported.


Re: Final by default?

2014-03-18 Thread Ola Fosheim Grøstad

On Tuesday, 18 March 2014 at 18:34:14 UTC, bearophile wrote:
In this phase of D's life, commercial developers can't justify 
having the language so frozen that you can't make reasonable 
improvements like the one discussed in this thread.


I don't disagree, but D is suffering from not having a production 
ready compiler/runtime with a solid optimizing backend in 
maintenance mode. So it is giving other languages "free traction" 
rather than securing its own position.


I think there is a bit too much focus on standard libraries, 
because not having libraries does not prevent commercial 
adoption. Commercial devs can write their own C bindings if the 
core language, compiler, and runtime are solid. If they are not 
solid, then only commercial devs that can commit lots of 
resources to D will pick it up and keep using it (basically the 
ones that are willing to turn themselves into D shops).


Perhaps D2 was also announced too early, and then people jumped 
onto it expecting it to come about "real soon". Hopefully the 
language designers will do the D3 design on paper behind closed 
doors for a while before announcing progress, and perhaps even 
deliberately keep it at gamma/alpha quality in order to prevent 
devs from jumping ship to D2 prematurely. :-)


That is how I view it, anyway.


Re: Final by default?

2014-03-18 Thread bearophile

Ola Fosheim Grøstad:


Is attracting commercial developers important for D?


In this phase of D's life, commercial developers can't justify 
having the language so frozen that you can't make reasonable 
improvements like the one discussed in this thread.


Bye,
bearophile


Re: Final by default?

2014-03-18 Thread Ola Fosheim Grøstad

On Tuesday, 18 March 2014 at 18:11:27 UTC, dude wrote:
Nobody uses D, so worrying about breaking backwards 
compatibility for such an obvious improvement is pretty funny :)


I kind of agree with you, if it happens once and is a sweeping 
change that fixes the syntactical warts as well as the semantic 
ones.


 Lua breaks backwards compatibility at every version. Why is it 
not a problem? If you don't want to upgrade, just keep using 
the older compiler! It isn't like it ceased to exist--


It is a problem because commercial developers have to count hours 
and need a production compiler that is maintained.


If your budget is 4 weeks of development, then you don't want to 
spend another week fixing compiler-induced bugs.


Why?

1. Because you have already signed a contract for a certain 
amount of money based on estimates of how much work it is. All 
extra costs cut into profitability.


2. Because you have library dependencies. If a bug is fixed in 
library version 2, which requires version 3 of the compiler, then 
you need to upgrade to version 3 of the compiler. That compiler 
better not break the entire application and drag you into a mess 
of unprofitable work.


Is attracting commercial developers important for D? I think so, 
not because they contribute lots of code, but because they care 
about the production quality of the narrow libraries they do 
create and are more likely to maintain them over time. They also 
have a strong interest in submitting good bug reports and fixing 
performance bottlenecks.


Re: Final by default?

2014-03-18 Thread dude
 Nobody uses D, so worrying about breaking backwards 
compatibility for such an obvious improvement is pretty funny :)


 D should just do what Lua does.

 Lua breaks backwards compatibility at every version. Why is it 
not a problem? If you don't want to upgrade, just keep using the 
older compiler! It isn't like it ceased to exist--


Re: Final by default?

2014-03-18 Thread Ola Fosheim Grøstad

On Tuesday, 18 March 2014 at 13:01:56 UTC, Marco Leise wrote:

Let's just say it will never detect all cases, so the "final"
keyword will still be around. Can you find any research papers
that indicate that such compiler technology can be implemented
with satisfactory results? Because it just sounds like a nice
idea on paper to me that only works when a lot of questions
have been answered with yes.


I don't think this is such a theoretically interesting question. 
Isn't this actually a special case of a partial correctness proof 
where you try to establish constraints on types? I am sure you 
can find a lot of papers covering bits and pieces of that.



These not entirely random objects from a class hierarchy could
well have frequently used final methods like a name or
position. I also mentioned objects passed as parameters into
delegates.


I am not sure I understand what you are getting at.

You start with the assumption that a pointer to base class A is 
the full set of that hierarchy. Then establish constraints for 
all the subclasses it cannot be. Best effort. Then you can inline 
any virtual function call that is not specialized across that 
constrained result set. Or you can inline all candidates in a 
switch statement and let the compiler do common subexpression 
elimination & co.
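[Editor's note: the transformation sketched above, written out in plain D. A profile-guided compiler would effectively rewrite a virtual call into a type test plus a direct, inlinable call; the class names and the profiling assumption here are hypothetical.]

```d
class Shape { int area() { return 0; } }

// Marking the override final lets the guarded call below be
// resolved statically, which is what the optimizer wants to inline.
class Circle : Shape { final override int area() { return 314; } }

int speculativeArea(Shape s)
{
    // Assume profiling showed Circle is the dominant instance type.
    if (auto c = cast(Circle) s)
        return c.area(); // direct call: Circle.area is final
    return s.area();     // fallback path: ordinary virtual dispatch
}
```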


If you want speed you create separate paths for the dominant 
instance types. Whole program optimization is guided by 
profiling data.


Another optimization, ok. The compiler still needs to know
that the instance type cannot be sub-classed.


Not really. It only needs to know that in the current execution 
path you have an instance of type X (the most frequent case); 
then you have another execution path for the inverted set.



Thinking about it, it might not even be good to duplicate
code. It could easily lead to instruction cache misses.


You have heuristics for that. After all, you do have the 
execution pattern. You have the data of a running system on 
typical input. If you log all input events (which is useful for a 
simulation) you can rerun the program in as many configurations 
as you want. Then you skip the optimizations that lead to worse 
performance.



Also this is way too much involvement from both the coder and
the compiler.


Why? Nobody claimed that near optimal whole program optimization 
has to be fast.


At this point I'd ask for "final" if it wasn't already there, 
if just to be sure the compiler gets it right.


Nobody said that you should not have final, but final won't help 
you inline virtual functions where possible.


you can have a high level specification language asserting pre 
and post conditions if you insist on closed source.


More shoulds and cans and ifs... :-(


Err… well, you can of course start with a blank slate after 
calling a closed source library function.



I don't get the big picture. What does the compiler have to do
with plugins? And what do you mean by allowed to do and
access and how does it interact with virtuality of a method?
I'm confused.


In my view plugins should not be allowed to subclass. I think it 
is ugly, but if they are, then you need to tell the compiler 
which classes they can subclass, instantiate, etc., as well as 
what side effects the call into the plugin may and may not have.


Why is that confusing? If you shake the world, you need to tell 
the compiler what the effect is. Otherwise you have to assume 
"anything" upon return from said function call.


That said, I am personally not interested in plugins without 
constraints imposed on them (or at all). Most programs can do 
fine with just static linkage, so I find the whole dynamic 
linkage argument less interesting.


Closed source library calls are more interesting, especially if 
you can say something about the state of that library. That could 
provide you with detectors for wrong library usage (which could 
be the OS itself). E.g. that a file has to be opened before it is 
closed etc.
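[Editor's note: such a usage rule can, for instance, be written down as a pre-condition that a checker verifies at the call site. A minimal D sketch with hypothetical names, using current contract syntax:]

```d
// Encoding an "open before close" protocol as a checkable pre-condition.
struct LibFile
{
    private bool isOpen;

    void open() { isOpen = true; }

    // The contract rejects close() without a prior open().
    void close()
    in (isOpen, "protocol violation: close() before open()")
    {
        isOpen = false;
    }
}
```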


Re: Final by default?

2014-03-18 Thread Marco Leise
Am Mon, 17 Mar 2014 18:16:13 +
schrieb "Ola Fosheim Grøstad"
:

> On Monday, 17 March 2014 at 06:26:09 UTC, Marco Leise wrote:
> > About two years ago we had that discussion and my opinion
> > remains that there are too many "if"s and "assume"s for the
> > compiler.
> > It is not so simple to trace back where an object originated
> > from when you call a method on it.
> 
> It might not be easy, but in my view the language should be 
> designed to support future advanced compilers. If D gains 
> traction on the C++ level then the resources will become 
> available iff the language has the right constructs or affords 
> extensions that make advanced optimizations tractable. What is 
> possible today is less important...

Let's just say it will never detect all cases, so the "final"
keyword will still be around. Can you find any research papers
that indicate that such compiler technology can be implemented
with satisfactory results? Because it just sounds like a nice
idea on paper to me that only works when a lot of questions
have been answered with yes.

>   >It could be created through
> > the factory mechanism in Object using a runtime string or it
> 
> If it is random then you know that it is random.

These not entirely random objects from a class hierarchy could
well have frequently used final methods like a name or
position. I also mentioned objects passed as parameters into
delegates.

> If you want 
> speed you create separate paths for the dominant instance types. 
> Whole program optimization is guided by profiling data.

Another optimization, ok. The compiler still needs to know
that the instance type cannot be sub-classed.

> > There are plenty of situations where it is virtually
> > impossible to know the instance type statically.
> 
> But you might know that it is either A and B or C and D in most 
> cases. Then you inline those cases and create specialized 
> execution paths where profitable.

Thinking about it, it might not even be good to duplicate
code. It could easily lead to instruction cache misses.
Also this is way too much involvement from both the coder and
the compiler. At this point I'd ask for "final" if it wasn't
already there, if just to be sure the compiler gets it right.

> > Whole program analysis only works on ... well, whole programs.
> > If you split off a library or two it doesn't work. E.g. you
> > have your math stuff in a library and in your main program
> > you write:
> >
> >   Matrix m1, m2;
> >   m1.crossProduct(m2);
> >
> > Inside crossProduct (which is in the math lib), the compiler
> > could not statically verify if it is the Matrix class or a
> > sub-class.
> 
> In my view you should avoid not having source access, but even 
> then it is sufficient to know the effect of the function. E.g. 
> you can have a high level specification language asserting pre 
> and post conditions if you insist on closed source.

More shoulds and cans and ifs... :-(

> >> With a compiler switch or pragmas that tell the compiler what 
> >> can be dynamically subclassed the compiler can assume all 
> >> leaves in the compile time specialization hierarchies to be 
> >> final.
> >
> > Can you explain, how this would work and where it is used?
> 
> You specify what plugins are allowed to do and access at whatever 
> resolution is necessary to enable the optimizations your program 
> needs?
> 
> Ola.

I don't get the big picture. What does the compiler have to do
with plugins? And what do you mean by allowed to do and
access and how does it interact with virtuality of a method?
I'm confused.

-- 
Marco



Re: Final by default?

2014-03-18 Thread Rainer Schuetze



On 18.03.2014 02:15, Marco Leise wrote:

Am Mon, 17 Mar 2014 20:10:31 +0100
schrieb Rainer Schuetze :


In that specific case, why does this not work for you?:

nothrow extern(Windows) {
HANDLE GetCurrentProcess();
}



The attributes sometimes need to be selected conditionally, e.g. when
building a library for static or dynamic linkage (at least on windows
where not everything is exported by default). Right now, you don't have
an alternative to code duplication or heavy use of string mixins.


Can we write this? It just came to my mind:

enum attribs = "nothrow extern(C):";

{
 mixin(attribs);
 HANDLE GetCurrentProcess();
}



Interesting idea, though it doesn't seem to work:

enum attribs = "nothrow extern(C):";

extern(D) { // some dummy attribute to make it parsable
mixin(attribs);
int GetCurrentProcess();
}

int main() nothrow // Error: function 'D main' is nothrow yet may throw
{
	return GetCurrentProcess(); // Error: 'attr.GetCurrentProcess' is not nothrow

}

I guess this is by design: the mixin introduces declarations after the 
parser has already attached attributes to the non-mixin declarations.
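[Editor's note: a workaround that does parse is to mix in each declaration together with its attributes, at the cost of exactly the string-mixin heaviness mentioned earlier in the thread. Sketch:]

```d
enum attribs = "nothrow extern(C) ";

// The attributes are part of the mixed-in declaration itself,
// so the parser sees them at the same time as the function.
mixin(attribs ~ "int GetCurrentProcess();");
```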


Re: Final by default?

2014-03-17 Thread H. S. Teoh
On Tue, Mar 18, 2014 at 02:31:02AM +, deadalnix wrote:
> On Tuesday, 18 March 2014 at 01:25:34 UTC, Nick Sabalausky wrote:
> >While I personally would have been perfectly ok with changing to
> >final-by-default (I'm fine either way), I can't help wondering: Is
> >it really that big of a deal to sprinkle some "final"s into the
> >occasional third party library if you really need to?
> 
> It makes that much noise because this is a problem that everybody
> understands. Much bigger problems do not receive any attention.

http://en.wikipedia.org/wiki/Parkinson's_law_of_triviality

:-)


T

-- 
They say that "guns don't kill people, people kill people." Well I think the 
gun helps. If you just stood there and yelled BANG, I don't think you'd kill 
too many people. -- Eddie Izzard, Dressed to Kill


Re: Final by default?

2014-03-17 Thread deadalnix

On Tuesday, 18 March 2014 at 01:25:34 UTC, Nick Sabalausky wrote:
While I personally would have been perfectly ok with changing 
to final-by-default (I'm fine either way), I can't help 
wondering: Is it really that big of a deal to sprinkle some 
"final"s into the occasional third party library if you really 
need to?


It makes that much noise because this is a problem that everybody 
understands. Much bigger problems do not receive any attention.


Re: Final by default?

2014-03-17 Thread Nick Sabalausky

On 3/14/2014 6:20 AM, Regan Heath wrote:


Yes.. but doesn't help Manu or any other consumer concerned with speed
if the library producer neglected to do this.  This is the real issue,
right?  Not whether classes *can* be made final (trivial), but whether
they *actually will* be *correctly* marked final/virtual where they
ought to be.

Library producers range in experience and expertise and are "only human"
so we want the option which makes it more likely they will produce good
code.  In addition we want the option which means that if they get it
wrong, less will break if/when they want to correct it.



While I personally would have been perfectly ok with changing to 
final-by-default (I'm fine either way), I can't help wondering: Is it 
really that big of a deal to sprinkle some "final"s into the occasional 
third party library if you really need to?
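[Editor's note: for scale, the "sprinkling" amounts to one-word edits in the third-party source, e.g. (class and members hypothetical):]

```d
class Matrix
{
    // was: double at(size_t i, size_t j) { ... }
    final double at(size_t i, size_t j) { return data[i * cols + j]; }

    // deliberate extension points stay virtual (the default)
    void onResize() { }

    private double[] data;
    private size_t cols;
}
```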




Re: Final by default?

2014-03-17 Thread Marco Leise
Am Mon, 17 Mar 2014 20:10:31 +0100
schrieb Rainer Schuetze :

> > In that specific case, why does this not work for you?:
> >
> > nothrow extern(Windows) {
> >HANDLE GetCurrentProcess();
> > }
> >
> 
> The attributes sometimes need to be selected conditionally, e.g. when 
> building a library for static or dynamic linkage (at least on windows 
> where not everything is exported by default). Right now, you don't have 
> an alternative to code duplication or heavy use of string mixins.

Can we write this? It just came to my mind:

enum attribs = "nothrow extern(C):";

{
mixin(attribs);
HANDLE GetCurrentProcess();
}

-- 
Marco



Re: Final by default?

2014-03-17 Thread Walter Bright

On 3/17/2014 1:23 PM, Sean Kelly wrote:

I like the idea of a file per platform, and am undecided whether
I prefer this or the publishing solution.  This one sounds more
flexible, but it may be more difficult to produce installs that
contain only the files relevant to some particular platform.


At the worst, you could do a recursive delete on freebsd.d, etc. :-)

Note that even with all these platform files, it only adds one extra file to the 
compilation process: package.d, on top of the platform-specific file.


Re: Final by default?

2014-03-17 Thread Sean Kelly

On Monday, 17 March 2014 at 19:42:47 UTC, Walter Bright wrote:

On 3/17/2014 12:18 PM, Sean Kelly wrote:

So I'd import "core.sys.ucontext.package" if I didn't want a
system-specific module (which should be always)?


No,

import core.sys.ucontext;

Yes, ucontext is a directory. The package.d is a magic file 
name. This is the new package design that was incorporated last 
year, designed specifically to allow single imports to be 
replaced by packages without affecting user code.


Ah.  I suspected this might be the case and searched the language
docs before posting, but couldn't find any mention of this so I
thought I'd ask.

I like the idea of a file per platform, and am undecided whether
I prefer this or the publishing solution.  This one sounds more
flexible, but it may be more difficult to produce installs that
contain only the files relevant to some particular platform.


Re: Final by default?

2014-03-17 Thread Walter Bright

On 3/17/2014 11:46 AM, Johannes Pfau wrote:

With versions the user has no way to know if the library actually
supports PNG or not. He can only guess and the optional case can't be
implemented at all.


I don't know cairoD's design requirements or tradeoffs so I will speak 
generally.

I suggest solving this by raising the level of abstraction. At some point, in 
user code, there's got to be:


if (CAIRO_HAS_PNG_SUPPORT)
doThis();
else
doThat();


I suggest adding the following to the Cairo module:

void doSomething()
{
  if (CAIRO_HAS_PNG_SUPPORT)
doThis();
  else
doThat();

}

and the user code becomes:

doSomething();



Re: Final by default?

2014-03-17 Thread Walter Bright

On 3/17/2014 11:31 AM, Johannes Pfau wrote:

Clever, but potentially dangerous once cross-module inlining starts
working (The inlined code could be different from the code in the
library).


True, but you can use .di files to prevent that.



Re: Final by default?

2014-03-17 Thread Walter Bright

On 3/17/2014 12:18 PM, Sean Kelly wrote:

So I'd import "core.sys.ucontext.package" if I didn't want a
system-specific module (which should be always)?


No,

import core.sys.ucontext;

Yes, ucontext is a directory. The package.d is a magic file name. This is the 
new package design that was incorporated last year, designed specifically to 
allow single imports to be replaced by packages without affecting user code.




Why this approach and not publishing modules from somewhere into core.sys
on install?


The short answer is I happen to have a fondness for installs that are simple 
directory copies that do not modify/add/remove files. I think we are safely 
beyond the days that even a few hundred extra files installed on the disk are a 
negative.


Even if the non-used platform packages are simply deleted on install, this will 
not affect compilation. I think that's still better than modifying files. I've 
never trusted installers that edited files.


Besides that, there are other strong reasons for this approach:

1. New platforms can be added without affecting user code.

2. New platforms can be added without touching files for other platforms.

3. User follows simple no-brainer rule when looking at OS documentation:

#include 

  rewrites to:

import core.sys.ucontext;

4. Bugs in particular platform support files can be fixed without concern with 
breaking other platforms.


5. Changes in platform support will not touch files for other platforms, greatly 
simplifying the QA review process.


6. D installed on Platform X can be mounted as a remote drive and used to 
compile for Platform Y.
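[Editor's note: under this layout the magic package.d is just a forwarder, so `import core.sys.ucontext;` resolves to the right platform module everywhere. Contents sketched below; the exact platform module names are assumptions.]

```d
// core/sys/ucontext/package.d
module core.sys.ucontext;

version (linux)
    public import core.sys.ucontext.linux;
else version (FreeBSD)
    public import core.sys.ucontext.freebsd;
else version (OSX)
    public import core.sys.ucontext.osx;
else
    static assert(false, "ucontext not ported to this platform");
```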


Re: Final by default?

2014-03-17 Thread Sean Kelly

On Monday, 17 March 2014 at 10:25:26 UTC, Walter Bright wrote:

On 3/17/2014 12:53 AM, Iain Buclaw wrote:
If it's in any relation to your comments in the PR, my opinion 
is that they are irrelevant to the PR in question, but they *are* 
relevant in their own right and warrant a new bug/PR to be 
raised.



Here it is:

https://github.com/D-Programming-Language/druntime/pull/741

I think it shows it is very relevant to your PR, as in fact I 
included your files essentially verbatim, I just changed the 
package layout.


So I'd import "core.sys.ucontext.package" if I didn't want a
system-specific module (which should be always)?  Why this
approach and not publishing modules from somewhere into core.sys
on install?


Re: Final by default?

2014-03-17 Thread Sean Kelly

On Sunday, 16 March 2014 at 08:04:24 UTC, Iain Buclaw wrote:


Indeed other stuff needs to be done; it just so happens that 
thanks to sys.posix's bad design, splitting out other modules 
into ports will be more of a pain. But it shows how *no one* in 
that thread who responded, either against the first pull or by 
going off and hijacking the second, had a Scooby about the issue 
being addressed. They didn't even have the curiosity to give 
alternate suggestions.


Pretty sure I agreed with your motivation here, though I figured
I'd defer the design to someone who has experience actually
dealing with this many ports.


Re: Final by default?

2014-03-17 Thread Rainer Schuetze



On 17.03.2014 04:45, Marco Leise wrote:

Am Sun, 16 Mar 2014 12:28:24 +0100
schrieb Rainer Schuetze :

Are we still in the same discussion?


I guess we are drifting off. I was just considering some alternatives to 
"final(false)" which doesn't work.



The only thing I miss is that among the several ways to
express function signatures in D, some don't allow you to
specify all attributes. My memory is blurry, but I think it
was function literals that I used to write stubs for runtime
loaded library functionality.


[…] though I would prefer avoiding string mixins, maybe by providing a
function type as prototype:

alias export extern(Windows) void function() fnWINAPI;

@functionAttributesOf!fnWINAPI HANDLE GetCurrentProcess();


That is too pragmatic for my taste. Something that you define
in code should be usable as is. It is like taking the picture
of a red corner sofa just to describe the color to someone.

In that specific case, why does this not work for you?:

nothrow extern(Windows) {
   HANDLE GetCurrentProcess();
}



The attributes sometimes need to be selected conditionally, e.g. when 
building a library for static or dynamic linkage (at least on windows 
where not everything is exported by default). Right now, you don't have 
an alternative to code duplication or heavy use of string mixins.


Re: Final by default?

2014-03-17 Thread Johannes Pfau
Am Mon, 17 Mar 2014 03:49:24 -0700
schrieb Walter Bright :

> On 3/17/2014 2:32 AM, Iain Buclaw wrote:
> > On 17 March 2014 08:55, Walter Bright 
> > wrote:
> >> On 3/17/2014 1:35 AM, Iain Buclaw wrote:
> >>>
> >>> Right,
> >>
> >>
> >> If so, why do all modules need the version statement?
> >
> > That is a question to ask the historical maintainers of cairoD.
> > Having a look at it now.
> > It has a single config.d with enum bools to
> > turn on/off features.
> 
> If those enums are controlled by a version statement, then the
> version will have to be set for every source file that imports it.
> This is not the best design - the abstractions are way too leaky.

It's meant to be set at configure time, when the library
is being built, by a configure script or similar. They're not controlled
by version statements at all.

That's nothing special, it's config.h for D.


The reason all modules needed the version statement was that I didn't
use the stub-function trick. Cairo also has classes which can be
available or unavailable. Stubbing all these classes doesn't seem to be
a good solution. I also think it's bad API design if a user can call a
stub 'savePNG' function which just does nothing.

A perfect solution for cairoD needs to handle all these cases:
cairo has PNG support:   true                    false
user wants to use PNG:   optional  true  false   optional  true   false
result:                  ok        ok    ok      ok        error  ok


with config.d and static if:
---
enum bool CAIRO_HAS_PNG_SUPPORT = true; //true/false is inserted by
//configure script
static if(CAIRO_HAS_PNG_SUPPORT)
void savePNG();
---


library users can do this:
---
import cairo.config;
static if(!CAIRO_HAS_PNG_SUPPORT)
   assert(false, "Need PNG support");

static if(CAIRO_HAS_PNG_SUPPORT)
   //Offer option to optionally save file as PNG as well
---

if they don't check for CAIRO_HAS_PNG_SUPPORT and just use
savePNG then
(1) it'll work if PNG support is available
(2) the function is not defined if png support is not available

With versions the user has no way to know if the library actually
supports PNG or not. He can only guess and the optional case can't be
implemented at all.


Re: Final by default?

2014-03-17 Thread Johannes Pfau
Am Mon, 17 Mar 2014 08:35:45 +
schrieb Iain Buclaw :

> > If you need to -fversion=CAIRO_HAS_PNG_SUPPORT for every file that
> > imports it, you have completely misunderstood the design I
> > suggested.
> >
> > 1. Encapsulate the feature in a function.
> >
> > 2. Implement the function in module X. Module X is the ONLY module
> > that needs the version. In module X, define the function to do
> > nothing if version is false.
> >
> > 3. Nobody who imports X has to define the version.
> >
> > 4. Just call the function as if the feature always exists.
> >

Clever, but potentially dangerous once cross-module inlining starts
working (The inlined code could be different from the code in the
library).


Re: Final by default?

2014-03-17 Thread Ola Fosheim Grøstad

On Monday, 17 March 2014 at 06:26:09 UTC, Marco Leise wrote:

About two years ago we had that discussion and my opinion
remains that there are too many "if"s and "assume"s for the
compiler.
It is not so simple to trace back where an object originated
from when you call a method on it.


It might not be easy, but in my view the language should be 
designed to support future advanced compilers. If D gains 
traction on the C++ level then the resources will become 
available iff the language has the right constructs or affords 
extensions that make advanced optimizations tractable. What is 
possible today is less important...


 >It could be created through

the factory mechanism in Object using a runtime string or it


If it is random then you know that it is random. If you want 
speed you create separate paths for the dominant instance types. 
Whole program optimization is guided by profiling data.




There are plenty of situations where it is virtually
impossible to know the instance type statically.


But you might know that it is either A and B or C and D in most 
cases. Then you inline those cases and create specialized 
execution paths where profitable.



Whole program analysis only works on ... well, whole programs.
If you split off a library or two it doesn't work. E.g. you
have your math stuff in a library and in your main program
you write:

  Matrix m1, m2;
  m1.crossProduct(m2);

Inside crossProduct (which is in the math lib), the compiler
could not statically verify if it is the Matrix class or a
sub-class.


In my view you should avoid not having source access, but even 
then it is sufficient to know the effect of the function. E.g. 
you can have a high level specification language asserting pre 
and post conditions if you insist on closed source.


With a compiler switch or pragmas that tell the compiler what 
can be dynamically subclassed the compiler can assume all 
leaves in the compile time specialization hierarchies to be 
final.


Can you explain, how this would work and where it is used?


You specify what plugins are allowed to do and access at whatever 
resolution is necessary to enable the optimizations your program 
needs?


Ola.


Re: Final by default?

2014-03-17 Thread Michel Fortin

On 2014-03-17 01:20:37 +, Walter Bright  said:


On 3/15/2014 6:44 AM, Johannes Pfau wrote:

Then in cairo.d
version(CAIRO_HAS_PNG_SUPPORT)
{
extern(C) int cairo_save_png(char* x);
void savePNG(string x){cairo_save_png(toStringz(x));}
}


try adding:

   else
   {
void savePNG(string x) { }
   }

and then your users can just call savePNG without checking the version.


Adding a stub that does nothing, not even a runtime error, isn't a very 
good solution in my book. If this function call should fail, it should 
fail early and noisily.


So here's my suggestion: use a template function for the wrapper.

extern(C) int cairo_save_png(char* x);
void savePNG()(string x){cairo_save_png(toStringz(x));}

If you call it somewhere and cairo_save_png was not compiled into 
Cairo, you'll get a link-time error (undefined symbol 
cairo_save_png). If you don't call savePNG anywhere, there's no 
issue because savePNG was never instantiated.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Final by default?

2014-03-17 Thread Walter Bright

On 3/17/2014 2:32 AM, Iain Buclaw wrote:

On 17 March 2014 08:55, Walter Bright  wrote:

On 3/17/2014 1:35 AM, Iain Buclaw wrote:


Right,



If so, why do all modules need the version statement?


That is a question to ask the historical maintainers of cairoD.
Having a look at it now.
It has a single config.d with enum bools to
turn on/off features.


If those enums are controlled by a version statement, then the version will have 
to be set for every source file that imports it. This is not the best design - 
the abstractions are way too leaky.


Re: Final by default?

2014-03-17 Thread Walter Bright

On 3/17/2014 12:53 AM, Iain Buclaw wrote:

If it's in any relation to your comments in the PR, my opinion is that they are
irrelevant to the PR in question, but they *are* relevant in their own right and
warrant a new bug/PR to be raised.



Here it is:

https://github.com/D-Programming-Language/druntime/pull/741

I think it shows it is very relevant to your PR, as in fact I included your 
files essentially verbatim, I just changed the package layout.


Re: Final by default?

2014-03-17 Thread Iain Buclaw
On 17 March 2014 08:55, Walter Bright  wrote:
> On 3/17/2014 1:35 AM, Iain Buclaw wrote:
>>
>> Right,
>
>
> If so, why do all modules need the version statement?

That is a question to ask the historical maintainers of cairoD.
Having a look at it now.  It has a single config.d with enum bools to
turn on/off features.


Re: Final by default?

2014-03-17 Thread Walter Bright

On 3/17/2014 1:35 AM, Iain Buclaw wrote:

Right,


If so, why do all modules need the version statement?


Re: Final by default?

2014-03-17 Thread Iain Buclaw
On 17 March 2014 08:06, Walter Bright  wrote:
> On 3/17/2014 12:39 AM, Iain Buclaw wrote:
>>
>> If I recall, he was saying the problem was that you must pass
>> -fversion=CAIRO_HAS_PNG_SUPPORT to every file that imports it,
>> because you want PNG support, not stubs.
>
>
> Stub out the functions for PNG support. Then just call them, they will do
> nothing if PNG isn't supported. There is NO NEED for the callers to set
> version.
>
>
>
>> It's more an example where you need a build system in place for a simple
>> hello
>> world in cairoD if you don't want to be typing too much just to get your
>> test
>> program built. :)
>
>
> If you need to -fversion=CAIRO_HAS_PNG_SUPPORT for every file that imports
> it, you have completely misunderstood the design I suggested.
>
> 1. Encapsulate the feature in a function.
>
> 2. Implement the function in module X. Module X is the ONLY module that
> needs the version. In module X, define the function to do nothing if version
> is false.
>
> 3. Nobody who imports X has to define the version.
>
> 4. Just call the function as if the feature always exists.
>
> 5. If you find you still need a version in the importer, then you didn't
> fully encapsulate the feature. Go back to step 1.

Right, but going back full circle to the original comment:

"For example cairoD wraps the cairo C library. cairo can be compiled
without or with PNG support. Historically cairoD used
version(CAIRO_HAS_PNG_SUPPORT) for this."

Requires that cairoD have this encapsulation you suggest, but also
requires detection in some form of configure system that checks:

1) Is cairo installed? (Mandatory, fails without)
2) Does the installed version of cairo have PNG support (If true, set
build to compile a version of module X with
version=CAIRO_HAS_PNG_SUPPORT)


Re: Final by default?

2014-03-17 Thread Walter Bright

On 3/17/2014 12:39 AM, Iain Buclaw wrote:

If I recall, he was saying that you must pass -fversion=CAIRO_HAS_PNG_SUPPORT to
every file that imports it was the problem, because you want PNG support, not 
stubs.


Stub out the functions for PNG support. Then just call them, they will do 
nothing if PNG isn't supported. There is NO NEED for the callers to set version.




It's more an example where you need a build system in place for a simple hello
world in cairoD if you don't want to be typing too much just to get your test
program built. :)


If you need to -fversion=CAIRO_HAS_PNG_SUPPORT for every file that imports it, 
you have completely misunderstood the design I suggested.


1. Encapsulate the feature in a function.

2. Implement the function in module X. Module X is the ONLY module that needs 
the version. In module X, define the function to do nothing if version is false.


3. Nobody who imports X has to define the version.

4. Just call the function as if the feature always exists.

5. If you find you still need a version in the importer, then you didn't fully 
encapsulate the feature. Go back to step 1.
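[Editor's note: a minimal sketch of the pattern Walter describes, using the cairoD PNG case discussed elsewhere in this thread; module and symbol names are illustrative, not cairoD's actual layout.]

```d
// Module X: the ONLY module that needs the version identifier.
module cairo.png;

version (CAIRO_HAS_PNG_SUPPORT)
{
    extern (C) int cairo_save_png(const(char)* filename);

    void savePNG(string filename)
    {
        import std.string : toStringz;
        cairo_save_png(toStringz(filename));
    }
}
else
{
    // Stub: does nothing when cairo lacks PNG support, so callers
    // never have to check the version themselves.
    void savePNG(string filename) { }
}
```

Any importer then just writes `import cairo.png;` and calls `savePNG` unconditionally; only the build of this one module decides whether `-version=CAIRO_HAS_PNG_SUPPORT` is passed.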


Re: Final by default?

2014-03-17 Thread Iain Buclaw
On 17 Mar 2014 07:40, "Walter Bright"  wrote:
>
> On 3/17/2014 12:23 AM, Iain Buclaw wrote:
>>
>> No one really gave any feedback I could work with.
>
>
> I'll take the files you created and simply do it to show what it looks
like. I infer I've completely failed at explaining it otherwise.
>

If it's in any relation to your comments in the PR, my opinion is that they
are irrelevant to to PR in question, but they *are* relevant in their own
right and warrant a new bug/PR to be raised.


Re: Final by default?

2014-03-17 Thread Iain Buclaw
On 17 Mar 2014 01:25, "Walter Bright"  wrote:
>
> On 3/15/2014 6:44 AM, Johannes Pfau wrote:
>>
>> Then in cairo.d
>> version(CAIRO_HAS_PNG_SUPPORT)
>> {
>> extern(C) int cairo_save_png(char* x);
>> void savePNG(string x){cairo_save_png(toStringz(x))};
>> }
>
>
> try adding:
>
>   else
>   {
>    void savePNG(string x) { }
>   }
>
> and then your users can just call savePNG without checking the version.

If I recall, he was saying that you must pass
-fversion=CAIRO_HAS_PNG_SUPPORT to every file that imports it was the
problem, because you want PNG support, not stubs.

It's more an example where you need a build system in place for a simple
hello world in cairoD if you don't want to be typing too much just to get
your test program built. :)


Re: Final by default?

2014-03-17 Thread Walter Bright

On 3/17/2014 12:23 AM, Iain Buclaw wrote:

No one really gave any feedback I could work with.


I'll take the files you created and simply do it to show what it looks like. I 
infer I've completely failed at explaining it otherwise.




Re: Final by default?

2014-03-17 Thread Iain Buclaw
On 17 Mar 2014 00:05, "Walter Bright"  wrote:
>
> On 3/16/2014 1:04 AM, Iain Buclaw wrote:
>>
>> Indeed other stuff needs to be done, it just so happens that thanks to
>> sys.posix's bad design splitting out other modules into ports will be
more of a
>> pain.  But it shows how *no one* in that thread who responded either
against the
>> first pull, or went off and hijacked the second had a Scooby about the
issue
>> being addressed.  Didn't even have the curiosity to give alternate
suggestions.
>
>
> The 731 pull added more files under the old package layout.
>

I acknowledged that in the original PR comments. I said it wasn't ideal
before you commented. I had made a change after you commented to make
things a little more ideal.  But I came across problems as described in my
previous message above when I tried to do the same with more modules.

No one really gave any feedback I could work with.  But 731 is the idea
that people I've spoken to agree with (at least when they say separate
files they make no reference to packaging it), and no one has contended it
in 11666 either.  It just needs some direction when it comes to actually
doing it, and I feel the two showstoppers are the sys.linux/sys.windows
lark, and the absence of any configure in the build system.

> The 732 added more files to the config system, which I objected to.
>

Better than creating a new ports namespace.  But at least I toyed around the
idea. It seems sound to move things to packages and have

version (X86)
  public import x86stuff;
version (ARM)
  public import armstuff;

But it just doesn't scale beyond a few files, and I think I showed that
through the PR, and I'm satisfied that it didn't succeed and that became
the logical conclusion.

No one in the conversation, however, ever used the words ARM, MIPS,
PPC, X86... The strange fixation on the word POSIX had me scratching my
head all the way through.


Re: Final by default?

2014-03-16 Thread Marco Leise
On Mon, 17 Mar 2014 04:37:10 +0000, "Ola Fosheim Grøstad" wrote:

> Manu wrote:
> > Whole program optimisation can't do anything to improve the 
> > situation; it
> > is possible that DLL's may be loaded at runtime, so there's 
> > nothing the
> > optimiser can do, even at link time.
> 
> Not really true. If you know the instance type then you can 
> inline.
> 
> It is only when you call through the super class of the instance 
> that you have to explicitly call a function through a pointer.

About two years ago we had that discussion and my opinion
remains that there are too many "if"s and "assume"s for the
compiler.
It is not so simple to trace back where an object originated
from when you call a method on it. It could be created though
the factory mechanism in Object using a runtime string or it
could have been passed through a delegate like this:

  window.onClick(myObject);

There are plenty of situations where it is virtually
impossible to know the instance type statically.
Whole program analysis only works on ... well, whole programs.
If you split off a library or two it doesn't work. E.g. you
have your math stuff in a library and in your main program
you write:

  Matrix m1, m2;
  m1.crossProduct(m2);

Inside crossProduct (which is in the math lib), the compiler
could not statically verify if it is the Matrix class or a
sub-class.
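[Editor's note: a sketch of the separate-compilation case Marco describes; the types are hypothetical, and the point is only that the library is compiled without ever seeing the subclass.]

```d
// mathlib.d -- compiled on its own into a library
class Matrix
{
    // 'this' might be a Matrix or any subclass defined elsewhere,
    // so the virtual call below cannot be devirtualized while
    // compiling only this library.
    Matrix crossProduct(Matrix other)
    {
        return this.allocateLike();
    }

    Matrix allocateLike() { return new Matrix; }
}

// app.d -- a subclass the library build never sees
class SparseMatrix : Matrix
{
    override Matrix allocateLike() { return new SparseMatrix; }
}
```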

> With a compiler switch or pragmas that tell the compiler what can 
> be dynamically subclassed the compiler can assume all leaves in 
> the compile time specialization hierarchies to be final.

Can you explain, how this would work and where it is used?
  -nosubclasses=math.matrix.Matrix
would be the same as using this in the project, no?:
  final class FinalMatrix : Matrix {}

-- 
Marco



Re: Final by default?

2014-03-16 Thread Ola Fosheim Grøstad

Manu wrote:
Whole program optimisation can't do anything to improve the 
situation; it
is possible that DLL's may be loaded at runtime, so there's 
nothing the

optimiser can do, even at link time.


Not really true. If you know the instance type then you can 
inline.


It is only when you call through the super class of the instance 
that you have to explicitly call a function through a pointer.


With a compiler switch or pragmas that tell the compiler what can 
be dynamically subclassed the compiler can assume all leaves in 
the compile time specialization hierarchies to be final.


Re: Final by default?

2014-03-16 Thread Marco Leise
On Sun, 16 Mar 2014 12:28:24 +0100, Rainer Schuetze wrote:

Are we still in the same discussion?
The only thing I miss is that among the several ways to
express function signatures in D, some don't allow you to
specify all attributes. My memory is blurry, but I think it
was function literals that I used to write stubs for runtime
loaded library functionality.

> […] though I would prefer avoiding string mixins, maybe by providing a 
> function type as prototype:
> 
> alias export extern(Windows) void function() fnWINAPI;
> 
> @functionAttributesOf!fnWINAPI HANDLE GetCurrentProcess();

That is too pragmatic for my taste. Something that you define
in code should be usable as is. It is like taking the picture
of a red corner sofa just to describe the color to someone.

In that specific case, why does this not work for you?:

nothrow extern(Windows) {
  HANDLE GetCurrentProcess();
}

-- 
Marco



Re: Final by default?

2014-03-16 Thread Walter Bright

On 3/15/2014 6:44 AM, Johannes Pfau wrote:

Then in cairo.d
version(CAIRO_HAS_PNG_SUPPORT)
{
extern(C) int cairo_save_png(char* x);
void savePNG(string x){cairo_save_png(toStringz(x))};
}


try adding:

  else
  {
   void savePNG(string x) { }
  }

and then your users can just call savePNG without checking the version.


Re: Final by default?

2014-03-16 Thread Manu
On 17 March 2014 01:25,
<7d89a89974b0ff40.invalid@internationalized.invalid> wrote:

> On Sunday, 16 March 2014 at 13:23:33 UTC, Araq wrote:
>
>> I note that you are not able to counter my argument and so you escape to
>> the meta level. But don't worry, I won't reply anymore.
>>
>
> Discussing OO without a context is kind of pointless since there are
> multiple schools in the OO arena. The two main ones being:
>
> 1. The original OO analysis & design set forth by the people behind
> Simula67. Which basically is about representing abstractions (subsets) of
> the real world in the computer.
>
> 2. The ADT approach which you find in C++ std libraries & co.
>
> These two perspectives are largely orthogonal…
>
> That said, I think it to be odd to not use the term "virtual" since it has
> a long history (Simula has the "virtual" keyword). It would look like a
> case of being different for the sake of being different.
>
> Then again, I don't really mind virtual by default if whole program
> optimization is still a future goal for D.
>

Whole program optimisation can't do anything to improve the situation; it
is possible that DLL's may be loaded at runtime, so there's nothing the
optimiser can do, even at link time.


Re: Final by default?

2014-03-16 Thread Walter Bright

On 3/16/2014 1:04 AM, Iain Buclaw wrote:

Indeed other stuff needs to be done, it just so happens that thanks to
sys.posix's bad design splitting out other modules into ports will be more a
pain.  But it shows how *no one* in that thread who responded either against the
first pull, or went off and hijacked the second had a Scooby about the issue
being addressed.  Didn't even have the curiosity to give alternate suggestions.


The 731 pull added more files under the old package layout.

The 732 added more files to the config system, which I objected to.

I believe my comments were apropos and suggested a better package structure than 
the one in the PR's.


Re: Final by default?

2014-03-16 Thread Andrej Mitrovic
On 3/13/14, Dmitry Olshansky  wrote:
> This:
>
> final class A {
>  int i;
>  void f() { ++i; }
>  void g() { ++i; }
>
> }
> pragma(msg, __traits(isFinalFunction, A.g));
> pragma(msg, __traits(isFinalFunction, A.f));

Speaking of final classes, I ran into this snippet a few weeks ago
in src/gc/gc.d:

-
// This just makes Mutex final to de-virtualize member function calls.
final class GCMutex : Mutex {}
-

But does this actually happen?
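[Editor's note: whether the de-virtualization actually happens is a back-end decision, but the front end at least sees the class as final, which can be checked directly. A sketch:]

```d
import core.sync.mutex : Mutex;

// Making the subclass final means no further overrides are possible,
// so calls through a GCMutex-typed reference are candidates for
// direct (non-virtual) dispatch. Calls through a Mutex-typed
// reference remain virtual.
final class GCMutex : Mutex {}

// Prints "true" at compile time.
pragma(msg, __traits(isFinalClass, GCMutex));
```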


Re: Final by default?

2014-03-16 Thread Timon Gehr

On 03/14/2014 04:31 PM, ponce wrote:

On Friday, 14 March 2014 at 15:17:08 UTC, Andrei Alexandrescu wrote:

Allowing computed qualifiers/attributes would be a very elegant and
general approach, and plays beautifully into the strength of D and our
current investment in Boolean compile-time predicates.



Bonus points if inout can be replaced that way :)




It cannot.


Re: Final by default?

2014-03-16 Thread bearophile

Andrei Alexandrescu:

I literally can't say anything more on the subject than I've 
already had. I've said it all. I could, of course, reiterate my 
considerations, and that would have other people reiterate 
theirs and their opinion on how my pros don't hold that much 
weight and how my cons hold a lot more weight than they should.


But was the decision taken on the basis of experimental data? I 
mean the kind of data Daniel Murphy has shown here:


http://forum.dlang.org/thread/lfqoan$5qq$1...@digitalmars.com?page=27#post-lg147n:242u14:241:40digitalmars.com

Bye,
bearophile


Re: Final by default?

2014-03-16 Thread Andrei Alexandrescu

On 3/16/14, 11:46 AM, Joseph Rushton Wakeling wrote:

Actually, rather the opposite -- I know you understand the arguments
very well, and therefore I have much higher expectations in terms of how
detailed an explanation I think you should be prepared to offer to
justify your decisions ;-)


I literally can't say anything more on the subject than I've already 
had. I've said it all. I could, of course, reiterate my considerations, 
and that would have other people reiterate theirs and their opinion on 
how my pros don't hold that much weight and how my cons hold a lot more 
weight than they should.



One of the problems with this particular issue is that probably as far
as most people are concerned, a discussion was had, people pitched
arguments one way and the other, some positions became entrenched, other
people changed their minds, and finally, a decision was made -- by you
and Walter -- in favour of final by default.


For the record I never decided that way, which explains my surprise when 
I saw the pull request that adds "virtual". Upon discussing with Walter 
it became apparent that he made that decision against his own judgment. 
We are presently happy and confident we did the right thing.


Please do not reply to this. Let sleeping dogs tell the truth.


Thanks,

Andrei



Re: Final by default? [Decided: No]

2014-03-16 Thread Joseph Rushton Wakeling

On 13/03/14 16:22, Andrei Alexandrescu wrote:

On 3/13/14, 2:15 AM, John Colvin wrote:

In light of this and as a nod to Manu's expertise and judgment on the
matter:

We should make his reasoning on the importance of deliberately choosing
virtual vs private in API-public classes prominent in documentation,
wikis, books and other learning materials.

It may not be an important enough to justify a large language break, but
if Manu says it is genuinely a problem in his industry, we should do our
best to alleviate as much as is reasonable.


I think that's a great idea.


Related suggestion.

I know that Walter really doesn't like compiler warnings, and to a large degree 
I understand his dislike.


However, in this case I think we could do much to alleviate the negative effects 
of virtual-by-default by making it a compiler warning for a class method to be 
without an explicit indicator of whether it's to be final or virtual.


That warning would have to be tolerant of e.g. the whole class itself being 
given a "final" or "virtual" marker, or of tags like "final:" or "virtual:" 
which capture multiple methods.


The warning could include an indication to the user: "If you're not certain 
which is preferable, pick final."


The idea would be that it be a strongly enforced D style condition to be 
absolutely explicit about your intentions final- and virtual-wise.  (If a 
compiler warning is considered too strong, then the task could be given to a 
lint tool.)
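[Editor's note: the explicit style such a warning would enforce already half-exists; D has `final` today, while the `virtual` keyword was the subject of this thread. A sketch of a fully annotated class:]

```d
class Widget
{
    private int _w, _h;

    // Overridable: with virtual-by-default this is implicit; the
    // proposed warning would ask for the intent to be spelled out.
    void draw() { }

    // Explicitly non-virtual accessors: "if you're not certain
    // which is preferable, pick final".
    final int width() { return _w; }
    final int height() { return _h; }
}
```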


Re: Final by default?

2014-03-16 Thread Joseph Rushton Wakeling

On 13/03/14 16:57, Andrei Alexandrescu wrote:

At a level it's clear it's not a matter of right or wrong but instead a judgment
call, right? Successful languages go with either default.


Sorry for the delay in responding here.

Yes, it is a judgement call, and what's more, I think that probably just about 
all of us here recognize that you and Walter need to make such judgement calls 
sometimes, to mediate between the different requirements of different users.  In 
this case, what really bothers me is less that I disagree with the judgement 
call (that happens), more that it was a decision reached without any kind of 
community engagement before finalizing it.


This isn't meant as some kind of misguided call for democracy or voting or 
"respecting the community's wishes" -- the point is simply that every decision 
made without prior discussion and debate carries a social cost in terms of 
people's ability to make reliable plans for future development.


This is particularly true when the decision (like this) is to reverse what most 
people seem to have understood was an accepted and agreed-on development goal.



The breakage was given as an example. We would have decided the same without
that happening.


Understood.  I hope it's clear why this was not apparent from the original 
announcement.



More than sure a well-executed deprecation process helps although it's not
perfect. We're not encumbered by exhausting confidentiality requirements etc.


Thanks for clarifying that.  I'm sorry if my question about this seemed 
excessively paranoid, but it really wasn't clear from the original announcement 
how much of the motivation for the decision arose out of client pressure.  I 
felt it was better to ask rather than to be uncertain.


Regarding deprecation processes: I do take your point that no matter how well 
planned, and no matter how obvious the deprecation path may seem, any managed 
change has the potential to cause unexpected breakage _simply because things are 
being changed_.


On the other hand, one thing that's apparent is that while substantial parts of 
the language are now stable and settled, there _are_ still going to have to be 
breaking changes in future -- both to fix outright bugs, and in areas where 
judgement calls over the value of the change go the other way.  So, I think 
there needs to be some better communication of the principles by which that 
threshold is determined.  (Obviously people will still argue over whether that 
threshold has been reached or not, but if the general framework for deciding yes 
or no is well understood then it should defuse 90% of the arguments.)



There's some underlying assumption here that if we "really" understood the
arguments we'd be convinced. Speaking for myself, I can say I understand the
arguments very well. I don't know how to acknowledge them better than I've
already done.


Actually, rather the opposite -- I know you understand the arguments very well, 
and therefore I have much higher expectations in terms of how detailed an 
explanation I think you should be prepared to offer to justify your decisions ;-)


In this case part of the problem is that we got the decision first and then the 
more detailed responses have come in the ensuing discussion.  In that context I 
think it's pretty important to respond to questions about this or that bit of 
evidence by seeing them as attempts to understand your train of thought, rather 
than seeing them as assumptions that you don't understand something.


That said, I think it's worth noting that in this discussion we have had 
multiple examples of genuinely different understandings -- not just different 
priorities -- about how certain language features may be used or how it's 
desirable to use them.  So it's natural that people question whether all the 
relevant evidence was really considered.



Thanks for being candid about this. I have difficulty, however, picturing how to
do a decision point better. At some point a decision will be made. It's a
judgment call, that in some reasonable people's opinion, is wrong, and in some
other reasonable people's opinion, is right. For such, we're well past
arguments' time - no amount of arguing would convince. I don't see how to give
better warning about essentially a Boolean decision point that precludes
pursuing the converse design path.


I think that it's a mistake to see discussion as only being about pitching 
arguments or changing people's minds -- discussion is also necessary and useful 
to set the stage for a decision, to build understanding about why a decision is 
necessary and what are the factors that are informing it.


One of the problems with this particular issue is that probably as far as most 
people are concerned, a discussion was had, people pitched arguments one way and 
the other, some positions became entrenched, other people changed their minds, 
and finally, a decision was made -- by you and Walter -- in favour of final by 
default.

Re: Final by default?

2014-03-16 Thread bearophile

Daniel Murphy:

If anyone wants to try this out on their code, the patch I used 
was to add this:


if (ad && !ad->isInterfaceDeclaration() && isVirtual() && !isFinal() &&
    !isOverride() && !(storage_class & STCvirtual) &&
    !(ad->storage_class & STCfinal))
{
    warning(loc, "virtual required");
}

Around line 623 in func.c (exact line doesn't matter, just 
stick it in with the rest of the checks)


I also had to disable the "static member functions cannot be 
virtual" error.


In the meantime has someone else measured experimentally the 
amount of breakage a "final by default" change causes in significant D 
programs?


Bye,
bearophile


Re: Final by default?

2014-03-16 Thread Rainer Schuetze



On 16.03.2014 15:24, Manu wrote:

On 16 March 2014 21:28, Rainer Schuetze  wrote:


alias export extern(Windows) void function() fnWINAPI;

@functionAttributesOf!fnWINAPI HANDLE GetCurrentProcess();


I frequently find myself needing something like this. What's wrong with
aliasing attributes directly?
DGC/LDC offer their own internal attributes, but you can't make use of
them and remain portable without an ability to do something like the
#define hack in C.


Unfortunately, it doesn't fit very well with the grammar to allow 
something like


alias @property const final nothrow @safe pure propertyGet;

(or some special syntax) and then parse

propertyGet UserType fun();

because it's ambiguous without semantic knowledge of the identifiers. 
It becomes unambiguous with UDA syntax, though:


@propertyGet UserType fun();

I suspect propertyGet would have to describe some new "entity" that 
needs to be able to be passed around, aliased, used in CTFE, etc.
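[Editor's note: for what it's worth, the string-mixin route Rainer would prefer to avoid does already work in today's D, along the lines of the earlier `WINAPI` example; the declaration below is hypothetical.]

```d
enum WINAPI = "export extern(Windows)";

// Expands to: export extern(Windows) void* GetCurrentProcess();
// (void* stands in for HANDLE to keep the sketch self-contained.)
mixin(WINAPI ~ " void* GetCurrentProcess();");
```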


Re: Final by default?

2014-03-16 Thread Ola Fosheim Grøstad

On Sunday, 16 March 2014 at 13:23:33 UTC, Araq wrote:
I note that you are not able to counter my argument and so you 
escape to the meta level. But don't worry, I won't reply 
anymore.


Discussing OO without a context is kind of pointless since there 
are multiple schools in the OO arena. The two main ones being:


1. The original OO analysis & design set forth by the people 
behind Simula67. Which basically is about representing 
abstractions (subsets) of the real world in the computer.


2. The ADT approach which you find in C++ std libraries & co.

These two perspectives are largely orthogonal…

That said, I think it to be odd to not use the term "virtual" 
since it has a long history (Simula has the "virtual" keyword). 
It would look like a case of being different for the sake of 
being different.


Then again, I don't really mind virtual by default if whole 
program optimization is still a future goal for D.


Ola.


Re: Final by default?

2014-03-16 Thread Manu
On 16 March 2014 21:28, Rainer Schuetze  wrote:

>
>
> On 14.03.2014 23:25, deadalnix wrote:
>
>> On Friday, 14 March 2014 at 22:06:13 UTC, Daniel Kozák wrote:
>>
>>> First I think have something like
>>>
>>> @disable(final,nothrow) would be the best way, but than I think about it
>>> and realize that final(false) is much more better.
>>>
>>>
>> If I may, final!false . We have a syntax for compile time
>> parameter. Let's be consistent for once.
>>
>> The concept is solid and is the way to go. DIP anyone ?
>>
>
> To me, it's not a decision "final or virtual", but "final, virtual or
> override", so a boolean doesn't work. final!false could infer "virtual or
> override", but then it would lose the explicitness of introducing or
> overriding virtual.
>
> I'm in favor of adding the keyword "virtual", it is known by many from
> other languages with the identical meaning. Using anything else feels like
> having to invent something different because of being opposed to it at the
> start.
>
> Adding compile time evaluation of function attributes is still worth
> considering, but I'd like a more generic approach, perhaps something along
> a mixin functionality:
>
> enum WINAPI = "export extern(Windows)";
>
> @functionAttributes!WINAPI HANDLE GetCurrentProcess();
>
> though I would prefer avoiding string mixins, maybe by providing a
> function type as prototype:
>
> alias export extern(Windows) void function() fnWINAPI;
>
> @functionAttributesOf!fnWINAPI HANDLE GetCurrentProcess();
>

I frequently find myself needing something like this. What's wrong with
aliasing attributes directly?
DGC/LDC offer their own internal attributes, but you can't make use of them
and remain portable without an ability to do something like the #define
hack in C.


Re: Final by default?

2014-03-16 Thread Araq

On Saturday, 15 March 2014 at 22:50:27 UTC, deadalnix wrote:

On Saturday, 15 March 2014 at 20:15:16 UTC, Araq wrote:
However, it is clear that it comes at a cost. I don't doubt 
an OO language pushing this to the extreme would see concepts 
that confuse everybody emerging, pretty much like monads 
confuse the hell out of everybody in functional languages.


Looks like explicit continuation passing style to me. So "OO 
done right" means "Human compiler at work"...


Sounds like you are enjoying criticizing OOP, but so far, you 
didn't come up with anything interesting. Please bring 
something to the table or cut the noise.


I note that you are not able to counter my argument and so you 
escape to the meta level. But don't worry, I won't reply anymore.


Re: Final by default?

2014-03-16 Thread Rainer Schuetze



On 14.03.2014 23:25, deadalnix wrote:

On Friday, 14 March 2014 at 22:06:13 UTC, Daniel Kozák wrote:

First I think have something like

@disable(final,nothrow) would be the best way, but than I think about it
and realize that final(false) is much more better.



If I may, final!false. We have a syntax for compile-time
parameters. Let's be consistent for once.

The concept is solid and is the way to go. DIP anyone ?


To me, it's not a decision "final or virtual", but "final, virtual or 
override", so a boolean doesn't work. final!false could infer "virtual 
or override", but then it would lose the explicitness of introducing or 
overriding virtual.


I'm in favor of adding the keyword "virtual", it is known by many from 
other languages with the identical meaning. Using anything else feels 
like having to invent something different because of being opposed to it 
at the start.


Adding compile time evaluation of function attributes is still worth 
considering, but I'd like a more generic approach, perhaps something 
along a mixin functionality:


enum WINAPI = "export extern(Windows)";

@functionAttributes!WINAPI HANDLE GetCurrentProcess();

though I would prefer avoiding string mixins, maybe by providing a 
function type as prototype:


alias export extern(Windows) void function() fnWINAPI;

@functionAttributesOf!fnWINAPI HANDLE GetCurrentProcess();


Re: Final by default?

2014-03-16 Thread Jacob Carlborg

On 2014-03-16 00:11, Marco Leise wrote:


What about the way Microsoft went with the Win32 API?
- struct fields are exposed
- layouts may change only by appending fields to them
- they are always passed by pointer
- the actual size is stored in the first data field

I think this is worth a look. Since all these function calls
don't come for free. (Imagine a photo management software
that has to check various properties of 20_000 images.)


The modern runtime for Objective-C has a non-fragile ABI for its 
classes. Instead of accessing a field at a compile-time-known offset, 
an offset calculated at runtime/load time is used. This allows fields 
to be freely reorganized without breaking subclasses.


--
/Jacob Carlborg
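[Editor's note: the indirection Jacob describes can be sketched as follows; names are hypothetical, and in the real Objective-C runtime the offsets are filled in when the class is loaded.]

```d
// The offset lives in a variable the runtime fills in at load time,
// not a constant baked in at compile time, so the superclass can add
// or reorder fields without recompiling this code.
__gshared size_t offsetOf_count;

int readCount(void* obj)
{
    // Field access goes through the indirection instead of a
    // compile-time constant offset.
    return *cast(int*)(cast(ubyte*)obj + offsetOf_count);
}
```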


Re: Final by default?

2014-03-16 Thread Iain Buclaw
On 15 Mar 2014 13:45, "Johannes Pfau"  wrote:
>
> On Fri, 14 Mar 2014 19:29:27 +0100, Paulo Pinto wrote:
>
> > That is why the best approach is to have one module per platform
> > specific code, with a common interface defined in .di file.
>
> Which is basically what Iain proposed for druntime. Then the thread got
> hijacked and talked about three different issues in the end. Walter
> answered to the other issues, but not to Iain's original request, Andrei
> agreed with Walter, the discussion ended, pull request closed and
> nothing will happen ;-)
>
>
> https://github.com/D-Programming-Language/druntime/pull/731
> https://github.com/D-Programming-Language/druntime/pull/732
>
> I think we'll have to revisit this at some point, but right now there's
> other stuff to be done...

Indeed other stuff needs to be done, it just so happens that thanks to
sys.posix's bad design splitting out other modules into ports will be more of
a pain.  But it shows how *no one* in that thread who responded either
against the first pull, or went off and hijacked the second had a Scooby
about the issue being addressed.  Didn't even have the curiosity to give
alternate suggestions.


Re: Final by default?

2014-03-15 Thread Walter Bright

On 3/15/2014 11:33 AM, Michel Fortin wrote:

And it also breaks binary compatibility.


Inlining also breaks binary compatibility. If you want optimizations, and be 
able to change things, you've got to give up binary compatibility. If you want 
maximum flexibility, such as changing classes completely, use interfaces with 
virtual dispatch.


Maximum flexibility, maximum optimization, and binary compatibility, all while 
not putting any thought into the API design, isn't going to happen no matter 
what the defaults are.


Re: Final by default?

2014-03-15 Thread Ola Fosheim Grøstad

I think this is worth a look. Since all these function calls
don't come for free. (Imagine a photo management software
that has to check various properties of 20_000 images.)


It comes for free if you enforce inlining and recompile for major 
revisions of libs.


Re: Final by default?

2014-03-15 Thread Marco Leise
On Sat, 15 Mar 2014 21:25:51 +, "Kapps" wrote:

> On Saturday, 15 March 2014 at 18:18:28 UTC, Walter Bright wrote:
> > On 3/15/2014 2:21 AM, Paulo Pinto wrote:
> >> In any language with properties, accessors also allow for:
> >>
> >> - lazy initialization
> >>
> >> - changing the underlying data representation without 
> >> requiring client code to
> >> be rewritten
> >>
> >> - implement access optimizations if the data is too costly to 
> >> keep around
> >
> > You can always add a property function later without changing 
> > user code.
> 
> In many situations you can't. As was already mentioned, ++ and 
> taking the address of it were two such situations.
> 
> ABI compatibility is also a large problem (less so in D for now, 
> but it will be in the future). Structs change, positions change, 
> data types change. If users use your struct directly, accessing 
> its fields, then once you make even a minor change, their code 
> will break in unpredictable ways. This was a huge annoyance for 
> me when trying to deal with libjpeg. There are multiple versions 
> and these versions have a different layout for the struct. If the 
> wrong library is linked, the layout is different. Since it's a D 
> binding to a C file, you can't just use the C header which you 
> know to be up to date on your system, instead you have to make 
> your own binding and hope for the best. They try 
> to work around this by making you pass in a version string when 
> creating the libjpeg structs and failing if this string does not 
> exactly match what the loaded version. This creates a further 
> mess. It's a large problem, and there's talk of trying to 
> eventually deprecate public field access in libjpeg in favour of 
> accessors like libpng has done (though libpng still uses the 
> annoying passing in version since they did not use accessors from 
> the start and some fields remained public). Accessors are 
> absolutely required if you intend to make a public library and 
> exposed fields should be avoided completely.

What about the way Microsoft went with the Win32 API?
- struct fields are exposed
- layouts may change only by appending fields to them
- they are always passed by pointer
- the actual size is stored in the first data field
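A minimal D sketch of that size-prefixed pattern (struct and field names are hypothetical, not from the Win32 API): the caller records the struct size it was compiled against in the first field, and the library only writes fields that fit within that size.

```d
// Size-prefixed struct, Win32 style: new fields may only be appended.
struct ImageInfo
{
    uint size;   // caller sets this to the ImageInfo.sizeof it was built with
    int  width;
    int  height;
    // v2 of the library appended this field; v1 callers report a smaller size
    int  dpi;
}

// Library side: only touch fields the caller's struct version actually has.
void queryImage(ImageInfo* info)
{
    info.width  = 640;
    info.height = 480;
    if (info.size >= ImageInfo.sizeof)  // caller knows about the v2 field
        info.dpi = 96;
}
```

A caller compiled against the v1 header passes a smaller `size`, so the library skips `dpi` and never writes past the end of the older struct.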

I think this is worth a look. Since all these function calls
don't come for free. (Imagine a photo management software
that has to check various properties of 20_000 images.)

-- 
Marco



Re: Final by default?

2014-03-15 Thread Anonymouse

On Saturday, 15 March 2014 at 18:18:28 UTC, Walter Bright wrote:

On 3/15/2014 2:21 AM, Paulo Pinto wrote:

In any language with properties, accessors also allow for:

- lazy initialization

- changing the underlying data representation without 
requiring client code to

be rewritten

- implement access optimizations if the data is too costly to 
keep around


You can always add a property function later without changing 
user code.


Cough getopt.
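The getopt jab presumably refers to std.getopt taking the addresses of the variables it fills in: a public field that clients pass by address can never be silently replaced with a property function. A sketch of the coupling (the `Config` struct and option name are hypothetical):

```d
import std.getopt;

struct Config
{
    int verbosity;  // a plain field: clients can take its address
}

void main(string[] args)
{
    Config cfg;
    // std.getopt writes the parsed value through the pointer it is given.
    getopt(args, "verbosity", &cfg.verbosity);
    // If `verbosity` later became a property function, &cfg.verbosity
    // would no longer yield an int*, and this call would not compile.
}
```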


Re: Final by default?

2014-03-15 Thread deadalnix

On Saturday, 15 March 2014 at 20:15:16 UTC, Araq wrote:
However, it is clear that this comes at a cost. I don't doubt 
an OO language pushing this to the extreme would see concepts 
emerge that confuse everybody, pretty much like monads 
confuse the hell out of everybody in functional languages.


Looks like explicit continuation passing style to me. So "OO 
done right" means "Human compiler at work"...


Sounds like you enjoy criticizing OOP, but so far you haven't 
brought up anything interesting. Please bring something to the 
table or cut the noise.


Re: Final by default?

2014-03-15 Thread Walter Bright

On 3/15/2014 2:22 AM, Jacob Carlborg wrote:

On Friday, 14 March 2014 at 18:06:47 UTC, Iain Buclaw wrote:


else version (OSX) {
version (PPC)
   iSomeWackyFunction();
else
   SomeWackyFunction();   // In hope there's no other Apple hardware.


There's also ARM, ARM64, x86 32bit and PPC64.


Right, there should not be else clauses with the word "hope" in them. They 
should be "static assert(0);" or else be portable.




Re: Final by default?

2014-03-15 Thread Kapps

On Saturday, 15 March 2014 at 18:18:28 UTC, Walter Bright wrote:

On 3/15/2014 2:21 AM, Paulo Pinto wrote:

In any language with properties, accessors also allow for:

- lazy initialization

- changing the underlying data representation without 
requiring client code to

be rewritten

- implement access optimizations if the data is too costly to 
keep around


You can always add a property function later without changing 
user code.


In many situations you can't. As was already mentioned, ++ and 
taking the address of it were two such situations.


ABI compatibility is also a large problem (less so in D for now, 
but it will be in the future). Structs change, positions change, 
data types change. If users use your struct directly, accessing 
its fields, then once you make even a minor change, their code 
will break in unpredictable ways. This was a huge annoyance for 
me when trying to deal with libjpeg. There are multiple versions 
and these versions have a different layout for the struct. If the 
wrong library is linked, the layout is different. Since it's a D 
binding to a C file, you can't just use the C header which you 
know to be up to date on your system, instead you have to make 
your own binding and hope for the best. They try to work around this by 
making you pass in a version string when creating the libjpeg structs 
and failing if this string does not exactly match the loaded version. 
This creates a further 
mess. It's a large problem, and there's talk of trying to 
eventually deprecate public field access in libjpeg in favour of 
accessors like libpng has done (though libpng still uses the 
annoying passing in version since they did not use accessors from 
the start and some fields remained public). Accessors are 
absolutely required if you intend to make a public library and 
exposed fields should be avoided completely.


Re: Final by default?

2014-03-15 Thread Araq
However, it is clear that this comes at a cost. I don't doubt an 
OO language pushing this to the extreme would see concepts emerge 
that confuse everybody, pretty much like monads confuse the 
hell out of everybody in functional languages.


Looks like explicit continuation passing style to me. So "OO done 
right" means "Human compiler at work"...


Re: Final by default?

2014-03-15 Thread Michel Fortin

On 2014-03-15 18:18:27 +, Walter Bright  said:


On 3/15/2014 2:21 AM, Paulo Pinto wrote:

In any language with properties, accessors also allow for:

- lazy initialization

- changing the underlying data representation without requiring client code to
be rewritten

- implement access optimizations if the data is too costly to keep around


You can always add a property function later without changing user code.


In some alternate universe where clients restrict themselves to 
documented uses of APIs yes. Not if the client decides he want to use 
++ on the variable, or take its address, or pass it by ref to another 
function (perhaps without even noticing).


And it also breaks binary compatibility.

If you control the whole code base it's reasonable to say you won't 
bother with properties until they're actually needed for some reason. 
It's easy enough to refactor your things whenever you decide to make 
the change.


But if you're developing a library for other to use though, it's better 
to be restrictive from the start... if you care about not breaking your 
client's code base that is. It basically comes to the same reasons as 
to why final-by-default is better than virtual-by-default: it's better 
to start with a restrictive API and then expand the API as needed than 
being stuck with an API that restricts your implementation choices 
later on.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Final by default?

2014-03-15 Thread Walter Bright

On 3/15/2014 2:21 AM, Paulo Pinto wrote:

In any language with properties, accessors also allow for:

- lazy initialization

- changing the underlying data representation without requiring client code to
be rewritten

- implement access optimizations if the data is too costly to keep around


You can always add a property function later without changing user code.



Re: Final by default?

2014-03-15 Thread deadalnix

On Saturday, 15 March 2014 at 08:32:32 UTC, Paulo Pinto wrote:

On 15.03.2014 06:29, deadalnix wrote:

On Saturday, 15 March 2014 at 04:03:20 UTC, Manu wrote:
That said, function inlining is perhaps the single most 
important API

level
performance detail, and especially true in OO code (which 
advocates

accessors/properties).


OOP says ask, don't tell. Accessors, especially getters, are 
very anti-OOP. The Haskell of OOP would prevent you from 
returning anything from a function.


What?!?

Looking at Smalltalk, SELF, CLOS and Eiffel, I fail to see what 
you mean, given that they are the granddaddies of OOP and all 
have getters/properties.




And Lisp is the granddaddy of functional programming, and does not 
have most of the features of modern functional languages.


OOP is about asking the object to do something, not getting info 
from an object and acting on it yourself. In pseudocode, you'd 
prefer


object.DoTheJob()

to

auto infos = object.getInfos();
doTheJob(infos);

If you push that principle to the extreme, you must not return 
anything. Obviously, most principles pushed to the extreme become 
impractical, but there you go.


Now, what about a computation that gives me back a result? 
Like the following:


auto f = new File("file");
writeln(f.getContent());

Sounds cool, right? But you are telling, not asking. You could do:

interface FileProcessor {
void processContent(string);
}

class WritelnFileProcessor : FileProcessor {
void processContent(string s) { writeln(s); }
}

auto f = new File("file");
f.process(new WritelnFileProcessor());

This has several advantages that I don't have much time to expose 
in detail. For instance, the FileProcessor could have more 
methods, so it can express a richer interface than what could be 
returned. If some checks need to be done (for security reasons, 
for instance) you can ensure within the File class that they are 
done properly. It makes things easier to test. You can completely 
change the way the File class works internally without disturbing 
any of the code that uses it, etc.


However, it is clear that this comes at a cost. I don't doubt an 
OO language pushing this to the extreme would see concepts emerge 
that confuse everybody, pretty much like monads confuse the 
hell out of everybody in functional languages.


Re: Final by default?

2014-03-15 Thread Francesco Cattoglio
I don't think that the virtual-by-default is the most important 
aspect of the language, so I can live with it (even if I strongly 
dislike it). What actually scares me is this:


On Saturday, 15 March 2014 at 11:59:41 UTC, Marco Leise wrote:


One message that this sends out is that a proposal, even with
almost complete lack of opposition, an in-depth discussion,
long term benefits and being in line with the language's goals
can be turned down right when it is ready to be merged.


A decision was taken, backed by several people, work had begun 
(yebblies' pull requests) and now the proposal gets reverted out 
of the blue.
Somehow makes me wonder how the future of D is decided upon. To 
me, it really feels like it's made by last-second decisions. I 
think it can really make a bad impression to newcomers.



I neither see a small vocal faction intimidating (wow!) the
leadership, nor do I see a dictate of the majority.
Agree. But I think that the "vocal faction intimidating" was just 
a horrible choice of words, with no harmful intent. Just like the 
"But we nearly lost a major client over it." used at the beginning 
of the thread.


Re: Final by default?

2014-03-15 Thread Daniel Murphy

"bearophile"  wrote in message news:yzhcfevgjdzjtzghx...@forum.dlang.org...

Andrei has decided to not introduce "final by default" because he thinks 
it's too large a breaking change. So your real world data is an 
essential piece of information for making an informed decision on this 
topic (so essential that I think deciding before having such data is a 
void decision). So are you willing to perform your analysis on some other 
real D code? Perhaps dub?


If anyone wants to try this out on their code, the patch I used was to add 
this:


if (ad && !ad->isInterfaceDeclaration() && isVirtual() && !isFinal() &&
    !isOverride() && !(storage_class & STCvirtual) &&
    !(ad->storage_class & STCfinal))
{
    warning(loc, "virtual required");
}

Around line 623 in func.c (exact line doesn't matter, just stick it in with 
the rest of the checks)


I also had to disable the "static member functions cannot be virtual" error. 



Re: Final by default?

2014-03-15 Thread Johannes Pfau
On Fri, 14 Mar 2014 10:53:59 -0700, Walter Bright wrote:

> On 3/14/2014 10:26 AM, Johannes Pfau wrote:
> > I use manifest constants instead of version identifiers as well. If
> > a version identifier affects the public API/ABI of a library, then
> > the library and all code using the library always have to be
> > compiled with the same version switches(inlining and templates make
> > this an even bigger problem). This is not only inconvenient, it's
> > also easy to think of examples where the problem will only show up
> > as crashes at runtime. The only reason why that's not an issue in
> > phobos/druntime is that we only use compiler defined versions
> > there, but user defined versions are almost unusable.
> 
> Use this method:
> 
> 
>  
>  import wackyfunctionality;
>  ...
>  WackyFunction();
>  
>  module wackyfunctionality;
> 
>  void WackyFunction() {
>  version (Linux)
>SomeWackyFunction();
>  else version (OSX)
>  SomeWackyFunction();
>  else
>  ... workaround ...
>  }
>  

I meant really 'custom versions', not OS-related ones. For example cairoD
wraps the cairo C library. cairo can be compiled with or without PNG
support. Historically cairoD used version(CAIRO_HAS_PNG_SUPPORT) for
this.

Then in cairo.d:
version(CAIRO_HAS_PNG_SUPPORT)
{
    extern(C) int cairo_save_png(const(char)* x);
    void savePNG(string x) { cairo_save_png(toStringz(x)); }
}

now I have to use version=CAIRO_HAS_PNG_SUPPORT when compiling cairoD,
but every user of cairoD also has to use version=CAIRO_HAS_PNG_SUPPORT
or the compiler will hide the savePNG function. There are also
examples where not using the same version= switches causes runtime
crashes.

Compiler defined version(linux, OSX) are explicitly not affected by
this issue as they are always defined by the compiler for all modules.


Re: Final by default?

2014-03-15 Thread Johannes Pfau
On Fri, 14 Mar 2014 19:29:27 +0100, Paulo Pinto wrote:

> That is why the best approach is to have one module per platform 
> specific code, with a common interface defined in .di file.

Which is basically what Iain proposed for druntime. Then the thread got
hijacked and talked about three different issues in the end. Walter
answered to the other issues, but not to Iain's original request, Andrei
agreed with Walter, the discussion ended, pull request closed and
nothing will happen ;-)


https://github.com/D-Programming-Language/druntime/pull/731
https://github.com/D-Programming-Language/druntime/pull/732

I think we'll have to revisit this at some point, but right now there's
other stuff to be done...


Re: Final by default?

2014-03-15 Thread Steven Schveighoffer

On Sat, 15 Mar 2014 04:08:51 -0400, Daniel Kozák  wrote:


deadalnix wrote on Fri, 14. 03. 2014 at 22:25 +:

On Friday, 14 March 2014 at 22:06:13 UTC, Daniel Kozák wrote:
> At first I thought having something like
>
> @disable(final, nothrow) would be the best way, but then I thought
> about it
> and realized that final(false) is much better.
>

If I may, final!false. We have a syntax for compile-time
parameters. Let's be consistent for once.

The concept is solid and is the way to go. DIP anyone ?


final!true
final!(true)
final(!true) oops :)


If final!true is valid, final(true) and final(!true) will not be.

-Steve


Re: Final by default?

2014-03-15 Thread Marco Leise
On Fri, 14 Mar 2014 13:48:40 +1000, Manu wrote:

> I feel like this was aimed at me, and I also feel it's unfair.
> 
> If you recall back to the first threads on the topic, I was the absolute
> minority, almost a lone voice. Practically nobody agreed; in fact, there was
> quite aggressive objection across the board, until much discussion about it
> had passed.
> I was amazed to see in this thread how many have changed their minds from
> past discussions. In fact, my impression from this thread is that the change
> now has almost unanimous support, and by my recollection, many(/most?) of
> those people were initially against.
> 
> To say this is a small vocal faction is unfair (unless you mean me
> personally?). A whole bunch of people who were originally against, but were
> convinced by argument and evidence is not a 'faction' with an agenda to
> intimidate their will upon leadership.
> I suspect what seems strange to the participants in this thread, that
> despite what eventually appears to have concluded in almost unanimous
> agreement (especially surprising considering the starting point years
> back!), is the abrupt refusal.
> That's Walter's prerogative I guess... if he feels that strongly about it,
> then I'm not going to force the issue any more.
> 
> I am surprised though, considering the level of support for the change
> expressed in this thread, which came as a surprise to me; it's the highest
> it's ever been... much greater than in prior discussions on the topic.
> You always say forum participation is not a fair representation of the
> community, but when the forum representation is near unanimous, you have to
> begin to be able to make some assumptions about the wider community's
> opinion.

Me too, I got the impression, that once the library authoring
issue was on the table, suddenly everyone could relate to
final-by-default and the majority of the forum community found
it to be a reasonable change.

For once in a decade it seemed that one of the endless
discussions reached a consensus and a plan of action: issue a
warning, then deprecate. I was seriously relieved to see an
indication of a working decision making process initiated by
the forum community. After all digitalmars.D is for discussing
the language.

Then this process comes to a sudden halt, because Walter gets
negative feedback about some unrelated breaking change and
Andrei considers final-by-default good, but too much of a
breaking change for what it's worth. Period. After such a long
community driven discussion about it.

One message that this sends out is that a proposal, even with
almost complete lack of opposition, an in-depth discussion,
long term benefits and being in line with the language's goals
can be turned down right when it is ready to be merged.

The other message is that the community as per this forum is
not representative of the target audience, so our decisions
may not be in the best interest of D. Surprisingly though,
most commercial adopters that ARE here, except for one, have no
problem with this announced language change for the better.

I neither see a small vocal faction intimidating (wow!) the
leadership, nor do I see a dictate of the majority. At least
2 people mentioned different reasons for final-by-default that
convinced most of us that it positively changes D. ...without
threats like "we won't use D any more if you don't agree".

Paying customers including Facebook can have influence
on what is worked on, but D has become a community effort and
freezing the language for the sake of creating a stable target
for them while core language features are still to be
finalized (i.e. shared, allocation) is not convincing.

-- 
Marco



Re: Final by default?

2014-03-15 Thread bearophile
So are you willing to perform your analysis on some other real 
D code? Perhaps dub?


Or vibe?

Bye,
bearophile


Re: Final by default?

2014-03-15 Thread bearophile

Daniel Murphy:

This is nonsense.  I tried out the warning on some of my 
projects, and they required ZERO changes - because it's a 
warning!


Phobos requires 37 "virtual:"s to be added - or just change the 
makefile to use '-wi' instead of '-w'.  Druntime needed 25.


Andrei has decided to not introduce "final by default" because he 
thinks it's too large a breaking change. So your real world data 
is an essential piece of information for making an informed 
decision on this topic (so essential that I think deciding 
before having such data is a void decision). So are you willing to 
perform your analysis on some other real D code? Perhaps dub?


Bye,
bearophile


Re: Final by default?

2014-03-15 Thread develop32

On Saturday, 15 March 2014 at 08:50:00 UTC, Daniel Murphy wrote:
This is nonsense.  I tried out the warning on some of my 
projects, and they required ZERO changes - because it's a 
warning!


Phobos requires 37 "virtual:"s to be added - or just change the 
makefile to use '-wi' instead of '-w'.  Druntime needed 25.


We don't even need to follow the usual 6-months per stage 
deprecation - We could leave it as a warning for 2 years if we 
wanted!


Grepping for class declarations and sticking in "virtual:" is 
as trivial as a fix can possibly be.


When the virtual keyword was introduced in GitHub master I 
immediately went to add a bunch of "virtual"s in my projects... 
only to find myself done after a few minutes.


I see some irony in the fact that if classes are made 
final-by-default, removing all the unnecessary "final" attributes 
would be an order of magnitude more work.


Re: Final by default?

2014-03-15 Thread Daniel Murphy
"Manu"  wrote in message 
news:mailman.133.1394879414.23258.digitalmar...@puremagic.com...


Phobos is a standard library, surely it's unacceptable for phobos calls to 
break the optimiser?
Consider std.xml for instance; 100% certain to appear in hot data 
crunching loops.
What can be done about this? It can't be fixed, because that's a breaking 
change. Shall we
document that phobos classes should be avoided or factored outside of high 
frequency code, and

hope people read it?


I think std.xml should be avoided for other reasons... 



Re: Final by default?

2014-03-15 Thread Manu
On 15 March 2014 18:50, Daniel Murphy  wrote:

> "Walter Bright"  wrote in message news:lg0vtc$2q94$1...@digitalmars.com...
>
>
>  I find it peculiar to desire a 'final accessor'. After all,
>>
>>  class C {
>>  int x;
>>  final int getX() { return x; } <= what the heck is this function
>> for?
>>  }
>>
>
> Yeah, it's stupid, but people do it all over the place anyway.


Religiously. They're taught to do this in books and at university,
deliberately.
Seriously though, there are often reasons to put an interface in the way;
you can change the implementation without affecting the interface at some
later time, data can be compressed or stored in an internal format that is
optimal for internal usage, or some useful properties can be implied rather
than stored explicitly. Programmers (reasonably) expect they are inlined.

For instance, framesPerSecond() and timeDelta() are the reciprocal of
each other; only one needs to be stored.
I also have very many instances of classes with accessors to provide
user-facing access to packed internal data, which may require some minor
bit-twiddling and casting to access. I don't think this is unusual, any
programmer is likely to do this. empty(), length(), front(), etc are
classic examples where it might not just return a variable directly.
Operator overloads... >_<
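A hypothetical sketch of that reciprocal-accessor case (names are illustrative, not from any real codebase): only one value is stored, the other is derived on the fly, and both are declared final so a compiler can inline them.

```d
// Only timeDelta is stored; framesPerSecond is computed on demand.
class Timer
{
    private float _timeDelta = 1.0f / 60.0f;  // seconds per frame

    final @property float timeDelta() const { return _timeDelta; }

    // The reciprocal accessor: nothing extra is stored, and being
    // final (non-virtual) it is expected to inline in hot loops.
    final @property float framesPerSecond() const { return 1.0f / _timeDelta; }
}
```

With virtual-by-default, dropping the `final` turns each call into an indirect call the optimiser generally cannot inline across a class boundary.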

 It's a major breaking change. It'll break nearly every D program out there
>> that uses classes.
>>
>
> This is nonsense.  I tried out the warning on some of my projects, and
> they required ZERO changes - because it's a warning!
>
> Phobos requires 37 "virtual:"s to be added - or just change the makefile
> to use '-wi' instead of '-w'.  Druntime needed 25.
>
> We don't even need to follow the usual 6-months per stage deprecation - We
> could leave it as a warning for 2 years if we wanted!
>
> Grepping for class declarations and sticking in "virtual:" is as trivial
> as a fix can possibly be.
>

My game that I'm hacking on at the moment has only 2 affected classes. The
entire game is OO. Most virtuals are introduced by interfaces.
So with that in mind, it's not even necessarily true that projects that use
classes will be affected by this if they make use of interfaces (I
certainly did at Remedy, exclusively).

Phobos is a standard library, surely it's unacceptable for phobos calls to
break the optimiser? Consider std.xml for instance; 100% certain to appear
in hot data crunching loops.
What can be done about this? It can't be fixed, because that's a breaking
change. Shall we document that phobos classes should be avoided or factored
outside of high frequency code, and hope people read it?


Re: Final by default?

2014-03-15 Thread Iain Buclaw
On 15 Mar 2014 09:44, "Jacob Carlborg"  wrote:
>
> On Friday, 14 March 2014 at 18:06:47 UTC, Iain Buclaw wrote:
>
>> else version (OSX) {
>> version (PPC)
>>iSomeWackyFunction();
>> else
>>SomeWackyFunction();   // In hope there's no other Apple
hardware.
>
>
> There's also ARM, ARM64, x86 32bit and PPC64.
>
> --
> /Jacob Carlborg

Wonderful - so the OSX bindings in druntime are pretty much in a dire state
for someone who wishes to port to a non-X86 architecture?  I know the BSD
and Solaris code needs fixing up and testing.


Re: Final by default?

2014-03-15 Thread Jacob Carlborg

On Friday, 14 March 2014 at 18:06:47 UTC, Iain Buclaw wrote:


else version (OSX) {
version (PPC)
   iSomeWackyFunction();
else
   SomeWackyFunction();   // In hope there's no other 
Apple hardware.


There's also ARM, ARM64, x86 32bit and PPC64.

--
/Jacob Carlborg


Re: Final by default?

2014-03-15 Thread Paulo Pinto

On 15.03.2014 08:36, Walter Bright wrote:

On 3/14/2014 9:02 PM, Manu wrote:

That said, function inlining is perhaps the single most important API
level
performance detail, and especially true in OO code (which advocates
accessors/properties).


I find it peculiar to desire a 'final accessor'. After all,

 class C {
 int x;
 final int getX() { return x; } <= what the heck is this
function for?
 }

The only reason to have an accessor function is so it can be virtual.


I don't agree.

In any language with properties, accessors also allow for:

- lazy initialization

- changing the underlying data representation without requiring client 
code to be rewritten


- implement access optimizations if the data is too costly to keep around


--
Paulo



Re: Final by default?

2014-03-15 Thread monarch_dodra

On Saturday, 15 March 2014 at 07:36:12 UTC, Walter Bright wrote:

On 3/14/2014 9:02 PM, Manu wrote:
That said, function inlining is perhaps the single most 
important API level
performance detail, and especially true in OO code (which 
advocates

accessors/properties).


I find it peculiar to desire a 'final accessor'. After all,

class C {
int x;
final int getX() { return x; } <= what the heck is this 
function for?

}

The only reason to have an accessor function is so it can be 
virtual.


Um... read-only attributes? Have you forgotten the discussions 
about @property?


This makes sense to me:
class C
{
    private int _x;

    /// Gets x
    final int x() @property { return _x; }
}


Re: Final by default?

2014-03-15 Thread Daniel Murphy

"Walter Bright"  wrote in message news:lg0vtc$2q94$1...@digitalmars.com...


I find it peculiar to desire a 'final accessor'. After all,

 class C {
 int x;
 final int getX() { return x; } <= what the heck is this function 
for?

 }


Yeah, it's stupid, but people do it all over the place anyway.

It's a major breaking change. It'll break nearly every D program out there 
that uses classes.


This is nonsense.  I tried out the warning on some of my projects, and they 
required ZERO changes - because it's a warning!


Phobos requires 37 "virtual:"s to be added - or just change the makefile to 
use '-wi' instead of '-w'.  Druntime needed 25.


We don't even need to follow the usual 6-months per stage deprecation - We 
could leave it as a warning for 2 years if we wanted!


Grepping for class declarations and sticking in "virtual:" is as trivial as 
a fix can possibly be. 
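For concreteness, the grep-and-insert fix would look like this (the `virtual` keyword here is the proposed one that was in GitHub master at the time, not part of released D):

```d
// Today, methods are virtual by default:
class Widget
{
    void draw() { }  // overridable
}

// Under final-by-default, restoring today's behaviour is one
// label per class, mirroring how `final:` works now:
class LegacyWidget
{
virtual:             // proposed label: everything below is virtual again
    void draw() { }
}
```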



Re: Final by default?

2014-03-15 Thread Paulo Pinto

On 15.03.2014 06:29, deadalnix wrote:

On Saturday, 15 March 2014 at 04:03:20 UTC, Manu wrote:

That said, function inlining is perhaps the single most important API
level
performance detail, and especially true in OO code (which advocates
accessors/properties).


OOP says ask, don't tell. Accessors, especially getters, are very 
anti-OOP. The Haskell of OOP would prevent you from returning anything 
from a function.


What?!?

Looking at Smalltalk, SELF, CLOS and Eiffel, I fail to see what you 
mean, given that they are the granddaddies of OOP and all have 
getters/properties.


--
Paulo


Re: Final by default?

2014-03-15 Thread Ola Fosheim Grøstad

On Saturday, 15 March 2014 at 07:36:12 UTC, Walter Bright wrote:
The only reason to have an accessor function is so it can be 
virtual.


No:

1. To have more readable code: using x, y, z, w to access an 
array vector

2. Encapsulation/interfacing to differing implementations.

Seems to me that the final by default transition can be automated 
by source translation.


Please don't send D further into the land of obscurity by adding 
!final...

At some point someone will create D--...



Re: Final by default?

2014-03-15 Thread Daniel Kozák
deadalnix wrote on Fri, 14. 03. 2014 at 22:25 +:
> On Friday, 14 March 2014 at 22:06:13 UTC, Daniel Kozák wrote:
> > At first I thought having something like
> >
> > @disable(final, nothrow) would be the best way, but then I thought
> > about it
> > and realized that final(false) is much better.
> >
> 
> If I may, final!false. We have a syntax for compile-time
> parameters. Let's be consistent for once.
> 
> The concept is solid and is the way to go. DIP anyone ?

final!true
final!(true)
final(!true) oops :)





Re: Final by default?

2014-03-15 Thread Walter Bright

On 3/14/2014 9:02 PM, Manu wrote:

That said, function inlining is perhaps the single most important API level
performance detail, and especially true in OO code (which advocates
accessors/properties).


I find it peculiar to desire a 'final accessor'. After all,

class C {
int x;
final int getX() { return x; } <= what the heck is this function for?
}

The only reason to have an accessor function is so it can be virtual. If 
programmers are going to thoughtlessly follow rules like this, they might as 
well follow the rule:


class C { final:



Compile some release code without -inline and see what the performance
difference is,


I'm well aware of the advantages of inline.



The length you're willing to go to to resist a relatively minor breaking change,


It's a major breaking change. It'll break nearly every D program out there that 
uses classes.




I understand that you clearly don't believe in this change, and I grant that is
your prerogative, but I really don't get why... I just can't see it when
considering the balance.


You may not agree with me, but understanding my position shouldn't be too hard. 
I've expounded at length on it.




Can you honestly tell me that you truly believe that library authors will
consider, as a matter of common sense, the implications of virtual (the silent
default state) in their api?


I thought I was clear in that I believe it is a pipe dream to believe that code 
with nary a thought given to performance is going to be performant.


Besides, they don't have to consider anything or have any sense. Just blindly do 
this:


 class C { final:



Or you don't consider that to be something worth worrying about, ie, you truly
believe that I'm making a big deal out of nothing; that I will never actually,
in practise, encounter trivial accessors and properties that can't inline
appearing in my hot loops, or other related issues?


I think we're just going around in circles. I've discussed all this before, in 
this thread.




Re: Final by default?

2014-03-15 Thread Araq

On Saturday, 15 March 2014 at 05:29:04 UTC, deadalnix wrote:

On Saturday, 15 March 2014 at 04:03:20 UTC, Manu wrote:
That said, function inlining is perhaps the single most 
important API level
performance detail, and especially true in OO code (which 
advocates

accessors/properties).


OOP says ask, don't tell. Accessors, especially getters, are 
very anti-OOP. The Haskell of OOP would prevent you from 
returning anything from a function.


Yeah, as I said, OO encourages bad design like no other 
paradigm...


Re: Final by default?

2014-03-14 Thread deadalnix

On Saturday, 15 March 2014 at 04:03:20 UTC, Manu wrote:
That said, function inlining is perhaps the single most 
important API level
performance detail, and especially true in OO code (which 
advocates

accessors/properties).


OOP says ask, don't tell. Accessors, especially getters, are very 
anti-OOP. The Haskell of OOP would prevent you from returning 
anything from a function.


Re: Final by default?

2014-03-14 Thread Manu
On 15 March 2014 10:49, Walter Bright  wrote:

> On 3/14/2014 5:06 AM, Manu wrote:
>
>> In my experience, API layout is the sort of performance detail that
>> library
>> authors are much more likely to carefully consider and get right. It's
>> higher
>> level, easier to understand, and affects all architectures equally.
>> It's also something that they teach in uni. People write books about that
>> sort
>> of thing.
>> Not to say there aren't terrible API designs out there, but D doesn't make
>> terrible-api-design-by-default a feature.
>> Stuff like virtual is the sort of thing that only gets addressed when it
>> is
>> reported by a user that cares, and library authors are terribly reluctant
>> to
>> implement a breaking change because some user reported it. I know this
>> from
>> experience.
>> I can say with confidence, poor API design has caused me less problems
>> than
>> virtual in my career.
>>
>> Can you honestly tell me that you truly believe that library authors will
>> consider, as a matter of common sense, the implications of virtual (the
>> silent
>> default state) in their api?
>> Do you truly believe that I'm making a big deal out of nothing; that I
>> will
>> never actually, in practice, encounter trivial accessors and properties
>> that
>> can't inline appearing in my hot loops, or other related issues?
>>
>> Inline-ability is a very strong API level performance influence,
>> especially in a
>> language with properties.
>>
>> Most programmers are not low-level experts, they don't know how to protect
>> themselves from this sort of thing. Honestly, almost everyone will just
>> stick
>> with the default.
>>
>
>
> I find it incongruous to take the position that programmers know all about
> layout for performance and nothing about function indirection. It leads me
> to believe that these programmers never once tested their code for
> performance.
>

They probably didn't. Library authors often don't if it's not a library
specifically intended for aggressive realtime use. Like most programmers,
especially PC programmers, their opinion is often "that's the optimiser's
job".

That said, function inlining is perhaps the single most important API level
performance detail, and especially true in OO code (which advocates
accessors/properties).
Function calls scattered throughout your function serialise your code; they
inhibit the optimiser from pipelining properly in many cases, ie,
rescheduling across a function call is often dangerous, and compilers will
always take a conservative approach. Locals may need to be saved to the
stack across trivial function calls. I'm certain it will make a big
difference in many instances.

Compile some release code without -inline and see what the performance
difference is, that is probably a fairly realistic measure of the penalty
to expect in OO-heavy code.
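As an illustration of the kind of accessor at stake (a sketch of my own, not code from this thread; names are hypothetical):

```d
// Sketch: a trivial accessor called in a hot loop. In D, class methods
// are virtual by default, so width() would be an indirect call the
// optimiser generally can't inline; marking it final restores
// inline-ability.
class Image
{
    private int w;
    final int width() { return w; } // drop `final` to get the virtual default
}

int totalWidth(Image[] images)
{
    int sum;
    foreach (img; images)
        sum += img.width(); // with `final`, this can inline to a plain load
    return sum;
}
```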


I know what I'm doing, and even I, when I don't test things, always make
> some innocuous mistake that eviscerates performance. I find it very hard to
> believe that final-by-default will fix untested code.
>

I don't find it hard to believe at all; in fact, I find it very likely that
there will be a significant benefit to client code that the library author
will probably have never given a moments thought to. It's usually
considered fairly reasonable for programmers to trust the optimiser to at
least do a decent job. virtual-by-default inhibits many of the most
important optimisations; inlining, rescheduling, pipelining, and also
increases pressure on the stack and caches.

And that's the whole thing here... I just don't see this as obscure or
unlikely at all. If I did, I wouldn't care anywhere near as much as I do.
All code has loops somewhere.


And the library APIs still are fixable. Consider:
>
> class C {
> void foo() { ... }
> }
>
> and foo() needs to be final for performance, but we don't want to break
> existing users:
>
> class C {
> void foo() { foo2(); }
> final void foo2() { ... }
> }
>

The lengths you're willing to go to in order to resist a relatively minor
breaking change, one with an unusually smooth migration path that virtually
everyone agrees with, are surprising to me.
Daniel Murphy revealed that it only affects 13% of classes in DMD's OO
heavy code. That is in line with my past predictions; most classes aren't
base classes, so most classes aren't actually affected.

I understand that you clearly don't believe in this change, and I grant
that is your prerogative, but I really don't get why... I just can't see it

Re: Final by default?

2014-03-14 Thread Walter Bright

On 3/14/2014 5:06 AM, Manu wrote:

In my experience, API layout is the sort of performance detail that library
authors are much more likely to carefully consider and get right. It's higher
level, easier to understand, and affects all architectures equally.
It's also something that they teach in uni. People write books about that sort
of thing.
Not to say there aren't terrible API designs out there, but D doesn't make
terrible-api-design-by-default a feature.
Stuff like virtual is the sort of thing that only gets addressed when it is
reported by a user that cares, and library authors are terribly reluctant to
implement a breaking change because some user reported it. I know this from
experience.
I can say with confidence, poor API design has caused me less problems than
virtual in my career.

Can you honestly tell me that you truly believe that library authors will
consider, as a matter of common sense, the implications of virtual (the silent
default state) in their api?
Do you truly believe that I'm making a big deal out of nothing; that I will
never actually, in practice, encounter trivial accessors and properties that
can't inline appearing in my hot loops, or other related issues?

Inline-ability is a very strong API level performance influence, especially in a
language with properties.

Most programmers are not low-level experts, they don't know how to protect
themselves from this sort of thing. Honestly, almost everyone will just stick
with the default.



I find it incongruous to take the position that programmers know all about 
layout for performance and nothing about function indirection. It leads me to 
believe that these programmers never once tested their code for performance.


I know what I'm doing, and even I, when I don't test things, always make some 
innocuous mistake that eviscerates performance. I find it very hard to believe 
that final-by-default will fix untested code.


And the library APIs still are fixable. Consider:

class C {
void foo() { ... }
}

and foo() needs to be final for performance, but we don't want to break existing 
users:


class C {
void foo() { foo2(); }
final void foo2() { ... }
}



Re: Final by default?

2014-03-14 Thread deadalnix

On Saturday, 15 March 2014 at 00:00:48 UTC, Michel Fortin wrote:
On 2014-03-14 20:51:08 +, "monarch_dodra" 
 said:


I hate code "commented out" in an "#if 0" with a passion. 
Just... Why?


Better this:


#if 0
...
#else
...
#endif


than this:


/*
...
/*/
...
//*/


/+
...
/*+//*//+*/
...
//+/


Re: Final by default?

2014-03-14 Thread Michel Fortin

On 2014-03-14 20:51:08 +, "monarch_dodra"  said:


I hate code "commented out" in an "#if 0" with a passion. Just... Why?


Better this:


#if 0
...
#else
...
#endif


than this:


/*
...
/*/
...
//*/


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Final by default?

2014-03-14 Thread Jonathan M Davis
On Friday, March 14, 2014 08:17:08 Andrei Alexandrescu wrote:
> On 3/14/14, 4:37 AM, Daniel Murphy wrote:
> > "Walter Bright" wrote in message news:lfu74a$8cr$1...@digitalmars.com...
> > 
> >> > No, it doesn't, because it is not usable if C introduces any virtual
> >> > methods.
> >> 
> >> That's what the !final storage class is for.
> > 
> > My mistake, I forgot you'd said you were in favor of this. Being able
> > to 'escape' final certainly gets us most of the way there.
> > 
> > !final is really rather hideous though.
> 
> A few possibilities discussed around here:
> 
> !final
> ~final
> final(false)
> @disable final
> 
> I've had an epiphany literally a few seconds ago that "final(false)" has
> the advantage of being generalizable to "final(bool)" taking any
> CTFE-able Boolean.
> 
> On occasion I needed a computed qualifier (I think there's code in
> Phobos like that) and the only way I could do it was through ugly code
> duplication or odd mixin-generated code. Allowing computed
> qualifiers/attributes would be a very elegant and general approach, and
> plays beautifully into the strength of D and our current investment in
> Boolean compile-time predicates.

That sounds like a good approach and could definitely reduce the number of 
static ifs in some generic code (though as Daniel points out, I'm not sure how 
common that really is).

- Jonathan M Davis


Re: Final by default?

2014-03-14 Thread deadalnix

On Friday, 14 March 2014 at 22:06:13 UTC, Daniel Kozák wrote:

At first I thought something like @disable(final, nothrow) would be 
the best way, but then I thought about it and realized that 
final(false) is much better.



If I may: final!false. We already have a syntax for compile-time
parameters. Let's be consistent for once.

The concept is solid and is the way to go. DIP anyone ?


Re: Final by default?

2014-03-14 Thread deadalnix

On Friday, 14 March 2014 at 20:51:09 UTC, monarch_dodra wrote:

On Friday, 14 March 2014 at 20:45:35 UTC, Paulo Pinto wrote:

Am 14.03.2014 19:50, schrieb H. S. Teoh:

The real code had been moved elsewhere, you see, and
whoever moved the code "kindly" decided to leave the old copy 
in the
original file inside an #if 0 block, "for reference", 
whatever that
means. Then silly old me came along expecting the code to 
still be in
the old place, and sure enough it was -- except that 
unbeknownst to me

it's now inside an #if 0 block. Gah!)


T




Ouch! I feel your pain.

This type of experience is what led me to fight #ifdef 
spaghetti code.


--
Paulo


I hate code "commented out" in an "#if 0" with a passion. 
Just... Why?


Because one does not know how to use git.

PS: This sarcastic note also exists with mercurial flavor.


Re: Final by default?

2014-03-14 Thread Daniel Kozák
Andrei Alexandrescu píše v Pá 14. 03. 2014 v 08:17 -0700:
> On 3/14/14, 4:37 AM, Daniel Murphy wrote:
> > "Walter Bright"  wrote in message news:lfu74a$8cr$1...@digitalmars.com...
> >
> >> > No, it doesn't, because it is not usable if C introduces any virtual
> >> > methods.
> >>
> >> That's what the !final storage class is for.
> >
> > My mistake, I forgot you'd said you were in favor of this.  Being able
> > to 'escape' final certainly gets us most of the way there.
> >
> > !final is really rather hideous though.
> 
> A few possibilities discussed around here:
> 
> !final
> ~final
> final(false)
> @disable final
> 
> I've had an epiphany literally a few seconds ago that "final(false)" has 
> the advantage of being generalizable to "final(bool)" taking any 
> CTFE-able Boolean.
> 
> On occasion I needed a computed qualifier (I think there's code in 
> Phobos like that) and the only way I could do it was through ugly code 
> duplication or odd mixin-generated code. Allowing computed 
> qualifiers/attributes would be a very elegant and general approach, and 
> plays beautifully into the strength of D and our current investment in 
> Boolean compile-time predicates.
> 
> 
> Andrei
> 

At first I thought something like @disable(final, nothrow) would be the best
way, but then I thought about it and realized that final(false) is much
better.

The only advantage of @disable(all) or @disable(something, something_else)
is that we can disable more things more easily. But I have almost never
needed this.
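For reference, the string-mixin workaround for a computed qualifier that Andrei alludes to looks roughly like this (a sketch; all names are hypothetical):

```d
// Sketch: computing an attribute today via a string mixin keyed on a
// compile-time boolean -- the kind of code a final(bool) syntax would
// replace.
enum getter(bool isFinal) =
    (isFinal ? "final " : "") ~ "int get() { return x; }";

class Box(bool F)
{
    int x;
    mixin(getter!F);
}

alias FinalBox   = Box!true;  // get() is final
alias VirtualBox = Box!false; // get() is virtual (the default)
```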




Re: Final by default?

2014-03-14 Thread Iain Buclaw
On 14 Mar 2014 18:30, "Paulo Pinto"  wrote:
>
> Am 14.03.2014 19:06, schrieb Iain Buclaw:
>>
>> On 14 March 2014 17:53, Walter Bright  wrote:
>>>
>>> On 3/14/2014 10:26 AM, Johannes Pfau wrote:


 I use manifest constants instead of version identifiers as well. If a
 version identifier affects the public API/ABI of a library, then the
 library and all code using the library always have to be compiled with
 the same version switches(inlining and templates make this an even
 bigger problem). This is not only inconvenient, it's also easy to think
 of examples where the problem will only show up as crashes at runtime.
 The only reason why that's not an issue in phobos/druntime is that we
 only use compiler defined versions there, but user defined versions are
 almost unusable.
>>>
>>>
>>>
>>> Use this method:
>>>
>>>
>>>  
>>>  import wackyfunctionality;
>>>  ...
>>>  WackyFunction();
>>>  
>>>  module wackyfunctionality;
>>>
>>>  void WackyFunction() {
>>>  version (Linux)
>>>SomeWackyFunction();
>>>  else version (OSX)
>>>  SomeWackyFunction();
>>>  else
>>>  ... workaround ...
>>>  }
>>>  
>>
>>
>>
>> Some years down the line (and some platform testing) turns into:
>>
>> 
>> module wackyfunctionality;
>>
>> void WackyFunction() {
>>  version (Linux) {
>>  version (ARM)
>>  _SomeWackyFunction();
>>  else version (MIPS)
>> MIPS_SomeWackyFunction();
>>  else version (X86)
>> SomeWackyFunction();
>>  else version (X86_64)
>> SomeWackyFunction();
>>  else
>>... should be some wacky function, but workaround for general
case ...
>>  }
>>  else version (OSX) {
>>  version (PPC)
>> iSomeWackyFunction();
>>  else
>> SomeWackyFunction();   // In hope there's no other Apple
hardware.
>>  }
>>  else version (OpenBSD) {
>>/// Blah
>>  }
>>  else version (Haiku) {
>>/// Blah
>>  }
>>  else
>>  ... workaround ...
>> }
>> 
>>
>
>
> That is why the best approach is to have one module per platform for
> platform-specific code, with a common interface defined in a .di file.
>

Don't tell me, tell the druntime maintainers.  :)


Re: Final by default?

2014-03-14 Thread Joakim

On Friday, 14 March 2014 at 20:45:35 UTC, Paulo Pinto wrote:

Am 14.03.2014 19:50, schrieb H. S. Teoh:

On Fri, Mar 14, 2014 at 07:29:27PM +0100, Paulo Pinto wrote:

Am 14.03.2014 19:06, schrieb Iain Buclaw:
On 14 March 2014 17:53, Walter Bright 
 wrote:

On 3/14/2014 10:26 AM, Johannes Pfau wrote:

--snip---
+1. Once versioned code gets more than 2 levels deep, it 
becomes an

unreadable mess. The .di approach is much more manageable.


Back on my C/C++ days at work, any conditional code would be 
killed

by me during code reviews.

[...]

Ah, how I wish I could do that... over here at my job, parts 
of the code
are a nasty rats'-nest of #if's, #ifdef's, #ifndef's, and 
"functions"
that aren't defined anywhere (they are generated by macros, 
including
their names!). It used to be relatively sane while the project 
still
remained a single project... Unfortunately, about a year or so 
ago, the
PTBs decided to merge another project into this one, and by 
"merge" they
meant, graft the source tree of the other project into this 
one, hack it
with a hacksaw until it compiles, then call it a day. We've 
been
suffering from the resulting schizophrenic code ever since, 
where some
files are compiled when configuring for platform A, and 
skipped over and
some other files are compiled when configuring for platform B 
(often
containing conflicting functions of the same name but with 
incompatible
parameters), and a ton of #if's and #ifdef's nested to the 
n'th level
got sprinkled everywhere in the common code in order to glue 
the
schizophrenic mess into one piece. One time, I spent almost an 
hour
debugging some code that turned out to be inside an #if 0 ... 
#endif
block. >:-(  (The real code had been moved elsewhere, you see, 
and
whoever moved the code "kindly" decided to leave the old copy 
in the
original file inside an #if 0 block, "for reference", whatever 
that
means. Then silly old me came along expecting the code to 
still be in
the old place, and sure enough it was -- except that 
unbeknownst to me

it's now inside an #if 0 block. Gah!)


T




Ouch! I feel your pain.

This type of experience is what led me to fight #ifdef 
spaghetti code.


--
Paulo


Yeah, having had to deal with macro spaghetti when porting code 
to new platforms, I completely agree with Walter on this one.  
Whatever small inconveniences are caused by not allowing any 
logic inside or alongside version checks are made up for many 
times over in clarity and maintainability down the line.


Re: Final by default?

2014-03-14 Thread monarch_dodra

On Friday, 14 March 2014 at 20:45:35 UTC, Paulo Pinto wrote:

Am 14.03.2014 19:50, schrieb H. S. Teoh:

The real code had been moved elsewhere, you see, and
whoever moved the code "kindly" decided to leave the old copy 
in the
original file inside an #if 0 block, "for reference", whatever 
that
means. Then silly old me came along expecting the code to 
still be in
the old place, and sure enough it was -- except that 
unbeknownst to me

it's now inside an #if 0 block. Gah!)


T




Ouch! I feel your pain.

This type of experience is what led me to fight #ifdef 
spaghetti code.


--
Paulo


I hate code "commented out" in an "#if 0" with a passion. Just... 
Why?


Re: Final by default?

2014-03-14 Thread Paulo Pinto

Am 14.03.2014 19:50, schrieb H. S. Teoh:

On Fri, Mar 14, 2014 at 07:29:27PM +0100, Paulo Pinto wrote:

Am 14.03.2014 19:06, schrieb Iain Buclaw:

On 14 March 2014 17:53, Walter Bright  wrote:

On 3/14/2014 10:26 AM, Johannes Pfau wrote:


I use manifest constants instead of version identifiers as well. If
a version identifier affects the public API/ABI of a library, then
the library and all code using the library always have to be
compiled with the same version switches(inlining and templates make
this an even bigger problem). This is not only inconvenient, it's
also easy to think of examples where the problem will only show up
as crashes at runtime.  The only reason why that's not an issue in
phobos/druntime is that we only use compiler defined versions
there, but user defined versions are almost unusable.



Use this method:


 
 import wackyfunctionality;
 ...
 WackyFunction();
 
 module wackyfunctionality;

 void WackyFunction() {
 version (Linux)
   SomeWackyFunction();
 else version (OSX)
 SomeWackyFunction();
 else
 ... workaround ...
 }
 



Some years down the line (and some platform testing) turns into:


module wackyfunctionality;

void WackyFunction() {
 version (Linux) {
 version (ARM)
 _SomeWackyFunction();
 else version (MIPS)
MIPS_SomeWackyFunction();
 else version (X86)
SomeWackyFunction();
 else version (X86_64)
SomeWackyFunction();
 else
   ... should be some wacky function, but workaround for general case 
...
 }
 else version (OSX) {
 version (PPC)
iSomeWackyFunction();
 else
SomeWackyFunction();   // In hope there's no other Apple hardware.
 }
 else version (OpenBSD) {
   /// Blah
 }
 else version (Haiku) {
   /// Blah
 }
 else
 ... workaround ...
}





That is why the best approach is to have one module per platform for
platform-specific code, with a common interface defined in a .di file.


+1. Once versioned code gets more than 2 levels deep, it becomes an
unreadable mess. The .di approach is much more manageable.



Back on my C/C++ days at work, any conditional code would be killed
by me during code reviews.

[...]

Ah, how I wish I could do that... over here at my job, parts of the code
are a nasty rats'-nest of #if's, #ifdef's, #ifndef's, and "functions"
that aren't defined anywhere (they are generated by macros, including
their names!). It used to be relatively sane while the project still
remained a single project... Unfortunately, about a year or so ago, the
PTBs decided to merge another project into this one, and by "merge" they
meant, graft the source tree of the other project into this one, hack it
with a hacksaw until it compiles, then call it a day. We've been
suffering from the resulting schizophrenic code ever since, where some
files are compiled when configuring for platform A, and skipped over and
some other files are compiled when configuring for platform B (often
containing conflicting functions of the same name but with incompatible
parameters), and a ton of #if's and #ifdef's nested to the n'th level
got sprinkled everywhere in the common code in order to glue the
schizophrenic mess into one piece. One time, I spent almost an hour
debugging some code that turned out to be inside an #if 0 ... #endif
block. >:-(  (The real code had been moved elsewhere, you see, and
whoever moved the code "kindly" decided to leave the old copy in the
original file inside an #if 0 block, "for reference", whatever that
means. Then silly old me came along expecting the code to still be in
the old place, and sure enough it was -- except that unbeknownst to me
it's now inside an #if 0 block. Gah!)


T




Ouch! I feel your pain.

This type of experience is what led me to fight #ifdef spaghetti code.

--
Paulo


Re: Final by default?

2014-03-14 Thread H. S. Teoh
On Fri, Mar 14, 2014 at 07:29:27PM +0100, Paulo Pinto wrote:
> Am 14.03.2014 19:06, schrieb Iain Buclaw:
> >On 14 March 2014 17:53, Walter Bright  wrote:
> >>On 3/14/2014 10:26 AM, Johannes Pfau wrote:
> >>>
> >>>I use manifest constants instead of version identifiers as well. If
> >>>a version identifier affects the public API/ABI of a library, then
> >>>the library and all code using the library always have to be
> >>>compiled with the same version switches(inlining and templates make
> >>>this an even bigger problem). This is not only inconvenient, it's
> >>>also easy to think of examples where the problem will only show up
> >>>as crashes at runtime.  The only reason why that's not an issue in
> >>>phobos/druntime is that we only use compiler defined versions
> >>>there, but user defined versions are almost unusable.
> >>
> >>
> >>Use this method:
> >>
> >>
> >> 
> >> import wackyfunctionality;
> >> ...
> >> WackyFunction();
> >> 
> >> module wackyfunctionality;
> >>
> >> void WackyFunction() {
> >> version (Linux)
> >>   SomeWackyFunction();
> >> else version (OSX)
> >> SomeWackyFunction();
> >> else
> >> ... workaround ...
> >> }
> >> 
> >
> >
> >Some years down the line (and some platform testing) turns into:
> >
> >
> >module wackyfunctionality;
> >
> >void WackyFunction() {
> > version (Linux) {
> > version (ARM)
> > _SomeWackyFunction();
> > else version (MIPS)
> >MIPS_SomeWackyFunction();
> > else version (X86)
> >SomeWackyFunction();
> > else version (X86_64)
> >SomeWackyFunction();
> > else
> >   ... should be some wacky function, but workaround for general 
> > case ...
> > }
> > else version (OSX) {
> > version (PPC)
> >iSomeWackyFunction();
> > else
> >SomeWackyFunction();   // In hope there's no other Apple 
> > hardware.
> > }
> > else version (OpenBSD) {
> >   /// Blah
> > }
> > else version (Haiku) {
> >   /// Blah
> > }
> > else
> > ... workaround ...
> >}
> >
> >
> 
> 
> That is why the best approach is to have one module per platform for
> platform-specific code, with a common interface defined in a .di file.

+1. Once versioned code gets more than 2 levels deep, it becomes an
unreadable mess. The .di approach is much more manageable.
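The pattern, roughly (a sketch; module and file names are hypothetical, and the build picks exactly one implementation file per platform):

```d
// wacky.di -- the common interface, shared by all platforms
module wacky;
void wackyFunction();

// wacky_linux.d -- compiled only when building for Linux
module wacky;
void wackyFunction() { /* Linux-specific implementation */ }

// wacky_osx.d -- compiled only when building for OS X
module wacky;
void wackyFunction() { /* OS X-specific implementation */ }
```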


> Back on my C/C++ days at work, any conditional code would be killed
> by me during code reviews.
[...]

Ah, how I wish I could do that... over here at my job, parts of the code
are a nasty rats'-nest of #if's, #ifdef's, #ifndef's, and "functions"
that aren't defined anywhere (they are generated by macros, including
their names!). It used to be relatively sane while the project still
remained a single project... Unfortunately, about a year or so ago, the
PTBs decided to merge another project into this one, and by "merge" they
meant, graft the source tree of the other project into this one, hack it
with a hacksaw until it compiles, then call it a day. We've been
suffering from the resulting schizophrenic code ever since, where some
files are compiled when configuring for platform A, and skipped over and
some other files are compiled when configuring for platform B (often
containing conflicting functions of the same name but with incompatible
parameters), and a ton of #if's and #ifdef's nested to the n'th level
got sprinkled everywhere in the common code in order to glue the
schizophrenic mess into one piece. One time, I spent almost an hour
debugging some code that turned out to be inside an #if 0 ... #endif
block. >:-(  (The real code had been moved elsewhere, you see, and
whoever moved the code "kindly" decided to leave the old copy in the
original file inside an #if 0 block, "for reference", whatever that
means. Then silly old me came along expecting the code to still be in
the old place, and sure enough it was -- except that unbeknownst to me
it's now inside an #if 0 block. Gah!)


T

-- 
People tell me that I'm paranoid, but they're just out to get me.

