Re: For the lulz: ddmd vs libdparse lexer timings

2015-01-05 Thread Brian Schott via Digitalmars-d

On Monday, 5 January 2015 at 00:50:57 UTC, Brian Schott wrote:

Looks like it's time to spend some more time with perf:

http://i.imgur.com/k50dFbU.png

X-axis: Meaningless (Phobos module file names)
Y-axis: Time in "hnsecs" (Lower is better)

I had to hack the ddmd code to get it to compile (more "1337 h4x" 
were required to compile with LDC than with DMD), so I haven't 
uploaded the code for the benchmark to Github yet.


Both tests were in the same binary and thus had the same 
compiler flags.


Now with more copy-paste inlining!

http://i.imgur.com/D5IAlvl.png

I'm glad I could get this kind of speed up, but not happy with 
how ugly the changes were.


Re: http://wiki.dlang.org/DIP25

2015-01-05 Thread Walter Bright via Digitalmars-d

On 1/5/2015 2:04 PM, Steven Schveighoffer wrote:

To give you an example of why that sucks, imagine that your accessor for
member_x is nothrow, but your setter is not. This means you either "make an
exception", or you just split up obvious file-mates into separate corners.
Source control gets confused if one of those attributes changes. Nobody is 
happy.

Grouping by attributes is probably one of the worst ways to have
readable/maintainable code.

One of the most important reasons why unittests are so successful is that you
can just plop the code that tests a function right next to it. So easy to find
the code, so easy to maintain when you change the target of the test. Making
some way to bundle attributes, or be able to negate currently one-way attributes
would go a long way IMO.



I know and agree. I was just responding to the 'impossible' characterization.


decodeReverse

2015-01-05 Thread HaraldZealot via Digitalmars-d
For my particular project (it involves something like a finite 
state machine) I will write a counterpart of the decode function 
from std.utf. The new function will decode a string backward, 
return a dchar, and update an index passed by reference.


Would it interest the community if I coded this feature in a 
general way, targeting Phobos in the future?
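
For illustration, a minimal sketch (not the actual proposal) of 
what such a function might look like, built on std.utf's existing 
strideBack and decode; the name decodeReverse and its exact 
semantics here are assumptions:

import std.utf : decode, strideBack;

// Decode the code point that ends at index i, moving i back to the
// start of that code point (assumes valid UTF-8 input).
dchar decodeReverse(string s, ref size_t i)
{
    i -= s.strideBack(i);  // step back over one code point
    size_t j = i;
    return s.decode(j);    // decode forward from its start
}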


Re: Questions about TDPL book

2015-01-05 Thread Brad Anderson via Digitalmars-d

On Tuesday, 6 January 2015 at 03:20:27 UTC, weaselcat wrote:
Is it still worth buying TDPL since it's almost 5 years old? I 
realize classics like K&R C are near timeless, but D has seen a 
lot of changes.

Has the ebook version been updated at all (i.e., with the errata)?
How is the physical quality of the print book?

Thanks. -


I'd definitely recommend reading it. The vast majority of it is 
still very accurate and it's just a great read.


You may find this handy: 
http://wiki.dlang.org/Differences_With_TDPL


My Kindle version had not been updated with the errata the last 
time I looked. Andrei has mentioned that there would be another 
printing with changes, but that was a while back, so I'm not sure 
if that's still planned.


Re: An idea for commercial support for D

2015-01-05 Thread Joakim via Digitalmars-d
On Monday, 5 January 2015 at 22:51:25 UTC, Joseph Rushton 
Wakeling via Digitalmars-d wrote:

On 05/01/15 21:57, Joakim via Digitalmars-d wrote:
If you're not paying, you're not a customer.  The alternative is 
to use the bug-ridden OSS implementation you're using now for 
free, and not have a paid version for those who want those bugs 
fixed.  I don't doubt that some irrational people interpret the 
existence of a paid version in the way you laid out, and in 
extreme cases that _can_ happen (just as there are OSS vendors 
who write bad OSS code just so they can make more money off your 
favored support model), but that's more an issue with their 
sloppy thinking than anything else.


See, this is where I find _your_ point of view irrational, 
because you fail to see how straightforwardly damaging closed 
source can be to adoption.  The fact of the matter is that for 
a great many users, and particularly for a great many corporate 
adopters of development toolchains, today it matters hugely 
that the toolchain is free-as-in-freedom.  Not free 6 months 
down the line -- free, now, in its entirety.


Non-free code (even temporarily), secret development, etc., are 
simply deal-breakers for a great many people.  A smart business 
model will engage with this fact and find a way to drive money 
to development without closing things up.


I don't think such people matter, ie they're a very small but 
vocal minority.  Also, these people are deeply irrational, as 
every piece of hardware they're using comes with many closed 
binary blobs.  They are either ignorant of this fact or just 
choose to make silly demands anyway.


There are also "fully open source" languages which are "fully 
commercially
supported."  How do your managers wrap their minds around such 
a paradox? ;)


See, if I was in your shoes, I'd be trying to take on board the 
feedback about why your proposed model would be unattractive to 
his managers, rather than making sarcastic points that don't 
actually identify a conflict with their position.


Heh, the whole point of the sarcastic comment was to point out 
the obvious conflict in their position. :)


Most commercial adopters are going to consider it very 
important to have a support option that says, "If you have a 
serious blocker, you can pay us money to guarantee that it gets 
fixed."


They are not going to be at all happy about a support option 
that says, "If we develop a fix, then you are not going to get 
it in a timely manner unless you pay."


Understanding that distinction is very important.


Haha, you do realize that those two quotes you laid out are the 
exact same option?  In the first option, you pay for a fix.  In 
the second option, you pay for a fix.  Whatever distinction you're 
hoping to draw has not been made.


My point is that such artificial distinctions are silly, whether 
because of the amount of support or source available.  The 
alternative to paid bug fixes is not that all the bugs you want 
fixed get done for free: it's _no_ bug fixes, as we see today.  
For example, selective imports at module scope have been broken 
for more than eight years now, as those symbols are leaked into 
any module that imports the module with the selective import.  
There are many more bugs like that, that could actually be fixed 
much faster if there were more paid devs working on D.


You're talking about "the alternative to paid bug fixes" as if 
the only way of having paid bug fixes is to follow your model 
of locking them away from the wider community.  That's simply 
not true.


I wait with bated breath for your model of paid bug fixes that 
doesn't involve closing the code for the bug fixes at all.  You 
must have discovered some billion-dollar scheme, because every 
software company in the world is waiting to copy your brilliant 
method.


Having both paid and free versions available is not a 
"paywall" on a language.


Unless those versions are identical, yes it is.


No, it isn't.  Your being able to use the always OSS dmd/gdc for 
free means the language is always available to you.  Just because 
someone else is using an enhanced version of ldc doesn't make the 
free version any less available to you.  To suggest otherwise is 
to distort the language to make your argument, ie flat out lying.


A company is not going to just write a bunch of patches and open 
source all of them unless they have some complementary business 
model to go with it, whether google making more mobile revenue 
off Android or Apple providing clang as the system compiler on 
OS X and making money off the bundled Mac.


So why not focus on creating those complementary business 
models?


If you have a complementary business model for a D compiler, feel 
free to suggest one and get people to use it.  I don't think 
complementary business models are generally a good idea, because 
the people making money are usually going to focus on the place 
they're making money.  This is why google doesn't care that much 
i

Re: Questions about TDPL book

2015-01-05 Thread via Digitalmars-d

On Tuesday, 6 January 2015 at 03:20:27 UTC, weaselcat wrote:
Is it still worth buying TDPL since it's almost 5 years old? I 
realize classics like K&R C are near timeless, but D has seen a 
lot of changes.

Has the ebook version been updated at all (i.e., with the errata)?
How is the physical quality of the print book?

Thanks. -


Book quality is fine, although the paper is quite thin.
I don't think there is that much outdated information in TDPL.

I don't have any ideas about the ebook version.


Re: An idea for commercial support for D

2015-01-05 Thread Daniel Murphy via Digitalmars-d
"Joseph Rushton Wakeling via Digitalmars-d"  wrote in message 
news:mailman.4177.1420498284.9932.digitalmar...@puremagic.com...



> A company is not going to just write a bunch of patches and open source 
> all of them unless they have some complementary business model to go 
> with it, whether google making more mobile revenue off Android or Apple 
> providing clang as the system compiler on OS X and making money off the 
> bundled Mac.


However, I don't see it making any sense for a company to invest in 
proprietary patches to a toolchain, because 99% of the time, when you need 
a patch written, it's a bugfix.  And when you want a bugfix, you don't 
want a patch that applies only to your version of the toolchain and which 
you (or your friendly proprietary-patch-writing consultant) have to keep 
rebasing on top of upstream for the next 6 months -- you want upstream 
fixed.  Otherwise you'll wind up paying far more merely for maintenance of 
your proprietary extensions, than you would have just to get someone to 
write a patch and get it straight into the open-source upstream.


This is very important - upstreaming your patches means that the community 
will maintain them for you.  This is why it's useful for a company to 
develop their own patches and still contribute back upstream. 



Re: @api: One attribute to rule them All

2015-01-05 Thread Zach the Mystic via Digitalmars-d
On Monday, 5 January 2015 at 23:48:17 UTC, Joseph Rushton 
Wakeling via Digitalmars-d wrote:
Here's the rationale.  Suppose that I have a bunch of functions 
that are all intended to be part of the public API of my 
project.  I accidentally forget to tag one of them with the 
@api attribute,


A more likely scenario is that your library starts small enough 
not to need the @api attribute, then at some point it gets 
really, really huge. Then in one fell swoop you decide to "@api:" 
your whole file so that the public interface won't change so 
often. I'm picking the most extreme case I can think of, in order 
to argue the point from a different perspective.


so its attributes will be auto-inferred, but the function is 
still public, so downstream users will wind up using it.


3 months later, I realize my mistake, and add the @api 
attribute -- at which point downstream users' code will break 
if their code was relying on the unintended inferred attributes.


Attribute inference provides convenience, not guarantees. If a 
user was relying on the purity of a function which was never 
marked 'pure', it's only convenience which allows him to do so -- 
convenience for the user, in being able to add 'pure', and for 
the library writer, in *not* having to add it. Adding @api (or 
'extern (noinfer)') cancels that convenience for the sake of 
modularity. It's a tradeoff. The problem itself is solved either 
by the library writer marking the function 'pure', or by the user 
removing 'pure' from his own function. Without @api, the problem 
only arises when the library writer actually does something 
impure, which makes perfect sense. It's @api (and D's existing 
default, by the way) which adds the artificiality to the process, 
not my suggested default.
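
To make the scenario concrete, a hypothetical sketch (the names 
are invented here, not from any real library):

// library code: no attributes written; purity is inferred
auto helper(T)(T x) { return x + 1; }

// user code: relies on the *inferred* purity of helper
int user(int x) pure
{
    return helper(x);  // compiles today...
}
// ...and only breaks if helper later does something impure,
// e.g. calling std.stdio.writeln inside its body.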


It's quite analogous in this respect to the argument about 
final vs. virtual by default for class methods.


I don't think so, because of so-called covariance. Final and 
virtual each have their own advantages and disadvantages, whereas 
inferring attributes only goes one way. There is no cost to 
inferring in the general case. My suggestion, (I now prefer 
'extern(noinfer)'), does absolutely nothing except to restore D's 
existing default, for what I think are the rare cases it is 
needed. I could be wrong about just how rare using 
extern(noinfer) will actually be, but consider that phobos, for 
example, just doesn't need it, because it's too small a library 
to cause trouble if all of a sudden one of its non-templated 
functions becomes impure. A quick recompile, a new interface 
file, and now everyone's using the new thing. Even today, it's 
not even marked up with attributes completely, thus indicating 
that you never even *could* have used it for all it's worth.


Have I convinced you?


Questions about TDPL book

2015-01-05 Thread weaselcat via Digitalmars-d
Is it still worth buying TDPL since it's almost 5 years old? I 
realize classics like K&R C are near timeless, but D has seen a 
lot of changes.

Has the ebook version been updated at all (i.e., with the errata)?
How is the physical quality of the print book?

Thanks. -


Re: @api: One attribute to rule them All

2015-01-05 Thread Joseph Rushton Wakeling via Digitalmars-d

On 06/01/15 00:48, Joseph Rushton Wakeling via Digitalmars-d wrote:

IMHO if anything like this is to be implemented, the extra flag should be to
indicate that a function is _not_ intended to be part of the API and that
therefore it is OK to infer its attributes.


Hmm.  On thinking about this some more, it occurs to me that this might be 
fundamentally about protection.  If it were forbidden to auto-infer attributes 
for a non-templated public function, then quite a few of my objections above 
might go away.


Re: @api: One attribute to rule them All

2015-01-05 Thread Joseph Rushton Wakeling via Digitalmars-d

On 05/01/15 22:14, Zach the Mystic via Digitalmars-d wrote:

I get a compiler error. The only way to stop it is to add unnecessary visual
noise to the first function. All of these attributes should be something that
you *want* to add, not something that you *need*. The compiler can obviously
figure out if the function throws or not. Just keep an additional internal flag
for each of the attributes. When any attribute is violated, flip the bit and
boom, you have your implicit function signature.


Bear in mind one quite important factor -- all that alleged noise isn't simply 
about getting stuff to work, it's about promises that the function makes to 
downstream users.  You do touch on this yourself, but I think you have missed 
how your @api flag could go wrong.



I suggest a new attribute, @api, which does nothing more than to tell the
compiler to generate the function signature and mangle the name only with its
explicit attributes, and not with its inferred ones. Inside the program, there's
no reason the compiler can't continue to use inference, but with @api, the
exposed interface will be stabilized, should the programmer want that. Simple.


IMHO if anything like this is to be implemented, the extra flag should be to 
indicate that a function is _not_ intended to be part of the API and that 
therefore it is OK to infer its attributes.


Here's the rationale.  Suppose that I have a bunch of functions that are all 
intended to be part of the public API of my project.  I accidentally forget to 
tag one of them with the @api attribute, so its attributes will be 
auto-inferred, but the function is still public, so downstream users will wind 
up using it.


3 months later, I realize my mistake, and add the @api attribute -- at which 
point downstream users' code will break if their code was relying on the 
unintended inferred attributes.


If on the other hand you take the assumption that attributes should by default 
_not_ be auto-inferred, and you accidentally forget to tag a function to 
auto-infer its attributes, that can be fixed without breaking downstream.


It's quite analogous in this respect to the argument about final vs. virtual by 
default for class methods.


Re: Phobos colour module?

2015-01-05 Thread Manu via Digitalmars-d
On 6 January 2015 at 04:11, via Digitalmars-d
 wrote:
> On Monday, 5 January 2015 at 16:08:27 UTC, Adam D. Ruppe wrote:
>>
>> Yeah, in my misc repo, there used to be stand-alone image.d and
>> simpledisplay.d. Now, they both depend on color.d. Even just a basic
>> definition we can use elsewhere is nice to have so other libs can interop on
>> that level without annoying casts or pointless conversions just to please
>> the type system when the contents are identical.
>
>
> Yes, that too. I was more thinking about the ability to create an adapter
> that extracts colour information from an existing data structure and adds
> context information such as gamma. Then it lets you build a function that,
> say, reads floats from 3 LAB pointers and finally returns a tuple with a 16 bit
> RGB pixel with gamma correction and the residue in a specified format
> suitable for dithering... ;-]
>
> It is quite a common error to do computations on colours that are ignorant of
> gamma (or do it wrong), which results in less accurate imaging. E.g. when
> dithering you need to make sure that the residue that is left when doing bit
> truncation is added to the neighbouring pixels in a "linear addition"
> (without gamma). Making stuff like that less tedious would make it a very
> useful library.

I have thought about how to handle residue from lossy-encoding, but I
haven't thought of an API I like for that yet.
Dithering operates on neighbourhoods of pixels, so in some ways I feel
it is beyond the scope of colour.d, but residue is an important detail
to enable dithering that should probably be expressed while encoding.

Currently, I have a colour template which can be arbitrarily typed and
components defined in some user-specified order. It binds the
colourspace to colours. 'CTo to(CTo, CFrom)(CFrom colour)' is defined
and performs arbitrary conversions between colours.

I'm finding myself at a constant struggle between speed and
maximizing-precision. I feel like a lib should maximise precision, but
the trouble then is that it's not actually useful to me...
Very few applications care about colour precision beyond ubyte, so I
feel like using double for much of the processing is overkill :/
I'm not sure what the right balance would look like exactly.
I can make fast-paths for common formats, like ubyte conversions
between sRGB/Linear, etc use tables. Performing colourspace
conversions in fixed point (where both sides of conversion are integer
types) might be possible without significant loss of precision, but
it's tricky... I just pipe through double now, and that's way
overkill.

I'll make a PR tonight some time for criticism.
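
As a sketch of the table-based fast path mentioned above (names 
are assumptions, not from Manu's actual module): an 8-bit 
sRGB-to-linear lookup table, built once at startup:

import std.math : pow;

immutable float[256] srgbToLinear;

shared static this()
{
    float[256] t;
    foreach (i; 0 .. 256)
    {
        immutable c = i / 255.0f;
        // piecewise sRGB decoding curve
        t[i] = c <= 0.04045f ? c / 12.92f
                             : pow((c + 0.055f) / 1.055f, 2.4f);
    }
    srgbToLinear = t;
}

Converting a ubyte channel is then a single index, e.g. 
float linear = srgbToLinear[pixelR];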


Re: Bad error message example

2015-01-05 Thread Benjamin Thaut via Digitalmars-d

On 05.01.2015 at 18:51, Daniel Murphy wrote:

"Benjamin Thaut"  wrote in message news:m8eian$21nu$1...@digitalmars.com...

Today I had a bad template error message and I thought I might post it
here so something can be done about it. The error message was:


Please report in bugzilla: http://d.puremagic.com/issues/


Done: https://issues.dlang.org/show_bug.cgi?id=13942


Re: An idea for commercial support for D

2015-01-05 Thread Joseph Rushton Wakeling via Digitalmars-d

On 05/01/15 21:57, Joakim via Digitalmars-d wrote:

If you're not paying, you're not a customer.  The alternative is to use the
bug-ridden OSS implementation you're using now for free, and not have a paid
version for those who want those bugs fixed.  I don't doubt that some irrational
people interpret the existence of a paid version in the way you laid out, and in
extreme cases that _can_ happen (just as there are OSS vendors who write bad OSS
code just so they can make more money off your favored support model), but
that's more an issue with their sloppy thinking than anything else.


See, this is where I find _your_ point of view irrational, because you fail to 
see how straightforwardly damaging closed source can be to adoption.  The fact 
of the matter is that for a great many users, and particularly for a great many 
corporate adopters of development toolchains, today it matters hugely that the 
toolchain is free-as-in-freedom.  Not free 6 months down the line -- free, now, 
in its entirety.


Non-free code (even temporarily), secret development, etc., are simply 
deal-breakers for a great many people.  A smart business model will engage with 
this fact and find a way to drive money to development without closing things up.



There are also "fully open source" languages which are "fully commercially
supported."  How do your managers wrap their minds around such a paradox? ;)


See, if I was in your shoes, I'd be trying to take on board the feedback about 
why your proposed model would be unattractive to his managers, rather than 
making sarcastic points that don't actually identify a conflict with their position.


Most commercial adopters are going to consider it very important to have a 
support option that says, "If you have a serious blocker, you can pay us money 
to guarantee that it gets fixed."


They are not going to be at all happy about a support option that says, "If we 
develop a fix, then you are not going to get it in a timely manner unless you pay."


Understanding that distinction is very important.


My point is that such artificial distinctions are silly, whether because of the
amount of support or source available.  The alternative to paid bug fixes is not
that all the bugs you want fixed get done for free: it's _no_ bug fixes, as we
see today. For example, selective imports at module scope have been broken for
more than eight years now, as those symbols are leaked into any module that
imports the module with the selective import. There are many more bugs like
that, that could actually be fixed much faster if there were more paid devs
working on D.


You're talking about "the alternative to paid bug fixes" as if the only way of 
having paid bug fixes is to follow your model of locking them away from the 
wider community.  That's simply not true.



Having both paid and free versions available is not a "paywall" on a language.


Unless those versions are identical, yes it is.


A company is not going to just write a bunch of patches and open source all of
them unless they have some complementary business model to go with it, whether
google making more mobile revenue off Android or Apple providing clang as the
system compiler on OS X and making money off the bundled Mac.


So why not focus on creating those complementary business models?


That community involvement would still be there for the OSS core with D, but you
would get support for a closed patch from the developer who wrote it.

...

There is essentially nothing different from this situation and the hybrid model
I've described, in terms of the product you'd be using.  The only difference is
that it wouldn't be a company, but some selection of independent devs.


Bottom line: if some individual or group of devs want to try and make a business 
selling proprietary patches to the DMD frontend, or phobos, the licensing allows 
them to do that.  Good luck to them, and if they want to submit those patches to 
D mainline in future, good luck to them again.


However, I don't see it making any sense for a company to invest in proprietary 
patches to a toolchain, because 99% of the time, when you need a patch written, 
it's a bugfix.  And when you want a bugfix, you don't want a patch that applies 
only to your version of the toolchain and which you (or your friendly 
proprietary-patch-writing consultant) have to keep rebasing on top of upstream 
for the next 6 months -- you want upstream fixed.  Otherwise you'll wind up 
paying far more merely for maintenance of your proprietary extensions, than you 
would have just to get someone to write a patch and get it straight into the 
open-source upstream.


I also think you assume far too much value on the part of privileged/early 
access to bugfixes.  A bug in a programming language toolchain is either a 
commercial problem for you or it isn't.  If it's a commercial problem, you need 
it fixed, and that fix in itself has a value to you.  There is not really any 
comparable change in valu

Re: http://wiki.dlang.org/DIP25

2015-01-05 Thread Zach the Mystic via Digitalmars-d
On Sunday, 4 January 2015 at 01:12:14 UTC, Manu via Digitalmars-d 
wrote:
It's like this: ref is a massive problem when it finds its way 
into meta.

ref is relatively rare today... so the problem is occasional.
scope on the other hand will be epic compared to ref. If we 
infer scope (which we'll probably need to), chances are, the 
vast majority of functions will involve scope.
We can't have the trouble with ref (read: trouble with 'storage 
class') applied to the majority of functions.


Hey Manu, I think it would still be a good idea to provide code 
examples of your points right in the forums. I was able to look 
at the file from luaD and see how the problems were occurring, 
but it would hasten my understanding just to see several 'reduced 
test cases' of that example and others, if possible.


Re: @api: One attribute to rule them All

2015-01-05 Thread Zach the Mystic via Digitalmars-d

On Monday, 5 January 2015 at 21:25:01 UTC, Daniel N wrote:

An alternative could be to use the already existing 'export'.


'extern'. Yeah, something like 'extern (noinfer):'.


Re: @api: One attribute to rule them All

2015-01-05 Thread Zach the Mystic via Digitalmars-d

On Monday, 5 January 2015 at 22:00:40 UTC, Zach the Mystic wrote:

On Monday, 5 January 2015 at 21:25:01 UTC, Daniel N wrote:

An alternative could be to use the already existing 'export'.


'extern'. Yeah, something like 'extern (noinfer):'.


Err, yeah, whatever works!


Re: http://wiki.dlang.org/DIP25

2015-01-05 Thread Steven Schveighoffer via Digitalmars-d

On 1/5/15 4:10 PM, Walter Bright wrote:

On 12/30/2014 4:14 AM, Steven Schveighoffer wrote:

But I agree. The problem is, most times, you WANT to ensure your 
code is @safe pure nothrow (and now @nogc), even for template 
functions. That's a lot of baggage to put on each signature. I 
just helped someone recently who wanted to put @nogc on all the 
std.datetime code, and every signature had these 4 attributes 
except a few. I tried to have him put a big @safe: pure: nothrow: 
@nogc: at the top, but the occasional exceptions made this 
impossible.


The way to do it is one of:

1. reorganize the code so the non-attributed ones come first

2. write the attributes as:

@safe pure nothrow @nogc {
   ... functions ...
}

... non attributed functions ...

@safe pure nothrow @nogc {
   ... more functions ...
}


To give you an example of why that sucks, imagine that your accessor for 
member_x is nothrow, but your setter is not. This means you either "make 
an exception", or you just split up obvious file-mates into separate 
corners. Source control gets confused if one of those attributes 
changes. Nobody is happy.
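
A minimal illustration of the pair being described (hypothetical 
code, not from std.datetime):

struct Widget
{
    private int member_x;

    // accessor: trivially nothrow
    int x() nothrow { return member_x; }

    // setter: validation can throw, so it can't live under a
    // nothrow: label alongside its accessor
    void x(int v)
    {
        if (v < 0) throw new Exception("negative value");
        member_x = v;
    }
}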


Grouping by attributes is probably one of the worst ways to have 
readable/maintainable code.


One of the most important reasons why unittests are so successful is 
that you can just plop the code that tests a function right next to it. 
So easy to find the code, so easy to maintain when you change the target 
of the test. Making some way to bundle attributes, or be able to negate 
currently one-way attributes would go a long way IMO.


-Steve


Re: http://wiki.dlang.org/DIP25

2015-01-05 Thread deadalnix via Digitalmars-d
On Monday, 5 January 2015 at 19:18:34 UTC, Steven Schveighoffer 
wrote:

On 1/5/15 11:51 AM, deadalnix wrote:
On Monday, 5 January 2015 at 14:00:13 UTC, Steven 
Schveighoffer wrote:
I strongly disagree :) inout enables so many things that just 
aren't

possible otherwise.

Most recent example:
https://github.com/D-Programming-Language/druntime/pull/1079

inout only gets confusing when you start using inout 
delegates.




You are arguing that inout is useful. That simply makes it a 
useful

disaster :)


I guess you and me have different ideas of what a disaster is :)

-Steve


Nope. Great usefulness makes it almost impossible to get rid of in 
its current form.


Re: @api: One attribute to rule them All

2015-01-05 Thread Daniel N via Digitalmars-d

On Monday, 5 January 2015 at 21:15:00 UTC, Zach the Mystic wrote:

Now, "Bombard with your gunships."



An alternative could be to use the already existing 'export'.

http://dlang.org/attribute.html
"Export means that any code outside the executable can access the 
member. Export is analogous to exporting definitions from a DLL."


@api: One attribute to rule them All

2015-01-05 Thread Zach the Mystic via Digitalmars-d
Hello everybody. My name is Zach, and I have a suggestion for the 
improvement of D. I've been looking at the following stalled pull 
request for a while now:


https://github.com/D-Programming-Language/dmd/pull/1877

...in which Walter Bright wants to introduce built-in-attribute 
inference for a relatively small set of functions. It seems like 
the most obvious thing in the world to me to desire this, and not 
even just for 'auto' and templated functions, but for *every* 
function. And there's no reason it can't be done. So long as the 
compiler has everything it needs to determine which attributes 
can be applied, there's no reason to demand anything from the 
programmer. Look how simple this function is:


int plusOne(int a) { return a+1; }

Let's say I later want to call it, however, from a fully 
attributed function:


int plusTwo(int a) pure nothrow @safe @nogc  {
  return plusOne(plusOne(a));
}

I get a compiler error. The only way to stop it is to add 
unnecessary visual noise to the first function. All of these 
attributes should be something that you *want* to add, not 
something that you *need*. The compiler can obviously figure out 
if the function throws or not. Just keep an additional internal 
flag for each of the attributes. When any attribute is violated, 
flip the bit and boom, you have your implicit function signature.
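
A hypothetical sketch of that internal-flag idea (illustrative 
only, not dmd's actual implementation): assume every covariant 
attribute holds, then clear bits as the body is analyzed:

enum Attr : uint { pure_ = 1, nothrow_ = 2, safe = 4, nogc = 8 }

struct Stmt { bool mayThrow, hasSideEffect, isUnsafe, allocatesGC; }
struct FuncBody { Stmt[] statements; }

uint inferAttributes(FuncBody fb)
{
    // start optimistic: all four covariant attributes assumed
    uint flags = Attr.pure_ | Attr.nothrow_ | Attr.safe | Attr.nogc;
    foreach (s; fb.statements)
    {
        if (s.mayThrow)      flags &= ~Attr.nothrow_;
        if (s.hasSideEffect) flags &= ~Attr.pure_;
        if (s.isUnsafe)      flags &= ~Attr.safe;
        if (s.allocatesGC)   flags &= ~Attr.nogc;
    }
    return flags;  // the "implicit function signature" bits
}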


I think this is how it always should have been. It's important to 
remember that the above attributes have the 'covariant' property, 
which means they can always be called by any function without 
that property. Therefore no existing code will start failing to 
compile. Only certain things which would have *errored* before 
will stop. Plus new optimizations can be done.


So what's the problem? As you can read in the vehement opposition 
to pull 1877 above, the big fear is that function signatures will 
start changing willy-nilly, causing the exposed interface of the 
function to destabilize, which will cause linker errors or 
require code intended to be kept separate in large projects to be 
recompiled at every little change.


I find this depressing! That something so good should be ruined 
by something so remote as the need for separate compilation in 
very large projects? I mean, most projects aren't even very 
large. Also, because D compiles so much faster than its 
predecessors, is it even such a big deal to have to recompile 
everything?


But let's admit the point may be valid. Yes, under attribute 
inference, the function signatures in the exposed API will indeed 
find themselves changing every time one so much as adds a 
'printf' or calls something that throws.


But they don't *have* to change. The compiler doesn't need to 
include the inferred attributes when it generates the mangled 
name and the .di signature, only the explicit ones. From within 
the program, all the opportunities for inference and optimization 
could be left intact, while outside programs accessing the code 
in precompiled form could only access the functions as explicitly 
indicated.


This makes no change to the language, except that it allows new 
things to compile. The only hitch is this: What if you want the 
full advantages of optimization and inference from across 
compilation boundaries? You'd have to add each of the covariant 
function attributes manually to every function you exposed. From 
my perspective, this is still a chore.


I suggest a new attribute, @api, which does nothing more than to 
tell the compiler to generate the function signature and mangle 
the name only with its explicit attributes, and not with its 
inferred ones. Inside the program, there's no reason the compiler 
can't continue to use inference, but with @api, the exposed 
interface will be stabilized, should the programmer want that. 
Simple.


I anticipate a couple of objections to my proposal:

The first is that we would now demand that the programmer decide 
whether he wants his exposed functions stabilized or not. For a 
large library used by different people, this choice might pose 
some difficulty. But it's not that bad. You just choose: do you 
want to improve compilation times and/or closed-source 
consistency by ensuring a stable interface, or do you want to 
speed up runtime performance without having to clutter your code? 
Most projects would choose the latter. @api is made available for 
those who don't. The opposition to attribute inference put 
forth in pull 1877 is thereby appeased.


A second objection to this proposal: Another attribute? Really? 
Well, yeah.


But it's not a problem, I say, for these reasons:

1. This one little attribute allows you to excise gajillions of 
unnecessary little attributes which are currently forced on the 
programmer by the lack of inference, simply by appeasing the 
opponents of inference and allowing it to be implemented.


2. It seems like most people will be okay just recompiling 
projects instead of preferring to stabilize their apis. Thus, 
@api w

Re: http://wiki.dlang.org/DIP25

2015-01-05 Thread Walter Bright via Digitalmars-d

On 12/30/2014 4:14 AM, Steven Schveighoffer wrote:

But I agree. The problem is, most times, you WANT to ensure your code is @safe
pure nothrow (and now @nogc), even for template functions. That's a lot of
baggage to put on each signature. I just helped someone recently who wanted to
put @nogc on all the std.datetime code, and every signature had these 4
attributes except a few. I tried to have him put a big @safe: pure: nothrow:
@nogc: at the top, but the occasional exceptions made this impossible.


The way to do it is one of:

1. reorganize the code so the non-attributed ones come first

2. write the attributes as:

   @safe pure nothrow @nogc {
  ... functions ...
   }

   ... non attributed functions ...

   @safe pure nothrow @nogc {
  ... more functions ...
   }



Re: An idea for commercial support for D

2015-01-05 Thread Joakim via Digitalmars-d

On Monday, 5 January 2015 at 18:28:39 UTC, Jarrett Tierney wrote:
As a user of D in a corporate environment and personal at home 
environment, I have to say this model won't work for me. In 
fact if this model were implemented, I would more than likely 
have to move my project to a different language because of it. 
Let me explain the issues I see here.


You've proposed a hybrid open and closed source model. Where 
certain segments of code (latest patches) are behind a per 
patch paywall. As a customer I don't want to have to pay for 
each bug fix in the compiler. If there is a bug in the 
language/compiler then I want it fixed. I shouldn't be charged 
to have the language I'm using work properly. It basically says 
to customers, here you can use this language for free, unless 
you want it to work properly, in which case you need to pay for 
each fix you need or wait till developers don't care about 
making money off a fix anymore.


If you're not paying, you're not a customer.  The alternative is 
to use the bug-ridden OSS implementation you're using now for 
free, and not have a paid version for those who want those bugs 
fixed.  I don't doubt that some irrational people interpret the 
existence of a paid version in the way you laid out, and in 
extreme cases that _can_ happen (just as there are OSS vendors 
who write bad OSS code just so they can make more money off your 
favored support model), but that's more an issue with their 
sloppy thinking than anything else.


This will also diminish the growth rate of D. I can tell you, 
it would be significantly harder for me to use D at my 
workplace if this model were in place. Managers are okay with 
letting me use D because it's an open source project (minus the 
backend of DMD) and it doesn't cost them a cent. If all of a 
sudden I have to ask them to pay for fixes in order for my 
project to work, that makes it a different situation entirely. 
My company would also frown on the free option of waiting for 
fixes to pass out of the pay wall. Companies like actively 
supported projects. As such, companies (at least the one I work 
for) prefer either fully commercially supported languages (C# 
through VS) or fully open source.


There are also "fully open source" languages which are "fully 
commercially supported."  How do your managers wrap their minds 
around such a paradox? ;)


My point is that such artificial distinctions are silly, whether 
because of the amount of support or source available.  The 
alternative to paid bug fixes is not that all the bugs you want 
fixed get done for free: it's _no_ bug fixes, as we see today.  
For example, selective imports at module scope have been broken 
for more than eight years now, as those symbols are leaked into 
any module that imports the module with the selective import.  
There are many more bugs like that, that could actually be fixed 
much faster if there were more paid devs working on D.
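
A sketch of the leak being described (two files; the module names 
are invented here):

// --- a.d ---
module a;
import std.stdio : writeln;  // selective import at module scope

// --- b.d ---
module b;
import a;

void main()
{
    // compiles: writeln leaks through a's selective import,
    // even though b never imports std.stdio itself
    writeln("leaked");
}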


Remember, that there are ways to provide commercial support 
without putting a paywall on using the language itself. What is 
really needed here is buy-in from corporations on the language. 
Having engineers from a company working on D would provide one 
form of commercial support. But this model is very difficult to 
find and the closest to it I've seen is Facebook's involvement 
with D. I agree having developers who are paid to work on D 
would be a great thing, but reaching that point is a very 
difficult road.


Having both paid and free versions available is not a "paywall" 
on a language.  A company is not going to just write a bunch of 
patches and open source all of them unless they have some 
complementary business model to go with it, whether google making 
more mobile revenue off Android or Apple providing clang as the 
system compiler on OS X and making money off the bundled Mac.


While I understand that D needs some form of commercial 
support for some parties, there is also a stigma that the 
current model doesn't provide support. I've found to the 
contrary actually. I find the full open source model employed 
here, has a better support system than a lot of other 
commercial support models. The reason is the community, there 
is always someone around to answer a question. I find with most 
commercially supported models the development team can't be 
reached and you have to depend on your coworkers or friends who 
may know of a workaround (Microsoft model). Most of the time I 
see bugs get fixed fairly promptly for a compiler project 
despite the fragmented development that is inherent to open 
source projects.


That community involvement would still be there for the OSS core 
with D, but you would get support for a closed patch from the 
developer who wrote it.


I think commercial support for D really comes down to one of 
two situations in the future:


* A company decides to make a commercial D compiler that is 
closed source but compatible with phobos, etc. They fully 
support the compiler. (Does

Re: call for GC benchmarks

2015-01-05 Thread Martin Nowak via Digitalmars-d

On 01/05/2015 06:18 PM, Benjamin Thaut wrote:

That won't work. Not only are the allocations important, but the pointers
between them as well. Your proposed solution would only work if all
pointers within a D program were known and could be recorded.


And I'm also interested in the type information.


Re: D and Nim

2015-01-05 Thread Andrei Alexandrescu via Digitalmars-d

On 1/5/15 11:03 AM, H. S. Teoh via Digitalmars-d wrote:

On Mon, Jan 05, 2015 at 06:12:41PM +, Brad Anderson via Digitalmars-d wrote:

On Monday, 5 January 2015 at 04:10:41 UTC, H. S. Teoh via Digitalmars-d
wrote:

On Sun, Jan 04, 2015 at 07:25:28PM -0800, Andrei Alexandrescu via
Digitalmars-d wrote:

On 1/4/15 5:07 PM, weaselcat wrote:

Why does reduce! take the seed as its first parameter btw? It sort
of messes up function chaining.


Mistake. -- Andrei


When are we going to fix this?


T


monarch dodra tried to make a reduce that was backward compatible
while moving the seed but it didn't work out in the end. He was
working on a "fold" which was basically reduce with the arguments
swapped but I'm not sure what happened to it.


Should somebody take over the implementation of "fold"? This is
something that ought to be fixed.


If you could that would be great but please fix groupBy first :o) -- Andrei



Re: http://wiki.dlang.org/DIP25

2015-01-05 Thread Steven Schveighoffer via Digitalmars-d

On 1/5/15 10:05 AM, Meta wrote:


IMO, inout (and const/immutable to a degree) is a failure for use with
class/struct methods. This became clear to me when trying to use it for
the toString implementation of Nullable.


You'd have to be more specific for me to understand your point. inout 
was specifically designed for one-implementation accessors for members 
of classes/structs.
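
For reference, the pattern being described, as a minimal sketch 
(hypothetical names):

struct Container
{
    private int[] data;

    // one body serves mutable, const, and immutable receivers
    inout(int)[] payload() inout { return data; }
}

void main()
{
    Container m;
    const Container c;
    int[] a = m.payload();         // inout resolves to mutable
    const(int)[] b = c.payload();  // same body, const result
}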


-Steve


Re: http://wiki.dlang.org/DIP25

2015-01-05 Thread Steven Schveighoffer via Digitalmars-d

On 1/5/15 11:51 AM, deadalnix wrote:

On Monday, 5 January 2015 at 14:00:13 UTC, Steven Schveighoffer wrote:

I strongly disagree :) inout enables so many things that just aren't
possible otherwise.

Most recent example:
https://github.com/D-Programming-Language/druntime/pull/1079

inout only gets confusing when you start using inout delegates.



You are arguing that inout is useful. That simply makes it a useful
disaster :)


I guess you and me have different ideas of what a disaster is :)

-Steve


Re: D and Nim

2015-01-05 Thread H. S. Teoh via Digitalmars-d
On Mon, Jan 05, 2015 at 06:12:41PM +, Brad Anderson via Digitalmars-d wrote:
> On Monday, 5 January 2015 at 04:10:41 UTC, H. S. Teoh via Digitalmars-d
> wrote:
> >On Sun, Jan 04, 2015 at 07:25:28PM -0800, Andrei Alexandrescu via
> >Digitalmars-d wrote:
> >>On 1/4/15 5:07 PM, weaselcat wrote:
> >>>Why does reduce! take the seed as its first parameter btw? It sort
> >>>of messes up function chaining.
> >>
> >>Mistake. -- Andrei
> >
> >When are we going to fix this?
> >
> >
> >T
> 
> monarch dodra tried to make a reduce that was backward compatible
> while moving the seed but it didn't work out in the end. He was
> working on a "fold" which was basically reduce with the arguments
> swapped but I'm not sure what happened to it.

Should somebody take over the implementation of "fold"? This is
something that ought to be fixed.


T

-- 
Laissez-faire is a French term commonly interpreted by Conservatives to mean 
'lazy fairy,' which is the belief that if governments are lazy enough, the Good 
Fairy will come down from heaven and do all their work for them.


Re: D and Nim

2015-01-05 Thread Paulo Pinto via Digitalmars-d

On Monday, 5 January 2015 at 15:51:14 UTC, Brian Rogoff wrote:

...

I think a C++ successor is a language that 'enough' people 
would choose where before they'd have chosen C++. Java has 
already cleared that bar.


Still it leaves out the systems programming space, which is what 
is being discussed.


Java might have again a shot at it, when AOT, value types and the 
new JNR (Java Native Runtime) land, but that is only planned to 
be fully available by Java 10 timeline.


There are implementations like JikesRVM, Excelsior JET Embedded, 
MicroEJ among others that target bare metal, but many developers 
aren't aware of them.



--
Paulo


Re: An idea for commercial support for D

2015-01-05 Thread Jarrett Tierney via Digitalmars-d
As a user of D in a corporate environment and personal at home 
environment, I have to say this model won't work for me. In fact 
if this model were implemented, I would more than likely have to 
move my project to a different language because of it. Let me 
explain the issues I see here.


You've proposed a hybrid open and closed source model. Where 
certain segments of code (latest patches) are behind a per patch 
paywall. As a customer I don't want to have to pay for each bug 
fix in the compiler. If there is a bug in the language/compiler 
then I want it fixed. I shouldn't be charged to have the language 
I'm using work properly. It basically says to customers, here you 
can use this language for free, unless you want it to work 
properly, in which case you need to pay for each fix you need or 
wait till developers don't care about making money off a fix 
anymore.


This will also diminish the growth rate of D. I can tell you, it 
would be significantly harder for me to use D at my workplace if 
this model were in place. Managers are okay with letting me use D 
because it's an open source project (minus the backend of DMD) and 
it doesn't cost them a cent. If all of a sudden I have to ask 
them to pay for fixes in order for my project to work, that makes 
it a different situation entirely. My company would also frown on 
the free option of waiting for fixes to pass out of the pay wall. 
Companies like actively supported projects. As such, companies 
(at least the one I work for) prefer either fully commercially 
supported languages (C# through VS) or fully open source.


Remember, that there are ways to provide commercial support 
without putting a paywall on using the language itself. What is 
really needed here is buy-in from corporations on the language. 
Having engineers from a company working on D would provide one 
form of commercial support. But this model is very difficult to 
find and the closest to it I've seen is Facebook's involvement 
with D. I agree having developers who are paid to work on D would 
be a great thing, but reaching that point is a very difficult 
road.


While I understand that D needs some form of commercial support 
for some parties, there is also a stigma that the current model 
doesn't provide support. I've found to the contrary actually. I 
find the full open source model employed here, has a better 
support system than a lot of other commercial support models. The 
reason is the community, there is always someone around to answer 
a question. I find with most commercially supported models the 
development team can't be reached and you have to depend on your 
coworkers or friends who may know of a workaround (Microsoft 
model). Most of the time I see bugs get fixed fairly promptly for 
a compiler project despite the fragmented development that is 
inherent to open source projects.


I think commercial support for D really comes down to one of two 
situations in the future:


* A company decides to make a commercial D compiler that is 
closed source but compatible with phobos, etc. They fully support 
the compiler. (Doesn't necessarily mean they charge for the 
compiler itself, they could but they can also charge for support 
plans and/or a IDE tool).
* A company decides to invest engineers in working on the open 
source D compiler. Thus providing commercially supported 
developers to the project. (This would be a hybrid too, where the 
open source developers can still contribute and work but now 
there are a group of paid engineers working to advance the 
language as well).


On Sunday, 4 January 2015 at 08:31:23 UTC, Joakim wrote:
This is an idea I've been kicking around for a while, and given 
the need for commercial support for D, would perhaps work well 
here.


The notion is that individual developers could work on patches 
to fix bugs or add features to ldc/druntime/phobos then sell 
those closed patches to paying customers.  After enough time 
has passed, so that sufficient customers have adequately paid 
for the work or after a set time limit beyond that, the patch 
is open sourced and merged back upstream.  It would have to be 
ldc and not dmd, as the dmd backend is not open source and the 
gdc backend license doesn't allow such closed patches.


This works better than bounties because it avoids the "tragedy 
of the commons" problem inherent to open source and bounties, 
ie any user can just wait for some other contributor or any 
potential individual paying customer has an incentive to wait 
and let somebody else pay a bounty, then use the resulting work 
for free right away.  With this approach, non-paying users only 
get the resulting paid work after the work has been paid for 
and perhaps an additional delay after that.


Two big benefits come out of this approach.  Obviously, this 
would provide commercial support for paying customers, but the 
other big benefit is that it doesn't depend on some company 
providing that support.  A decentralized group of d

Re: GSOC - Holiday Edition

2015-01-05 Thread Johannes Pfau via Digitalmars-d
On Mon, 05 Jan 2015 15:08:32 +0100,
Martin Nowak wrote:

> On 01/05/2015 04:50 AM, Mike wrote:
> > Exactly, that's good example.
> 
> Can we please file those as betterC bugs in https://issues.dlang.org/.
> If we sort those out, it will be much easier next time.

I'm working on a private GDC branch running on 8bit AVRs and I fix
these issues as I encounter them. I intend to backport all changes
to DMD in the next few months, so filing bug reports only makes sense if
somebody else wants to fix/upstream fixes them faster than I do ;-)


Re: D and Nim

2015-01-05 Thread Brad Anderson via Digitalmars-d
On Monday, 5 January 2015 at 04:10:41 UTC, H. S. Teoh via 
Digitalmars-d wrote:
On Sun, Jan 04, 2015 at 07:25:28PM -0800, Andrei Alexandrescu 
via Digitalmars-d wrote:

On 1/4/15 5:07 PM, weaselcat wrote:
>Why does reduce! take the seed as its first parameter btw? It 
>sort of messes up function chaining.

Mistake. -- Andrei


When are we going to fix this?


T


monarch dodra tried to make a reduce that was backward compatible 
while moving the seed but it didn't work out in the end. He was 
working on a "fold" which was basically reduce with the arguments 
swapped but I'm not sure what happened to it.
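
The problem in miniature, as a sketch (the fold call is 
hypothetical here, since it hadn't landed in Phobos at the time):

import std.algorithm : map, reduce;

void main()
{
    // seed-first reduce forces you to break the UFCS chain:
    auto sum = reduce!((acc, x) => acc + x)(100, [1, 2, 3].map!(x => x * 2));

    // a seed-last fold would keep the chain intact:
    // auto sum2 = [1, 2, 3].map!(x => x * 2).fold!((acc, x) => acc + x)(100);
}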


Re: Phobos colour module?

2015-01-05 Thread via Digitalmars-d

On Monday, 5 January 2015 at 16:08:27 UTC, Adam D. Ruppe wrote:
Yeah, in my misc repo, there used to be stand-alone image.d and 
simpledisplay.d. Now, they both depend on color.d. Even just a 
basic definition we can use elsewhere is nice to have so other 
libs can interop on that level without annoying casts or 
pointless conversions just to please the type system when the 
contents are identical.


Yes, that too. I was more thinking about the ability to create an 
adapter that extracts colour information from an existing data 
structure and adds context information such as gamma. Then it lets 
you build a function that, say, reads floats from 3 LAB pointers 
and finally returns a tuple with a 16 bit RGB pixel with gamma 
correction and the residue in a specified format suitable for 
dithering... ;-]


It is quite a common error to do computations on colours that are 
ignorant of gamma (or do it wrong) which results in less accurate 
imaging. E.g. When dithering you need to make sure that the 
residue that is left when doing bit truncation is added to the 
neighbouring pixels in a "linear addition" (without gamma). 
Making stuff like that less tedious would make it a very useful 
library.
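
A small sketch of the linear-residue point (the names are my own 
assumptions): quantize in linear light and keep the residue there 
too, so the error diffused to neighbouring pixels is a plain 
linear addition:

import std.math : round;

float quantizeLinear(float linear, out float residue)
{
    // 4-bit target: 16 levels, chosen in linear light
    immutable float q = round(linear * 15.0f) / 15.0f;
    residue = linear - q;  // residue stays linear, never gamma-encoded
    return q;
}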


Re: For the lulz: ddmd vs libdparse lexer timings

2015-01-05 Thread Iain Buclaw via Digitalmars-d
On 5 January 2015 at 17:44, Daniel Murphy via Digitalmars-d
 wrote:
> "Iain Buclaw via Digitalmars-d"  wrote in message
> news:mailman.4157.1420479008.9932.digitalmar...@puremagic.com...
>
>> For consistency? I would go with (c) as va_list could be anything,
>> even a struct (PPC). That and people shouldn't (*!*) be manipulating
>> va_list directly, though unfortunately we do for std.format, etc.
>>
>> The only realistic option would be (a).
>
>
> I think the only code that needs to manipulate va_list directly is low-level
> enough that forcing use of a union or *cast(void**)&va is reasonable.
>
> I think I've got a handle on this, sort of.  I've moved the declaration of
> __va_argsave into the glue layer, and added intrinsic detection for
> va_start/va_end/va_arg (the two-arg form).
>
> I've implemented them in the backend for win32 and they have passed a simple
> test!
>
> I'll run some more extensive tests tomorrow, and then have a look at some
> other platforms.
>
> Do you think we can change _all_ the druntime and phobos code to just use
> va_arg directly?  It would be nice to have it all portable like that.

Oh, yeah, do it!  You have references to __va_argsave in phobos, don't you?


Re: Bad error message example

2015-01-05 Thread Daniel Murphy via Digitalmars-d
"Benjamin Thaut"  wrote in message news:m8eian$21nu$1...@digitalmars.com... 

Today I had a bad template error message and I thought I might post it 
here so something can be done about it. The error message was:


Please report in bugzilla: http://d.puremagic.com/issues/



Bad error message example

2015-01-05 Thread Benjamin Thaut via Digitalmars-d
Today I had a bad template error message and I thought I might post it 
here so something can be done about it. The error message was:


/usr/include/dlang/dmd/std/conv.d(278): Error: template instance 
isRawStaticArray!() does not match template declaration 
isRawStaticArray(T, A...)


I was not using isRawStaticArray anywhere in my code. And the error was 
generally not helpful at all, and yes, that was all of it. So after 
staring at my source file for 10 minutes I finally found the line that 
caused the error:


s.amount = to!double();

I accidentally deleted the contents between the two parentheses. It 
would be great if dmd would give a more meaningful error message in 
this case. Why are the "instantiated from" messages sometimes given and 
sometimes not?


Kind Regards
Benjamin Thaut


Re: For the lulz: ddmd vs libdparse lexer timings

2015-01-05 Thread Daniel Murphy via Digitalmars-d
"Iain Buclaw via Digitalmars-d"  wrote in message 
news:mailman.4157.1420479008.9932.digitalmar...@puremagic.com...



For consistency? I would go with (c) as va_list could be anything,
even a struct (PPC). That and people shouldn't (*!*) be manipulating
va_list directly, though unfortunately we do for std.format, etc.

The only realistic option would be (a).


I think the only code that needs to manipulate va_list directly is low-level 
enough that forcing use of a union or *cast(void**)&va is reasonable.
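
A sketch of those two escape hatches (non-portable by design; only 
meaningful where va_list is pointer-sized):

import core.stdc.stdarg;

void lowLevel(va_list va)
{
    // 1. the explicit-cast escape hatch:
    void* p = *cast(void**) &va;

    // 2. or punning through a union:
    union U { va_list list; void* ptr; }
    U u = { va };
    void* q = u.ptr;
}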


I think I've got a handle on this, sort of.  I've moved the declaration of 
__va_argsave into the glue layer, and added intrinsic detection for 
va_start/va_end/va_arg (the two-arg form).


I've implemented them in the backend for win32 and they have passed a simple 
test!


I'll run some more extensive tests tomorrow, and then have a look at some 
other platforms.


Do you think we can change _all_ the druntime and phobos code to just use 
va_arg directly?  It would be nice to have it all portable like that. 



Re: GSOC - Holiday Edition

2015-01-05 Thread Iain Buclaw via Digitalmars-d
On 5 January 2015 at 14:46, Martin Nowak via Digitalmars-d
 wrote:
> On 01/05/2015 02:59 AM, Craig Dillabaugh wrote:
>>
>> Do you feel the current posting on the Wiki accurately best reflects
>> what work needs to be done on this project.
>
>
> Yeah, it's pretty good.
> I've thrown out the hosted ARM project (AFAIK gdc and ldc are almost done)
> and filled in some details for the bare-metal project.

Around the time of Dconf 2013, gdc's ARM port was passing the (as of
then) D2 testsuite.  Things might have changed since though.

Regards
Iain


Re: For the lulz: ddmd vs libdparse lexer timings

2015-01-05 Thread Iain Buclaw via Digitalmars-d
On 5 January 2015 at 13:37, Daniel Murphy via Digitalmars-d
 wrote:
> "Daniel Murphy"  wrote in message news:m8dv49$1cgs$1...@digitalmars.com...
>
>> And what about explicit casts?
>
>
> Oh yeah, and how does __va_argsave work, why do we need it?
>
> Looking at the druntime and phobos code, I'm not sure which stuff is
> correct, which stuff needs to have the X86_64 version deleted, and which
> should be moved to va_arg.

IIRC, there is some minor duplication between std.format and
core.stdc.stdarg, but I think that we really should be able to get
things working without changing druntime or phobos.


Re: For the lulz: ddmd vs libdparse lexer timings

2015-01-05 Thread Iain Buclaw via Digitalmars-d
On 5 January 2015 at 12:13, Daniel Murphy via Digitalmars-d
 wrote:
> "Daniel Murphy"  wrote in message news:m8dv1g$1cg4$1...@digitalmars.com...
>>
>> Druntime and phobos rely on va_list converting to void*.  Should this
>> a) be allowed on platforms where va_list is a pointer
>> b) always be allowed
>> c) never be allowed
>> ???
>
>
> And what about explicit casts?

Casts should always be explicit.  I think it would be best if va_list
were treated as a type distinct from all others, even if the underlying
type is a char* (x86) or void* (ARM OABI).


Re: For the lulz: ddmd vs libdparse lexer timings

2015-01-05 Thread Iain Buclaw via Digitalmars-d
On 5 January 2015 at 12:11, Daniel Murphy via Digitalmars-d
 wrote:
> "Iain Buclaw via Digitalmars-d"  wrote in message
> news:mailman.4146.1420457999.9932.digitalmar...@puremagic.com...
>
>> That is correct for user code, but not druntime C bindings.
>>
>> GDC can compile the test in 3568 thanks to the GCC backend providing
>> the va_list struct a name (__va_list_tag).
>>
>> However it for sure cannot run the program though.  Only body-less
>> declarations in core.stdc.* are rewritten to ref va_list.
>
>
> Druntime and phobos rely on va_list converting to void*.  Should this
> a) be allowed on platforms where va_list is a pointer
> b) always be allowed
> c) never be allowed
> ???

For consistency? I would go with (c) as va_list could be anything,
even a struct (PPC). That and people shouldn't (*!*) be manipulating
va_list directly, though unfortunately we do for std.format, etc.

The only realistic option would be (a).


Re: call for GC benchmarks

2015-01-05 Thread Benjamin Thaut via Digitalmars-d

Am 05.01.2015 um 17:02 schrieb Kiith-Sa:

On Monday, 5 January 2015 at 14:52:36 UTC, Martin Nowak wrote:

On 01/05/2015 11:26 AM, Benjamin Thaut wrote:

If you are interested I might be able to branch off an old revision and
make it compile with the latest dmd again.


I'm interested in realistically simulating your allocation patterns.
That includes types and allocation sizes, allocation order, lifetime
and connectivity.
Definitely sounds interesting.


Maybe make a proxy GC, record all allocations to a file,
then "replay" those allocations as a benchmark?


That won't work. Not only are the allocations important, but also the 
pointers between them. Your proposed solution would only work if all 
pointers within a D program were known and could be recorded.


Re: GSOC - Holiday Edition

2015-01-05 Thread CraigDillabaugh via Digitalmars-d
On Saturday, 3 January 2015 at 03:33:29 UTC, Rikki Cattermole 
wrote:

On 3/01/2015 3:59 p.m., Craig Dillabaugh wrote:
On Saturday, 3 January 2015 at 00:15:42 UTC, Rikki Cattermole 
wrote:

On 3/01/2015 4:30 a.m., Craig Dillabaugh wrote:
On Thursday, 1 January 2015 at 06:19:14 UTC, Rikki 
Cattermole wrote:

clip

10) Rikki had mentioned a 'Web Development' project, but I 
don't have enough to post on the project ideas page.  Are you 
still interested in doing this?


Yes I am.
I don't know what I'm doing in the near future (need a job) 
so I can't

explore this too much.
But I know I will be able to mentor for it.

Hope that everyone has a great 2015, and I look forward to 
your

feedback.

Cheers,

Craig


It would be great to have you as a mentor, but we definitely 
need fairly
solidly defined projects.  Any chance you can come up with 
something by the end of January?

Craig


Indeed.
I created a list for Cmsed
https://github.com/rikkimax/Cmsed/wiki/Road-map#what-does-other-web-service-frameworks-offer

Right now it basically comes down to e.g. QR code, bar code, 
PDF.

QR and bar codes aren't that hard; not really a GSOC project.
PDF definitely is worthy.

PDF is an interesting case: it needs e.g. PostScript support, and 
preferably image and font loading/exporting.
So it might be a worthwhile project, as it expands out into 
numerous other projects.


Thanks.  Would you like to add something to the Wiki, or would 
you

prefer if I did so.  Also, what license are you using?

Cheers,
Craig


When it comes to my open source code bases I have two rules.
- If you use it commercially, at the very least donate what it's 
worth to you.
- For non-commercial use, as long as I'm not held liable you are 
free to use it in any way you want. At the very least, get 
involved, e.g. PRs, issues.
So: liberal licenses like MIT or BSD, which are compatible with 
e.g. Boost.


Please do write up a skeleton for me on the wiki. I can pad it 
out. Will help to keep things consistent.


I will try to add something in the coming days (hopefully by 
mid-week). However, I believe you have to pick a specific OSI 
approved license for the project for it to be considered for GSOC.




Re: Phobos colour module?

2015-01-05 Thread deadalnix via Digitalmars-d
On Thursday, 1 January 2015 at 06:38:41 UTC, Manu via 
Digitalmars-d wrote:
I've been working on a pretty comprehensive module for dealing 
with
colours in various formats and colour spaces and conversions 
between

all of these.
It seems like a hot area for duplicated effort, since anything 
that
deals with multimedia will need this, and I haven't seen a 
really

comprehensive implementation.



Indeed, I'll stop you right there: I did one as well in the past, 
but it was definitely not high-quality enough to be interesting 
to third parties.



Does it seem like something we should see added to phobos?



Yes.


Re: GSOC - Holiday Edition

2015-01-05 Thread Mike via Digitalmars-d

On Monday, 5 January 2015 at 11:38:17 UTC, Paulo  Pinto wrote:


Personally I would choose Netduino and MicroEJ capable boards if 
I ever do electronics again as a hobby.


Given your experience, wouldn't D be capable of targeting such 
systems as well?




Yes, D is perfectly capable of targeting those boards using GDC 
and potentially even LDC, although LDC still has a few strange 
bugs [1].  In fact, with the right hackery, I assume D will 
generate far better code (smaller and faster) than the .Net Micro 
Framework or MicroEJ.


Another interesting offering is Intel's Edison/Galileo boards 
[2].  I'm under the impression that DMD would be able to generate 
code for those boards as well, although those boards are less 
like microcontrollers and more like micro PCs (e.g. Raspberry Pi, 
BeagleBone Black).


As a hobby project, I highly recommend that anyone interested get 
themselves a board and try it out.  The boards are 
surprisingly inexpensive.  With the right knowledge, it takes 
very little to get started, and can be quite rewarding to see the 
hardware "come alive" with your code.


1. Get yourself a GDC cross-compiler [3], and whatever tools are 
needed to interface a PC to your board (OpenOCD, or 
vendor-supplied tools).
2. Throw out Phobos and D Runtime, and create a small object.d 
with a few stubs as your runtime (a minimal sketch follows below).
3. Write a simple program (e.g. blinky, semi-hosted "hello world" 
[4]).
4. Create a linker script for your board.  This can be difficult 
the first time as you need an intimate understanding of your 
hardware and how the compiler generates code.
5. Use OpenOCD or your vendor's tools to upload the binary to 
your board, and bask in the satisfaction of bringing the board to 
life.
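
For step 2, the minimal object.d can be tiny. The following is just a 
sketch; the exact set of stubs the compiler insists on varies with the 
compiler version and flags:

// object.d -- bare-metal runtime stub (illustrative only)
module object;

alias size_t    = typeof(int.sizeof);
alias ptrdiff_t = typeof(cast(void*)0 - cast(void*)0);
alias string    = immutable(char)[];

// The compiler references these names even if your code never
// uses classes or runtime type information directly.
class Object {}
class TypeInfo {}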


You won't be able to use classes, dynamic arrays, and a multitude 
of other language features unless you find a way to implement 
them in your runtime, but you will be able to write C-like code 
with added bonuses like CTFE, templates, and mixins.


I'm sure those that actually take the plunge will find it to be a 
fun, educational, and rewarding exploration.


Mike

[1] - https://github.com/ldc-developers/ldc/issues/781
[2] - 
http://www.intel.com/content/www/us/en/do-it-yourself/maker.html
[3] - 
http://wiki.dlang.org/Bare_Metal_ARM_Cortex-M_GDC_Cross_Compiler
[4] - 
http://wiki.dlang.org/Minimal_semihosted_ARM_Cortex-M_%22Hello_World%22




Re: http://wiki.dlang.org/DIP25

2015-01-05 Thread deadalnix via Digitalmars-d
On Monday, 5 January 2015 at 14:00:13 UTC, Steven Schveighoffer 
wrote:
I strongly disagree :) inout enables so many things that just 
aren't possible otherwise.


Most recent example: 
https://github.com/D-Programming-Language/druntime/pull/1079


inout only gets confusing when you start using inout delegates.

-Steve


You are arguing that inout is useful. That simply makes it a 
useful disaster :)


Re: D and Nim

2015-01-05 Thread via Digitalmars-d

On Monday, 5 January 2015 at 14:52:00 UTC, Paulo  Pinto wrote:

For your reference, http://isocpp.org/files/papers/n4028.pdf


Yeah, I saw that one, but when ABI was brought up in one of the 
CppCon videos I perceived a lack of enthusiasm among the other 
committee members. Maybe I got the wrong impression, we'll see...


It also remains to be seen what Apple and Microsoft do with 
their new babies (Swift, .NET Native, Dafny).


Yes. I bet the management of big corporations often focus more on 
what the competitors are doing than pure technical merits. So I 
am pretty sure that Microsoft is keen to have something like 
Swift, just in case. I guess that means increased internal 
pressure to make C# shine in the C++ domain...


Re: Phobos colour module?

2015-01-05 Thread Adam D. Ruppe via Digitalmars-d
On Monday, 5 January 2015 at 15:57:32 UTC, Ola Fosheim Grøstad 
wrote:
But I agree that colour theory is solid enough to be considered 
stable and that it would be a great benefit to have a single 
library used across multiple projects. It is also very suitable 
for templated types.


Yeah, in my misc repo there used to be stand-alone image.d and 
simpledisplay.d. Now they both depend on color.d. Even just a 
basic definition we can use elsewhere is nice to have, so other 
libs can interop on that level without annoying casts or 
pointless conversions just to please the type system when the 
contents are identical.


I went with struct Color { ubyte r,g,b,a; }. It's not perfect, 
probably not good enough for something like Photoshop, and 
sometimes the bytes need to be shuffled for different formats, 
but it works for me.
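
A rough sketch of that definition plus the kind of channel shuffling 
mentioned (names here are illustrative, not taken from the actual 
color.d):

struct Color
{
    ubyte r, g, b, a;

    // Some APIs want BGRA instead of RGBA; it's the same four
    // bytes, just reordered.
    Color toBGRA() const
    {
        return Color(b, g, r, a);
    }
}

unittest
{
    auto red = Color(255, 0, 0, 255);
    assert(red.toBGRA() == Color(0, 0, 255, 255));
}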


Re: call for GC benchmarks

2015-01-05 Thread Kiith-Sa via Digitalmars-d

On Monday, 5 January 2015 at 14:52:36 UTC, Martin Nowak wrote:

On 01/05/2015 11:26 AM, Benjamin Thaut wrote:
If you are interrested I might be able to branch of a old 
revision and

make it compile with the latest dmd again.


I'm interested in realistically simulating your allocation 
patterns.
That includes types and allocation sizes, allocation order, 
lifetime and connectivity.

Definitely sounds interesting.


Maybe make a proxy GC, record all allocations to a file,
then "replay" those allocations as a benchmark?


Re: Phobos colour module?

2015-01-05 Thread via Digitalmars-d
On Friday, 2 January 2015 at 08:46:25 UTC, Manu via Digitalmars-d 
wrote:
Not universal enough? Colours are not exactly niche. Loads of 
system APIs, image readers/writers, icons: they all use pixel 
buffers. A full-blown image library will require a lot more 
design work, sure, but I can see room for that in Phobos too.


I feel Phobos needs to be broken up. There is too much esoteric 
stuff in there and too much essential stuff missing. I think some 
kind of "extra" hierarchy is needed for more application-specific 
functionality.


But I agree that colour theory is solid enough to be considered 
stable and that it would be a great benefit to have a single 
library used across multiple projects. It is also very suitable 
for templated types.


A standard image library would have to be templated with optional 
compiler-specific optimizations (SIMD) for the most usual 
combination. There are too many representations used in different 
types of image processing to find common ground (unless you limit 
yourself and just select PNG as your design base).
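
To make "templated" concrete, a toy sketch (purely illustrative; no 
such type exists in Phobos):

// The pixel representation is a template parameter, so the same
// image type serves 8-bit RGBA, floating-point HDR, and so on.
struct Image(Pixel)
{
    size_t width, height;
    Pixel[] pixels;

    ref Pixel opIndex(size_t x, size_t y)
    {
        return pixels[y * width + x];
    }
}

alias ImageRGBA8 = Image!(ubyte[4]);  // the common 32-bit case
alias ImageRGBAf = Image!(float[4]);  // an HDR-ish case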


Re: D and Nim

2015-01-05 Thread Brian Rogoff via Digitalmars-d

On Monday, 5 January 2015 at 10:21:12 UTC, Paulo  Pinto wrote:

On Monday, 5 January 2015 at 09:51:22 UTC, Suliman wrote:

What is the killer feature of Nim?

D is a successor of C++, but Nim? A successor of Python?


I'm not sure if you're being serious, but I'd say yes. The space 
where I see Nim being successful is mostly occupied by Python and 
Go. That it can compete with D in some systems programming, or 
for games, is nice, but games are dominated by C++ and I don't 
see how any new language displaces it in the near future. That 
doesn't mean that the OP shouldn't experiment though.


With some effort in the scientific space, I believe that Nim 
could compete with MATLAB/R/Julia, but currently the libraries 
just don't exist. But the language would appeal to scientific 
programmers I think, more so than would D.


A C++ successor is any language that earns its place in an OS 
vendor's SDK as the officially supported language for all OS 
layers.


I think a C++ successor is a language that 'enough' people would 
choose where before they'd have chosen C++. Java has already 
cleared that bar.






Re: D and Nim

2015-01-05 Thread anonymous via Digitalmars-d

On Monday, 5 January 2015 at 09:51:22 UTC, Suliman wrote:

What is the killer feature of Nim?

D is a successor of C++, but Nim? A successor of Python?


Nim is the successor of Nimrod.


Re: http://wiki.dlang.org/DIP25

2015-01-05 Thread Meta via Digitalmars-d
On Monday, 5 January 2015 at 14:00:13 UTC, Steven Schveighoffer 
wrote:

On 1/5/15 8:06 AM, deadalnix wrote:
On Monday, 29 December 2014 at 20:26:27 UTC, Steven 
Schveighoffer wrote:

On 12/29/14 2:50 PM, Walter Bright wrote:

On 12/29/2014 5:53 AM, Steven Schveighoffer wrote:

On 12/28/14 4:33 PM, Walter Bright wrote:
inout is not transitive, so a ref on the container doesn't 
apply to a
ref on the contents if there's another level of 
indirection in there.
I'm not sure what you mean by this, but inout as a type 
modifier is

definitely
transitive.


As a type modifier, yes, it is transitive. As transferring 
lifetime to

the return value, it is not.



I strongly suggest not to use inout to mean this. This idea 
would be a

disaster.


On the other hand, inout IS a disaster, so why not ?


I strongly disagree :) inout enables so many things that just 
aren't possible otherwise.


Most recent example: 
https://github.com/D-Programming-Language/druntime/pull/1079


inout only gets confusing when you start using inout delegates.

-Steve


IMO, inout (and const/immutable to a degree) is a failure for use 
with class/struct methods. This became clear to me when trying to 
use it for the toString implementation of Nullable.
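
A toy illustration of the kind of corner meant here (not the actual 
Nullable code): a wrapper's toString has to commit to one qualifier up 
front, and whichever it picks can clash with the payload's methods.

struct Wrapper(T)
{
    T payload;

    // const here means only const-callable operations on T are
    // allowed, so a T with a mutable-only toString can fail to
    // instantiate this; dropping const instead makes toString
    // uncallable on const(Wrapper!T) values.
    string toString() const
    {
        import std.conv : to;
        return payload.to!string;
    }
}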


Re: GSOC - Holiday Edition

2015-01-05 Thread CraigDillabaugh via Digitalmars-d

On Monday, 5 January 2015 at 14:46:25 UTC, Martin Nowak wrote:

On 01/05/2015 02:59 AM, Craig Dillabaugh wrote:
Do you feel the current posting on the Wiki accurately reflects 
what work needs to be done on this project?


Yeah, it's pretty good.
I've thrown out the hosted ARM project (AFAIK gdc and ldc are 
almost done) and filled in some details for the bare-metal 
project.


Thanks.


Re: call for GC benchmarks

2015-01-05 Thread Martin Nowak via Digitalmars-d

On 01/05/2015 11:26 AM, Benjamin Thaut wrote:

If you are interested I might be able to branch off an old revision and
make it compile with the latest dmd again.


I'm interested in realistically simulating your allocation patterns.
That includes types and allocation sizes, allocation order, lifetime and 
connectivity.

Definitely sounds interesting.


Re: D and Nim

2015-01-05 Thread Paulo Pinto via Digitalmars-d
On Monday, 5 January 2015 at 14:22:04 UTC, Ola Fosheim Grøstad 
wrote:

On Monday, 5 January 2015 at 13:47:24 UTC, Paulo  Pinto wrote:
For C++ there is the Itanium ABI, COM/WinRT on Windows and the 
upcoming C++17 ABI.


If there will be a C++17 ABI and it is adopted, then that will 
be the beginning of the end for C++ IMO. (Wishful thinking... 
;-)


For your reference, http://isocpp.org/files/papers/n4028.pdf



Yes, there are lots of options; still, the ones that live longest 
as system programming languages are the ones that get OS 
vendor adoption.


So far, it has always been the case.


By my definition of "system level programming" the only adopted 
system level programming language since the 1980s has been C 
(and C++ only as C-with-bells-and-whistles). Then you have some 
fringe languages such as Ada, and now probably also Rust as it 
is approaching version 1.0.


Yes, C, C++, Ada have all been adopted by OS vendors for systems 
programming (bare metal/full OS stack).




I cannot really see Nim or D taking that slot. They appear to 
have too wide a scope. I think only a focused language that can 
bring along better optimization and manual memory handling has 
a chance against C/C++ in system programming. (We have to 
remember that C/C++ are moving too with various extensions that 
also are gaining traction: OpenMP, Cilk...)


Sadly me neither. I think C++11/14 has improved the language 
quite a lot.


For those willing to wait until 2017, it will look even better, 
assuming modules and concepts lite get in.


Clang/XCode also brought the .NET/JVM tooling capabilities to 
C++, which is being adopted by other vendors (JetBrains, 
Microsoft, ...).


However, the majority of C++ code out there is mostly pre-C++98 
in style. So what I learned from the CppCon 2014 videos is that 
I should not miss my C++ days at work.


It also remains to be seen what Apple and Microsoft do with their 
new babies (Swift, .NET Native, Dafny).


--
Paulo


Re: D and Nim

2015-01-05 Thread via Digitalmars-d

On Monday, 5 January 2015 at 14:40:18 UTC, Ary Borenszweig wrote:

You said "Computer Science has found that the right
default for variables is to have them immutable". I don't think 
"Rust == Computer Science". Otherwise their compiler would be 
fast (Computer Science knows how to do fast compilers).


FWIW, proper computer scientists do not care about making fast 
compilers... They care about proving properties such as "why and 
when algorithm 1 is faster than algorithm 2 for data sets that 
approach infinite size, given infinite memory"... Computer 
Science is the stepchild of Discrete Mathematics. Highly 
impractical, but very useful.




Re: GSOC - Holiday Edition

2015-01-05 Thread Martin Nowak via Digitalmars-d

On 01/05/2015 02:59 AM, Craig Dillabaugh wrote:

Do you feel the current posting on the Wiki accurately reflects
what work needs to be done on this project?


Yeah, it's pretty good.
I've thrown out the hosted ARM project (AFAIK gdc and ldc are almost 
done) and filled in some details for the bare-metal project.


Re: D and Nim

2015-01-05 Thread Ary Borenszweig via Digitalmars-d

On 1/5/15 8:01 AM, bearophile wrote:

Ary Borenszweig:


Is there any evidence of the percentage of bugs caused by incorrectly 
mutating variables that were supposed to be immutable?


I don't know, probably not, but the progress in language design is still
in its pre-quantitative phase (note: I think Rust variables are constant
by default, and mutable on request with "mut").
It's not just a matter of bugs, it's also a matter of making the code
simpler to better/faster understand what a function is doing and how.


You said "Computer Science has found that the right
default for variables is to have them immutable". I don't think "Rust == 
Computer Science". Otherwise their compiler would be fast (Computer 
Science knows how to do fast compilers).


At least I like that they are introducing a new feature to their 
language that none other has: lifetimes and borrows. But I find it very 
hard to read their code. Take a look for example at the lerp function 
defined in this article:


http://www.willusher.io/2014/12/30/porting-a-ray-tracer-to-rust-part-1/

Rust:
~~~
pub fn lerp<T: Mul<f32, Output = T> + Add<Output = T> + Copy>(t: f32, a: &T, b: &T) -> T {

*a * (1.0 - t) + *b * t
}
~~~

C++:
~~~
template<typename T>
T lerp(float t, const T &a, const T &b){
return a * (1.f - t) + b * t;
}
~~~




I don't remember having such a bug in my life.


Perhaps you are very good, but a language like D must be designed for
more common programmers like Kenji Hara, Andrei Alexandrescu, or Raymond
Hettinger.


I don't think those are common programmers :-)


Re: D and Nim

2015-01-05 Thread CraigDillabaugh via Digitalmars-d

On Monday, 5 January 2015 at 08:13:29 UTC, Jonathan wrote:
Thanks everyone for the incite so far! Reading between the 
lines, I gather most thoughts are that both languages are 
similar in their positioning/objectives yet differ in certain 
domains (e.g. generic/template capabilities) and qualities 
(e.g. Nim opinionated choice of scope delimiters). Does that 
sound logical? This was kind of the thing I was fishing for 
when thinking of the post.


First sentence ... did you mean 'insight' or was that some sort
of Freudian slip :o)



Re: D and Nim

2015-01-05 Thread via Digitalmars-d

On Monday, 5 January 2015 at 13:47:24 UTC, Paulo  Pinto wrote:
For C++ there is the Itanium ABI, COM/WinRT on Windows and the 
upcoming C++17 ABI.


If there will be a C++17 ABI and it is adopted, then that will be 
the beginning of the end for C++ IMO. (Wishful thinking... ;-)


Yes there are lots of options, still the ones that live longer 
as system programming languages, are the ones that get OS 
vendor adoption.


So far, it has always been the case.


By my definition of "system level programming" the only adopted 
system level programming language since the 1980s has been C (and 
C++ only as C-with-bells-and-whistles). Then you have some fringe 
languages such as Ada, and now probably also Rust as it is 
approaching version 1.0.


I cannot really see Nim or D taking that slot. They appear to 
have too wide a scope. I think only a focused language that can 
bring along better optimization and manual memory handling has a 
chance against C/C++ in system programming. (We have to remember 
that C/C++ are moving too with various extensions that also are 
gaining traction: OpenMP, Cilk...)


Re: GSOC - Holiday Edition

2015-01-05 Thread Martin Nowak via Digitalmars-d

On 01/05/2015 04:38 AM, Mike wrote:

I forgot to mention in my last post that your proposal for moving TypeInfo
to the runtime [1] is also one of the changes I had in mind. It would be an
excellent start, an important precedent, and would avoid the ridiculous
TypeInfo-faking hack necessary to get a build.


And again, you have a good chance to convince people that -betterC 
shouldn't generate TypeInfo.


Re: GSOC - Holiday Edition

2015-01-05 Thread CraigDillabaugh via Digitalmars-d

On Saturday, 3 January 2015 at 16:17:44 UTC, Mathias LANG wrote:
On Wednesday, 31 December 2014 at 03:25:53 UTC, Craig 
Dillabaugh wrote:
I was hoping folks would take a brief break from bickering about 
features, and arguing over which posters have been naughty and 
which have been nice, to give a bit of input on our 2015 
Google Summer of Code proposal ... :o)


Thanks for doing this; we definitely need more manpower.
I would be willing to mentor something related to Vibe.d, 
however I don't have anything to propose ATM. But if you find 
something, feel free to email me.


There was a discussion about redesigning dlang.org. It looks 
like there's some WIP ( 
https://github.com/w0rp/new-dlang.org ), but I didn't follow 
the discussion closely enough (and it's now around 400 posts).
Could it be a possible project, provided that such a project 
would have to be done in D?


Rikki wants to do D web development (see this thread), and his
project is using Vibe.d.  Perhaps you can check it out.  Do you
think you might be interested in serving as the backup mentor for
that one?

As for the web page, that would possibly be a tough sell to Google
if they consider it more of a 'documentation' project than a
'coding' project, since they explicitly state that documentation
projects are not allowed (I was considering suggesting a Phobos
documentation project submission, so did a bit of research on 
that).


However, there has been some talk of improvements to DDOC around 
here,

maybe something could be cooked up there ... we still have a bit
more than a month to get projects lined up.




Re: GSOC - Holiday Edition

2015-01-05 Thread Martin Nowak via Digitalmars-d

On 01/05/2015 04:50 AM, Mike wrote:

Exactly, that's a good example.


Can we please file those as betterC bugs in https://issues.dlang.org/.
If we sort those out, it will be much easier next time.


Re: http://wiki.dlang.org/DIP25

2015-01-05 Thread Steven Schveighoffer via Digitalmars-d

On 1/5/15 8:06 AM, deadalnix wrote:

On Monday, 29 December 2014 at 20:26:27 UTC, Steven Schveighoffer wrote:

On 12/29/14 2:50 PM, Walter Bright wrote:

On 12/29/2014 5:53 AM, Steven Schveighoffer wrote:

On 12/28/14 4:33 PM, Walter Bright wrote:

inout is not transitive, so a ref on the container doesn't apply to a
ref on the contents if there's another level of indirection in there.

I'm not sure what you mean by this, but inout as a type modifier is
definitely
transitive.


As a type modifier, yes, it is transitive. As transferring lifetime to
the return value, it is not.



I strongly suggest not to use inout to mean this. This idea would be a
disaster.


On the other hand, inout IS a disaster, so why not ?


I strongly disagree :) inout enables so many things that just aren't 
possible otherwise.
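
For a concrete flavour of what inout buys (a minimal sketch of mine, 
not code from the PR below): one function body serves mutable, const, 
and immutable arguments, and the qualifier survives through the 
return type.

// One accessor instead of three near-identical overloads: the
// qualifier of the argument is carried through to the result.
inout(int)* firstElem(inout(int)[] arr)
{
    return arr.length ? &arr[0] : null;
}

void main()
{
    int[] m = [1, 2, 3];
    immutable(int)[] i = [4, 5];

    int* pm = firstElem(m);            // result stays mutable
    immutable(int)* pi = firstElem(i); // result stays immutable
}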


Most recent example: 
https://github.com/D-Programming-Language/druntime/pull/1079


inout only gets confusing when you start using inout delegates.

-Steve


Re: D and Nim

2015-01-05 Thread Paulo Pinto via Digitalmars-d
On Monday, 5 January 2015 at 13:13:43 UTC, Ola Fosheim Grøstad 
wrote:

On Monday, 5 January 2015 at 10:21:12 UTC, Paulo  Pinto wrote:

On Monday, 5 January 2015 at 09:51:22 UTC, Suliman wrote:

What is the killer feature of Nim?

D is a successor of C++, but Nim? A successor of Python?


A C++ successor is any language that earns its place in an OS 
vendor's SDK as the officially supported language for all OS 
layers.


Which one it will be is still open game.


But C++ gained traction before any OS officially supported it 
(sans BeOS)?


Yes. It was almost immediately adopted by C compiler vendors, 
given it came from AT&T and was compatible with C.


UNIX vendors jumped on it for CORBA and telecommunications 
(C++'s original field), so by the early 90s pretty much all 
UNIXes had some form of C++ support.


Walter's work was also an influence, given it was the first C++ 
compiler to directly produce native code.


EPOC/Symbian and later OS/400 revisions were also done in C++.

On MS-DOS I was already using C++ back in 1993.



Without an ABI, I think C++ will be its own successor. And I 
think key C++ people know this and will avoid creating an ABI...


The C ABI only works in an OS that happens to be written in C. 
There are a few where this is not the case; OS/400 is one example.


For C++ there is the Itanium ABI, COM/WinRT on Windows and the 
upcoming C++17 ABI.




Besides that I don't think there will be a single replacement. 
It will more likely be several languages aiming at different 
domains where you have different hardware requirements (hpc, 
embedded, servers, interactive apps...)


D needs to pick one area, and do it well there.

* If D is aiming at conserving memory and realtime apps, then 
it needs a better memory model/reference type system.


* If D is aiming at the convenient server programmer (who can 
afford to waste memory), then it needs to tune the language for 
better garbage collection.


With no tuning... other languages will surpass it. Be it Rust, 
Chapel, Go, Nim or one of the many budding language projects 
that LLVM has inspired...


Yes, there are lots of options; still, the ones that live longest as 
system programming languages are the ones that get OS vendor 
adoption.


So far, it has always been the case.

--
Paulo



Re: For the lulz: ddmd vs libdparse lexer timings

2015-01-05 Thread Daniel Murphy via Digitalmars-d

"Daniel Murphy"  wrote in message news:m8dv49$1cgs$1...@digitalmars.com...


And what about explicit casts?


Oh yeah, and how does __va_argsave work, why do we need it?

Looking at the druntime and phobos code, I'm not sure which stuff is 
correct, which stuff needs to have the X86_64 version deleted, and which 
should be moved to va_arg. 



Re: http://wiki.dlang.org/DIP25

2015-01-05 Thread deadalnix via Digitalmars-d

On Wednesday, 31 December 2014 at 21:08:29 UTC, Dicebot wrote:
This mostly matches my current opinion of DIP25 + DIP69 as 
well. It is not as much problem of lacking power but utterly 
breaking KISS principle - too many special cases to remember, 
too many concepts to learn. Path of minimal necessary change is 
tempting but it is path to C++.


Yes, especially when this path creates non-orthogonal features, 
which inevitably cause a complexity explosion down the road.


This is the very old simple-vs-easy problem. Easy is tempting, 
but simple is what we want, and they are sometimes very different 
things.


Re: D and Nim

2015-01-05 Thread via Digitalmars-d

On Monday, 5 January 2015 at 10:21:12 UTC, Paulo  Pinto wrote:

On Monday, 5 January 2015 at 09:51:22 UTC, Suliman wrote:

What is the killer feature of Nim?

D is a successor of C++, but Nim? A successor of Python?


A C++ successor is any language that earns its place in an OS 
vendor's SDK as the officially supported language for all OS 
layers.


Which one it will be is still open game.


But C++ gained traction before any OS officially supported it 
(sans BeOS)?


Without an ABI, I think C++ will be its own successor. And I 
think key C++ people know this and will avoid creating an ABI...


Besides that I don't think there will be a single replacement. It 
will more likely be several languages aiming at different domains 
where you have different hardware requirements (hpc, embedded, 
servers, interactive apps...)


D needs to pick one area, and do it well there.

* If D is aiming at conserving memory and realtime apps, then it 
needs a better memory model/reference type system.


* If D is aiming at the convenient server programmer (who can 
afford to waste memory), then it needs to tune the language for 
better garbage collection.


With no tuning... other languages will surpass it. Be it Rust, 
Chapel, Go, Nim or one of the many budding language projects that 
LLVM has inspired...


Re: http://wiki.dlang.org/DIP25

2015-01-05 Thread deadalnix via Digitalmars-d
On Monday, 29 December 2014 at 20:26:27 UTC, Steven Schveighoffer 
wrote:

On 12/29/14 2:50 PM, Walter Bright wrote:

On 12/29/2014 5:53 AM, Steven Schveighoffer wrote:

On 12/28/14 4:33 PM, Walter Bright wrote:
inout is not transitive, so a ref on the container doesn't 
apply to a
ref on the contents if there's another level of indirection 
in there.
I'm not sure what you mean by this, but inout as a type 
modifier is

definitely
transitive.


As a type modifier, yes, it is transitive. As transferring 
lifetime to

the return value, it is not.



I strongly suggest not to use inout to mean this. This idea 
would be a disaster.


-Steve


On the other hand, inout IS a disaster, so why not ?


Re: For the lulz: ddmd vs libdparse lexer timings

2015-01-05 Thread Daniel Murphy via Digitalmars-d
"Iain Buclaw via Digitalmars-d"  wrote in message 
news:mailman.4146.1420457999.9932.digitalmar...@puremagic.com...



That is correct for user code, but not druntime C bindings.

GDC can compile the test in 3568 thanks to the GCC backend providing
the va_list struct a name (__va_list_tag).

However it for sure cannot run the program though.  Only body-less
declarations in core.stdc.* are rewritten to ref va_list.


Druntime and phobos rely on va_list converting to void*.  Should this
a) be allowed on platforms where va_list is a pointer
b) always be allowed
c) never be allowed
??? 



Re: For the lulz: ddmd vs libdparse lexer timings

2015-01-05 Thread Daniel Murphy via Digitalmars-d
"Daniel Murphy"  wrote in message news:m8dv1g$1cg4$1...@digitalmars.com... 


Druntime and phobos rely on va_list converting to void*.  Should this
a) be allowed on platforms where va_list is a pointer
b) always be allowed
c) never be allowed
??? 


And what about explicit casts?


Re: D and Nim

2015-01-05 Thread logicchains via Digitalmars-d

On Monday, 5 January 2015 at 00:01:34 UTC, Walter Bright wrote:

D:
printf("%d LANGUAGE D %d\n", len, sw.peek().msecs);

Correctly written D:

writeln(len, " LANGUAGE D ", sw.peek().msecs);


Just a note: the reason it uses printf is that when ldc first 
started working on ARM, writeln produced gibberish characters.


On Sunday, 4 January 2015 at 21:46:09 UTC, Ary Borenszweig wrote:

There was a time I liked D. But now to make the code fast you
have to annotate things with pure nothrow @safe to make sure 
the compiler generates fast code. This leads to code that's 
uglier and harder to understand.


For this particular benchmark I noticed little effect on the 
speed of the program from these annotations; I just originally 
added them for that warm fuzzy feeling that comes from marking 
things immutable/pure.


Also, in relation to comments about -boundscheck=off (aka 
noboundscheck), it's interesting to check out the latest Rust 
version. Previously, it was a bit slower than C++, D and Nimrod. 
Now, it matches them... by converting the code to use a tree in 
order to avoid bounds checks! 
https://github.com/logicchains/LPATHBench/blob/master/rs.rs. 
Personally I prefer D's approach.
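
As a side note on D's approach (my own sketch, not from the benchmark 
repo): besides the global -boundscheck=off switch, D also lets you opt 
out locally in @system code by indexing through .ptr.

// Summing with the per-access opt-out: arr.ptr[i] skips the
// bounds check that arr[i] would perform.
int sumUnchecked(const(int)[] arr) @system
{
    int total = 0;
    foreach (i; 0 .. arr.length)
        total += arr.ptr[i]; // unchecked element access
    return total;
}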


Re: GSOC - Holiday Edition

2015-01-05 Thread Paulo Pinto via Digitalmars-d

On Monday, 5 January 2015 at 03:33:15 UTC, Mike wrote:

On Sunday, 4 January 2015 at 17:25:49 UTC, Martin Nowak wrote:


Exceptions on MC sounds like a bad idea,


That is a bias of old.  It is entirely dependent on the 
application.  Many modern uses of microcontrollers are not hard 
real-time, and while my work was primarily on ARM 
microcontrollers, my previous comments were about using D for 
bare-metal and systems programming in general.


Last time I built an embedded ARM project, the resulting D 
binary was as small as the C++ one.


Yes, my "Hello World!" was 56 bytes, but, it's not only about 
getting something to work.



A group of people that builds the infrastructure is needed.

I can't strictly follow your conclusion that half of the 
language needs to be changed.
The only thing I needed to do last time was to disable 
ModuleInfo generation in the compiler.


My conclusion is not that half the language needs to change.  
As I said in a previous post, the changes needed are likely 
few, but fundamental, and can't be implemented in 
infrastructure alone if you want the result to be more than 
"Hey, I got it to work".


The original thread prompting this discussion was about having 
a bare-metal GSOC project.  As I and others have shown, such a 
project is possible, interesting, entertaining and educational, 
but it will always be just that without 
language/compiler/toolchain support.


A more worthwhile GSOC project would be to add those few, yet 
fundamental, language/compiler/toolchain changes to make the 
experience feel like the language was designed with intent for 
systems programming.  I don't think that will be of much 
interest to embedded/kernel/bare-metal programmers, though, but 
rather to those with an interest in language and compiler design.


Mike


Personally I would choose Netduino and MicroEJ capable boards if I 
ever do electronics again as a hobby.


Given your experience, wouldn't D be capable of targeting such 
systems as well?


..
Paulo


Re: For the lulz: ddmd vs libdparse lexer timings

2015-01-05 Thread Iain Buclaw via Digitalmars-d
On 5 January 2015 at 11:21, Daniel Murphy via Digitalmars-d
 wrote:
> "Iain Buclaw via Digitalmars-d"  wrote in message
> news:mailman.4143.1420452193.9932.digitalmar...@puremagic.com...
>
>> That depends on how we agree to go forward with this.  From memory, we
>> each do / did things differently.
>>
>> I have no doubt that the way I've done it is a kludge at best, but
>> I'll explain it anyway.
>>
>> GDC *always* uses the real va_list type, our type-strict backend
>> demands at least that from us.  So when it comes down to the problem
>> of passing around va_list when it's a static array (extern C expects a
>> ref), I rely on people using core.vararg/gcc.builtins to get the
>> proper __builtin_va_list before importing modules such as
>> core.stdc.stdio (printf and friends) - as these declarations are then
>> rewritten by the compiler from:
>>
>> int vprintf(__builtin_va_list[1] va, in char* fmt, ...)
>>
>> to:
>>
>> int vprintf(ref __builtin_va_list[1] va, in char* fmt, ...)
>>
>>
>> This is an *esper* workaround, and ideally, I shouldn't be doing this...
>
>
> I just read the discussion in
> https://github.com/D-Programming-Language/dmd/pull/3568 and I think I
> finally get it, lol.
>
> AIUI your solution won't work for user C++ functions that take va_list,
> because either type or mangling will be correct, but never both.  Is that
> correct?  Can gdc compile the tests in 3568?
>

That is correct for user code, but not druntime C bindings.

GDC can compile the test in 3568 thanks to the GCC backend providing
the va_list struct a name (__va_list_tag).

However it for sure cannot run the program though.  Only body-less
declarations in core.stdc.* are rewritten to ref va_list.


Re: For the lulz: ddmd vs libdparse lexer timings

2015-01-05 Thread Daniel Murphy via Digitalmars-d
"Iain Buclaw via Digitalmars-d"  wrote in message 
news:mailman.4143.1420452193.9932.digitalmar...@puremagic.com...



That depends on how we agree to go forward with this.  From memory, we
each do / did things differently.

I have no doubt that the way I've done it is a kludge at best, but
I'll explain it anyway.

GDC *always* uses the real va_list type, our type-strict backend
demands at least that from us.  So when it comes down to the problem
of passing around va_list when it's a static array (extern C expects a
ref), I rely on people using core.vararg/gcc.builtins to get the
proper __builtin_va_list before importing modules such as
core.stdc.stdio (printf and friends) - as these declarations are then
rewritten by the compiler from:

int vprintf(__builtin_va_list[1] va, in char* fmt, ...)

to:

int vprintf(ref __builtin_va_list[1] va, in char* fmt, ...)


This is an *esper* workaround, and ideally, I shouldn't be doing this...


I just read the discussion in 
https://github.com/D-Programming-Language/dmd/pull/3568 and I think I 
finally get it, lol.


AIUI your solution won't work for user C++ functions that take va_list, 
because either type or mangling will be correct, but never both.  Is that 
correct?  Can gdc compile the tests in 3568?


I'm going to have a look at turning va_list into a magic type that the 
compiler will pass by reference when necessary and always mangle correctly. 



Re: D and Nim

2015-01-05 Thread Abdulhaq via Digitalmars-d

On Monday, 5 January 2015 at 11:01:51 UTC, bearophile wrote:



I don't remember having such a bug in my life.


Perhaps you are very good, but a language like D must be 
designed for more common programmers like Kenji Hara, Andrei 
Alexandrescu, or Raymond Hettinger.


Bye,
bearophile






Re: D and Nim

2015-01-05 Thread bearophile via Digitalmars-d

Ary Borenszweig:

Is there any evidence of the percentage of bugs caused by incorrectly 
mutating variables that were supposed to be immutable?


I don't know, probably not, but the progress in language design 
is still in its pre-quantitative phase (note: I think Rust 
variables are constant by default, and mutable on request with 
"mut").
It's not just a matter of bugs, it's also a matter of making the 
code simpler to better/faster understand what a function is doing 
and how.




I don't remember having such a bug in my life.


Perhaps you are very good, but a language like D must be designed 
for more common programmers like Kenji Hara, Andrei Alexandrescu, 
or Raymond Hettinger.


Bye,
bearophile


Re: D and Nim

2015-01-05 Thread bearophile via Digitalmars-d

Daniel Murphy:


Every C++ programmer has hit this bug at some point:

struct S
{
   int a;
   S(int a)
   {
       a = a; // oops: assigns the parameter to itself; the member is never set
   }
};


I have a bug report for something like that [TM]:
https://issues.dlang.org/show_bug.cgi?id=3878

Bye,
bearophile


Re: call for GC benchmarks

2015-01-05 Thread Brian Schott via Digitalmars-d

On Sunday, 4 January 2015 at 05:38:06 UTC, Martin Nowak wrote:
I'd like to have a few more real world GC benchmarks in 
druntime.
The current ones are all rather micro-benchmarks, some of them 
don't even create garbage.


So if someone has a program that is heavily GC limited, I'd be 
interested in seeing that converted to a benchmark.


I made a start with one 
(https://github.com/D-Programming-Language/druntime/pull/1078) 
that resembles a mysql-to-mongodb importer I wrote recently.


You could try building really old versions of DCD. I converted my 
entire D parsing library to allocators several months ago and got 
a huge speed boost.


Re: call for GC benchmarks

2015-01-05 Thread Benjamin Thaut via Digitalmars-d

Am 04.01.2015 um 06:37 schrieb Martin Nowak:

I'd like to have a few more real world GC benchmarks in druntime.
The current ones are all rather micro-benchmarks, some of them don't
even create garbage.

So if someone has a program that is heavily GC limited, I'd be
interested in seeing that converted to a benchmark.

I made a start with one
(https://github.com/D-Programming-Language/druntime/pull/1078)
that resembles a mysql-to-mongodb importer I wrote recently.


I have a 3D space shooter implemented in D. Before I transitioned it 
to fully manual memory management, the GC was the biggest 
bottleneck. Would you be interested in something like that as well, or 
are smaller applications with a command line interface preferred?
If you are interested I might be able to branch off an old revision and 
make it compile with the latest dmd again.


Kind Regards
Benjamin Thaut


Re: D and Nim

2015-01-05 Thread Paulo Pinto via Digitalmars-d

On Monday, 5 January 2015 at 09:51:22 UTC, Suliman wrote:

What is the killer feature of Nim?

D is a successor of C++, but Nim? A successor of Python?


A C++ successor is any language that earns its place in an OS 
vendor's SDK as the officially supported language for all OS 
layers.


Which one it will be is still open game.


Re: For the lulz: ddmd vs libdparse lexer timings

2015-01-05 Thread Iain Buclaw via Digitalmars-d
On 5 January 2015 at 09:23, Daniel Murphy via Digitalmars-d
 wrote:
> "Iain Buclaw via Digitalmars-d"  wrote in message
> news:mailman.4141.1420448690.9932.digitalmar...@puremagic.com...
>
>> void foo(int bar, ...)
>> {
>>   va_list* va = void;
>>   va_list[1] __va_argsave;
>>   va = &__va_argsave;
>>
>>   ...
>> }
>>
>> The above being compiler generated by DMD.
>
>
> Should that be va = &__va_argsave[0] ?

Yes.  More or less, both should do the same. :-)

>  So what _should_ DMD be generating?

That depends on how we agree to go forward with this.  From memory, we
each do / did things differently.

I have no doubt that the way I've done it is a kludge at best, but
I'll explain it anyway.

GDC *always* uses the real va_list type, our type-strict backend
demands at least that from us.  So when it comes down to the problem
of passing around va_list when it's a static array (extern C expects a
ref), I rely on people using core.vararg/gcc.builtins to get the
proper __builtin_va_list before importing modules such as
core.stdc.stdio (printf and friends) - as these declarations are then
rewritten by the compiler from:

int vprintf(__builtin_va_list[1] va, in char* fmt, ...)

to:

int vprintf(ref __builtin_va_list[1] va, in char* fmt, ...)


This is an *esper* workaround, and ideally, I shouldn't be doing this...


Re: D and Nim

2015-01-05 Thread Suliman via Digitalmars-d

What is the killer feature of Nim?

D is a successor of C++, but Nim? A successor of Python?


Re: For the lulz: ddmd vs libdparse lexer timings

2015-01-05 Thread Jacob Carlborg via Digitalmars-d

On 2015-01-05 05:04, Brian Schott wrote:


Getting dub to turn on optimizations is easier than getting it to turn
off debugging.


dub build --build=release ?

--
/Jacob Carlborg


Re: For the lulz: ddmd vs libdparse lexer timings

2015-01-05 Thread Daniel Murphy via Digitalmars-d
"Iain Buclaw via Digitalmars-d"  wrote in message 
news:mailman.4141.1420448690.9932.digitalmar...@puremagic.com...



void foo(int bar, ...)
{
  va_list* va = void;
  va_list[1] __va_argsave;
  va = &__va_argsave;

  ...
}

The above being compiler generated by DMD.


Should that be va = &__va_argsave[0] ?  So what _should_ DMD be generating? 



Re: For the lulz: ddmd vs libdparse lexer timings

2015-01-05 Thread Iain Buclaw via Digitalmars-d
On 5 January 2015 at 08:28, Daniel Murphy via Digitalmars-d
 wrote:
> "David Nadlinger"  wrote in message
> news:qlzdmlnzlklofmlkq...@forum.dlang.org...
>
>> It is. It breaks vararg cross-platform compatibility (e.g. Linux x86 vs.
>> Linux x86_64) and GDC/LDC will never need it. It's something that really
>> needs to be fixed sooner rather than later. The only reason the current
>> situation is bearable is that C varargs are rarely ever used in D-only code.
>
>
> Do you know how to fix it in dmd?  I don't know why it's there in the first
> place.

It is there because you still use a synthetic pointer for C varargs on
x86_64.  This synthetic pointer needs to be initialised to point to a
static array, otherwise bad things happen when you pass it on to C.  Enter
__va_argsave to the rescue.

void foo(int bar, ...)
{
  va_list* va = void;
  va_list[1] __va_argsave;
  va = &__va_argsave;

  ...
}

The above being compiler generated by DMD.


Re: For the lulz: ddmd vs libdparse lexer timings

2015-01-05 Thread Daniel Murphy via Digitalmars-d
"David Nadlinger"  wrote in message 
news:qlzdmlnzlklofmlkq...@forum.dlang.org...


It is. It breaks vararg cross-platform compatibility (e.g. Linux x86 vs. 
Linux x86_64) and GDC/LDC will never need it. It's something that really 
needs to be fixed sooner rather than later. The only reason the current 
situation is bearable is that C varargs are rarely ever used in D-only 
code.


Do you know how to fix it in dmd?  I don't know why it's there in the first 
place. 



Re: D and Nim

2015-01-05 Thread Jonathan via Digitalmars-d
Thanks everyone for the incite so far! Reading between the lines, 
I gather most thoughts are that both languages are similar in 
their positioning/objectives yet differ in certain domains (e.g. 
generic/template capabilities) and qualities (e.g. Nim's 
opinionated choice of scope delimiters). Does that sound logical? 
This was kind of the thing I was fishing for when thinking of the 
post.


Re: lint for D

2015-01-05 Thread Kingsley via Digitalmars-d

On Sunday, 4 January 2015 at 00:05:51 UTC, Martin Nowak wrote:

https://github.com/Hackerpilot/Dscanner


Brilliant, thanks - I've successfully integrated it into my 
IntelliJ plugin.