Re: assert semantic change proposal

2014-08-05 Thread Tofu Ninja via Digitalmars-d

On Wednesday, 6 August 2014 at 00:52:32 UTC, Walter Bright wrote:

On 8/3/2014 4:51 PM, Mike Farnsworth wrote:
This all seems to have a very simple solution, to use 
something like: expect()


I see code coming that looks like:

   expect(x > 2);  // must be true
   assert(x > 2);  // check that it is true

All I can think of is, shoot me now :-)


How about something like
@expected assert(x > 2); or @assumed assert(x > 2);

It wouldn't introduce a new keyword, but still introduces the
expected/assumed semantics. You should keep in mind that you
might have to make a compromise, regardless of your feelings on
the subject.


Also, I am going to try to say this in as respectful a way as I
can...

Please stop responding in such a dismissive way; I think it is 
already pretty obvious that some are getting frustrated by these 
threads. Responding in a dismissive way makes it seem like you 
don't take the arguments seriously.


Re: Phobos PR: `call` vs. `bindTo`

2014-08-05 Thread Tobias Pankrath via Digitalmars-d

From the PR:

---
A better way to bind multiple arguments would be:

((int x = 2,
  int y = 3,
 ) => (x * y))()
---

Could we make this possible?


Re: Qt Creator and D

2014-08-05 Thread Suliman via Digitalmars-d
DCD is implemented in D. Its lexer/parser/ast code is located 
in the libdparse project. 
https://github.com/Hackerpilot/libdparse/


Am I right in understanding that a project like SDC may be very 
helpful not only for autocompletion, but also for real-time code 
checking, like Visual Studio does?




Re: Qt Creator and D

2014-08-05 Thread Brian Schott via Digitalmars-d
On Wednesday, 6 August 2014 at 04:34:25 UTC, Manu via 
Digitalmars-d wrote:
Does DCD also share the Mono-D completion lib, or are there 
'competing' libs again?


They are separate autocompletion engines. Mono-D's 
lexer/parser/ast are written in C#, probably because it was much 
easier to integrate with Monodevelop that way. Mono-D is two 
years older than DCD.


DCD is implemented in D. Its lexer/parser/ast code is located in 
the libdparse project. https://github.com/Hackerpilot/libdparse/


Re: Qt Creator and D

2014-08-05 Thread Manu via Digitalmars-d
On 6 August 2014 00:12, Max Klimov via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:

> On Wednesday, 18 September 2013 at 14:49:27 UTC, Joseph Rushton
> Wakeling wrote:
>
>> Hello all,
>>
>> Several of us have been talking about Qt Creator and D in various
>> subthreads of the current IDE-related discussions going ...
>>
>
> Recently I started the development of plugins for QtCreator in
> order to add basic support of D. I did not notice the existing
> project https://github.com/GoldMax/QtCreatorD, it could have been
> very useful for me.
> Currently I have 2 plugins: the first one is for DUB
> (http://code.dlang.org/) support, the second one is for D
> language support directly.
> The repositories are here:
>
> https://github.com/Groterik/qtcreator-dubmanager
> https://github.com/Groterik/qtcreator-dlangeditor
>
> DUB-plugin provides project management features, building,
> running, etc. The dlangeditor-plugin itself provides indention,
> code completion, etc. It uses DCD
> (https://github.com/Hackerpilot/DCD) binaries for completion.
> This plugin is not as stable as I expected (sometimes completion
> can disappear until the next QtCreator restart), but I am looking
> forward to the community’s help in finding/fixing bugs and in
> suggesting/developing new features.
>

Out of curiosity, how do you find DCD?

In my experience, the Mono-D completion engine has been the best for a long
time, and VisualD switched to use the Mono-D completion engine some time
back.
Does DCD also share the Mono-D completion lib, or are there 'competing'
libs again?
Can users expect the same experience from DCD integrated editors as from
Mono-D?

Anyone worked in both environments extensively?


Re: assert semantic change proposal

2014-08-05 Thread David Bregman via Digitalmars-d
On Wednesday, 6 August 2014 at 01:11:55 UTC, Jeremy Powers via 
Digitalmars-d wrote:
That's in the past. This is all about the pros and cons of 
changing it now and for the future.



The main argument seems to revolve around whether this is 
actually a change or not.  In my (and others') view, the 
treatment of assert as 'assume' is not a change at all.  It was 
in there all along, we just needed the wizard to tell us.



How can there be any question? This is a change in the compiler, 
a change in the docs, a change in what your program does, a 
change of the very bytes in the executable. If my program worked 
before and doesn't now, how is that not a change? This must be 
considered a change by any reasonable definition of the word 
'change'.


I don't think I can take seriously this idea that someone's 
unstated, unmanifested intentions define change more so than 
things that are .. you know.. actual real changes.


Much of the rest of your post seems to be predicated on this, so 
I don't think I can respond to it. Let me know if I missed 
something.


In an attempt to return this discussion to something useful, 
question:


If assert means what I think it means (and assuming we agree on 
what the actual two interpretations are), how do we allay 
concerns about it?  Is there anything fundamentally/irretrievably 
bad if we use this new/old definition?


Well I think I outlined the issues in the OP. As for solutions, 
there have been some suggestions in this thread, the simplest is 
to leave things as is and introduce the optimizer hint as a 
separate function, assume().


I don't think there was any argument presented against a separate 
function besides that Walter couldn't see any difference between 
the two behaviors, or the intention thing which doesn't really 
help us here.


I guess the only real argument against it is that pre-existing 
asserts contain significant optimization information that we 
can't afford to not reuse. But this is a claim I'm pretty 
skeptical of. Andrei admitted it's just a hunch at this point. 
Try looking through your code base to see how many asserts would 
be useful for optimizing. For me, it's mostly 
default:assert(false); in switch statements, though ironically 
that is defined to produce a halt even in release, so the 
compiler won't optimize away the branch like it should.


Heh, I just realized, that particular special case is another 
argument for a separate function, because assert(false) can't 
express unreachability. assume(false) could.



(Can we programmatically (sp?) identify and flag/resolve
issues that occur from a mismatch of expectations?)


I'm not an expert on this, but my guess is it's possible in 
theory but would never happen in practice. Such things are very 
complex to implement; if Walter won't agree to a simple and easy 
solution, I'm pretty sure there's no way in hell he would agree 
to a complex one that takes a massive amount of effort.


Re: Guide for dmd development @ Win64?

2014-08-05 Thread Dicebot via Digitalmars-d

On Tuesday, 5 August 2014 at 21:48:40 UTC, Johannes Blume wrote:
Normally, you just execute vcvars32.bat/vcvars64.bat before 
doing anything from the command line and you are set. Even make 
scripts I created five years ago for VS2008 still work without 
a hitch on VS2013 without any manual PATH trickery. The 
detailed directory layout of VS is not something makefiles are 
supposed to know about.


If it is the only reliable way to get the environment prepared, 
is there any reason we shouldn't require running `make -f 
win*.mak` from it instead of trying to configure all paths 
manually?


Re: Phobos PR: `call` vs. `bindTo`

2014-08-05 Thread Rikki Cattermole via Digitalmars-d

On 6/08/2014 7:29 a.m., H. S. Teoh via Digitalmars-d wrote:

There are currently two Phobos PR's that implement essentially the same
functionality, but in slightly different ways:

https://github.com/D-Programming-Language/phobos/pull/1255
https://github.com/D-Programming-Language/phobos/pull/2343


From the discussion on Github, it seems to me that we should only
introduce one new function rather than two similar but
not-quite-the-same functions. Since the discussion seems stuck on
Github, I thought I should bring it here to the larger community to see
if we can reach a consensus (who am I kidding... but one can hope :-P)
on:

(1) What name to use (bring on the bikeshed rainbow)
(2) Exactly what functionality should be included.
(3) Which PR to merge.


T


I'm not convinced that either PR adds anything of use. As noted, unaryFun 
can be used identically to bindTo, so it's wasted code in my opinion.


Re: assert semantic change proposal

2014-08-05 Thread eles via Digitalmars-d

On Wednesday, 6 August 2014 at 02:43:17 UTC, eles wrote:

On Wednesday, 6 August 2014 at 01:39:25 UTC, Mike Parker wrote:

On 8/6/2014 8:18 AM, David Bregman wrote:


*I think this is important: it is not only a tested condition 
that is then handled, but a tested condition exposing a "does not 
work as intended". It looks more like a kind of "this code should 
be unreachable".


nitpicking: it is not "my program does not behave as expected 
(for example, because the config file is broken)" but "my code 
wasn't supposed to say that and I did not mean to write such code"


Re: assert semantic change proposal

2014-08-05 Thread eles via Digitalmars-d

On Wednesday, 6 August 2014 at 01:39:25 UTC, Mike Parker wrote:

On 8/6/2014 8:18 AM, David Bregman wrote:





You keep going on the premise that your definition is the 
intended definition. I completely disagree. My understanding of 
assert has always been as Walter has described it.


I did not use to think the same, but once Walter stated his 
vision of assert(), it was like a kind of revelation to me: why 
wouldn't the optimizer make use of such obvious information as 
assert() provides, just like a simple:


if(x<5) {
  // you can safely optimize this code knowing (*assuming*)
  // that x<5 always holds
}

But what started to bother me lately and, I think, is the root 
of the problem: having programmer code disabled by a compiler 
flag. I do not speak of bounds checking, where the code is never 
explicitly written by the programmer, but of real programmer 
code.


Until now, versioning (or, in C/C++, the #ifdef) was the sole 
acceptable way to disable programmer code. The C assert slipped 
through as being based on an #ifdef or #if (I know good compilers 
will also optimize out an if(0), but that is just because it 
happens to be obvious).


OTOH, the D assert is no longer based (directly) on versioning, 
so having it disabled by a flag is not easily grasped. This, 
combined with the sudden revelation of the optimizer messing with 
it, produced a shock, and this thread illustrates it. People are 
just too used to its secondary meaning from C, that is, besides 
testing conditions: "easily obtain a log of what was wrong and 
where". So, it was an assertion, but also a logging feature 
(albeit a fatal one). People got used to assert() becoming no-op 
code in release mode, just as they would disable logging for 
release.


The more I think about it, the more I feel that assert would 
more naturally be an annotation or a kind of versioning. Still, I 
cannot come up with a clear-cut proposal, as my mind is also 
entangled in old habits. On one hand, it feels natural as an 
instruction; on the other hand, being disable-able, maybe even 
ignorable (in release builds), and an invalidating point for the 
program logic*, it should belong somewhere else.


*I think this is important: it is not only a tested condition 
that is then handled, but a tested condition exposing a "does not 
work as intended". It looks more like a kind of "this code should 
be unreachable".


Re: assert semantic change proposal

2014-08-05 Thread David Bregman via Digitalmars-d

On Wednesday, 6 August 2014 at 01:39:25 UTC, Mike Parker wrote:

On 8/6/2014 8:18 AM, David Bregman wrote:



This appears to be the root of the argument, and has been 
circled repeatedly... it's not my intent to restart another round 
of discussion on that well traveled ground, I just wanted to 
state my support for the definition as I understand it.


I disagree. I don't think the fact that some people already had 
the new definition in mind before is really all that relevant. 
That's in the past. This is all about the pros and cons of 
changing it now and for the future.


You keep going on the premise that your definition is the 
intended definition. I completely disagree. My understanding of 
assert has always been as Walter has described it. To me, 
*that* is the existing definition and yours is completely new. 
Nothing is being changed. He's just talking about starting to 
take advantage of it as he has always intended to.


No, intention is orthogonal to this. Again, this is all about the 
pros and cons of changing the *actual* semantics of assert.


Re: assert semantic change proposal

2014-08-05 Thread Tove via Digitalmars-d

On Wednesday, 6 August 2014 at 00:47:28 UTC, Walter Bright wrote:
If you build dmd in debug mode, and then run it with -O --c, it 
will give you a list of all the data flow transformations it 
does.


But the list is a blizzard on non-trivial programs.


Awesome, thanks! Will give it a whirl, as soon as my vacation is 
over.


Re: assert semantic change proposal

2014-08-05 Thread Mike Parker via Digitalmars-d

On 8/6/2014 8:18 AM, David Bregman wrote:



This appears to be the root of the argument, and has been circled
repeatedly... it's not my intent to restart another round of
discussion on
that well traveled ground, I just wanted to state my support for the
definition as I understand it.


I disagree. I don't think the fact that some people already had the new
definition in mind before is really all that relevant. That's in the
past. This is all about the pros and cons of changing it now and for the
future.


You keep going on the premise that your definition is the intended 
definition. I completely disagree. My understanding of assert has always 
been as Walter has described it. To me, *that* is the existing 
definition and yours is completely new. Nothing is being changed. He's 
just talking about starting to take advantage of it as he has always 
intended to.





Re: assert semantic change proposal

2014-08-05 Thread Jeremy Powers via Digitalmars-d
> That's in the past. This is all about the pros and cons of changing it now
> and for the future.
>

The main argument seems to revolve around whether this is actually a change
or not.  In my (and others') view, the treatment of assert as 'assume' is
not a change at all.  It was in there all along, we just needed the wizard
to tell us.



The below can safely be ignored, as I just continue the pedantic
discussions


OK, but my point was you were using a different definition of undefined
> behavior. We can't communicate if we aren't using the same meanings of
> words.
>
>
Yes, very true.  My definition of undefined in this case hinges on my
definition of what assert means.  If a failed assert means all code after
it is invalid, then by definition (as I interpret the definition) that code
is invalid and can be said to have undefined behaviour.  That is, it makes
sense to me that it is specified as undefined, by the spec that is
incredibly unclear.  I may be reading too much into it here, but this
follows the strict definition of undefined - it is undefined because it is
defined to be undefined.  This is the 'because I said so' defense.



>  The 'regular' definition of assert that you claim is what I see as
>> the redefinition - it is a definition based on the particular
>> implementation of assert in other languages, not on the conceptual idea of
>> assert as I understand it (and as it appears to be intended in D).
>>
>
> The 'regular' definition of assert is used in C, C++ and, for the last
> >10 years (afaik), in D. If you want to change it you need a good
> justification. I'm not saying such justification doesn't exist; maybe
> it does, but I have not seen it.
>
>
This 'regular' definition is a somewhat strict interpretation of the
definition to only match how languages have implemented it.  I have always
interpreted the intent of assert to be 'here is something that must be
true, if it is not then my program is in an invalid state' - the fact it is
only a debug halting tool in practice means it falls short of its
potential.  And in fact I very rarely use it in practice for this reason,
as I find the way it works almost useless and definitely dangerous.



> This appears to be the root of the argument, and has been circled
>> repeatedly... it's not my intent to restart another round of discussion on
>> that well traveled ground, I just wanted to state my support for the
>> definition as I understand it.
>>
>
> I disagree. I don't think the fact that some people already had the new
> definition in mind before is really all that relevant.


It comes back to whether the new definition is actually new.  If it is a
new definition, then we can argue about whether it is good or not.  If it
is the old definition (which slightly differs from how assert works in
practice in other languages) then we can argue about whether D should
conform to other languages or leverage the existing definition...

I contend that it is not new, and is simply an extension of the actual
definition.  Some people agree, some people don't... very lengthy and
esoteric discussions have already spiraled through this repeatedly, so us
arguing about it again probably won't get anywhere.

My stance is that this new/old definition is a good thing, as it matches
how I thought things were already, and any code that surfaces as broken
because of it was already broken in my definition.  Therefore this 'change'
is good, does not introduce breaking changes, and arguments about such
should be redirected towards mitigation and fixing of expectations.

In an attempt to return this discussion to something useful, question:

If assert means what I think it means (and assuming we agree on what the
actual two interpretations are), how do we allay concerns about it?  Is
there anything fundamentally/irretrievably bad if we use this new/old
definition?  (Can we programmatically (sp?) identify and flag/resolve
issues that occur from a mismatch of expectations?)


Re: assert semantic change proposal

2014-08-05 Thread Walter Bright via Digitalmars-d

On 8/3/2014 4:01 PM, Timon Gehr wrote:

Walter is not a supernatural being.


Yes, I am. I demand tribute.



Re: assert semantic change proposal

2014-08-05 Thread Walter Bright via Digitalmars-d

On 8/3/2014 4:24 PM, Martin Krejcirik wrote:

Couldn't this new assert behaviour be introduced as a new optimization
switch ? Say -Oassert ? It would be off by default and would work both
in debug and release mode.


It could, but every one of these:

1. doubles the time it takes to test dmd, it doesn't take many of these to 
render dmd untestable


2. adds confusion to most programmers as to what switch does what

3. adds complexity, i.e. bugs

4. interactions between optimization switches often exhibit emergent behavior - 
i.e. extremely hard to test for




Re: assert semantic change proposal

2014-08-05 Thread Walter Bright via Digitalmars-d

On 8/3/2014 4:51 PM, Mike Farnsworth wrote:

This all seems to have a very simple solution, to use something like: expect()


I see code coming that looks like:

   expect(x > 2);  // must be true
   assert(x > 2);  // check that it is true

All I can think of is, shoot me now :-)



Re: assert semantic change proposal

2014-08-05 Thread Walter Bright via Digitalmars-d

On 8/3/2014 7:31 PM, John Carter wrote:

Compiler users always blame the optimizer long before they blame their crappy 
code.

Watching the gcc mailing list over the years, those guys bend over backwards to
prevent that happening.

But since an optimization has to be based on additional hard information, they
have, with every new version of gcc, used that information both for warnings and
optimization.


Recent optimization improvements in gcc and clang have also broken existing code 
that has worked fine for decades.


In particular, overflow checks often now get optimized out, as the check relied 
on, pedantically, undefined behavior.


This is why D has added the core.checkedint module, to have overflow checks that 
are guaranteed to work.


Another optimization that has broken existing code is removal of dead 
assignments. This has broken crypto code that would overwrite passwords after 
using them. It's also why D now has volatileStore() and volatileLoad(), if only 
someone will pull them.


I.e. silent breakage of existing, working code is hardly unknown in the C/C++ 
world.



Re: assert semantic change proposal

2014-08-05 Thread Walter Bright via Digitalmars-d

On 8/3/2014 7:26 PM, Tove wrote:

It is possible, just not as a default enabled warning.

Some compilers offers optimization diagnostics which can be enabled by a switch,
I'm quite fond of those as it's a much faster way to go through a list of
compiler highlighted failed/successful optimizations rather than being forced to
check the asm output after every new compiler version or minor code refactoring.

In my experience, it actually works fine in huge projects; even if there are
false positives you can analyse what changed from the previous version, as well
as ignore modules which you know are not performance critical.


If you build dmd in debug mode, and then run it with -O --c, it will give you a 
list of all the data flow transformations it does.


But the list is a blizzard on non-trivial programs.



Re: Phobos PR: `call` vs. `bindTo`

2014-08-05 Thread Artur Skawina via Digitalmars-d
On 08/05/14 21:29, H. S. Teoh via Digitalmars-d wrote:
> There are currently two Phobos PR's that implement essentially the same
> functionality, but in slightly different ways:
> 
>   https://github.com/D-Programming-Language/phobos/pull/1255

How is

   (1 + 2 * 3).bindTo!(x => x * x * x + 2 * x * x + 3 * x)()

better than

   (x => x * x * x + 2 * x * x + 3 * x)(1 + 2 * 3)

?

artur


Re: assert semantic change proposal

2014-08-05 Thread David Bregman via Digitalmars-d
On Tuesday, 5 August 2014 at 22:25:59 UTC, Jeremy Powers via 
Digitalmars-d wrote:


You're using a nonstandard definition of undefined behavior. 
Undefined behavior has a precise meaning, that's why Timon linked 
the wiki article for you.

The regular definition of assert does not involve any undefined 
behavior, only the newly proposed one.



But the 'newly proposed one' is the definition that I have been 
using all along.


OK, but my point was you were using a different definition of 
undefined behavior. We can't communicate if we aren't using the 
same meanings of words.


The 'regular' definition of assert that you claim is what I see 
as the redefinition - it is a definition based on the particular 
implementation of assert in other languages, not on the 
conceptual idea of assert as I understand it (and as it appears 
to be intended in D).


The 'regular' definition of assert is used in C, C++ and, for 
the last >10 years (afaik), in D. If you want to change it you 
need a good justification. I'm not saying such justification 
doesn't exist; maybe it does, but I have not seen it.




This appears to be the root of the argument, and has been 
circled repeatedly... it's not my intent to restart another round 
of discussion on that well traveled ground, I just wanted to 
state my support for the definition as I understand it.


I disagree. I don't think the fact that some people already had 
the new definition in mind before is really all that relevant. 
That's in the past. This is all about the pros and cons of 
changing it now and for the future.


Re: assert semantic change proposal

2014-08-05 Thread bachmeier via Digitalmars-d
But the 'newly proposed one' is the definition that I have been 
using all along.


+1. Until this came up, I didn't know another definition existed.

The 'regular' definition of assert that you claim is what I see 
as the redefinition - it is a definition based on the particular 
implementation of assert in other languages, not on the 
conceptual idea of assert as I understand it (and as it appears 
to be intended in D).


In my view, it's also a redefinition of -release. My view is 
influenced by Common Lisp. If you want speed, you test your 
program, and then when you feel comfortable, set the optimization 
levels to get as much speed as possible. If you want safety and 
debugging, set the optimization levels accordingly. I was always 
under the impression that -release was a statement to the 
compiler that "I've tested the program, make it run as fast as 
possible, and let me worry about any remaining bugs."


Re: assert semantic change proposal

2014-08-05 Thread David Bregman via Digitalmars-d
On Tuesday, 5 August 2014 at 18:19:00 UTC, Jeremy Powers via 
Digitalmars-d wrote:
This has already been stated by others, but I just wanted to 
pile on - I agree with Walter's definition of assert.

2. Semantic change.
The proposal changes the meaning of assert(), which will result 
in breaking existing code.  Regardless of philosophizing about 
whether or not the code was "already broken" according to some 
definition of assert, the fact is that shipping programs that 
worked perfectly well before may no longer work after this 
change.




Disagree.
Assert (as I always understood it) means 'this must be true, or 
my program is broken.'  In -release builds the explicit explosion 
on a triggered assert is skipped, but any code after a non-true 
assert is, by definition, broken.  And by broken I mean the 
fundamental constraints of the program are violated, and so all 
bets are off on it working properly.
A shipping program that 'worked perfectly well' but has 
unfulfilled asserts is broken - either the asserts are not 
actually true constraints, or the broken path just hasn't been 
hit yet.


This is the 'already broken' argument, which I mentioned in the 
quote above.


This kind of change could never be made in C or C++, because 
there is too much legacy code that depends on it. Perhaps D can 
still afford to break this because it's still a young language. 
That is a strength of young languages. If you believe in this 
case that the upside justifies the breakage, by all means, just 
say so and accept the consequences. Don't try to escape 
responsibility by retroactively redefining previously working 
code as broken :)




Looking at the 'breaking' example:

assert(x!=1);
if (x==1) {
 ...
}

If the if is optimized out, this will change from existing 
behaviour.  But it is also obviously (to me at least) broken code 
already.  The assert says that x cannot be 1 at this point in the 
program; if it ever is, then there is an error in the program, 
and it continues as if the program were still valid.  If x could 
be one, then the assert is invalid here.  And this code will 
already behave differently between -release and non-release 
builds, which is another kind of broken.


Not everything that breaks will be so obvious as that. It can get 
much more hairy when assumptions propagate across multiple levels 
of inlining.


Also, some code purposely uses that pattern. It is (or rather, 
was) valid for a different use case of assert.




3a. An alternate statement of the proposal is literally "in 
release mode, assert expressions introduce undefined behavior 
into your code if the expression is false".



This statement seems fundamentally true to me of asserts 
already, regardless of whether they are used for optimizations.  
If your assert fails, and you have turned off 'blow up on 
assert', then your program is in an undefined state.  It is not 
that the assert introduces the undefined behaviour, it is that 
the assert makes plain an expectation of the code, and if that 
expectation is false the code will have undefined behaviour.




This is not the standard definition of undefined behavior.

With regular assert, execution is still well defined. If you want 
to know what happens in release mode when the assert condition is 
not satisfied, all you need to do is read the source code to find 
out.


With assume, if the condition is not satisfied, there is no way 
to know what will happen. _anything_ can happen, it can even 
format your hard drive. That's true undefined behavior.




3b. Since assert is such a widely used feature (with the 
original semantics, "more asserts never hurt"), the proposal will 
inject a massive amount of undefined behavior into existing code 
bases, greatly increasing the probability of experiencing 
problems related to undefined behavior.




I actually disagree with the 'more asserts never hurt' 
statement.  Exactly because asserts get compiled out in release 
builds, I do not find them very useful/desirable.  If they worked 
as optimization hints I might actually use them more.

And there will be no injection of undefined behaviour - the 
undefined behaviour is already there if the asserted constraints 
are not valid.


This uses your own definition of UB again, it isn't true for the 
regular definition.


Maybe if the yea side was consulted, they might easily agree to 
an alternative way of achieving the improved optimization goal, 
such as creating a new function that has the proposed semantics.



Prior to this (incredibly long) discussion, I was not aware 
people had a different interpretation of assert.  To me, this 
'new semantics' is precisely what I always thought assert was, 
and the proposal is just leveraging it for some additional 
optimizations.  So from my standpoint, adding a new function 
would make sense to support this 'existing' behaviour that others 
seem to rely on - assert is fine as is, if the definition of 'is' 
is what I think it is.


That do

Re: Phobos PR: `call` vs. `bindTo`

2014-08-05 Thread Daniel Gibson via Digitalmars-d

Am 06.08.2014 00:17, schrieb Idan Arye:

On Tuesday, 5 August 2014 at 21:10:25 UTC, Tofu Ninja wrote:

Can you explain the utility of both of them? I am not big into
functional programming so I am not seeing it.


The purpose of `bindTo` is to emulate the `let` expressions found in
many functional languages (see
http://en.wikipedia.org/wiki/Let_expression). The idea is to bind a
value to a name for the limited scope of a single expression.

Note that `bindTo` could be implemented as:

 alias bindTo = std.functional.unaryFun;

and I'll probably change the implementation to this if this PR is
chosen. The reason I think `bindTo` is needed even if it's just an alias
to `unaryFun` is that `bindTo` conveys better that you are binding a
name to a value, not just overcomplicating the code.


And my impression (in lisp/clojure) was, that let emulates (named) 
variables of imperative languages :P


Cheers,
Daniel


Re: assert semantic change proposal

2014-08-05 Thread Jeremy Powers via Digitalmars-d
>
> You're using a nonstandard definition of undefined behavior. Undefined
> behavior has a precise meaning, that's why Timon linked the wiki article
> for you.
>
> The regular definition of assert does not involve any undefined behavior,
> only the newly proposed one.
>

But the 'newly proposed one' is the definition that I have been using all
along.  The 'regular' definition of assert that you claim is what I see as
the redefinition - it is a definition based on the particular
implementation of assert in other languages, not on the conceptual idea of
assert as I understand it (and as it appears to be intended in D).

This appears to be the root of the argument, and has been circled
repeatedly... it's not my intent to restart another round of discussion on
that well traveled ground, I just wanted to state my support for the
definition as I understand it.


Re: Phobos PR: `call` vs. `bindTo`

2014-08-05 Thread Idan Arye via Digitalmars-d

On Tuesday, 5 August 2014 at 21:10:25 UTC, Tofu Ninja wrote:
Can you explain the utility of both of them? I am not big into 
functional programming so I am not seeing it.


The purpose of `bindTo` is to emulate the `let` expressions found 
in many functional languages (see 
http://en.wikipedia.org/wiki/Let_expression). The idea is to bind 
a value to a name for the limited scope of a single expression.


Note that `bindTo` could be implemented as:

alias bindTo = std.functional.unaryFun;

and I'll probably change the implementation to this if this PR 
will be chosen. The reason I think `bindTo` is needed even if 
it's just an alias to `unaryFun` is that `bindTo` conveys better 
that you are binding a name to a value, not just 
overcomplicating the code.
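To make the `let`-expression idea concrete, here is a minimal sketch of what such a binding looks like in today's D without any library support (the `bindTo` name is only proposed in the PR and does not exist in Phobos): an immediately-invoked lambda binds values to names for the scope of a single expression.

```d
import std.stdio;

// A let-style binding via an immediately-invoked lambda:
// "let x = 2, y = 3 in x * y" written as an inline lambda call.
void main()
{
    auto product = ((int x, int y) => x * y)(2, 3);
    writeln(product); // 6
}
```

The proposed `bindTo` would give this idiom a name that signals intent, rather than leaving the reader to recognize the pattern.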


Re: Phobos PR: `call` vs. `bindTo`

2014-08-05 Thread Meta via Digitalmars-d
On Tuesday, 5 August 2014 at 19:31:08 UTC, H. S. Teoh via 
Digitalmars-d wrote:
There are currently two Phobos PR's that implement essentially 
the same

functionality, but in slightly different ways:

https://github.com/D-Programming-Language/phobos/pull/1255
https://github.com/D-Programming-Language/phobos/pull/2343

From the discussion on Github, it seems to me that we should only 
introduce one new function rather than two similar but 
not-quite-the-same functions. Since the discussion seems stuck on 
Github, I thought I should bring it here to the larger community to 
see if we can reach a consensus (who am I kidding... but one can 
hope :-P) on:

(1) What name to use (bring on the bikeshed rainbow)
(2) Exactly what functionality should be included.
(3) Which PR to merge.


T


I don't think either is particularly useful. This is something 
that's either part of the language, or doesn't exist.


Re: assert semantic change proposal

2014-08-05 Thread David Bregman via Digitalmars-d
On Tuesday, 5 August 2014 at 20:50:06 UTC, Jeremy Powers via 
Digitalmars-d wrote:




Well, yes: Undefined behaviour in the sense


"And there will be no injection of undefined behaviour
   ^~~
   conventional sense


 - the undefined behaviour is already there if the asserted 
constraints

   ^~~
   altered sense

are not valid."




I still don't quite see your point.  Perhaps I should have said:  In 
the case where an asserted constraint is not met, the program is 
invalid.  Being invalid it has undefined behaviour if it continues.


From another:


There is a difference between invalid and undefined: A program 
is invalid
("buggy"), if it doesn't do what its programmer intended, 
while
"undefined" is a matter of the language specification. The 
(wrong)
behaviour of an invalid program need not be undefined, and 
often isn't in

practice.



I disagree with this characterization.  Something can be buggy, 
not doing
what the programmer intended, while also a perfectly valid 
program.  You
can make wrong logic that is valid/reasonable in the context of 
the program.


Invalid in this case means the programmer has explicitly stated 
certain
constraints must hold, and such constraints do not hold.  So if 
you
continue with the program in the face of invalid constraints, 
you have no

guarantee what will happen - this is what I mean by 'undefined'.


You're using a nonstandard definition of undefined behavior. 
Undefined behavior has a precise meaning, that's why Timon linked 
the wiki article for you.


The regular definition of assert does not involve any undefined 
behavior, only the newly proposed one.


Re: Guide for dmd development @ Win64?

2014-08-05 Thread Johannes Blume via Digitalmars-d

On Tuesday, 5 August 2014 at 17:19:03 UTC, Dicebot wrote:

On Tuesday, 5 August 2014 at 08:40:00 UTC, Mike Parker wrote:

On 8/5/2014 1:09 PM, Dicebot wrote:
Ok finally have managed to compile everything for Windows 8.1 + 
Visual C++ Express 2013 and I am very very happy that I do Linux 
programming for a living.


To be fair, I frequently build C and C++ projects with MinGW 
and/or VC without needing to jump through any hoops, since 
most projects these days either build with a build tool like 
Premake/CMake/(Take Your Pick) or provide a number of 
Makefiles for different compiler configurations. DMD's build 
system on Windows is just masochistic.


I am most frustrated by the fact that they break the path 
layout between Visual Studio releases for no reason and that 
cl.exe can't find own basic dll out of the box without explicit 
path hint. This has nothing to do with the DMD build system - I can 
only blame the latter for not paying for every single Windows / 
Visual Studio version out there to test path compatibility.


Normally, you just execute vcvars32.bat/vcvars64.bat before doing 
anything from the command line and you are set. Even make scripts 
I created five years ago for VS2008 still work without a hitch on 
VS2013 without any manual PATH trickery. The detailed directory 
layout of VS is not something makefiles are supposed to know 
about.


Re: Phobos PR: `call` vs. `bindTo`

2014-08-05 Thread Brian Schott via Digitalmars-d

I'd probably never use either of them.


Re: assert semantic change proposal

2014-08-05 Thread David Bregman via Digitalmars-d

On Tuesday, 5 August 2014 at 18:35:32 UTC, Walter Bright wrote:

(limited connectivity for me)

For some perspective, recently gcc and clang have introduced 
optimizations based on undefined behavior in C/C++. The 
undefined behavior has been interpreted by modern optimizers as 
"these cases will never happen". This has wound up breaking a 
significant amount of existing code. There have been a number 
of articles about these, with detailed explanations about how 
they come about and the new, more correct, way to write code.


The emerging consensus is that the code breakage is worth it 
for the performance gains.


That said, I do hear what people are saying about potential 
code breakage and agree that we need to address this properly.


Well, then at least we agree there is some kind of tradeoff being 
made here if the definition of assert is changed: potential 
performance vs a bunch of negatives.


In my estimation, the upside is small and the tradeoff is not 
close to being justified. If performance is a top goal, there are 
many other things that could be done which have lesser (or zero) 
downsides. Just to give one example, addition of a forceinline 
attribute would be of great assistance to those attempting to 
micro optimize their code.


And of course, adding this as a new function instead of 
redefining an existing one would eliminate the code breakage and 
C compatibility issues. The undefined behavior issue would 
remain, but at least defining assume as @system would alleviate 
the @safe issue somewhat (there could still be leaks due to 
inlining), and make it more clear to users that it's a dangerous 
feature. It would also make it more searchable, for code auditing 
purposes.


Anyways, if I have at least made you and others aware of all the 
downsides, and all the contradictions of this proposal with D's 
stated goals, then I guess I have done my part for this issue.
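A hedged sketch of the separate, `@system` `assume` suggested above (the name and semantics are hypothetical; no such function exists in druntime or Phobos today): it checks like `assert` in non-release builds, while in release builds it marks the false branch unreachable, which is where a compiler could attach the optimizer hint.

```d
// Hypothetical assume(): checked assert in debug, unreachable hint in
// release. @system flags it as dangerous for auditing purposes.
@system void assume(bool condition)
{
    version (assert)
    {
        assert(condition);  // assertions enabled: behaves like assert
    }
    else
    {
        if (!condition)
            assert(0);      // assert(0) survives -release as a halt /
                            // unreachable marker per the D spec
    }
}

void main()
{
    int x = 5;
    assume(x > 2); // passes; a violating call would be a program bug
}
```

Separating the two names would also avoid the code breakage and C-compatibility concerns raised earlier in the thread.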


Re: assert semantic change proposal

2014-08-05 Thread Walter Bright via Digitalmars-d

On 8/5/2014 1:09 PM, Ary Borenszweig wrote:

On 8/5/14, 3:55 PM, H. S. Teoh via Digitalmars-d wrote:

On Tue, Aug 05, 2014 at 11:18:46AM -0700, Jeremy Powers via Digitalmars-d wrote:



Furthermore, I think Walter's idea to use asserts as a source of
optimizer hints is a very powerful concept that may turn out to be a
revolutionary feature in D.


LLVM already has it. It's not revolutionary:

http://llvm.org/docs/LangRef.html#llvm-assume-intrinsic


That's a language extension. A language extension is not a language feature. But 
it is strong evidence supporting my assertion that these sorts of things are 
inexorably coming. As bearophile posted, Microsoft also has such an intrinsic 
for their C++.




By the way, I think Walter said "assert can be potentially used to make
optimizations" not "Oh, I just had an idea! We could use assert to optimize
code". I think the code already does this. Of course, we would have to look at
the source code to find out...


It is hardly a new idea, or my idea. I got it from this 1981 book:

http://www.amazon.com/Program-Flow-Analysis-Application-Prentice-Hall/dp/0137296819/

which I've had a copy of since '82 or so. My notions on asserts being contracts, 
regardless of switch settings, date to a similar time, see "Object Oriented 
Software Construction", a 1985 book.




By the way, most of the time in this list I hear "We could use this and that
feature to allow better optimizations" and no optimizations are ever
implemented. Look at all those @pure nosafe nothrow const that you have to put
and yet you don't get any performance boost from that.


Not only is that currently quite incorrect, don't put the chicken before the 
egg. The 'nothrow', etc., feature must exist before it can be taken advantage of.


Re: Complete the checklist! :o)

2014-08-05 Thread H. S. Teoh via Digitalmars-d
On Tue, Aug 05, 2014 at 10:14:21AM -0700, Andrei Alexandrescu via Digitalmars-d 
wrote:
> http://colinm.org/language_checklist.html

Alright, I'll have a go at it:

---
   Programming Language Checklist
   by [1]Colin McMillen, [2]Jason Reed, and [3]Elly Jones.

 You appear to be advocating a new:
 [ ] functional  [X] imperative  [X] object-oriented  [X] procedural [X] 
stack-based
 [X] "multi-paradigm"  [X] lazy  [X] eager  [X] statically-typed  [ ] 
dynamically-typed
 [ ] pure  [ ] impure  [ ] non-hygienic  [ ] visual  [ ] beginner-friendly
 [ ] non-programmer-friendly  [ ] completely incomprehensible
 programming language.  Your language will not work.  Here is why it will not 
work.

 You appear to believe that:
 [ ] Syntax is what makes programming difficult
 [X] Garbage collection is free  [ ] Computers have infinite memory
 [X] Nobody really needs:
 [ ] concurrency  [X] a REPL  [X] debugger support  [X] IDE support  [ ] I/O
 [ ] to interact with code not written in your language
 [ ] The entire world speaks 7-bit ASCII
 [X] Scaling up to large software projects will be easy
 [X] Convincing programmers to adopt a new language will be easy
 [ ] Convincing programmers to adopt a language-specific IDE will be easy
 [ ] Programmers love writing lots of boilerplate
 [X] Specifying behaviors as "undefined" means that programmers won't rely on 
them
 [ ] "Spooky action at a distance" makes programming more fun

 Unfortunately, your language (has/lacks):
 [X] comprehensible syntax  [ ] semicolons  [ ] significant whitespace  [X] 
macros
 [ ] implicit type conversion  [X] explicit casting  [X] type inference
 [X] goto  [X] exceptions  [X] closures  [ ] tail recursion  [ ] coroutines
 [X] reflection  [ ] subtyping  [ ] multiple inheritance  [X] operator 
overloading
 [X] algebraic datatypes  [ ] recursive types  [ ] polymorphic types
 [ ] covariant array typing  [X] monads  [X] dependent types
 [ ] infix operators  [X] nested comments  [X] multi-line strings  [X] regexes
 [ ] call-by-value  [ ] call-by-name  [ ] call-by-reference  [ ] call-cc

 The following philosophical objections apply:
 [ ] Programmers should not need to understand category theory to write "Hello, 
World!"
 [ ] Programmers should not develop RSI from writing "Hello, World!"
 [ ] The most significant program written in your language is its own compiler
 [X] The most significant program written in your language isn't even its own 
compiler
 [X] No language spec
 [X] "The implementation is the spec"
[ ] The implementation is closed-source  [ ] covered by patents  [ ] not 
owned by you
 [ ] Your type system is unsound  [ ] Your language cannot be unambiguously 
parsed
[ ] a proof of same is attached
[ ] invoking this proof crashes the compiler
 [X] The name of your language makes it impossible to find on Google
 [ ] Interpreted languages will never be as fast as C
 [ ] Compiled languages will never be "extensible"
 [ ] Writing a compiler that understands English is AI-complete
 [ ] Your language relies on an optimization which has never been shown possible
 [ ] There are less than 100 programmers on Earth smart enough to use your 
language
 [ ] _______________ takes exponential time
 [X] _The color of the bikeshed_ is known to be undecidable

 Your implementation has the following flaws:
 [ ] CPUs do not work that way
 [ ] RAM does not work that way
 [ ] VMs do not work that way
 [ ] Compilers do not work that way
 [ ] Compilers cannot work that way
 [ ] Shift-reduce conflicts in parsing seem to be resolved using rand()
 [ ] You require the compiler to be present at runtime
 [X] You require the language runtime to be present at compile-time
 [X] Your compiler errors are completely inscrutable
 [ ] Dangerous behavior is only a warning
 [X] The compiler crashes if you look at it funny
 [ ] The VM crashes if you look at it funny
 [ ] You don't seem to understand basic optimization techniques
 [ ] You don't seem to understand basic systems programming
 [ ] You don't seem to understand pointers
 [ ] You don't seem to understand functions

 Additionally, your marketing has the following problems:
 [X] Unsupported claims of increased productivity
 [X] Unsupported claims of greater "ease of use"
 [ ] Obviously rigged benchmarks
[ ] Graphics, simulation, or crypto benchmarks where your code just calls
handwritten assembly through your FFI
[ ] String-processing benchmarks where you just call PCRE
[ ] Matrix-math benchmarks where you just call BLAS
 [X] Noone really believes that your language is faster than:
 [X] assembly  [X] C  [X] FORTRAN  [X] Java  [ ] Ruby  [ ] Prolog
 [ ] Rejection of orthodox programming-language theory without justification
 [ ] Rejection of orthodox systems programming without justification
 [ ] Rejection of orthodox algorithmic theory without justification
 [ ] Rejection of basic 

Re: assert semantic change proposal

2014-08-05 Thread Walter Bright via Digitalmars-d

On 8/5/2014 12:25 PM, Araq wrote:

'assume' is not nearly powerful enough for this and in no way "revolutionary".


More in the near-term realm of possibility are asserts that constrain the range 
of values for a variable, which can subsequently eliminate the extra code needed 
to handle the full range of the type.


One case is the one that started off this whole discussion - constraining the 
range of values so that an overflow-checking-multiply need not actually check 
the overflow, because an overflow might be impossible.


This kind of situation can come up in generic code, where the generic code is 
written conservatively and defensively, then relying on the caller to provide a 
few asserts which will in effect "customize" the generic code.


What's exciting about this is it'll give us a lever we can use to generate 
better code than other languages are capable of.
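An illustrative sketch of the range-constraining idea described above (the elision itself is a potential optimization under the proposed semantics, not a behavior any current D compiler guarantees): the assert narrows the argument's range, so an assume-style optimizer could prove the multiplication cannot overflow and drop any overflow handling.

```d
// The assert constrains x to [0, 1_000_000); under assert-as-assumption
// semantics, x * 2 is then provably within int range, so an
// overflow-checking multiply could skip its check.
int scale(int x)
{
    assert(0 <= x && x < 1_000_000); // range constraint on x
    return x * 2;                    // cannot overflow if assert holds
}

void main()
{
    assert(scale(21) == 42);
}
```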


Re: assert semantic change proposal

2014-08-05 Thread Walter Bright via Digitalmars-d

On 8/4/2014 8:01 AM, Andrei Alexandrescu wrote:

On 8/4/14, 7:27 AM, Matthias Bentrup wrote:

Should this semantics extend to array bounds checking, i.e. after the
statement

foo[5] := 0;

can the optimizer assume that foo.length >= 6 ?


Yes, definitely. -- Andrei



Yes, after all, bounds checking is just another form of asserts.
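A small sketch of the bounds-check case discussed above (the check elision is a hypothetical optimization, not documented compiler behavior): once the first indexing succeeds, `foo.length >= 6` is established, so later accesses within that bound could in principle skip their checks.

```d
// foo[5] succeeding implies foo.length >= 6; an optimizer exploiting
// bounds checks as asserts could then elide the check on foo[3].
void fill(int[] foo)
{
    foo[5] = 0; // bounds check establishes foo.length >= 6
    foo[3] = 1; // index 3 < 6: this check is provably redundant
}

void main()
{
    auto a = new int[](6);
    fill(a);
    assert(a[5] == 0 && a[3] == 1);
}
```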


Re: Walter is offline for the time being

2014-08-05 Thread Walter Bright via Digitalmars-d

On 8/4/2014 10:00 AM, Andrei Alexandrescu wrote:

FYI Walter's Internet connection is björked so he's been offline yesterday and
at least part of today. -- Andrei


Back up now, yay! But I have a deluge of posts/email to catch up on.


Re: assert semantic change proposal

2014-08-05 Thread H. S. Teoh via Digitalmars-d
On Tue, Aug 05, 2014 at 08:11:16PM +, via Digitalmars-d wrote:
> On Tuesday, 5 August 2014 at 18:57:40 UTC, H. S. Teoh via Digitalmars-d
> wrote:
> >Exactly. I think part of the problem is that people have been using
> >assert with the wrong meaning. In my mind, 'assert(x)' doesn't mean
> >"abort if x is false in debug mode but silently ignore in release
> >mode", as some people apparently think it means. To me, it means "at
> >this point in the program, x is true".  It's that simple.
> 
> A language construct with such a meaning is useless as a safety
> feature.

I don't see it as a safety feature at all.


> If I first have to prove that the condition is true before I can
> safely use an assert, I don't need the assert anymore, because I've
> already proved it.

I see it as future proofing: I may have proven the condition for *this*
version of the program, but all software will change (unless it's dead),
and change means the original proof may no longer be valid, but this
part of the code is still written under the assumption that the
condition holds. In most cases, it *does* still hold, so in general
you're OK, but sometimes a change invalidates an axiom that, in
consequence, invalidates the assertion.  Then the assertion will trip
(in non-release mode, of course), telling me that my program logic has
become invalid due to the change I made.  So I'll have to fix the
problem so that the condition holds again.


> If it is intended to be an optimization hint, it should be implemented
> as a pragma, not as a prominent feature meant to be widely used. (But
> I see that you have a different use case, see my comment below.)

And here is the beauty of the idea: rather than polluting my code with
optimization hints, which are out-of-band (and which are generally
unverified and may be outright wrong after the code undergoes several
revisions), I am stating *facts* about my program logic that must hold
-- which therefore fits in very logically with the code itself. It even
self-documents the code, to some extent. Then as an added benefit, the
compiler is able to use these facts to emit more efficient code. So to
me, it *should* be a prominent, widely-used feature. I would use it, and
use it a *lot*.

 
> >The optimizer only guarantees (in theory) consistent program
> >behaviour if the program is valid to begin with. If the program is
> >invalid, all bets are off as to what its "optimized" version does.
> 
> There is a difference between invalid and undefined: A program is
> invalid ("buggy"), if it doesn't do what its programmer intended,
> while "undefined" is a matter of the language specification. The
> (wrong) behaviour of an invalid program need not be undefined, and
> often isn't in practice.

To me, this distinction doesn't matter in practice, because in practice,
an invalid program produces a wrong result, and a program with undefined
behaviour also produces a wrong result. I don't care what kind of wrong
result it is; what I care is to fix the program to *not* produce a wrong
result.


> An optimizer may only transform code in a way that keeps the resulting
> code semantically equivalent. This means that if the original
> "unoptimized" program is well-defined, the optimized one will be too.

That's a nice property to have, but again, if my program produces a
wrong result, then my program produces a wrong result. As a language
user, I don't care that the optimizer may change one wrong result to a
different wrong result.  What I care about is to fix the code so that
the program produces the *correct* result. To me, it only matters that
the optimizer does the Right Thing when the program is correct to begin
with. If the program was wrong, then it doesn't matter if the optimizer
makes it a different kind of wrong; the program should be fixed so that
it stops being wrong.


> >Yes, the people using assert as a kind of "check in debug mode but
> >ignore in release mode" should really be using something else
> >instead, since that's not what assert means. I'm honestly astounded
> >that people would actually use assert as some kind of
> >non-release-mode-check instead of the statement of truth that it was
> >meant to be.
> 
> Well, when this "something else" is introduced, it will need to
> replace almost every existing instance of "assert", as the latter must
> only be used if it is proven that the condition is always true. To
> name just one example, it cannot be used in range `front` and
> `popFront` methods to assert that the range is not empty, unless there
> is an additional non-assert check directly before it.

I don't follow this reasoning. For .front and .popFront to assert that
the range is non-empty, simply means that user code that attempts to do
otherwise is wrong by definition, and must be fixed. I don't care if
it's wrong as in invalid, or wrong as in undefined, the bottom line is
that code that calls .front or .popFront on an empty range is
incorrectly written, and therefore must be fixed.
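A minimal input range sketch showing asserts used in exactly this way: `front` and `popFront` assert non-emptiness, so any caller that invokes them on an empty range is, by definition, incorrectly written.

```d
// A simple counting range; the asserts state the precondition that
// callers must not touch front/popFront when the range is empty.
struct Counter
{
    int n, limit;
    @property bool empty() const { return n >= limit; }
    @property int front() const { assert(!empty); return n; }
    void popFront() { assert(!empty); ++n; }
}

void main()
{
    int sum;
    for (auto r = Counter(0, 4); !r.empty; r.popFront())
        sum += r.front;
    assert(sum == 0 + 1 + 2 + 3); // 6
}
```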


Re: assert semantic change proposal

2014-08-05 Thread Walter Bright via Digitalmars-d

On 8/5/2014 12:13 PM, H. S. Teoh via Digitalmars-d wrote:

The way I see it, we need to educate D users to use 'assert' with the
proper meaning,


I agree. We're starting with improving the spec wording.



I think in the
long run, this will turn out to be an important, revolutionary
development not just in D, but in programming languages in general.


I agree. It also opens the door for programmers providing simple, checkable 
hints to the optimizer. I don't know how far we can go with that, but I suspect 
significant opportunity.




Re: Phobos PR: `call` vs. `bindTo`

2014-08-05 Thread Tofu Ninja via Digitalmars-d
On Tuesday, 5 August 2014 at 19:31:08 UTC, H. S. Teoh via 
Digitalmars-d wrote:
There are currently two Phobos PR's that implement essentially 
the same

functionality, but in slightly different ways:

https://github.com/D-Programming-Language/phobos/pull/1255
https://github.com/D-Programming-Language/phobos/pull/2343

From the discussion on Github, it seems to me that we should only 
introduce one new function rather than two similar but 
not-quite-the-same functions. Since the discussion seems stuck on 
Github, I thought I should bring it here to the larger community to 
see if we can reach a consensus (who am I kidding... but one can 
hope :-P) on:

(1) What name to use (bring on the bikeshed rainbow)
(2) Exactly what functionality should be included.
(3) Which PR to merge.


T


Can you explain the utility of both of them? I am not big into 
functional programming so I am not seeing it.


Re: assert semantic change proposal

2014-08-05 Thread Jeremy Powers via Digitalmars-d
>
>
>> Well, yes: Undefined behaviour in the sense
>>
> "And there will be no injection of undefined behaviour
>^~~
>conventional sense
>
>
>  - the undefined behaviour is already there if the asserted constraints
>^~~
>altered sense
>
> are not valid."
>


I still don't quite see your point.  Perhaps I should have said:  In the
case where an asserted constraint is not met, the program is invalid.
 Being invalid it has undefined behaviour if it continues.

>From another:

> There is a difference between invalid and undefined: A program is invalid
> ("buggy"), if it doesn't do what its programmer intended, while
> "undefined" is a matter of the language specification. The (wrong)
> behaviour of an invalid program need not be undefined, and often isn't in
> practice.
>

I disagree with this characterization.  Something can be buggy, not doing
what the programmer intended, while also a perfectly valid program.  You
can make wrong logic that is valid/reasonable in the context of the program.

Invalid in this case means the programmer has explicitly stated certain
constraints must hold, and such constraints do not hold.  So if you
continue with the program in the face of invalid constraints, you have no
guarantee what will happen - this is what I mean by 'undefined'.


Re: assert semantic change proposal

2014-08-05 Thread Ary Borenszweig via Digitalmars-d

On 8/5/14, 5:26 PM, H. S. Teoh via Digitalmars-d wrote:

On Tue, Aug 05, 2014 at 05:09:43PM -0300, Ary Borenszweig via Digitalmars-d 
wrote:

On 8/5/14, 3:55 PM, H. S. Teoh via Digitalmars-d wrote:

On Tue, Aug 05, 2014 at 11:18:46AM -0700, Jeremy Powers via Digitalmars-d wrote:



Furthermore, I think Walter's idea to use asserts as a source of
optimizer hints is a very powerful concept that may turn out to be a
revolutionary feature in D.


LLVM already has it. It's not revolutionary:

http://llvm.org/docs/LangRef.html#llvm-assume-intrinsic


Even better, so there's precedent for this. Even if it's only exposed at
the LLVM level, rather than the source language. Introducing this at the
source language level (like proposed in D) is a good step forward IMO.



By the way, I think Walter said "assert can be potentially used to
make optimizations" not "Oh, I just had an idea! We could use assert
to optimize code". I think the code already does this. Of course, we
would have to look at the source code to find out...


If the code already does this, then what are we arguing about?


Exactly. I think the OP doesn't know that Walter wasn't proposing any 
semantic change in assert. Walter was just stating how assert works for 
him (or should work, but probably some optimizations are not implemented).


We should ask Walter, but I think he's offline...



Re: assert semantic change proposal

2014-08-05 Thread H. S. Teoh via Digitalmars-d
On Tue, Aug 05, 2014 at 05:09:43PM -0300, Ary Borenszweig via Digitalmars-d 
wrote:
> On 8/5/14, 3:55 PM, H. S. Teoh via Digitalmars-d wrote:
> >On Tue, Aug 05, 2014 at 11:18:46AM -0700, Jeremy Powers via Digitalmars-d 
> >wrote:
> 
> >Furthermore, I think Walter's idea to use asserts as a source of
> >optimizer hints is a very powerful concept that may turn out to be a
> >revolutionary feature in D.
> 
> LLVM already has it. It's not revolutionary:
> 
> http://llvm.org/docs/LangRef.html#llvm-assume-intrinsic

Even better, so there's precedent for this. Even if it's only exposed at
the LLVM level, rather than the source language. Introducing this at the
source language level (like proposed in D) is a good step forward IMO.


> By the way, I think Walter said "assert can be potentially used to
> make optimizations" not "Oh, I just had an idea! We could use assert
> to optimize code". I think the code already does this. Of course, we
> would have to look at the source code to find out...

If the code already does this, then what are we arguing about?


> By the way, most of the time in this list I hear "We could use this
> and that feature to allow better optimizations" and no optimizations
> are ever implemented. Look at all those @pure nosafe nothrow const
> that you have to put and yet you don't get any performance boost from
> that.

Automatic attribute inference is the way to go. More and more, I'm
beginning to be convinced that manually-specified attributes are a dead
end.

Having said that, though, I'm pretty sure the compiler could (and
should) do more with pure/nothrow/const, etc.. I think it may already be
taking advantage of nothrow/const (nothrow elides throw/catch
scaffolding, e.g., which could be important in high-performance inner
loops). With pure the current situation could be improved, since it
currently only has effect when you call the same pure function multiple
times within a single expression. But there are lots more things that
could be done with it, e.g., memoization across different statements in
the function body.

PR's would be welcome. ;-)
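An illustration of the `pure` case mentioned above (whether a given compiler actually folds the calls is not guaranteed, and depends on the function being strongly pure): two calls with the same argument inside one expression may be evaluated once.

```d
// A strongly pure function: same argument, same result, no side
// effects, so the compiler is permitted to reuse one evaluation.
pure int square(int n) { return n * n; }

void main()
{
    // square(7) may be computed once and its result reused:
    int twice = square(7) + square(7);
    assert(twice == 98);
}
```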


T
-- 
What do you mean the Internet isn't filled with subliminal messages? What about 
all those buttons marked "submit"??


Re: assert semantic change proposal

2014-08-05 Thread via Digitalmars-d

On Tuesday, 5 August 2014 at 20:09:44 UTC, Ary Borenszweig wrote:
By the way, most of the time in this list I hear "We could use 
this and that feature to allow better optimizations" and no 
optimizations are ever implemented. Look at all those @pure 
nosafe nothrow const that you have to put and yet you don't get 
any performance boost from that.


Hmm... I've never seen these annotations as a performance 
feature. They're there to help writing correct programs. If this 
allows some performance gains, great, but IMHO it's not their 
primary purpose.


Re: assert semantic change proposal

2014-08-05 Thread Timon Gehr via Digitalmars-d

On 08/05/2014 08:59 PM, Jeremy Powers via Digitalmars-d wrote:

...

Well, yes: Undefined behaviour in the sense


"And there will be no injection of undefined behaviour
   ^~~
   conventional sense

 - the undefined behaviour is already there if the asserted constraints
   ^~~
   altered sense

are not valid."




Re: assert semantic change proposal

2014-08-05 Thread via Digitalmars-d
On Tuesday, 5 August 2014 at 18:57:40 UTC, H. S. Teoh via 
Digitalmars-d wrote:
Exactly. I think part of the problem is that people have been 
using
assert with the wrong meaning. In my mind, 'assert(x)' doesn't 
mean
"abort if x is false in debug mode but silently ignore in 
release mode",
as some people apparently think it means. To me, it means "at 
this point

in the program, x is true".  It's that simple.


A language construct with such a meaning is useless as a safety 
feature. If I first have to prove that the condition is true 
before I can safely use an assert, I don't need the assert 
anymore, because I've already proved it. If it is intended to be 
an optimization hint, it should be implemented as a pragma, not 
as a prominent feature meant to be widely used. (But I see that 
you have a different use case, see my comment below.)



The optimizer only guarantees (in theory)
consistent program behaviour if the program is valid to begin 
with. If
the program is invalid, all bets are off as to what its 
"optimized"

version does.


There is a difference between invalid and undefined: A program is 
invalid ("buggy"), if it doesn't do what its programmer 
intended, while "undefined" is a matter of the language 
specification. The (wrong) behaviour of an invalid program need 
not be undefined, and often isn't in practice.


An optimizer may only transform code in a way that keeps the 
resulting code semantically equivalent. This means that if the 
original "unoptimized" program is well-defined, the optimized one 
will be too.


Yes, the people using assert as a kind of "check in debug mode 
but
ignore in release mode" should really be using something else 
instead,
since that's not what assert means. I'm honestly astounded that 
people
would actually use assert as some kind of 
non-release-mode-check instead

of the statement of truth that it was meant to be.


Well, when this "something else" is introduced, it will need to 
replace almost every existing instance of "assert", as the latter 
must only be used if it is proven that the condition is always 
true. To name just one example, it cannot be used in range 
`front` and `popFront` methods to assert that the range is not 
empty, unless there is an additional non-assert check directly 
before it.




Furthermore, I think Walter's idea to use asserts as a source of
optimizer hints is a very powerful concept that may turn out to 
be a
revolutionary feature in D. It could very well develop into the 
answer
to my long search for a way of declaring identities in 
user-defined

types that allow high-level optimizations by the optimizer, thus
allowing user-defined types to be on par with built-in types in
optimizability. Currently, the compiler is able to optimize 
x+x+x+x into
4*x if x is an int, for example, but it can't if x is a 
user-defined
type (e.g. BigInt), because it can't know if opBinary was 
defined in a
way that obeys this identity. But if we can assert that this 
holds for
the user-defined type, e.g., BigInt, then the compiler can make 
use of
that axiom to perform such an optimization.  This would then 
allow code
to be written in more human-readable forms, and still maintain 
optimal

performance, even where user-defined types are involved.


This is a very powerful feature indeed, but to be used safely, 
the compiler needs to be able to detect invalid uses reliably at 
compile time. This is currently not the case:


void onlyOddNumbersPlease(int n) {
    assert(n % 2);
}

void foo() {
    onlyOddNumbersPlease(42); // shouldn't compile, but does
}

It would be great if this were possible. In the example of 
`front` and `popFront`, programs that call these methods on a 
range that could theoretically be empty wouldn't compile. This 
might be useful for optimization, but above that it's useful for 
verifying correctness.


Unfortunately this is not what has been suggested (and was 
evidently intended from the beginning)...


Re: assert semantic change proposal

2014-08-05 Thread Ary Borenszweig via Digitalmars-d

On 8/5/14, 3:55 PM, H. S. Teoh via Digitalmars-d wrote:

On Tue, Aug 05, 2014 at 11:18:46AM -0700, Jeremy Powers via Digitalmars-d wrote:



Furthermore, I think Walter's idea to use asserts as a source of
optimizer hints is a very powerful concept that may turn out to be a
revolutionary feature in D.


LLVM already has it. It's not revolutionary:

http://llvm.org/docs/LangRef.html#llvm-assume-intrinsic

By the way, I think Walter said "assert can be potentially used to make 
optimizations" not "Oh, I just had an idea! We could use assert to 
optimize code". I think the code already does this. Of course, we would 
have to look at the source code to find out...


By the way, most of the time in this list I hear "We could use this and 
that feature to allow better optimizations" and no optimizations are 
ever implemented. Look at all those @pure nosafe nothrow const that you 
have to put and yet you don't get any performance boost from that.
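[The assume-style hint discussed above is exposed in C and C++ through compiler builtins (clang's `__builtin_assume`, gcc's `__builtin_unreachable`). A minimal sketch of "check in debug, assume in release" using them — the `ASSUME` macro name is hypothetical, not an existing API:]

```cpp
#include <cassert>

#if defined(NDEBUG)
#  if defined(__clang__)
#    define ASSUME(c) __builtin_assume(c)
#  else
#    define ASSUME(c) do { if (!(c)) __builtin_unreachable(); } while (0)
#  endif
#else
#  define ASSUME(c) assert(c)
#endif

// Sketch: in debug builds ASSUME checks the condition; in release
// builds it becomes a pure optimizer hint - which is exactly the
// semantic change for assert being debated in this thread.
int twice_odd(int n) {
    ASSUME(n % 2 == 1);  // optimizer may now rely on n being odd
    return n * 2;
}
```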


Re: assert semantic change proposal

2014-08-05 Thread via Digitalmars-d
On Tuesday, 5 August 2014 at 19:14:57 UTC, H. S. Teoh via 
Digitalmars-d wrote:

T

--
Ignorance is bliss... until you suffer the consequences!


(sic!)


Phobos PR: `call` vs. `bindTo`

2014-08-05 Thread H. S. Teoh via Digitalmars-d
There are currently two Phobos PR's that implement essentially the same
functionality, but in slightly different ways:

https://github.com/D-Programming-Language/phobos/pull/1255
https://github.com/D-Programming-Language/phobos/pull/2343

From the discussion on Github, it seems to me that we should only
introduce one new function rather than two similar but
not-quite-the-same functions. Since the discussion seems stuck on
Github, I thought I should bring it here to the larger community to see
if we can reach a consensus (who am I kidding... but one can hope :-P)
on:

(1) What name to use (bring on the bikeshed rainbow)
(2) Exactly what functionality should be included.
(3) Which PR to merge.


T

-- 
They say that "guns don't kill people, people kill people." Well I think the 
gun helps. If you just stood there and yelled BANG, I don't think you'd kill 
too many people. -- Eddie Izzard, Dressed to Kill


Re: assert semantic change proposal

2014-08-05 Thread Araq via Digitalmars-d

Furthermore, I think Walter's idea to use asserts as a source of
optimizer hints is a very powerful concept that may turn out to 
be a
revolutionary feature in D. It could very well develop into the 
answer
to my long search for a way of declaring identities in 
user-defined

types that allow high-level optimizations by the optimizer, thus
allowing user-defined types to be on par with built-in types in
optimizability.


The answer to your search is "term rewriting macros (with 
side-effect and alias analysis)" as introduced by Nimrod. Watch my 
talk. ;-)


'assume' is not nearly powerful enough for this and in no way 
"revolutionary".


Re: assert semantic change proposal

2014-08-05 Thread H. S. Teoh via Digitalmars-d
On Tue, Aug 05, 2014 at 11:35:14AM -0700, Walter Bright via Digitalmars-d wrote:
> (limited connectivity for me)
> 
> For some perspective, recently gcc and clang have introduced
> optimizations based on undefined behavior in C/C++. The undefined
> behavior has been interpreted by modern optimizers as "these cases
> will never happen". This has wound up breaking a significant amount of
> existing code. There have been a number of articles about these, with
> detailed explanations about how they come about and the new, more
> correct, way to write code.

And I'd like to emphasize that code *should* have been written in this
new, more correct way in the first place. Yes, it's a pain to have to
update legacy code, but where would progress be if we're continually
hampered by the fear of breaking what was *already* broken to begin
with?


> The emerging consensus is that the code breakage is worth it for the
> performance gains. That said, I do hear what people are saying about
> potential code breakage and agree that we need to address this
> properly.

The way I see it, we need to educate D users to use 'assert' with the
proper meaning, and to replace all other usages with alternatives
(perhaps a Phobos function that does what they want without the full
implications of assert -- i.e., "breaking" behaviour like influencing
the optimizer, etc.). Once reasonable notice and time has been given,
I'm all for introducing optimizer hinting with asserts. I think in the
long run, this will turn out to be an important, revolutionary
development not just in D, but in programming languages in general.
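[A minimal sketch of the "alternative" alluded to above — a check that, unlike assert, always executes, survives release builds, and feeds the optimizer nothing. This is in C++ for illustration and `runtime_check` is a made-up name, not a proposed Phobos function:]

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Always evaluates its condition and throws on failure; never compiled
// out, never turned into an optimizer assumption.
inline void runtime_check(bool cond, const std::string& msg) {
    if (!cond) throw std::logic_error(msg);
}
```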


T

-- 
Ignorance is bliss... until you suffer the consequences!


Re: std.jgrandson

2014-08-05 Thread via Digitalmars-d
On Tuesday, 5 August 2014 at 18:12:54 UTC, Andrei Alexandrescu 
wrote:

On 8/5/14, 10:58 AM, Dicebot wrote:
On Tuesday, 5 August 2014 at 17:58:08 UTC, Andrei Alexandrescu 
wrote:

All good points. Proceed with implementation! :o) -- Andrei


Any news about std.allocator ? ;)


It looks like I need to go all out and write a garbage 
collector, design and implementation and all.


A few months ago, you posted a video of a talk where you 
presented code from a garbage collector (it used templated mark 
functions to get precise tracing). I remember you said that this 
code was in use somewhere (I guess at FB?). Can this be used as a 
basis?


Re: assert semantic change proposal

2014-08-05 Thread Jeremy Powers via Digitalmars-d
>
> And there will be no injection of undefined behaviour - the undefined
>> behaviour is already there if the asserted constraints are not valid.
>>
>
> Well, no. http://en.wikipedia.org/wiki/Undefined_behavior
>

Well, yes: Undefined behaviour in the sense the writer of the program has
not defined it.

A program is written with certain assumptions about the state at certain
points.  An assert can be used to explicitly state those assumptions, and
halt the program (in non-release) if the assumptions are invalid.  If the
state does not match what the assert assumes it to be, then any code
relying on that state is invalid, and what it does has no definition given
by the programmer.

(And here I've circled back to assert==assume... all because I assume what
assert means)

If the state that is being checked could actually ever be valid, then it is
not valid for an assert - use some other validation.


Re: assert semantic change proposal

2014-08-05 Thread H. S. Teoh via Digitalmars-d
On Tue, Aug 05, 2014 at 11:18:46AM -0700, Jeremy Powers via Digitalmars-d wrote:
> This has already been stated by others, but I just wanted to pile on -
> I agree with Walter's definition of assert.
> 
> 2. Semantic change.
> > The proposal changes the meaning of assert(), which will result in
> > breaking existing code.  Regardless of philosophizing about whether
> > or not the code was "already broken" according to some definition of
> > assert, the fact is that shipping programs that worked perfectly
> > well before may no longer work after this change.
> 
> Disagree.
> Assert (as I always understood it) means 'this must be true, or my
> program is broken.'  In -release builds the explicit explosion on a
> triggered assert is skipped, but any code after a non-true assert is,
> by definition, broken.  And by broken I mean the fundamental
> constraints of the program are violated, and so all bets are off on it
> working properly.  A shipping program that 'worked perfectly well' but
> has unfulfilled asserts is broken - either the asserts are not
> actually true constraints, or the broken path just hasn't been hit
> yet.

Exactly. I think part of the problem is that people have been using
assert with the wrong meaning. In my mind, 'assert(x)' doesn't mean
"abort if x is false in debug mode but silently ignore in release mode",
as some people apparently think it means. To me, it means "at this point
in the program, x is true".  It's that simple.

Now if it turns out that x actually *isn't* true, then you have a
contradiction in your program logic, and therefore, by definition, your
program is invalid, which means any subsequent behaviour is undefined.
If you start with an axiomatic system where the axioms contain a
contradiction, then any results you derive from the system will be
meaningless, since a contradiction vacuously proves everything.
Similarly, any program behaviour that follows a false assertion is
undefined, because one of the "axioms" (i.e., assertions) introduces a
contradiction to the program logic.


> Looking at the 'breaking' example:
> 
> assert(x!=1);
> if (x==1) {
>  ...
> }
> 
> If the if is optimized out, this will change from existing behaviour.
> But it is also obviously (to me at least) broken code already.  The
> assert says that x cannot be 1 at this point in the program, if it
> ever is then there is an error in the program and then it
> continues as if the program were still valid.  If x could be one, then
> the assert is invalid here.  And this code will already behave
> differently between -release and non-release builds, which is another
> kind of broken.

Which is what Walter has been saying: the code is *already* broken, and
is invalid by definition, so it makes no difference what the optimizer
does or doesn't do. If your program has an array overrun bug that writes
garbage to an unrelated variable, then you can't blame the optimizer for
producing a program where the unrelated variable acquires a different
garbage value from before. The optimizer only guarantees (in theory)
consistent program behaviour if the program is valid to begin with. If
the program is invalid, all bets are off as to what its "optimized"
version does.


> > 3a. An alternate statement of the proposal is literally "in release
> > mode, assert expressions introduce undefined behavior into your code
> > in if the expression is false".
> 
> This statement seems fundamentally true to me of asserts already,
> regardless of whether they are used for optimizations.  If your assert
> fails, and you have turned off 'blow up on assert' then your program
> is in an undefined state.  It is not that the assert introduces the
> undefined behaviour, it is that the assert makes plain an expectation
> of the code and if that expectation is false the code will have
> undefined behaviour.

I agree.


> > 3b. Since assert is such a widely used feature (with the original
> > semantics, "more asserts never hurt"), the proposal will inject a
> > massive amount of undefined behavior into existing code bases,
> > greatly increasing the probability of experiencing problems related
> > to undefined behavior.
> >
> 
> I actually disagree with the 'more asserts never hurt' statement.
> Exactly because asserts get compiled out in release builds, I do not
> find them very useful/desirable.  If they worked as optimization hints
> I might actually use them more.
> 
> And there will be no injection of undefined behaviour - the undefined
> behaviour is already there if the asserted constraints are not valid.

And if people are using asserts in ways that are different from what
it's intended to be (expressions that must be true if the program logic
has been correctly implemented), then their programs are already invalid
by definition. Why should it be the compiler's responsibility to
guarantee consistent behaviour of invalid code?


> > Maybe if the yea side was consulted, they might easily agree to an
> > alternative way of achieving the improved optimization goal, such as
> > creating a new function that has the proposed semantics.

Re: assert semantic change proposal

2014-08-05 Thread Walter Bright via Digitalmars-d

(limited connectivity for me)

For some perspective, recently gcc and clang have introduced 
optimizations based on undefined behavior in C/C++. The undefined 
behavior has been interpreted by modern optimizers as "these cases will 
never happen". This has wound up breaking a significant amount of 
existing code. There have been a number of articles about these, with 
detailed explanations about how they come about and the new, more 
correct, way to write code.


The emerging consensus is that the code breakage is worth it for the 
performance gains. That said, I do hear what people are saying about 
potential code breakage and agree that we need to address this properly.


Re: assert semantic change proposal

2014-08-05 Thread Timon Gehr via Digitalmars-d

On 08/05/2014 08:18 PM, Jeremy Powers via Digitalmars-d wrote:


And there will be no injection of undefined behaviour - the undefined
behaviour is already there if the asserted constraints are not valid.


Well, no. http://en.wikipedia.org/wiki/Undefined_behavior


Re: assert semantic change proposal

2014-08-05 Thread Jeremy Powers via Digitalmars-d
This has already been stated by others, but I just wanted to pile on - I
agree with Walter's definition of assert.

2. Semantic change.
> The proposal changes the meaning of assert(), which will result in
> breaking existing code.  Regardless of philosophizing about whether or not
> the code was "already broken" according to some definition of assert, the
> fact is that shipping programs that worked perfectly well before may no
> longer work after this change.



Disagree.
Assert (as I always understood it) means 'this must be true, or my program
is broken.'  In -release builds the explicit explosion on a triggered
assert is skipped, but any code after a non-true assert is, by definition,
broken.  And by broken I mean the fundamental constraints of the program
are violated, and so all bets are off on it working properly.
A shipping program that 'worked perfectly well' but has unfulfilled asserts
is broken - either the asserts are not actually true constraints, or the
broken path just hasn't been hit yet.

Looking at the 'breaking' example:

assert(x!=1);
if (x==1) {
 ...
}

If the if is optimized out, this will change from existing behaviour.  But
it is also obviously (to me at least) broken code already.  The assert says
that x cannot be 1 at this point in the program, if it ever is then there
is an error in the program and then it continues as if the program were
still valid.  If x could be one, then the assert is invalid here.  And this
code will already behave differently between -release and non-release
builds, which is another kind of broken.
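[Jeremy's snippet maps onto a pattern gcc and clang can already exploit via `__builtin_unreachable`. A hedged C++ rendering, for illustration only — under NDEBUG the hint makes `x == 1` formally unreachable, licensing the optimizer to delete the branch:]

```cpp
#include <cassert>
#include <string>

std::string classify(int x) {
#ifdef NDEBUG
    if (x == 1) __builtin_unreachable();   // release mode: assert as assume
#else
    assert(x != 1);                        // debug mode: actual check
#endif
    if (x == 1) {
        return "impossible";               // candidate for dead-code removal
    }
    return "ok";
}
```

Calling `classify(1)` is precisely the "already broken" case: the debug build aborts, and the release build has undefined behaviour.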


3a. An alternate statement of the proposal is literally "in release mode,
> assert expressions introduce undefined behavior into your code in if the
> expression is false".
>

This statement seems fundamentally true to me of asserts already,
regardless of whether they are used for optimizations.  If your assert
fails, and you have turned off 'blow up on assert' then your program is in
an undefined state.  It is not that the assert introduces the undefined
behaviour, it is that the assert makes plain an expectation of the code and
if that expectation is false the code will have undefined behaviour.



3b. Since assert is such a widely used feature (with the original
> semantics, "more asserts never hurt"), the proposal will inject a massive
> amount of undefined behavior into existing code bases, greatly increasing
> the probability of experiencing problems related to undefined behavior.
>

I actually disagree with the 'more asserts never hurt' statement.  Exactly
because asserts get compiled out in release builds, I do not find them very
useful/desirable.  If they worked as optimization hints I might actually
use them more.

And there will be no injection of undefined behaviour - the undefined
behaviour is already there if the asserted constraints are not valid.


 Maybe if the yea side was consulted, they might easily agree to an
> alternative way of achieving the improved optimization goal, such as
> creating a new function that has the proposed semantics.
>

Prior to this (incredibly long) discussion, I was not aware people had a
different interpretation of assert.  To me, this 'new semantics' is
precisely what I always thought assert was, and the proposal is just
leveraging it for some additional optimizations.  So from my standpoint,
adding a new function would make sense to support this 'existing' behaviour
that others seem to rely on - assert is fine as is, if the definition of
'is' is what I think it is.


Re: std.jgrandson

2014-08-05 Thread Andrei Alexandrescu via Digitalmars-d

On 8/5/14, 10:58 AM, Dicebot wrote:

On Tuesday, 5 August 2014 at 17:58:08 UTC, Andrei Alexandrescu wrote:

All good points. Proceed with implementation! :o) -- Andrei


Any news about std.allocator ? ;)


It looks like I need to go all out and write a garbage collector, design 
and implementation and all.


Andrei


Re: std.jgrandson

2014-08-05 Thread H. S. Teoh via Digitalmars-d
On Tue, Aug 05, 2014 at 10:58:08AM -0700, Andrei Alexandrescu via Digitalmars-d 
wrote:
> On 8/5/14, 10:48 AM, Sean Kelly wrote:
[...]
> >The original point of JSON was that it auto-converts to
> >Javascript data.  And since Javascript only has one numeric type,
> >of course JSON does too.  But I think it's important that a JSON
> >package for a language maps naturally to the types available in
> >that language.  D provides both floating point and integer types,
> >each with their own costs and benefits, and so the JSON package
> >should as well.  It ends up being a lot easier to deal with than
> >remembering to round from JSON.number or whatever when assigning
> >to an int.
> >
> >In fact, JSON doesn't even impose any precision restrictions on
> >its numeric type, so one could argue that we should be using
> >BigInt and BigFloat.  But this would stink most of the time, so...

Would it make sense to wrap a JSON number in an opaque type that
implicitly casts to the target built-in type?


> >On an unrelated note, while the default encoding for strings is
> >UTF-8, the RFC absolutely allows for UTF-16 surrogate pairs, and
> >this must be supported.  Any strings you get from Internet
> >Explorer will be encoded as UTF-16 surrogate pairs regardless of
> >content, presumably since Windows uses 16 bit wide chars for
> >unicode.
[...]

Wait, I thought surrogate pairs only apply to characters past U+FFFF? Is
it even possible to encode BMP characters with surrogate pairs?? Or do
you mean UTF-16?


T

-- 
Music critic: "That's an imitation fugue!"
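[For context on the question above: the surrogate-pair arithmetic is only defined for code points above U+FFFF, so a BMP character never needs (and in well-formed UTF-16 never uses) a surrogate pair. A quick sketch of the encoding:]

```cpp
#include <cassert>
#include <cstdint>
#include <utility>

// Encode a supplementary-plane code point as a UTF-16 surrogate pair.
std::pair<std::uint16_t, std::uint16_t> to_surrogates(std::uint32_t cp) {
    // precondition: 0xFFFF < cp <= 0x10FFFF
    std::uint32_t v = cp - 0x10000;
    return { static_cast<std::uint16_t>(0xD800 | (v >> 10)),   // high surrogate
             static_cast<std::uint16_t>(0xDC00 | (v & 0x3FF)) }; // low surrogate
}
```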


Re: scope guards

2014-08-05 Thread Sean Kelly via Digitalmars-d

On Tuesday, 5 August 2014 at 16:31:51 UTC, Jacob Carlborg wrote:

On 2014-08-05 01:10, Sean Kelly wrote:

The easiest thing would be to provide a thread-local reference 
to

the currently in-flight exception.  Then you could do whatever
checking you wanted to inside the scope block.


That's quite clever. Can we do that?


I don't see why not.  The exception handling code would need to
set and clear the reference at the proper points, but this
shouldn't be too difficult.  We'd have to be careful how it's
documented though.  I think it's mostly applicable to Manu's
case--specializing code in scope guards.  For example:
http://www.gotw.ca/gotw/047.htm
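[The GotW technique referenced above can be sketched in C++17 — `ScopeGuard` here is an illustrative name, not a proposed druntime API. The guard records the in-flight exception count on entry and compares it on exit, so its destructor can specialize on success vs. failure the way D's scope guards do:]

```cpp
#include <cassert>
#include <exception>
#include <string>

struct ScopeGuard {
    int entry_count = std::uncaught_exceptions();  // exceptions in flight at entry
    std::string* log;
    explicit ScopeGuard(std::string* l) : log(l) {}
    ~ScopeGuard() {
        // More exceptions in flight than at entry => we are unwinding.
        *log += (std::uncaught_exceptions() > entry_count) ? "failure"
                                                           : "success";
    }
};
```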


Re: std.jgrandson

2014-08-05 Thread Dicebot via Digitalmars-d
On Tuesday, 5 August 2014 at 17:58:08 UTC, Andrei Alexandrescu 
wrote:

All good points. Proceed with implementation! :o) -- Andrei


Any news about std.allocator ? ;)


Re: std.jgrandson

2014-08-05 Thread Andrei Alexandrescu via Digitalmars-d

On 8/5/14, 10:48 AM, Sean Kelly wrote:

On Tuesday, 5 August 2014 at 17:17:56 UTC, Andrei Alexandrescu
wrote:


I searched around a bit and it seems different libraries have
different takes to this numeric matter. A simple reading of the spec
suggests that floating point data is the only numeric type. However,
many implementations choose to distinguish between floating point and
integrals.


The original point of JSON was that it auto-converts to
Javascript data.  And since Javascript only has one numeric type,
of course JSON does too.  But I think it's important that a JSON
package for a language maps naturally to the types available in
that language.  D provides both floating point and integer types,
each with their own costs and benefits, and so the JSON package
should as well.  It ends up being a lot easier to deal with than
remembering to round from JSON.number or whatever when assigning
to an int.

In fact, JSON doesn't even impose any precision restrictions on
its numeric type, so one could argue that we should be using
BigInt and BigFloat.  But this would stink most of the time, so...

On an unrelated note, while the default encoding for strings is
UTF-8, the RFC absolutely allows for UTF-16 surrogate pairs, and
this must be supported.  Any strings you get from Internet
Explorer will be encoded as UTF-16 surrogate pairs regardless of
content, presumably since Windows uses 16 bit wide chars for
unicode.


All good points. Proceed with implementation! :o) -- Andrei


Re: std.jgrandson

2014-08-05 Thread Sean Kelly via Digitalmars-d

On Tuesday, 5 August 2014 at 17:17:56 UTC, Andrei Alexandrescu
wrote:


I searched around a bit and it seems different libraries have 
different takes to this numeric matter. A simple reading of the 
spec suggests that floating point data is the only numeric 
type. However, many implementations choose to distinguish 
between floating point and integrals.


The original point of JSON was that it auto-converts to
Javascript data.  And since Javascript only has one numeric type,
of course JSON does too.  But I think it's important that a JSON
package for a language maps naturally to the types available in
that language.  D provides both floating point and integer types,
each with their own costs and benefits, and so the JSON package
should as well.  It ends up being a lot easier to deal with than
remembering to round from JSON.number or whatever when assigning
to an int.

In fact, JSON doesn't even impose any precision restrictions on
its numeric type, so one could argue that we should be using
BigInt and BigFloat.  But this would stink most of the time, so...

On an unrelated note, while the default encoding for strings is
UTF-8, the RFC absolutely allows for UTF-16 surrogate pairs, and
this must be supported.  Any strings you get from Internet
Explorer will be encoded as UTF-16 surrogate pairs regardless of
content, presumably since Windows uses 16 bit wide chars for
unicode.


Re: std.jgrandson

2014-08-05 Thread Dicebot via Digitalmars-d
On Tuesday, 5 August 2014 at 17:17:56 UTC, Andrei Alexandrescu 
wrote:

On 8/5/14, 8:23 AM, Daniel Murphy wrote:

"Andrea Fontana"  wrote in message
news:takluoqmlmmooxlov...@forum.dlang.org...

If I'm right, json has just one numeric type. No difference 
between

integers / float and no limits.

So probably the mapping is:

float/double/real/int/long => number


Maybe, but std.json has three numeric types.


I searched around a bit and it seems different libraries have 
different takes to this numeric matter. A simple reading of the 
spec suggests that floating point data is the only numeric 
type. However, many implementations choose to distinguish 
between floating point and integrals.


There is certain benefit in using same primitive types for JSON 
as ones defined by BSON spec.


Re: Guide for dmd development @ Win64?

2014-08-05 Thread Dicebot via Digitalmars-d

On Tuesday, 5 August 2014 at 08:40:00 UTC, Mike Parker wrote:

On 8/5/2014 1:09 PM, Dicebot wrote:
Ok finally have managed to compile everything for Windows 8.1 
+ Visual
C++ Express 2013 and I am very very happy that I do Linux 
programming

for a living.


To be fair, I frequently build C and C++ projects with MinGW 
and/or VC without needing to jump through any hoops, since most 
projects these days either build with a build tool like 
Premake/CMake/(Take Your Pick) or provide a number of Makefiles 
for different compiler configurations. DMD's build system on 
Windows is just masochistic.


I am most frustrated by the fact that they break the path layout 
between Visual Studio releases for no reason and that cl.exe 
can't find its own basic DLLs out of the box without an explicit 
path hint. This has nothing to do with the DMD build system - I 
can only blame the latter for not paying for every single Windows 
/ Visual Studio version out there to test path compatibility.


Re: std.jgrandson

2014-08-05 Thread Andrei Alexandrescu via Digitalmars-d

On 8/5/14, 8:23 AM, Daniel Murphy wrote:

"Andrea Fontana"  wrote in message
news:takluoqmlmmooxlov...@forum.dlang.org...


If I'm right, json has just one numeric type. No difference between
integers / float and no limits.

So probably the mapping is:

float/double/real/int/long => number


Maybe, but std.json has three numeric types.


I searched around a bit and it seems different libraries have different 
takes to this numeric matter. A simple reading of the spec suggests that 
floating point data is the only numeric type. However, many 
implementations choose to distinguish between floating point and integrals.


Andrei



Re: Guide for dmd development @ Win64?

2014-08-05 Thread Dicebot via Digitalmars-d

On Tuesday, 5 August 2014 at 17:12:27 UTC, Orvid King wrote:
I had to change much more than that but finally got it to the 
point of
actually running `make -f win64.mak` for druntime. There it 
fails trying
to compile errno.c  with a system error "mspdb120.dll is 
missing".
Googling for this message finds suggestions to kill 
"mspdbsrv.exe"

process but there is no such process running >_<


You should have been able to pass VCDIR as a variable directly 
to the make command, just as I do for the phobos and druntime 
builds.


It tries to use VCDIR/bin/x86_amd64 but this dll can be found 
only in VCDIR/bin - to make it work one either needs to add the 
former to PATH or modify win64.mak to look up both.


Re: Guide for dmd development @ Win64?

2014-08-05 Thread Orvid King via Digitalmars-d

On 8/4/2014 10:17 PM, Kapps wrote:

3) Edit the tools win32.mak to use -m64 and thus actually be 64-bit. The
makefiles don't use a different folder for x86 and x64, so you can't
have a 32-bit version of phobos and 64-bit version of phobos at same
time, so tools needs to be built for 64-bit. Then I removed everything
but ddemangle and rdmd from the targets.


To solve this, I just cleaned the input directory after installing the 
compiled binaries. The makefile for the tools repo would then be 
compiling against my newly installed dmd/druntime/phobos, which means 
there shouldn't be any issues.


Complete the checklist! :o)

2014-08-05 Thread Andrei Alexandrescu via Digitalmars-d

http://colinm.org/language_checklist.html

Andrei


Re: Guide for dmd development @ Win64?

2014-08-05 Thread Orvid King via Digitalmars-d

On 8/4/2014 9:43 PM, Dicebot wrote:

On Monday, 4 August 2014 at 22:48:51 UTC, Orvid King wrote:

Yep, you'll need to update VCDIR at the top of updateAll.sh to point
into the 2013 Visual Studio directory rather than the 2010 directory.
(I believe it should be 12.0)


I had to change much more than that but finally got it to the point of
actually running `make -f win64.mak` for druntime. There it fails trying
to compile errno.c  with a system error "mspdb120.dll is missing".
Googling for this message finds suggestions to kill "mspdbsrv.exe"
process but there is no such process running >_<


You should have been able to pass VCDIR as a variable directly to the 
make command, just as I do for the phobos and druntime builds.


Re: std.jgrandson

2014-08-05 Thread Daniel Murphy via Digitalmars-d

"Jacob Carlborg"  wrote in message news:lrqvfa$2has$1...@digitalmars.com...

I'm not saying that is a bad idea or that I don't want to be able to do 
this. I just prefer this to be handled by a generic serialization module. 
Which can of course handle the simple cases, like above, as well.


I know, but I don't really care if it's part of a generic serialization 
library or not.  I just want it there.  Chances are tying it to a future 
generic serialization library is going to make it take longer. 



Re: assume, assert, enforce, @safe

2014-08-05 Thread Andrew Godfrey via Digitalmars-d
On Tuesday, 5 August 2014 at 09:42:26 UTC, Ola Fosheim Grøstad 
wrote:
But I don't think this path is all that new… so I hope Walter, 
if he continues walking down this path, remains unrelenting and 
keeps walking past "assert as assume" until he finds truly new 
territory in the realm of formal methods. That could happen and 
bring great features to D.


This! +1000!

This is what I've been feeling too: Walter is wrong - 
astonishingly wrong in fact - but in a very interesting direction 
that may have something very 'right' on the other side of it. I 
don't know what form it will take, but the example
someone gave, of keeping track of when a range is sorted vs. not 
known to be sorted, to me gives a hint of where this may lead. I 
can't quite imagine that particular example playing out, but in 
general if the compiler keeps track of properties of things then 
it could start making algorithmic-level performance decisions 
that today we always have to make by hand. To me that's 
interesting.




Re: scope guards

2014-08-05 Thread Jacob Carlborg via Digitalmars-d

On 2014-08-05 12:38, Manu via Digitalmars-d wrote:


'scope' class destruction is deterministic though right?

http://dlang.org/statement.html : there are examples of stuff like this:

scope Foo f = new Foo();


Yes, but you don't know in the destructor of "Foo" if it was used in a 
scope declaration like above or not, unless you declare the whole class 
as "scope". BTW, Tango in D1 solved this by introducing Object.dispose 
which was called when "scope" or "delete" was used.


--
/Jacob Carlborg


Re: scope guards

2014-08-05 Thread Jacob Carlborg via Digitalmars-d

On 2014-08-05 01:10, Sean Kelly wrote:


The easiest thing would be to provide a thread-local reference to
the currently in-flight exception.  Then you could do whatever
checking you wanted to inside the scope block.


That's quite clever. Can we do that?

--
/Jacob Carlborg


Re: std.jgrandson

2014-08-05 Thread Sönke Ludwig via Digitalmars-d

Am 03.08.2014 21:53, schrieb Andrei Alexandrescu:


What would be your estimated time of finishing?



My rough estimate would be that about two weeks of calendar time should 
suffice for a first candidate, since the functionality and the design is 
already mostly there. However, it seems that VariantN will need some 
work, too (currently using opAdd results in an error for an Algebraic 
defined for JSON usage).


Re: scope guards

2014-08-05 Thread Jacob Carlborg via Digitalmars-d

On 2014-08-04 12:09, Manu via Digitalmars-d wrote:


I hate pointless brace and indentation spam, I feel it severely damages
the readability of my code. And try/catch has a natural tendency to
produce horrid nested structures.
I would rather C-style error reporting via sentinel values and 'if()'
than try/catch in practically every case imaginable, especially when
try/catches start to nest themselves.


If you want to the if-statements to have the same semantics won't those 
be nested as well. BTW, why don't you just wrap the whole function in a 
try-block and add several catch blocks to it. No need to nest them, it 
will also be closer to how scope-statements behave.



Okay, so why are scope guards such a key talking point in D if people
loved try/catch?
'scope-tastic' code would be flat and sequential. I find flat and
sequential code MUCH easier to reason about. Again, why would anyone
care about 'scope' if they didn't feel this way at some level?


You need to nest the scope-statements to have the same semantics as 
nested try-catch. Or don't nest the try-catch, see above.



, and I'm strongly tempted to just
abandon my experiment and return to C-style error handling with
sentinel
values.


I can't see how that will improve anything. Seems like you have some
grudge against Java and don't want your code to look like it.


It will produce flat sequential code which is easier to follow.


You can do that with try-catch as well, see above.


I agree, I had the same thought. But I felt it was better to integrate
it into an existing (and popular) structure than to form a new one.
I actually think there would be a valuable place for both though.


So you think a "catch" with an implicit "try" is a completely new feature?

--
/Jacob Carlborg
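For concreteness, here is the contrast being argued over: the same cleanup written once with nested try/finally and once with flat, sequential scope guards. A sketch using std.stdio.File:

```d
import std.stdio;

// Nested try/finally: each resource adds a level of indentation.
void copyNested(string src, string dst)
{
    auto a = File(src, "r");
    try
    {
        auto b = File(dst, "w");
        try
        {
            foreach (line; a.byLine)
                b.writeln(line);
        }
        finally
        {
            b.close();
        }
    }
    finally
    {
        a.close();
    }
}

// Flat scope guards with the same cleanup semantics: each guard runs
// when the enclosing scope exits, in reverse order of declaration.
void copyFlat(string src, string dst)
{
    auto a = File(src, "r");
    scope (exit) a.close();
    auto b = File(dst, "w");
    scope (exit) b.close();
    foreach (line; a.byLine)
        b.writeln(line);
}
```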


Re: scope guards

2014-08-05 Thread Jacob Carlborg via Digitalmars-d

On 2014-08-04 11:07, Daniel Murphy wrote:


But really, doesn't everyone?


Sure, but not to the point where I would go back to C style error handling.

--
/Jacob Carlborg


Re: std.jgrandson

2014-08-05 Thread Jacob Carlborg via Digitalmars-d

On 2014-08-05 11:54, Sönke Ludwig wrote:


I think we could also simply keep the generic default recursive descent
behavior, but allow serializers to customize the process using some kind
of trait. This could even be added later in a backwards compatible
fashion if necessary.


I have a very flexible trait-like system in place. It allows the 
serializer to be configured based on the given archiver and user 
customizations, to avoid having the serializer do unnecessary work on 
data the archiver cannot handle.



BTW, how is the progress for Orange w.r.t. to the conversion to a more
template+allocation-less approach


Slowly. I think the range support in the serializer is basically 
complete. But the deserializer isn't done yet. I would also like to 
provide, at least, one additional archiver type besides XML. BTW std.xml 
doesn't make it any easier to rangify the serializer.


I've been focusing on D/Objective-C lately, which I think is in a more 
complete state than std.serialization. I would really like to get it 
done and create a pull request so I can get back to std.serialization. 
But I always get stuck after a merge with something breaking. With the 
summer and vacations I haven't been able to work that much on D at all.


, is a new std proposal within the next

DMD release cycle realistic?


Probably not.


I quite like most of how vibe.data.serialization turned out, but it
can't do any alias detection/deduplication (and I have no concrete plans
to add support for that), which is why I currently wouldn't consider it
as a potential Phobos candidate.


I'm quite satisfied with the feature support and flexibility of 
Orange/std.serialization. With the new trait-like system it will be even 
more flexible.


--
/Jacob Carlborg


Re: std.jgrandson

2014-08-05 Thread Jacob Carlborg via Digitalmars-d

On 2014-08-05 14:40, Daniel Murphy wrote:


I guess I meant types that have an obvious mapping to json types.

int/long -> json integer
bool -> json bool
string -> json string
float/real -> json float (close enough)
T[] -> json array
T[string] -> json object
struct -> json object

This is usually enough for config and data files.  Being able to do this
is just awesome:

struct AppConfig
{
string somePath;
bool someOption;
string[] someList;
string[string] someMap;
}

void main()
{
auto config =
"config.json".readText().parseJSON().fromJson!AppConfig();
}


I'm not saying that is a bad idea or that I don't want to be able to do 
this. I just prefer this to be handled by a generic serialization 
module. Which can of course handle the simple cases, like above, as well.


--
/Jacob Carlborg


Re: std.jgrandson

2014-08-05 Thread Daniel Murphy via Digitalmars-d
"Andrea Fontana"  wrote in message 
news:takluoqmlmmooxlov...@forum.dlang.org...


If I'm right, json has just one numeric type. No difference between 
integers / float and no limits.


So probably the mapping is:

float/double/real/int/long => number


Maybe, but std.json has three numeric types. 



Re: std.jgrandson

2014-08-05 Thread Andrei Alexandrescu via Digitalmars-d

On 8/5/14, 2:08 AM, Andrea Fontana wrote:

Sure is, thanks. Listen, would you want to volunteer a std.data.json
proposal?



What does it mean? :)


On one side enters vibe.data.json with the deltas prompted by 
std.jgrandson plus your talent and determination, and on the other side 
comes std.data.json with code and documentation that passes the Phobos 
review process. -- Andrei




Re: Qt Creator and D

2014-08-05 Thread Max Klimov via Digitalmars-d

On Wednesday, 18 September 2013 at 14:49:27 UTC, Joseph Rushton
Wakeling wrote:

Hello all,

Several of us have been talking about Qt Creator and D in 
various subthreads of the current IDE-related discussions going 
...


Recently I started the development of plugins for QtCreator in
order to add basic support of D. I did not notice the existing
project https://github.com/GoldMax/QtCreatorD, it could have been
very useful for me.
Currently I have 2 plugins: the first one is for DUB
(http://code.dlang.org/) support, the second one is for D
language support directly.
The repositories are here:

https://github.com/Groterik/qtcreator-dubmanager
https://github.com/Groterik/qtcreator-dlangeditor

DUB-plugin provides project management features, building,
running, etc. The dlangeditor-plugin itself provides indention,
code completion, etc. It uses DCD
(https://github.com/Hackerpilot/DCD) binaries for completion.
This plugin is not as stable as I expected (sometimes completion
can disappear until QtCreator's next start), but I am looking
forward to the community's help in finding/fixing bugs and in
suggesting/developing new features.


Re: std.jgrandson

2014-08-05 Thread Andrea Fontana via Digitalmars-d

On Tuesday, 5 August 2014 at 12:40:25 UTC, Daniel Murphy wrote:
"Jacob Carlborg"  wrote in message 
news:kvuaxyxjwmpqrorlo...@forum.dlang.org...


> This is exactly what I need in most projects.  Basic types, 
> arrays, AAs, and structs are usually enough.


I was more thinking only types that cannot be broken down in 
to smaller pieces, i.e. integer, floating point, bool and 
string. The serializer would break down the other types in to 
smaller pieces.


I guess I meant types that have an obvious mapping to json 
types.


int/long -> json integer
bool -> json bool
string -> json string
float/real -> json float (close enough)
T[] -> json array
T[string] -> json object
struct -> json object

This is usually enough for config and data files.  Being able 
to do this is just awesome:


struct AppConfig
{
   string somePath;
   bool someOption;
   string[] someList;
   string[string] someMap;
}

void main()
{
   auto config = 
"config.json".readText().parseJSON().fromJson!AppConfig();

}

Being able to serialize whole graphs into json is something I 
need much less often.


If I'm right, json has just one numeric type. No difference 
between integers / float and no limits.


So probably the mapping is:

float/double/real/int/long => number





Re: scope guards

2014-08-05 Thread Daniel Murphy via Digitalmars-d

"Dicebot"  wrote in message news:drbpycdjoakiofwnz...@forum.dlang.org...

scope classes are not supported anymore and considered D1 legacy ;) Though 
not officially deprecated I doubt anyone actually pays attention if those 
are even working.


I do, for DDMD.  The only thing wrong with them is that they're not safe, 
and hopefully we'll be fixing it so 'scope' actually works at some point... 



Re: std.jgrandson

2014-08-05 Thread Daniel Murphy via Digitalmars-d
"Jacob Carlborg"  wrote in message 
news:kvuaxyxjwmpqrorlo...@forum.dlang.org...


> This is exactly what I need in most projects.  Basic types, arrays, AAs, 
> and structs are usually enough.


I was more thinking only types that cannot be broken down in to smaller 
pieces, i.e. integer, floating point, bool and string. The serializer 
would break down the other types in to smaller pieces.


I guess I meant types that have an obvious mapping to json types.

int/long -> json integer
bool -> json bool
string -> json string
float/real -> json float (close enough)
T[] -> json array
T[string] -> json object
struct -> json object

This is usually enough for config and data files.  Being able to do this is 
just awesome:


struct AppConfig
{
   string somePath;
   bool someOption;
   string[] someList;
   string[string] someMap;
}

void main()
{
   auto config = "config.json".readText().parseJSON().fromJson!AppConfig();
}

Being able to serialize whole graphs into json is something I need much less 
often. 
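Note that `fromJson` above is the proposed convenience, not an existing std.json function. A minimal sketch of how the struct case could map onto std.json, looking fields up by name in the JSON object (exact std.json property/enum names vary between compiler versions, so treat the accessors here as an assumption):

```d
import std.json;

// Sketch: populate a struct's fields from a JSON object by field name.
// Handles only the simple mappings discussed in the thread.
T fromJson(T)(JSONValue v) if (is(T == struct))
{
    T result;
    foreach (i, ref field; result.tupleof)
    {
        // Compile-time field name, used as the JSON key.
        enum name = __traits(identifier, T.tupleof[i]);
        if (auto p = name in v.object)
        {
            alias F = typeof(field);
            static if (is(F == string))
                field = p.str;
            else static if (is(F == bool))
                field = p.boolean;
            else static if (is(F == long) || is(F == int))
                field = cast(F) p.integer;
            else static if (is(F == string[]))
                foreach (e; p.array) field ~= e.str;
            else static if (is(F == string[string]))
                foreach (k, e; p.object) field[k] = e.str;
        }
    }
    return result;
}
```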



Re: What have I missed?

2014-08-05 Thread Wyatt via Digitalmars-d

On Tuesday, 5 August 2014 at 09:30:45 UTC, Era Scarecrow wrote:


 So, I don't suppose there's a short quick & dirty summary of 
what's happened in the last 18 months?


The bikeshed is now a very pleasing red, but some people think it 
should be a different shade of red and the rest think it should 
be green.


Re: assume, assert, enforce, @safe

2014-08-05 Thread Kagamin via Digitalmars-d
On Monday, 4 August 2014 at 00:59:10 UTC, Andrei Alexandrescu 
wrote:
I can totally relate to people who hold a conviction strong 
enough to

have difficulty acknowledging a contrary belief as even remotely
reasonable


Yes, it's difficult to acknowledge a belief for which no reason 
was provided; instead, a "we have nothing to add" reply was 
given.


Re: readln() blocks file operations on windows

2014-08-05 Thread Inspi8 via Digitalmars-d
On Thursday, 31 July 2014 at 19:13:28 UTC, Martin Drasar via 
Digitalmars-d wrote:

On 31.7.2014 20:37, FreeSlave via Digitalmars-d wrote:
Note that output to stdout is not good choice to check event 
order,
because it's buffered. Try to flush stdout or write to stderr. 
Maybe

it's actual problem.


Hi,

this is just for illustration, although I think that writeln 
flushes
itself. I was checking it in a debugger after it suddenly hung 
my entire

program.

Martin


Hi,

It works under Windows when compiled as a 64-bit program (-m64)


Re: discuss disqus

2014-08-05 Thread Klaim - Joël Lamotte via Digitalmars-d
Hi,
did you consider using Discourse, at least as a replacement for the
comments system? http://www.discourse.org/
It's made by the guys who made stackoverflow.com and it's useful at
least as an alternative to Disqus, and also obviously as a forum.
Some blogs (using WordPress) do use Discourse for comments.
However, the Discourse backend is written in Ruby, so if you want to
self-host you have to do some install work, but they apparently
simplified it by providing a Docker image as the default installer.


Re: scope guards

2014-08-05 Thread Dicebot via Digitalmars-d
On Tuesday, 5 August 2014 at 10:39:01 UTC, Manu via Digitalmars-d 
wrote:

'scope' class destruction is deterministic though right?

http://dlang.org/statement.html : there are examples of stuff 
like this:


scope Foo f = new Foo();


scope classes are not supported anymore and considered D1 legacy 
;) Though not officially deprecated I doubt anyone actually pays 
attention if those are even working.


Official replacement is 
http://dlang.org/phobos/std_typecons.html#.scoped


Many of dlang.org documentation pages that are not generated from 
actual code are outdated in that regard.
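The replacement mentioned, std.typecons.scoped, gives deterministic destruction for a class instance; a minimal sketch:

```d
import std.stdio;
import std.typecons : scoped;

class Resource
{
    this()  { writeln("acquired"); }
    ~this() { writeln("released"); }
}

void main()
{
    {
        // Allocates the instance on the stack; the destructor runs
        // deterministically when this block ends, with no GC involved.
        auto r = scoped!Resource();
        writeln("using resource");
    }
    writeln("after block");
}
```

The returned wrapper behaves like a Resource reference inside the block but must not escape it.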


Re: assume, assert, enforce, @safe

2014-08-05 Thread via Digitalmars-d

On Tuesday, 5 August 2014 at 10:00:55 UTC, eles wrote:
Is it wise to mix them to such a degree as to no longer 
distinguish them? For me, assume and the like should rather go 
with the annotations.


That's one of the reasons I think it is not new territory, since 
letting assert have side effects basically sounds like 
constraint programming/logic programming.


I do think that constraint programming has a place in support 
for generic programming and other things that can be known to 
evaluate at compile time. So I think imperative programming 
languages are going to become hybrids over time.


Also, if you think about the new field of "program synthesis", where 
you specify the constraints to generate/fill out boilerplate code in 
an imperative program, then the distinction becomes blurry. 
Rather than saying sort(x) you just specify that the outcome 
should be sorted in the post condition, but don't care why it 
ended up that way. So the compiler will automatically add sort(x) 
if needed. Sounds like a powerful way to get rid of boring 
programming parts.


Another point, when you think about it, Program Verification and 
Optimization are conceptually closely related.


S = specification // asserts are a weak version of this
P = program
E = executable

ProgramVerification:
Prove( S(x)==P(x) for all x )

Optimization Invariant:
Prove( P(x)==E(x) for all x )
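Stated as formulas, the two proof obligations compose by transitivity into the end-to-end guarantee that the executable meets the specification:

```latex
\text{Verification:}\quad \forall x.\; S(x) = P(x)
\qquad
\text{Optimization soundness:}\quad \forall x.\; P(x) = E(x)
\qquad
\Rightarrow\quad \forall x.\; S(x) = E(x)
```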


Re: scope guards

2014-08-05 Thread Manu via Digitalmars-d
On 5 August 2014 19:37, Atila Neves via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:

> On Monday, 4 August 2014 at 14:57:44 UTC, Dicebot wrote:
>
>> On Monday, 4 August 2014 at 04:09:07 UTC, Manu via Digitalmars-d wrote:
>>
>>> Sure, scope() may be useful for this, but it seems in my experience that
>>> destructors almost always perform this without any additional code at the
>>> callsite.
>>>
>>
>> Destructors only work if:
>>
>> a) you already have RAII wrappers provided, otherwise it is much more
>> code to write
>> b) you work with structs, class destruction is non-deterministic
>>
>
> b) And even then, struct destruction can be non-deterministic if they
> happen to be in a dynamic array...
>

'scope' class destruction is deterministic though right?

http://dlang.org/statement.html : there are examples of stuff like this:

scope Foo f = new Foo();


Re: std.jgrandson

2014-08-05 Thread Dicebot via Digitalmars-d

On Tuesday, 5 August 2014 at 09:54:42 UTC, Sönke Ludwig wrote:
I think we could also simply keep the generic default recursive 
descent behavior, but allow serializers to customize the 
process using some kind of trait. This could even be added 
later in a backwards compatible fashion if necessary.


Simple option is to define required serializer traits and make 
both std.serialization default and any custom data-specific ones 
conform it.


Re: assume, assert, enforce, @safe

2014-08-05 Thread eles via Digitalmars-d
On Tuesday, 5 August 2014 at 09:42:26 UTC, Ola Fosheim Grøstad 
wrote:
On Monday, 4 August 2014 at 00:59:10 UTC, Andrei Alexandrescu 
wrote:

For my money, consider Walter's response:


I feel a bit confused about the mixture between compiler and 
optimizer. While I agree the compiler does the optimization and 
the two are intrinsically linked, the languages (or the 
instructions) for them seem to me to belong to quite different 
paradigms:


- compiler language is imperative programming
- optimizer language is declarative programming

Is it wise to mix them to such a degree as to no longer distinguish 
them? For me, assume and the like should rather go with the 
annotations.


Re: Status of getting CDGC into druntime

2014-08-05 Thread Dicebot via Digitalmars-d

Oh, thanks for reminding me about this thread :)

I have been working on CDGC porting for some time. Currently I 
have a basic version that can be built within D2 druntime and is 
capable of allocating and cleaning the garbage. It still needs 
some cleanup and tests do not pass because of different runtime 
requirements compared to D1 but I expect to present something for 
public experiments this autumn.


(thanks Sean Kelly for earlier similar effort and Martin Nowak 
for explaining how to do forks properly :P)


Re: What have I missed?

2014-08-05 Thread Era Scarecrow via Digitalmars-d

On Tuesday, 5 August 2014 at 09:52:13 UTC, Dicebot wrote:
I have meant that I have no idea what changes you are speaking 
about and change list for std.bitmanip is too long to look 
through unless you know exactly what to look for :(


 I remember I almost rewrote the entire thing from scratch, and 
it had so many bug fixes it was crazy, something like 30 pulls 
and bug numbers were referenced; not just for BitArray but for 
the BitManip template as well.


 Trust me, it would be recognizable, and long too. I think at one 
point Andrei or Walter wanted it broken down into simpler, smaller 
fixes so they would be more modular, but doing so would have been 
counterproductive since it was basically an entire rewrite.


 Worse was the error that kept it from merging with Phobos back in 
2012/2013... it was something like 13 newline/whitespace mismatches 
that it didn't know how to merge... I don't know...


Re: Status of getting CDGC into druntime

2014-08-05 Thread Darren via Digitalmars-d

On Monday, 2 June 2014 at 19:04:05 UTC, Sean Kelly wrote:


What I did at the time I created the CDGC branch was diff our GC
now vs. the code from when Druntime was created (the SVN repo on
dsource.org).  It shouldn't be more than a bunch of busywork for
someone to figure out which changes are relevant and apply them,
but it's busywork no one has wanted to do yet.


I think the plan was to incrementally evolve the GC towards the 
CDGC. Probably the 'right' thing to do but it requires more 
patience (and a deeper understanding of the GC code). Judging by 
the commit history the effort does appear to have slowed down. 
Perhaps there's activity taking place that's not visible on 
github.


I'm personally really interested in the progress of this effort - 
in particular removing the global lock on allocations. My primary 
experience is with Java, which is far more profligate with object 
allocation than D, but single-threaded object allocation was one 
of the biggest performance killers for certain types of 
application. We had 4-cpu servers (back when that was a lot) with 
1 red-hot processor and 3 idle ones due to contention on memory 
allocation.


I'd also like to echo Leandro regarding configurable GC. There is 
no one-size-fits-all for applications. Interactive applications 
favour low-latency over throughput, while a long-running batch 
process wants throughput and doesn't care so much about long 
pauses. Being able to tune Java's GC at runtime allowed me to 
turn a 90 minute batch process into 12 minutes with zero code 
changes. A real lifesaver.


Re: std.jgrandson

2014-08-05 Thread Sönke Ludwig via Digitalmars-d

Am 04.08.2014 20:38, schrieb Jacob Carlborg:

On 2014-08-04 16:55, Dicebot wrote:


That is exactly the problem - if `structToJson` won't be provided,
complaints are inevitable, it is too basic feature to wait for
std.serialization :(


Hmm, yeah, that's a problem.


On the other hand, a simplistic solution will inevitably result in 
people needing more. And when at some point a serialization module is in 
Phobos, there will be duplicate functionality in the library.



I am pretty sure that this is not the only optimized serialization
approach out there that does not fit in a content-insensitive
primitive-based traversal scheme. And we want Phobos stuff to be
blazingly fast, which can lead to a situation where a new data module
will circumvent the std.serialization API to get more performance.


I don't like the idea of having to reimplement serialization for each
data type that can be generalized.



I think we could also simply keep the generic default recursive descent 
behavior, but allow serializers to customize the process using some kind 
of trait. This could even be added later in a backwards compatible 
fashion if necessary.


BTW, how is the progress for Orange w.r.t. to the conversion to a more 
template+allocation-less approach, is a new std proposal within the next 
DMD release cycle realistic?


I quite like most of how vibe.data.serialization turned out, but it 
can't do any alias detection/deduplication (and I have no concrete plans 
to add support for that), which is why I currently wouldn't consider it 
as a potential Phobos candidate.


Re: What have I missed?

2014-08-05 Thread Dicebot via Digitalmars-d

On Tuesday, 5 August 2014 at 09:47:16 UTC, Era Scarecrow wrote:

On Tuesday, 5 August 2014 at 09:37:10 UTC, Dicebot wrote:

Check commit history


 It's confusing. Glancing at code snippets the code doesn't 
look like mine. I'll just have to assume my work was junked.


I have meant that I have no idea what changes you are speaking 
about and change list for std.bitmanip is too long to look 
through unless you know exactly what to look for :(


Re: What have I missed?

2014-08-05 Thread Era Scarecrow via Digitalmars-d

On Tuesday, 5 August 2014 at 09:37:10 UTC, Dicebot wrote:

Check commit history


 It's confusing. Glancing at code snippets the code doesn't look 
like mine. I'll just have to assume my work was junked.


Re: What have I missed?

2014-08-05 Thread Peter Alexander via Digitalmars-d

On Tuesday, 5 August 2014 at 09:30:45 UTC, Era Scarecrow wrote:
 So, I don't suppose there's a short quick & dirty summary of 
what's happened in the last 18 months?


Too much to list.

http://dlang.org/changelog.html

