Re: A New Era for the D Community

2023-05-03 Thread Don Allen via Digitalmars-d-announce

On Wednesday, 3 May 2023 at 23:24:53 UTC, Walter Bright wrote:

This initiative has my full support.


While I have ceased using D because of my concerns about the 
project's future (I discussed my reasons in a previous message, 
so they don't need to be repeated here), I have continued to 
check this forum occasionally, hoping to see the slope turn 
positive. Mike's message and your response are both the kind of 
thing I was hoping for.


While there is no guarantee that the effort Mike describes will 
have the desired outcome, the mere fact that the effort has been 
made and endorsed by you is a significant positive step.


I'm sure I needn't tell you that technical work and project 
management require related but different talents. I did both 
professionally for a very long time and I certainly was not 
equally good at both. You can probably guess which one was the 
laggard. But I have seen it done well, having worked for some 
really great managers.


One who deserves mention is the late Frank Heart. The ARPANet 
(and thus the Internet) would not have existed without Frank's 
unique ability to herd the brilliant cats at BBN 50+ years ago. 
He's in the Internet Hall of Fame, deservedly. See 
https://en.wikipedia.org/wiki/Frank_Heart. Some of the great 
people in the history of computer science are in the picture on 
that page, including Bob Kahn, who, with Vint Cerf, won the 
Turing Award. Both played absolutely key roles in the development 
of the Internet.


I really hope that this is the start of something good for this 
project. A lot of value has been built here, but the project has 
obviously foundered in an organizational way. Project management 
is difficult, so the trouble is not surprising or unique. The key 
is recognizing that the problems are happening and taking steps 
that have a reasonable probability of improving the situation.


I will watch how this unfolds with great interest.

/Don Allen


Re: More fun with toStringz and the GC

2022-08-06 Thread Don Allen via Digitalmars-d-announce

On Saturday, 6 August 2022 at 13:40:12 UTC, Don Allen wrote:
On Saturday, 6 August 2022 at 02:14:24 UTC, Steven 
Schveighoffer wrote:

On 8/5/22 8:51 PM, Don Allen wrote:


And this, from Section 32.2 of the Language Reference Manual:

If pointers to D garbage collector allocated memory are 
passed to C functions, it's critical to ensure that the 
memory will not be collected by the garbage collector before 
the C function is done with it. This is accomplished by:


     Making a copy of the data using 
core.stdc.stdlib.malloc() and passing the copy instead.
     -->Leaving a pointer to it on the stack (as a parameter 
or automatic variable), as the garbage collector will scan 
the stack.<--
     Leaving a pointer to it in the static data segment, as 
the garbage collector will scan the static data segment.
     Registering the pointer with the garbage collector with 
the std.gc.addRoot() or std.gc.addRange() calls.


I did what the documentation says and it does not work.


I know, I felt exactly the same way in my post on it:

https://forum.dlang.org/post/sial38$7v0$1...@digitalmars.com

I even issued a PR to remove the problematic recommendation:

https://github.com/dlang/dlang.org/pull/3102

But there was pushback to the point where it wasn't worth it. 
So I closed it.


As I said in my previous post, the documentation issue really 
needs to be addressed.


I do realize now that I *assumed* that what I did was going to 
result in a stack reference to the c-string I was trying to 
keep alive.


At the risk of over-doing this, one more thing I want to say in 
the interest of clarity: the incorrect documentation led me right 
into this error: "This is accomplished by: ... Leaving a pointer 
to it on the stack (as a parameter or automatic variable), as the 
garbage collector will scan the stack."

I've fixed my code using addRoot/removeRoot and so far it seems 
to work.
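
For reference, here is a minimal sketch of that fix, with puts 
standing in for the real C call (in my actual code the pointer 
is retained by sqlite3_bind_text, so removeRoot has to wait 
until the binding is released, not just until the call returns):

import core.memory : GC;
import core.stdc.stdio : puts;
import std.string : toStringz;

void callC(string s)
{
    auto cstr = s.toStringz();          // GC-allocated, NUL-terminated copy
    GC.addRoot(cstr);                   // registered root: survives collections
    scope (exit) GC.removeRoot(cstr);   // un-register once C is done with it
    puts(cstr);                         // stands in for any extern(C) call
}

void main() { callC("hello"); }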





Re: More fun with toStringz and the GC

2022-08-06 Thread Don Allen via Digitalmars-d-announce
On Saturday, 6 August 2022 at 02:14:24 UTC, Steven Schveighoffer 
wrote:

On 8/5/22 8:51 PM, Don Allen wrote:


And this, from Section 32.2 of the Language Reference Manual:

If pointers to D garbage collector allocated memory are passed 
to C functions, it's critical to ensure that the memory will 
not be collected by the garbage collector before the C 
function is done with it. This is accomplished by:


     Making a copy of the data using core.stdc.stdlib.malloc() 
and passing the copy instead.
     -->Leaving a pointer to it on the stack (as a parameter 
or automatic variable), as the garbage collector will scan the 
stack.<--
     Leaving a pointer to it in the static data segment, as 
the garbage collector will scan the static data segment.
     Registering the pointer with the garbage collector with 
the std.gc.addRoot() or std.gc.addRange() calls.


I did what the documentation says and it does not work.


I know, I felt exactly the same way in my post on it:

https://forum.dlang.org/post/sial38$7v0$1...@digitalmars.com

I even issued a PR to remove the problematic recommendation:

https://github.com/dlang/dlang.org/pull/3102

But there was pushback to the point where it wasn't worth it. 
So I closed it.


As I said in my previous post, the documentation issue really 
needs to be addressed.


I do realize now that I *assumed* that what I did was going to 
result in a stack reference to the c-string I was trying to keep 
alive. Bad assumption, obviously. But I think the point is that 
there is a simple, reliable mechanism -- addRoot, removeRoot -- 
that works and the documentation should say that and only that. 
Walter said this in his 9/25/21 post: "Use GC.addRoot() to keep 
a reference alive. That's what it's for." That's all that's 
needed. All the rest leads people like me who don't think like a 
compiler to make the mistake I made.


Re: More fun with toStringz and the GC

2022-08-05 Thread Don Allen via Digitalmars-d-announce
On Friday, 5 August 2022 at 23:38:22 UTC, Steven Schveighoffer 
wrote:

On 8/5/22 7:13 PM, jfondren wrote:

On Friday, 5 August 2022 at 22:51:07 UTC, Don Allen wrote:
My theory: because gc_protect2 is never referenced, I'm 
guessing that the compiler is optimizing away the storage of 
the returned pointer, the supporting evidence being what I 
said in the previous paragraph. Anyone have a better idea?


A local variable definitely isn't enough: 
https://forum.dlang.org/thread/xchnfzvpmxgytqprb...@forum.dlang.org


This package came of it: 
https://code.dlang.org/packages/keepalive




Yes, but I will warn you, the compilers are smart buggers. I 
think someone came up with a case where this still doesn't keep 
it alive (been a while since I made that).


The only true solution is to use `GC.addRoot` on the string and 
`GC.removeRoot` when you are done.


Steve --

Thanks for this.

But this time I *did* read the documentation, specifically this:

Interfacing Garbage Collected Objects With Foreign Code

The garbage collector looks for roots in:

the static data segment
the stacks and register contents of each thread
the TLS (thread-local storage) areas of each thread
any roots added by core.memory.GC.addRoot() or 
core.memory.GC.addRange()
If the only pointer to an object is held outside of these areas, 
then the collector will miss it and free the memory.


To avoid this from happening, either

maintain a pointer to the object in an area the collector 
does scan for pointers;
add a root where a pointer to the object is stored using 
core.memory.GC.addRoot() or core.memory.GC.addRange().
reallocate and copy the object using the foreign code's 
storage allocator or using the C runtime library's malloc/free.



And this, from Section 32.2 of the Language Reference Manual:

If pointers to D garbage collector allocated memory are passed to 
C functions, it's critical to ensure that the memory will not be 
collected by the garbage collector before the C function is done 
with it. This is accomplished by:


Making a copy of the data using core.stdc.stdlib.malloc() and 
passing the copy instead.
-->Leaving a pointer to it on the stack (as a parameter or 
automatic variable), as the garbage collector will scan the 
stack.<--
Leaving a pointer to it in the static data segment, as the 
garbage collector will scan the static data segment.
Registering the pointer with the garbage collector with the 
std.gc.addRoot() or std.gc.addRange() calls.


I did what the documentation says and it does not work.

Having a better version of C and C++ with a gc and the ability to 
directly call useful C/C++ libraries is a big D selling point, as 
far as I am concerned. It was a major motivation for the creation 
of Go. But getting the interaction between the GC and foreign 
functions properly documented is essential. Right now, there are 
bits and pieces of advice in the Language Reference, the Feature 
Overview, and the toStringz documentation and none of it tells 
you what you need to know. In fact, it does the opposite, telling 
you to do something (stick a pointer on the stack) that does not 
work, which leads to the "nasty bug" spoken of in the toStringz 
doc. When you waste a lot of a user's time with poor and 
inaccurate documentation, as this did mine, you are not making 
friends. I would advise fixing this asap.


/Don


More fun with toStringz and the GC

2022-08-05 Thread Don Allen via Digitalmars-d-announce
Remember all the fun we had last year when I failed to heed the 
warning in the toStringz documentation about retaining a 
reference to a char * passed into C? It took a long time to find 
that one, with a lot of help from Steve Schveighoffer and others.


Well, I've got another one. Consider this:


// Get number of children of the parent account
auto gc_protect = bind_text(n_children_stmt, 1, parent.guid);
parent.children.length = one_row!(int)(n_children_stmt, _int);

auto gc_protect2 = bind_text(account_child_stmt, 1, parent.guid);
for (int i = 0; next_row_available_p(account_child_stmt, _reset); i++) {
    parent.children[i] = new Account;
    parent.children[i].name =
        fromStringz(sqlite3_column_text(account_child_stmt, 0)).idup;
    parent.children[i].guid =
        fromStringz(sqlite3_column_text(account_child_stmt, 1)).idup;
    parent.children[i].flags = sqlite3_column_int(account_child_stmt, 2);
    parent.children[i].value = get_account_value(parent.children[i]);
}

bind_text takes a D string, turns it into a C string with 
toStringz, uses that to call sqlite3_bind_text and returns the C 
string, which I store as you can see with the intention of 
protecting it from the gc. The code as written above does not 
work. At some point, I get an index-out-of-bounds error, because 
the loop is seeing too many children. If I turn off the GC, the 
code works correctly and the application completes normally.


With the GC on, if I put a debugging writeln inside the loop, 
right after the 'for', that prints, among other things, the value 
of gc_protect2 (I wanted to convince myself that the GC wasn't 
moving what it points to; yes, I know the documentation says the 
current GC won't do that), the problem goes away. A Heisenbug!


My theory: because gc_protect2 is never referenced, I'm guessing 
that the compiler is optimizing away the storage of the returned 
pointer, the supporting evidence being what I said in the 
previous paragraph. Anyone have a better idea?


By the way, I get the same error compiling this with dmd or ldc.

/Don Allen


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-30 Thread Don via Digitalmars-d-announce

On Wednesday, 30 January 2019 at 13:58:38 UTC, 12345swordy wrote:
I do not accept gut feeling as a valid objection here. The 
current workarounds are shown to be painful in the DIP and in 
the discussions it currently links. That's *the* motivation here.


Like I said previously I am on the reviews side and that's it.

By the way I don't like your tone when you say: "I do not accept 
gut feeling as a valid objection here".


I don't think you would like it if I said your opinion is biased 
because you know the author, so don't go that way; it's not only 
me who is against this DIP.


I am familiar with the author here, he is very involved with 
the C++<->D compatibility side of things. He knows the pain 
from first hand experience.


Alright, we're talking about a change that has been on hold for 
almost 10 years; if it were simple it would already have been 
done.


In this thread we saw some other concerns emerge.

Finally I only know the author by his postings in this forum, and 
I don't have anything personally against him.


Donald.


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-29 Thread Don via Digitalmars-d-announce

On Wednesday, 30 January 2019 at 03:01:36 UTC, 12345swordy wrote:

On Wednesday, 30 January 2019 at 00:25:17 UTC, Don wrote:
But what I fail to see is why can't the programmer solve this 
themselves instead of relying on a new feature that would 
cause more harm?



Donald.


...Did you even read the arguments in the dip? This has been 
discuss quite a lot in the forums, it even gives you links to 
them.


Well, I read the DIP and the whole forum discussion back in the 
day, and again I think this will create more harm than benefit 
the way it was proposed.


And starting from the beginning of this DIP - Rationale example:

   "void fun(int x);

   fun(10); // <-- this is how users expect to call a typical function

   But when ref is present:

   void fun(ref int x);

   fun(10); // <-- compile error; not an lvalue!!

   Necessitating the workaround:

   int temp = 10;
   fun(temp);

   This inconvenience extends broadly to every manner of rvalue
   passed to functions, including:"

So the solution, as I understand it, is pretty much syntactic 
sugar: creating a temporary variable with destruction.


But the concept is weird, because originally your function 
signature has a "ref parameter" and we're just creating a 
workaround that expands it to handle rvalues.


I would prefer to handle it myself with overloading instead of 
being presented with a new language feature creating different 
scenarios for something that's not the case right now.


Otherwise D will be pretty much like C++, and in that case why 
bother with it?


Donald.


Re: DIP 1016--ref T accepts r-values--Formal Assessment

2019-01-29 Thread Don via Digitalmars-d-announce

I'm on the reviewers side here.

To be honest I never liked this DIP, and maybe I'll sound dumb, 
but I think this is a case where it could bring more problems 
than anything.


The way I see it, this would be more like syntactic sugar to 
create a temporary variable for ref parameters, and that's it.


But what I fail to see is why can't the programmer solve this 
themselves instead of relying on a new feature that would cause 
more harm?


With overloading one could do:

import std.stdio;

void f(int i) {
    f(i);       // i is an lvalue here, so this forwards to the ref overload
}

void f(ref int i) {
    ++i;
    writeln(i);
}

void main() {
    int i = 0;
    f(10);      // rvalue: only the by-value overload accepts it
    f(i);       // lvalue: binds directly to the ref overload
}

prints:
11
1

The "f" function will work with ref or literal (rvalues).

But this will be controlled by the programmer the way they want 
it.


Donald.


Re: Battle-plan for CTFE

2016-05-17 Thread Don Clugston via Digitalmars-d-announce

On Sunday, 15 May 2016 at 12:17:30 UTC, Daniel Murphy wrote:

On 15/05/2016 9:57 PM, Martin Nowak wrote:

On 05/15/2016 01:58 PM, Daniel Murphy wrote:
The biggest advantage of bytecode is not the interpreter 
speed, it's
that by lowering you can substitute VarExps etc with actual 
references

to memory without modifying the AST.

By working with something lower level than the AST, you 
should end up

with something much less complex and with fewer special cases.


Which is a bad assessment, you can stick variable indexes into
VarDeclaration (we already do that) and thereby access them in 
O(1).
Converting control flow and references into byte code is far 
from

trivial, we're talking about another s2ir and e2ir here.

-Martin



For simple types that's true.  For more complicated reference 
types...


Variable indexes are not enough, you also need heap memory, but 
slices and pointers (and references) can refer to values either 
on the heap or the stack, and you can have a slice of a member 
static array of a class on the stack, etc.  Then there are 
closures...


Neither e2ir nor s2ir is actually that complex.  A lot of the 
mess there comes from the backend IR interface being rather 
difficult to work with.
We can already save a big chunk of complexity by not having to 
translate the frontend types.  E.g. implementing the logic in 
the interpreter to correctly unwind through destructors is 
unlikely to be simpler than lowering to an IR.


Exactly. I think the whole idea of trying to avoid a glue layer 
is a mistake.
CTFE is a backend. It really is. And it should be treated as one. 
A very simple one, of course.
Once you do this, you'll find all sorts of commonalities with the 
existing glue layers.
We should end up with at least 4 backends: DMD, GDC, LDC, and 
CTFE.


Many people here are acting like this is something complicated, 
and making dangerous suggestions like using Phobos inside the 
compiler. (I think everyone who has fixed a compiler bug that was 
discovered in Phobos, will know what a nightmare that would be. 
The last thing compiler development needs is another level of 
complexity in the compiler).


As I've tried to explain, the problems with CTFE historically 
were never with the CTFE engine itself. They were always with the 
interface between CTFE and the remainder of the compiler -- 
finding every case where CTFE can be called, finding all the 
bizarre cases (tuple variables, variables without a stack because 
they are local variables declared in comma expressions in global 
scope, local 'ref' variables, etc), finding all the cases where 
the syntax trees were invalid...


There's no need for grandiose plans, as if there is some 
almost-insurmountable problem to be solved. THIS IS NOT 
DIFFICULT. With the interface cleaned up, it is the well-studied 
problem of creating an interpreter. Everyone knows how to do 
this, it's been done thousands of times. The complete test suite 
is there for you. Someone just needs to do it.


I think I took the approach of using syntax trees about as far as 
it can go. It's possible, but it's really vile. Look at the code 
for doing assignments. Bleagh. The only thing in its favour is 
that originally it was the only implementation that was possible 
at all. Even the first, minimal step towards creating a ctfe 
backend -- introducing a syntax-tree-validation step -- 
simplified parts of the code immensely.


You might imagine that it's easier to work with syntax trees than 
to start from scratch but I'm certain that's not true. I'm pretty 
sure that the simplest approach is to use the simplest possible 
machine-independent bytecode that you can come up with. I had got 
to the point of starting that, but I just couldn't face doing it 
in C++.
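
To make this concrete, here is a toy sketch of the kind of 
simple, machine-independent bytecode I mean -- a stack machine 
with a handful of opcodes, not a proposal for DMD's actual 
design:

enum Op : ubyte { pushConst, add, mul, ret }

long interpret(const(ubyte)[] code, const(long)[] consts)
{
    long[64] stack;
    size_t sp = 0, pc = 0;
    for (;;)
    {
        final switch (cast(Op) code[pc++])
        {
        case Op.pushConst: stack[sp++] = consts[code[pc++]]; break;
        case Op.add: --sp; stack[sp - 1] += stack[sp]; break;
        case Op.mul: --sp; stack[sp - 1] *= stack[sp]; break;
        case Op.ret: return stack[sp - 1];
        }
    }
}

void main()
{
    import std.stdio : writeln;
    immutable long[] consts = [2, 3, 4];
    immutable ubyte[] code =
        [Op.pushConst, 0, Op.pushConst, 1, Op.add,
         Op.pushConst, 2, Op.mul, Op.ret];
    writeln(interpret(code, consts));   // (2 + 3) * 4 = 20
}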


TL;DR:  CTFE is actually a backend, so don't be afraid of 
creating a glue layer for it.





Re: Battle-plan for CTFE

2016-05-13 Thread Don Clugston via Digitalmars-d-announce

On Monday, 9 May 2016 at 16:57:39 UTC, Stefan Koch wrote:

Hi Guys,

I have been looking into the DMD now to see what I can do about 
CTFE.

Unfortunately it is a pretty big mess to untangle.
Code responsible for CTFE is in at least 3 files.
[dinterpret.d, ctfeexpr.d, constfold.d]
I was shocked to discover that the PowExpression actually 
depends on phobos! (depending on the exact codePath it may or 
may not compile...)


Yes. This is because of lowering. Walter said in his DConf talk 
that lowering was a success; actually, it's a quick-and-dirty 
hack that inevitably leads to a disaster.

Lowering always needs to be reverted.

which led me to prematurely state that it worked at CTFE 
[http://forum.dlang.org/thread/ukcoibejffinknrbz...@forum.dlang.org]


My Plan is as follows.

Add a new file for my ctfe-interpreter and update it gradually 
to take more and more of the cases the code in the files 
mentioned above was used for.


Do Dataflow analysis on the code that is to be ctfe'd so we can 
tell beforehand if we need to store state in the ctfe stack or 
not.


You don't need dataflow analysis. The CtfeCompile pass over the 
semantic tree was intended to determine how many variables are 
required by each function.


Or barring proper data-flow analysis: refcounting the variables 
on the ctfe-stack could also be a solution.


I will post more details as soon as I dive deeper into the code.


The current implementation stores persistent state for every 
CTFE invocation, while caching nothing -- not even the compiled 
form of a function body.

Because it cannot relax purity.


No. Purity is not why it doesn't save the state. It's because of 
history.


I think I need to explain the history of CTFE.
Originally, we had constant-folding. Then constant-folding was 
extended to do things like slicing a string at compile time. 
Constant folding leaks memory like the Exxon Valdez leaks oil, 
but that's OK because it only ever happens once.
Then, the constant folding was extended to include function 
calls, for loops, etc. All using the existing constant-folding 
code. Now the crappy memory usage is a problem. But it's OK 
because the CTFE code was kind of proof-of-concept thing anyway.


Now, everyone asks, why doesn't it use some kind of byte-code 
interpreter or something?
Well, the reason is, it just wasn't possible. There was actually 
no single CTFE entry point. Instead, it was a complete mess. For 
example, with template arguments, the compiler would first try to 
run CTFE on the argument, with error messages suppressed. If that 
succeeded, it was a template value argument. If it generated 
errors, it would then see if was a type. If that failed as well, 
it assumed it was a template alias argument.
The other big problem was that CTFE was also often called on a 
function which had semantic errors.


So, here is what I did with CTFE:
(1) Implement all the functionality, so that CTFE code can be 
developed. The valuable legacy of this, which I am immensely 
proud of, is the file "interpret3.d" in the test suite. It is 
very comprehensive. If a CTFE implementation passes the test 
suite, it's good to go.
The CTFE implementation itself is practically worthless. Its 
value was to get the test suite developed.


(2) Created a single entry point for CTFE. This involved working 
out rules for every place that CTFE is actually required, 
removing the horrid speculative execution of CTFE.
It made sure that functions had actually been semantically 
analyzed before they were executed (there were really horrific 
cases where the function had its semantic tree modified while it 
was being executed!!)
Getting to this point involved about 60 pull requests and six 
months of nearly full-time work. Finally it was possible to 
consider a byte-code interpreter or JITer.


We reached this point around Nov 2012.

(3) Added a 'ctfeCompile' step which runs over the semantic tree 
the first time the function is executed at compile time. Right 
now it does nothing much except that check that the semantic tree 
is valid. This detected many errors in the rest of the compiler.


We reached this point around March 2013.

My intention was to extend the ctfeCompile step to a byte-code 
generator. But then I had to stop working on it and concentrate 
on looking after my family.


Looking at the code without knowing the history, you'll think, 
the obvious way to do this would be with a byte-code generator or 
JITer, and wonder why the implementation is so horrible. But for 
most of the history, that kind of implementation was just not 
possible.
People come up with all these elaborate schemes to speed up CTFE. 
It's totally not necessary. All that's needed is a very simple 
bytecode interpreter.






Re: DConf 2016, Berlin: Call for Submissions is now open!

2015-10-26 Thread Don via Digitalmars-d-announce
On Friday, 23 October 2015 at 16:37:20 UTC, Andrei Alexandrescu 
wrote:

http://dconf.org/2016/index.html


Typo: "we're grateful to benefit of their hosting" should be 
"we're grateful to get the benefit of their hosting" or "we're 
grateful to benefit from their hosting".







Sociomantic: We're looking for a Linux Systems Admin!

2015-01-08 Thread Don via Digitalmars-d-announce
It is probably not obvious why our HR department posted this job 
ad to this newsgroup, particularly to anyone who doesn't know 
Sociomantic's relationship to the D community.


Most of the apps running on our servers are written in D. The 
role doesn't involve D programming, and the job ad doesn't even 
mention D, but it will involve working very closely with our D 
developers, in supporting the deployment and operation of D code.


You can also review the job ad on our company website:
https://www.sociomantic.com/jobs/linux-system-administrator/#.VK5_XV3ydwE

- Don.



Re: Sargon component library now on Dub

2014-12-17 Thread Don via Digitalmars-d-announce

On Sunday, 14 December 2014 at 03:26:56 UTC, Walter Bright wrote:

http://code.dlang.org/packages/sargon

These two modules failed to generate much interest in 
incorporating them into Phobos, but I'm still rather proud of 
them :-)


So am I; halffloat is much faster than any other 
implementation I've seen. The fast path for the conversion 
functions involves only a few machine instructions.
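
For the curious, the fast path is just bit manipulation. Here is 
a minimal sketch (not the actual sargon.halffloat code) of a 
truncating float-to-binary16 conversion for in-range normal 
values; NaN, infinity, subnormals and correct rounding are where 
the real implementation does its work:

ushort toHalfTruncated(float f)
{
    uint bits = *cast(uint*) &f;
    uint sign = (bits >> 16) & 0x8000;   // sign bit moves to bit 15
    int  e    = (cast(int)(bits >> 23) & 0xFF) - 127 + 15;  // rebias exponent
    uint mant = (bits >> 13) & 0x3FF;    // keep the top 10 mantissa bits
    assert(e > 0 && e < 31);             // binary16 normal range only
    return cast(ushort)(sign | (e << 10) | mant);
}

unittest
{
    assert(toHalfTruncated(1.0f) == 0x3C00);
}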


I had an extra speedup for it that made it optimal, but it 
requires a language primitive to dump excess hidden precision. We 
still need this, it is a fundamental operation (C tries to do it 
implicitly using sequence points, but they don't actually work 
properly).




Here they are:

◦sargon.lz77 - algorithms to compress and expand with LZ77 
compression algorithm


◦sargon.halffloat - IEEE 754 half-precision binary floating 
point format binary16


I'll be adding more in the future.




Re: 438-byte Hello, world Win32 EXE in D

2014-09-11 Thread Don via Digitalmars-d-announce
On Wednesday, 10 September 2014 at 13:53:32 UTC, Marco Leise 
wrote:

Am Tue, 09 Sep 2014 10:20:43 +
schrieb Don x...@nospam.com:

On Monday, 8 September 2014 at 08:18:32 UTC, Ola Fosheim 
Grøstad wrote:

 On Monday, 8 September 2014 at 08:08:23 UTC, Kagamin wrote:
 But that downloaded file is bloatware, because it has to 
 implement functionality, which is not provided by the 
 system. That tiny pe file doesn't download anything, it's 
 completely done by the system.


 Yeah…

 http://stackoverflow.com/questions/284797/hello-world-in-less-than-20-bytes

My personal best --

At my first job, a customer once made a request for a very 
simple DOS utility. They did mention that they didn't have 
much disk space on their machine, so they asked me to try to 
make the program small.
That was a once-in-a-lifetime opportunity. Naturally, I wrote 
it in asm.

The final executable size was 15 bytes. <g>
The customer loved it.


Vladimir: Good job!
Don: Nice story. What did it do?


It blanked the screen in a particular way. It was purely for 
aesthetic reasons.



During my time at a vocational school I wrote some stuff like a
tiny Windows media player with some of the ASM in the DOS/PE
header area. And an animated GIF player in ASM as a .com
executable with the GIF included in it. (Easy since GIF
algorithms are 16-bit and they use 8-bit color palettes)


Nice.
That was the only time I ever made a commercial release that was 
entirely in asm. It only took me about ten minutes to write. It 
would have been far more difficult in another language.


On Wednesday, 10 September 2014 at 14:17:25 UTC, ketmar via 
Digitalmars-d-announce wrote:

On Wed, 10 Sep 2014 16:02:01 +0200
Marco Leise via Digitalmars-d-announce
digitalmars-d-announce@puremagic.com wrote:


 The final executable size was 15 bytes. <g>
 The customer loved it.
and they never knew that it took at least 512 bytes anyway, or 
even more, depending on cluster size. heh.


Yeah. Plus the filename took up almost as much space as the 
executable code. But when they said they wanted it to be small, 
they actually meant less than 2 megabytes. When our sales guy 
saw it, he said, "You got it down to 15kb? That's incredible!"


But I won't pollute D.announce any more. :)


Re: 438-byte Hello, world Win32 EXE in D

2014-09-09 Thread Don via Digitalmars-d-announce
On Monday, 8 September 2014 at 08:18:32 UTC, Ola Fosheim Grøstad 
wrote:

On Monday, 8 September 2014 at 08:08:23 UTC, Kagamin wrote:
But that downloaded file is bloatware, because it has to 
implement functionality, which is not provided by the system. 
That tiny pe file doesn't download anything, it's completely 
done by the system.


Yeah…

http://stackoverflow.com/questions/284797/hello-world-in-less-than-20-bytes


My personal best --

At my first job, a customer once made a request for a very simple 
DOS utility. They did mention that they didn't have much disk 
space on their machine, so they asked me to try to make the 
program small.
That was a once-in-a-lifetime opportunity. Naturally, I wrote it 
in asm.

The final executable size was 15 bytes. <g>
The customer loved it.


Re: DConf 2014 Keynote: High Performance Code Using D by Walter Bright

2014-07-22 Thread Don via Digitalmars-d-announce

On Wednesday, 16 July 2014 at 10:22:41 UTC, bearophile wrote:

Andrei Alexandrescu:

http://www.reddit.com/r/programming/comments/2aruaf/dconf_2014_keynote_high_performance_code_using_d/


Walter is used to pipeline programming, so the next step is to 
also handle failures and out-of-band messages in a functional 
way (without exceptions and global error values) with two 
parallel pipelines, here named Railway-Oriented Programming. 
This is one of the simplest introductions (and he can skip 
slides 19-53) that I have found on this topic (which in the 
Haskell community is explained on the basis of monads):


http://www.slideshare.net/ScottWlaschin/railway-oriented-programming

In Bugzilla there are already requests for some 
Railway-Oriented Programming:


https://issues.dlang.org/show_bug.cgi?id=6840
https://issues.dlang.org/show_bug.cgi?id=6843

I think no language extensions are needed for this kind of 
programming, but of course built-in tuple syntax and basic 
forms of pattern matching in switch 
(https://d.puremagic.com/issues/show_bug.cgi?id=596 ) improve 
the syntax and make the code more handy, handy enough to push 
more D programmers into using it.


For some examples of those things in a system language, this 
page shows some little examples of functional syntax for Rust:

http://science.raphael.poss.name/rust-for-functional-programmers.html

Bye,
bearophile


I think that approach is more convincing for functional languages 
than for D, especially if you are limited to a single return type.


Why not just follow the Unix stdout/stderr model, and provide 
an OutputRange for errors to be sent to?


I don't really believe that there are two 'railway tracks' in the 
sense that that presentation implies. Once an error has occurred, 
typically not much more pipeline processing happens. As for Unix, 
stdout from one step is tied to stdin, but stderr is output only. 
There may be further processing of the stderr stream (eg, errors 
may be reported to a database), but the steps are completely 
independent from the main stdin-stdout track. I think you get a 
messy design if you try to combine both into a single pipeline.


I think it could be quite interesting to see which algorithms can 
be created with an Error OutputRange model.
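
Something along these lines (a toy sketch, with an appender 
standing in as the error sink):

import std.array : appender;
import std.conv : to, ConvException;
import std.stdio : writeln;

void main()
{
    auto errors = appender!(string[])();    // the "stderr" track
    int[] results;                          // the main "stdout" track
    foreach (tok; ["1", "two", "3", "x4"])
    {
        try
            results ~= tok.to!int;
        catch (ConvException)
            errors.put(tok);                // divert failures to the sink
    }
    writeln(results);       // [1, 3]
    writeln(errors.data);   // ["two", "x4"]
}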


Re: It's official: Sociomantic Labs has been acquired by dunnhumby Ltd

2014-04-04 Thread Don
On Friday, 4 April 2014 at 02:38:58 UTC, Andrei Alexandrescu 
wrote:

On 4/3/14, 7:04 AM, Don wrote:


https://www.sociomantic.com/dunnhumby-acquires-sociomantic/


Congratulations to all involved!

How will this impact the use of D at dunnhumby?


Andrei


This is going to be very big for D. Our technology will be used 
with their data and analysis (they're not a software company).

Here's what Dunnhumby said in their press release:

For some time we have been watching the work of a Berlin 
internet start-up called Sociomantic. They are a very talented 
group of people who have developed ground-breaking online 
technology, far ahead of what anyone else is doing. We have 
decided to buy the company because the combination of 
Sociomantic’s technological capability and dunnhumby’s insight 
from 430m shoppers worldwide will create a new opportunity to 
make the online experience a lot better, because for the first 
time we will be able to make online content personalised for 
people, based on what they actually like, want and need.   It is 
what we have been doing with loyalty programs and personalised 
offers for years – done with scale and speed in the digital 
world.


http://www.dunnhumby.com/its-time-revolutionise-digital-advertising


And this article gives some broader background:

http://www.zdnet.com/tescos-big-data-arm-dunnhumby-buys-ad-tech-firm-sociomantic-labs-728040/

- Don.




It's official: Sociomantic Labs has been acquired by dunnhumby Ltd

2014-04-03 Thread Don


https://www.sociomantic.com/dunnhumby-acquires-sociomantic/



Re: dmd 2.065 beta 3

2014-02-07 Thread Don

On Monday, 3 February 2014 at 18:34:15 UTC, Andrew Edwards wrote:

Following are the changes incorporated since beta 2:



The list of current regressions may be accessed here:

http://d.puremagic.com/issues/buglist.cgi?query_format=advanced&bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED

Regards,
Andrew


I just found a disastrous optimizer bug in our production code.
https://d.puremagic.com/issues/show_bug.cgi?id=12095
We shouldn't do a release without fixing that one.


Re: dmd 2.065 beta 3

2014-02-07 Thread Don
On Friday, 7 February 2014 at 10:00:50 UTC, Francesco Cattoglio 
wrote:

On Friday, 7 February 2014 at 08:44:34 UTC, Rory McGuire wrote:

Ouch! Wonder why the auto tester never picked that up.
On 7 Feb 2014 10:40, Don x...@nospam.com wrote:


Because of no final by default?


No. The bug has probably always been present in the 64 bit DMD. 
Historically, a couple of optimizer bugs like this one have been 
discovered each year.


Re: dmd 2.065 beta 1 #2

2014-01-27 Thread Don

On Wednesday, 22 January 2014 at 13:37:07 UTC, Sönke Ludwig wrote:
I'm getting deprecation warnings inside std.datetime to use 
any instead of canFind.


Also DMD now warns about using FP operators, such as <>=, for 
detecting NaN's. What's the rationale for this? One issue with 
this is that isNaN cannot be used for CTFE.



To detect NaNs, you just need to change x <>= x into x == x.

Actually, almost all uses of isNaN in std.math are unnecessarily 
slow: std.math.isNaN doesn't trigger signalling NaNs, but almost 
every function in std.math _should_ trigger signalling NaNs, so 
they should use the much faster


bool isNaN(real x) { return x != x; }
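
And a quick sanity check (a sketch) that this version, unlike 
std.math.isNaN at the time, also works in CTFE:

static assert(isNaN(real.nan));     // evaluated at compile time
static assert(!isNaN(1.0L));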


Re: So, You Want To Write Your Own Programming Language?

2014-01-22 Thread Don
On Wednesday, 22 January 2014 at 04:29:05 UTC, Walter Bright 
wrote:

http://www.reddit.com/r/programming/comments/1vtm2l/so_you_want_to_write_your_own_language_dr_dobbs/


Great article. I was surprised that you mentioned lowering 
positively, though.


I think from DMD we have enough experience to say that although 
lowering sounds good, it's generally a bad idea. It gives you a 
mostly-working prototype very quickly, but you pay a heavy price 
for it. It destroys valuable semantic information. You end up 
with poor quality error messages, and counter-intuitively, you 
can end up with _more_ special cases (eg, lowering ref-foreach in 
DMD means ref local variables can spread everywhere). And it 
reduces possibilities for the optimizer.


In DMD, lowering has caused *major* problems with AAs, foreach, 
and built-in functions, and some of the transformations that the 
inliner makes. It's also caused problems with postincrement and 
exponentiation. Probably there are other examples.


It seems to me that what does make sense is to perform lowering 
as the final step before passing the code to the backend. If you 
do it too early, you're shooting yourself in the foot.


Re: So, You Want To Write Your Own Programming Language?

2014-01-22 Thread Don

On Wednesday, 22 January 2014 at 10:38:40 UTC, bearophile wrote:

Walter Bright:


http://www.reddit.com/r/programming/comments/1vtm2l/so_you_want_to_write_your_own_language_dr_dobbs/


Thank you for the simple nice article.


The poisoning approach. [...] This is the approach we've been 
using in the D compiler, and are very pleased with the results.


Yet, even in D most of the error messages after the first few 
ones are often not so useful to me. So perhaps I'd like a 
compiler switch to show only the first few error messages and 
then stop the compiler.


Could you give an example? We've tried very hard to avoid useless 
error messages, there should only be one error message for each 
bug in the code.
Parser errors still generate a cascade of junk, and the "cannot 
deduce function from argument types" message is still painful -- 
is that what you mean? Or something else?


Re: Increasing D Compiler Speed by Over 75%

2013-07-26 Thread Don

On Thursday, 25 July 2013 at 18:03:22 UTC, Walter Bright wrote:

http://www.reddit.com/r/programming/comments/1j1i30/increasing_the_d_compiler_speed_by_over_75/



I just reported this compile speed killer:
http://d.puremagic.com/issues/show_bug.cgi?id=10716

It has a big impact on some of the tests in the DMD test suite. 
It might also be responsible for a significant part of the 
compilation time of Phobos, since array literals tend to be 
widely used inside unittest functions.


Re: DConf 2013 Day 3 Talk 1: Metaprogramming in the Real World by Don Clugston

2013-06-14 Thread Don

On Friday, 14 June 2013 at 06:49:08 UTC, Jacob Carlborg wrote:

On 2013-06-13 16:44, Leandro Lucarella wrote:

I've always used VIM without any problems. It's not what you 
typically call an IDE though. I think now some of our guys are 
using Geany moderately successfully, for sure much better than 
the Eclipse and Mono plugins. IIRC, the main problems with those 
huge IDEs were memory usage and death-files (files that made the 
IDE crash consistently).

I think there are a lot of working advanced editors for D, but 
IDEs are quite behind (at least on Linux).


I agree. But he said at the end of the talk that he didn't want 
code completion, refactoring or anything like that. Now he said 
he just wants something better than Notepad that is stable.


I don't know what's going on here, somehow people are 
consistently misunderstanding me.


The question in the talk was along the lines of "what's wrong 
with D's IDEs?". And people expected the problem was that they 
don't have good refactoring support or something. But the problem 
is much more severe:

 Mono-D is not as good as Notepad.
 EclipseD is not as good as Notepad.
Because they are unstable.



Re: DConf 2013 Day 3 Talk 2: Code Analysis for D with AnalyzeD by Stefan Rohe

2013-06-14 Thread Don
On Wednesday, 12 June 2013 at 12:50:39 UTC, Andrei Alexandrescu 
wrote:
Reddit: 
http://www.reddit.com/r/programming/comments/1g6x9g/dconf_2013_code_analysis_for_d_with_analyzed/


Hackernews: https://news.ycombinator.com/item?id=5867764

Twitter: 
https://twitter.com/D_Programming/status/344798127775182849


Facebook: 
https://www.facebook.com/dlang.org/posts/655927124420972


Youtube: http://youtube.com/watch?v=ph_uU7_QGY0

Please drive discussions on the social channels, they help D a 
lot.



Andrei


The restrictions on contracts were very interesting. Obviously 
the static analysis could be more sophisticated than what you 
currently have, but
 I don't think contracts can be much use to a static analyzer if 
they can contain arbitrary code.


I would be interested to see how much freedom is actually 
required, in order to make contracts adequately expressive. 
Perhaps it was a language mistake to allow arbitrary code inside 
contracts, it might have been better to start with something very 
restricted and gradually relax the rules.




Re: DConf 2013 Day 3 Talk 1: Metaprogramming in the Real World by Don Clugston

2013-06-13 Thread Don

On Thursday, 13 June 2013 at 06:58:22 UTC, Jacob Carlborg wrote:

On 2013-06-11 14:33, Andrei Alexandrescu wrote:

Reddit:
http://www.reddit.com/r/programming/comments/1g47df/dconf_2013_metaprogramming_in_the_real_world_by/


Hackernews: https://news.ycombinator.com/item?id=5861237

Twitter: 
https://twitter.com/D_Programming/status/344431490257526785


Facebook: 
https://www.facebook.com/dlang.org/posts/655271701153181


Youtube: http://youtube.com/watch?v=pmwKRYrfEyY

Please drive discussions on the social channels, they help D a 
lot.


I really don't understand the problem with IDE. He mentions 
that he's not interested in any autocompletion, refactoring or 
anything like that.


Actually not. I'm just opposed to any work on them right now. The 
point is that all of those things are COMPLETELY WORTHLESS if the 
IDE crashes. It's not just a bug. It's an absolute showstopper, 
and I'm begging the community to do something about it.

Fix the crashes, and then we can talk.


Re: DConf 2013 Day 3 Talk 1: Metaprogramming in the Real World by Don Clugston

2013-06-13 Thread Don

On Thursday, 13 June 2013 at 08:25:19 UTC, Dicebot wrote:
On Thursday, 13 June 2013 at 08:16:56 UTC, Peter Alexander 
wrote:
Visual Studio constantly crashes for me at work, and I can 
imagine MonoDevelop and Eclipse being similar, but simpler 
editors like Sublime Text, TextMate, vim, emacs etc. shouldn't 
crash. I've been using Sublime Text for years now and I don't 
think it has ever crashed.


I am quite surprised to hear this is an issue at all btw. 
Neither Mono-D nor Eclipse DDT has ever crashed for me on my 
smallish sources. And I just can't imagine D syntax 
highlighting crashing vim or emacs :)


Mono-D has had update issues thanks to MonoDevelop upstream, but 
that is a somewhat different story.


Mono-D and Eclipse DDT both have major problems with long pauses 
while typing (eg 15 seconds unresponsive) and crashes. Both of 
them even have "modules of death" where just viewing the file 
will cause a crash. If you're unlucky enough to get one of those 
open in your default workspace file, the IDE will crash at 
startup...




Re: DConf 2013 Day 3 Talk 1: Metaprogramming in the Real World by Don Clugston

2013-06-13 Thread Don

On Thursday, 13 June 2013 at 12:39:49 UTC, Dicebot wrote:

On Thursday, 13 June 2013 at 09:06:00 UTC, Don wrote:
Mono-D and Eclipse DDT both have major problems with long 
pauses while typing (eg 15 seconds unresponsive) and crashes. 
Both of them even have "modules of death" where just viewing 
the file will cause a crash. If you're unlucky enough to get 
one of those open in your default workspace file, the IDE will 
crash at startup...


https://github.com/aBothe/Mono-D/issues?state=open ;)

It does sound like a serious problem, but I can hardly expect 
IDE maintainers to fix such stuff without having bug reports.


Guys, this wasn't even part of the talk. The point I made in the 
talk is: at the moment, IDE bugs are much, much worse than 
compiler bugs.


Those IDEs are in an alpha state at best. They are not in a state 
where you can just submit bug reports but keep using them. Not 
commercially.




Re: DConf 2013 Day 3 Talk 1: Metaprogramming in the Real World by Don Clugston

2013-06-13 Thread Don

On Thursday, 13 June 2013 at 16:35:08 UTC, Regan Heath wrote:
On Thu, 13 Jun 2013 15:32:03 +0100, Colin Grogan 
grogan.co...@gmail.com wrote:



On Thursday, 13 June 2013 at 10:48:52 UTC, Regan Heath wrote:
On Thu, 13 Jun 2013 08:31:03 +0100, Don 
turnyourkidsintoc...@nospam.com wrote:


On Thursday, 13 June 2013 at 06:58:22 UTC, Jacob Carlborg 
wrote:

On 2013-06-11 14:33, Andrei Alexandrescu wrote:

Reddit:
http://www.reddit.com/r/programming/comments/1g47df/dconf_2013_metaprogramming_in_the_real_world_by/


Hackernews: https://news.ycombinator.com/item?id=5861237

Twitter: 
https://twitter.com/D_Programming/status/344431490257526785


Facebook: 
https://www.facebook.com/dlang.org/posts/655271701153181


Youtube: http://youtube.com/watch?v=pmwKRYrfEyY

Please drive discussions on the social channels, they help 
D a lot.


I really don't understand the problem with IDE. He mentions 
that he's not interested in any autocompletion, refactoring 
or anything like that.


Actually not. I'm just opposed to any work on them right 
now. The point is that all of those things are COMPLETELY 
WORTHLESS if the IDE crashes. It's not just a bug. It's an 
absolute showstopper, and I'm begging the community to do 
something about it.

Fix the crashes, and then we can talk.


I use Notepad++ now and have used TextPad in the past.  But, 
those are just text editors with syntax highlighting (fairly 
flexibly and simply customisable highlighting BTW).


What are the basic features you would require of a 
development environment, I am thinking of features which go 
beyond the basic concept of a text editor, such as:


- The concept of a 'project' or some other collection of 
source files which can be loaded/displayed in some fashion to 
make it easier to find/select/edit individual files


- The ability to hook in 'tools' to key presses, like "compile" 
executing "dmd ..." or similar.


...

R


How about a GUI front end to vibe-d's dub?

I use that extensively on command line and find it very good, 
I imagine it would be easy enough write a GUI for it...


Or, a plugin for an existing editor.
Or, a 'tool' configured in an existing editor to run dub in a 
certain way.


All good ideas.

What I'm driving at here is trying to find Don's minimal 
requirements beyond stability,


Must not be worse than Notepad. <g>
I don't have any requirements. I *only* care about stability at 
this point.
I'm not personally looking for an IDE. I'm more a command line 
guy.


D has fifty people contributing to the compiler, but only two or 
three working on IDEs. We need a couple more.

And that's really all I'm saying.


Re: DConf 2013 Day 3 Talk 1: Metaprogramming in the Real World by Don Clugston

2013-06-12 Thread Don

On Tuesday, 11 June 2013 at 20:02:29 UTC, Walter Bright wrote:

On 6/11/2013 12:21 PM, John Colvin wrote:

On Tuesday, 11 June 2013 at 18:47:35 UTC, Walter Bright wrote:

On 6/11/2013 8:28 AM, Adam D. Ruppe wrote:
It is great stuff, solar power is almost free money if you 
can wait 20 years for

it.


Yeah, but you'll have to replace it before 20 years!


Source? There's not much that wears out in a photovoltaic 
AFAIK. The associated
electrical components may break however, especially on some of 
the more complex

setups.


Don't have a source, I read it long ago. Note that none of the 
advertisements, brochures, etc., mention expected life of the 
PVs.


That's not correct. Almost all manufacturers provide a 20 or 30 
year warranty.
Warranty periods have been slowly increasing as the industry has 
gained field experience.



I do know that the life of any semiconductor is measured as the 
integral of the heat it experiences. Heat causes the doping to 
migrate, and when it migrates far enough the part fails.


That's true for certain kinds of dopants (it's particularly true 
if you have copper involved), but dopant migration is not an 
issue for any commercial solar modules that I know of. (The 
situation may be different for exotic technologies). This is 
because solar cells are very simple devices, they're just 
enormous diodes.


Virtually all solar module failures in the field are caused by 
mechanical issues (bad solder joints, cracks, delamination), not 
by semiconductor degradation.




PV panels can get pretty hot in direct sunlight.


They do. Still not as hot as a CPU though!


Heating/cooling cycling will also cause cracking.


Most of these problems were solved in the 80's.

We were continuously doing accelerated lifetime testing of our 
own modules and ones from various manufacturers. Temperature 
cycling, humidity freeze, hail impact testing (that's fun), wind 
load testing (that's really fun), etc.
For some silicon modules you can get oxygen-boron complexes 
induced by UV, which cause a slow reduction in power, but our 
modules survived 200 years' equivalent UVB exposure with no 
degradation whatsoever.



Circuit boards, inverters, etc., also fail, and you'd need some 
assurance you can get replacement parts for 20 years.


That one is definitely true. Even worse is batteries for off-grid 
systems, batteries have a very short lifetime.





Re: Don's talk's video to be online soon

2013-06-11 Thread Don

On Monday, 10 June 2013 at 23:54:41 UTC, Andrej Mitrovic wrote:

On 6/11/13, Steven Schveighoffer schvei...@yahoo.com wrote:
On Mon, 10 Jun 2013 19:19:20 -0400, Anthony Goins 
neonto...@gmail.com

wrote:


Will there be video for Andrew Edwards?


IIRC, Andrew specifically requested not to be videotaped.  I'm 
having

trouble finding the link where that was stated.

A shame too, he did a good job!


What about the slides, will they be available? Otherwise a 
couple of
brief sentences on what he was talking about would be cool 
(unless

those are secret too :p)


It was mainly about how D appears to newcomers, and how we can 
improve their experience.
It was very funny, and contained a lot of autobiographical 
material.

The main technical content is here:

http://forum.dlang.org/thread/km6ccu$1ads$1...@digitalmars.com


Re: dmd 2.063 beta 5

2013-05-23 Thread Don

On Tuesday, 21 May 2013 at 20:36:20 UTC, Walter Bright wrote:


Join the dmd beta mailing list to keep up with the betas. This 
one is pretty much good to go, unless something disastrous 
crops up.


http://ftp.digitalmars.com/dmd2beta.zip

Remaining regressions:

http://d.puremagic.com/issues/buglist.cgi?query_format=advanced&bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED


NO NO NO NO. I am violently opposed to this release.

This beta contains the worst language misfeature of all time. 
It's silently snuck in under the guise of a bugfix.



struct S
{
const int x = 7;
int y;
}

In previous releases, S.x was always 7.
But now, if you write

S s = S(5);

then x gets changed to 5.
This means that the const variable x has been initialized TWICE!

This new behaviour is counter-intuitive and introduces a horrible 
inconsistency.


This is totally different to what happens with module 
constructors (you get a compile error if you try to set a const 
global if it already has an initializer). Likewise, everywhere 
else in the language, when you see a const variable with an 
initializer, the initializer gives its value.
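
That is, at module level the rules look like this (a sketch of 
the behaviour I'm referring to):

const int a;        // no initializer: may be set once in a module ctor
const int b = 7;    // initializer supplied here

static this()
{
    a = 10;         // OK: this is a's one-time initialization
    b = 10;         // compile error: b already has an initializer
}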



I think the only possible solution is to make it an error to 
provide a const or immutable member with an initializer.


If you are providing an initializer, you surely meant to make it 
'static const', and that is certainly true of all existing usages 
of it.


As far as I can tell, this new feature exists only to create 
bugs. No use cases for it have been given. I cannot imagine a 
case where using this feature would not be a bug.



Please do not release this beta.


Re: dmd 2.063 beta 5

2013-05-23 Thread Don

On Thursday, 23 May 2013 at 10:17:00 UTC, Peter Alexander wrote:

On Thursday, 23 May 2013 at 09:05:02 UTC, Don wrote:
This means that the const variable x has been initialized 
TWICE!


That's no different from non-const members.


It's perfectly OK to modify a non-const member as many times as 
you like. That doesn't cause confusion.



struct Foo { int x = 1; }
Foo f = Foo(2); // f.x is 2

The initialiser is a default value if you don't provide one in 
the constructor. If you don't mark a variable as static then it 
is not static and needs to be initialised like any other member 
variable.


What gives you that idea? It's listed as an initializer in the 
spec.

It's implemented as an initializer in the compiler.


This new behaviour is counter-intuitive and introduces a 
horrible inconsistency.


It is exactly what happens in C++ and causes no confusion there.


I don't think it's legal in C++:

struct S
{
  const int x = 5;
};

w.cpp:4:17: error: ISO C++ forbids initialization of member ‘x’ 
[-fpermissive]



This is totally different to what happens with module 
constructors (you get a compile error if you try to set a 
const global if it already has an initializer).


In structs/classes, it is not an initialiser, it is a default 
value in case you don't provide a different value.



As far as I can tell, this new feature exists only to create 
bugs. No use cases for it have been given. I cannot imagine a 
case where using this feature would not be a bug.


The use case is simple: to allow non-static const member 
variables.


Not correct. You've always been able to have non-static const 
member variables, as long as they have no initializer.


What this feature does, is allow you to add an initializer which 
is ignored.


Re: dmd 2.063 beta 5

2013-05-23 Thread Don

On Thursday, 23 May 2013 at 11:08:16 UTC, Artur Skawina wrote:

On 05/23/13 11:05, Don wrote:

On Tuesday, 21 May 2013 at 20:36:20 UTC, Walter Bright wrote:


Join the dmd beta mailing list to keep up with the betas. 
This one is pretty much good to go, unless something 
disastrous crops up.


http://ftp.digitalmars.com/dmd2beta.zip

Remaining regressions:

http://d.puremagic.com/issues/buglist.cgi?query_format=advanced&bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED


NO NO NO NO. I am violently opposed to this release.

This beta contains the worst language misfeature of all time. 
It's silently snuck in under the guise of a bugfix.


It is a bugfix.


No. Disallowing the problematic initializer fixes the bug. 
Allowing it, but with a different meaning, is a new feature.




It's also a breaking change, for code that relied on the buggy
behavior. There may be ways to ease the migration. But it's not 
a 'misfeature'.


The language changes with /every/ frontend release, often 
silently or with
just a note in some bugzilla entry.. This case isn't any worse 
and at least

this change is actually a real fix.


No, it's not, it's a fix plus a new misfeature.

The scoped import change, which makes local imports effectively 
'public' is a
much more serious problem. Undoing that one would be painful in 
the future,
if it were to stay. ( 
http://d.puremagic.com/issues/show_bug.cgi?id=10128 )




struct S
{
const int x = 7;
int y;
}

In previous releases, S.x was always 7.
But now, if you write

S s = S(5);

then x gets changed to 5.
This means that the const variable x has been initialized 
TWICE!


This new behaviour is counter-intuitive and introduces a 
horrible inconsistency.


Yes, this is wrong and just shouldn't be allowed. And, yes, 
even inside ctors.


This is totally different to what happens with module 
constructors (you get a compile error if you try to set a 
const global if it already has an initializer). Likewise, 
everywhere else in the language, when you see a const variable 
with an initializer, the initializer gives its value.


Yes, introducing a const and initialized, but still mutable 
class makes no sense.


I think the only possible solution is to make it an error to 
provide a const or immutable member with an initializer.


Except for the issue mentioned above, the new behavior is 
right. Adding a
keyword (static) to such declarations should not be a real 
problem.
AIUI the compiler can be made to list all the places which need 
changing.
But, yes, the fact that the old (buggy) code compiles, but now 
silently

drops that implicit static isn't ideal.
Would making 'struct S{const a=1;}' illegal for a release 
really be a significant improvement? Any code not updated 
during that 'migration' period would then still be in the same 
situation...


No, it should be illegal for ever. It's not sensible behaviour.



If you are providing an initializer, you surely meant to make 
it 'static const', and that is certainly true of all existing 
usages of it.
As far as I can tell, this new feature exists only to create 
bugs. No use cases for it have been given. I cannot imagine a 
case where using this feature would not be a bug.


struct Packet(uint TYPE) {
   immutable uint type = TYPE;
   // ...
}


But that allows you to write:

auto w  = Packet!(7)(6);

which sets type to 6 !
That makes no sense. It's a bug. Probably you meant:

struct Packet(uint TYPE) {
static immutable uint type = TYPE;
// ...
}
which doesn't store a copy of the '7' in every instance.


Re: dmd 2.063 beta 5

2013-05-23 Thread Don
On Thursday, 23 May 2013 at 13:52:49 UTC, Steven Schveighoffer 
wrote:
On Thu, 23 May 2013 05:05:01 -0400, Don 
turnyourkidsintoc...@nospam.com wrote:



On Tuesday, 21 May 2013 at 20:36:20 UTC, Walter Bright wrote:


Join the dmd beta mailing list to keep up with the betas. 
This one is pretty much good to go, unless something 
disastrous crops up.


http://ftp.digitalmars.com/dmd2beta.zip

Remaining regressions:

http://d.puremagic.com/issues/buglist.cgi?query_format=advanced&bug_severity=regression&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED


NO NO NO NO. I am violently opposed to this release.

This beta contains the worst language misfeature of all time. 
It's silently snuck in under the guise of a bugfix.



struct S
{
const int x = 7;
int y;
}

In previous releases, S.x was always 7.
But now, if you write

S s = S(5);

then x gets changed to 5.
This means that the const variable x has been initialized 
TWICE!


This new behaviour is counter-intuitive and introduces a 
horrible inconsistency.


I disagree.

struct S
{
 const int x;
}

S s1; // x is 0



Of course! It's an uninitialized variable! Try making x a float, 
and you'll get NaN.



S s2 = S(5); // x is 5


---
I can see uses.  If you don't want x.init to be the default for 
x, then you need to set it to something else.


For example:

struct Widget
{
   immutable string name = "(unset)"; // instead of ""
}


That is just an awful workaround for the lack of default 
constructors.


Re: Rust vs Dlang

2013-03-18 Thread Don

On Saturday, 16 March 2013 at 14:42:58 UTC, Suliman wrote:
Hi folks! I had wrote small article about Rust vs D. I hope 
that you will like it!


http://versusit.org/rust-vs-d


Your first Rust example has 100.times instead of 10.times.

Is factorial really a built-in Rust function?? If so, the text 
should say so.


Might perhaps be worth noting that thread-local variables are 
built in in D, so that D's support for threads is not entirely 
library-based.
The core language is aware of threads, but thread creation etc. is 
library-based.
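
For readers of the article, a two-line sketch of what built-in means 
here (module-scope declarations, semantics as described above):

int perThread;                  // D globals are thread-local by default
__gshared int processGlobal;    // explicitly one copy for the whole process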


Re: An article about contract programming

2013-02-05 Thread Don
On Wednesday, 6 February 2013 at 04:05:23 UTC, Walter Bright 
wrote:

On 2/5/2013 8:57 AM, bearophile wrote:
D doesn't call the invariant even in that second case, as you 
see from this code

that doesn't assert:


Invariants, per the spec, are called on the end of 
constructors, the beginning of destructors, and the beginning 
and end of public functions. Foo does not have any 
ctors/dtors/functions, hence no invariant call.


Sounds like bug 519 to me.
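
A minimal sketch of the case in question (per the spec rule quoted 
above; whether this *should* be allowed to construct an 
invariant-violating object is exactly what the bug is about):

class Foo
{
    int x = -1;
    invariant() { assert(x >= 0); }   // never runs: no ctors, dtors,
                                      // or public member functions
}

void main()
{
    auto f = new Foo;   // constructs an object violating its invariant;
                        // no assert fires
}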


Re: Getting ready for 2.061

2012-12-23 Thread Don

On 23.12.2012 03:11, Walter Bright wrote:

On 12/22/2012 5:43 PM, Jonathan M Davis wrote:

On Saturday, December 22, 2012 17:36:11 Brad Roberts wrote:

On 12/22/2012 3:44 PM, Jesse Phillips wrote:

What is nice about making a pull request against staging is that the
reviewer knows that the fix can be applied that far (not that comments
wouldn't do the same).


I don't believe those assertions to be true.  Merging in either
direction is
possible and the difficulty lies in the nature of the drift between the
two.  Neither direction is necessarily any easier than the other.


If you merge from the branch to master, then there's a higher risk of
forgetting to merge fixes. If you merge from master to the branch,
then there's
a higher risk of putting changes in the branch that you don't want in the
branch. However, as long as the changes on master aren't too large,
you can
simply cherry-pick the changes from master to the branch (or vice versa)
without too much trouble. Overall though, I would think that the risk of
screwing up is higher if commits go to the branch initially rather than
master.


It makes more sense to me to put the commits into master, and then
cherry pick for the branch.


IMHO, the big issue is, and has always been, what does the autotester test?
It makes most sense to me to have all new fixes for _anything_ going 
into the development branch, and tests on the release branch to exist 
solely for regression testing just in case.

It makes no sense to me to have pull testing against multiple branches.


Re: D 1.076 Alpha for Windows 64 bits, works with VS 2010

2012-10-09 Thread Don Clugston

On 06/10/12 20:38, Walter Bright wrote:

On 9/30/2012 9:35 PM, Andrej Mitrovic wrote:

On 10/1/12, Walter Bright newshou...@digitalmars.com wrote:

Also, consider that in C++ you can throw any type, such as an int. There
is no credible way to make this work reasonably in D, as exceptions are
all derived from Exception.


Is that a bug or a feature? :)



It's a feature, and I'm not joking.

What is the compelling use case for throwing an int? How could that
possibly fit into some encapsulation model? What if library A throws an
int, and library B does? Now you catch an int - which did it come from?
You've got no clue. It's indistinguishable from garbage.



Just imagine how much fun could be had, if D let you throw sqrt(17.0) + 
37.919i.






Re: NaNs Just Don't Get No Respect

2012-08-21 Thread Don Clugston

On 20/08/12 22:21, cal wrote:

On Monday, 20 August 2012 at 19:28:33 UTC, Peter Alexander wrote:

On Sunday, 19 August 2012 at 22:22:28 UTC, Walter Bright wrote:

 I find it more likely that the NaN will go unnoticed and
 cause rare bugs.

NaNs in your output are pretty obvious. For example, if your
accounting program prints NAN for the amount on the payroll
cheques, someone is guaranteed to notice. But if it's a few cents off
in your disfavor, it might take you years to discover there's a problem.

Critical systems also would find a NaN command a lot easier to detect
than an off-by-two command, and would be able to shut down and engage
the backup.


The problem is that it's easy for even NaN's to be filtered out.

float x = 0.0f;
float y; // oops
float z = min(x, y); // NaN has disappeared, unnoticed!

My argument is that conservative compile time errors on uninitialised
variables are more likely to catch these errors.


I just tried this:

float a, b = 10;
writeln(min(a, b), ", ", fmin(a, b));

Result:
nan, 10

I think that is incorrect - both should give NaN. The scientific viz
software I use at work returns NaN for any numerical operation on NaN
values, means, smoothing, etc.


No, it's the other way around.
The IEEE 754 standard defines min(x, NaN) == min(NaN, x) == x.

According to the C standard, fmin() should be returning 10, as well.
There is a bug in fmin().

However min() and max() are extremely unusual in this respect. Almost 
everything else involving a NaN returns NaN.
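
A self-contained sketch of the behaviours reported above (hedged: the 
exact NaN handling of std.algorithm.min has depended on argument order 
and compiler version):

import std.algorithm : min;
import std.math : fmin, isNaN;
import std.stdio : writeln;

void main()
{
    float x = 0.0f;
    float y;                  // oops: default-initialized to float.nan
    writeln(min(x, y));       // the NaN can vanish here, unnoticed...
    writeln(min(y, x));       // ...or propagate, depending on argument order
    writeln(fmin(y, 10.0f));  // IEEE 754 semantics: ignore the NaN, return 10
    assert(isNaN(y));
}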






Re: NaNs Just Don't Get No Respect

2012-08-20 Thread Don Clugston

On 18/08/12 05:03, bearophile wrote:

F i L:


Why would it matter what is normal?


It matters to me because I am curious.

Why aren't my friends that work or study chemistry writing free small
online articles like my programmer/CS friends do? Maybe it's systematic
differences in their brains? Or is it just easier to talk about
coding compared to botany and chemistry and making engines? Or maybe
programmers don't know what they are doing? Or maybe it's just that I am
not looking in the right places? :-)

Bye,
bearophile



They write journal articles instead. Producing journal articles has 
never been of major importance for IT, but it is crucial for science.


And if it isn't a new, publishable result, you're more likely to 
contribute to something like Wikipedia, than to write a blog post.


OTOH some people do blog, for example, http://terrytao.wordpress.com/
who is one of the top mathematicians on the planet.


Re: Pull freeze

2012-07-30 Thread Don Clugston

On 29/07/12 13:43, Robert Clipsham wrote:

On Sunday, 29 July 2012 at 06:08:18 UTC, Andrei Alexandrescu wrote:

Due to the upcoming release, there will be no regular pull
walk-through tomorrow. Thanks for the growing rate of contribution,
and let's resume the ritual next Sunday.

Andrei


I really can't shake the feeling that you guys just don't get git!

You never need to freeze pulls with git. Ever! It just slows down
development.

The work flow is simple enough:

1. Time for a release
2. Create a release branch, it is feature frozen
3. You can keep pulling feature into master, no problem
4. You can pull regression/bug fixes into the release branch
5. A release is made, merge the release branch into master and continue.

I don't know why I bothered typing this out - the discussion has been
had time and time again with no progress made.


From a technical point of view, I think the main thing to be done 
relates to the auto-tester.
We want the autotester to run on the release branch, as well as the 
development branch.


Note of course that we have druntime and Phobos as well as DMD; they all 
need their own release branches.
I guess the easiest way to do this would be to have a single, permanent 
branch called 'release', that is used for all releases, rather than 
creating a release branch for each compiler version.


Re: Coming Soon: Stable D Releases!

2012-07-25 Thread Don Clugston

On 25/07/12 14:32, Jacob Carlborg wrote:

On 2012-07-25 09:43, Don Clugston wrote:


We don't need this complexity. The solution is *trivial*. We just need
to decide in advance that we will target a release every X weeks, and
that it should be delayed only for reasons of stability.


Yeah, but what happens when Walter or someone else decides to start a
big project, i.e. implementing COFF support, a week before release? We
end up with a bunch of half finished things.


If we had an agreed release cycle, it would not happen. The release 
cycle would be a higher authority than any single person, even Walter.



A solution to this would be to create a new branch or not push the
changes upstream. Although I don't understand why Walter doesn't already
do this.


An agreement is ALL that is required. But despite repeated requests, I
have never been able to get any traction on the idea.
Instead, people propose all kinds of crazy technical solutions, the most
recent ones being changes to bugzilla, trello, and now dlang-stable.

If we say, "There will be a new compiler release every 8 weeks", the
problem is solved. Seriously. That one-off agreement is ALL that is
required.


Apparently it seems very difficult to agree upon, since it hasn't
happened. The releases just pop up at random.


I have tried many times, without success. I've never succeeded in 
getting more than two or three people interested.







Re: Coming Soon: Stable D Releases!

2012-07-24 Thread Don Clugston

On 16/07/12 09:51, Adam Wilson wrote:

As a result of the D Versioning thread, we have decided to create a new
organization on Github called dlang-stable. This organization will be
responsible for maintaining stable releases of DMD, DRuntime, and Phobos.

So what is a stable release?
A stable release is a complete build of DMD, DRuntime, and Phobos that
ONLY includes the latest bug-fixes and non-breaking enhancements to
existing features. It will not include new features, breaking
enhancements, or any other code that the core development team may be
working on.


I'm not actually sure what this means. I fear that it may be motivated 
by an inaccurate assessment of the current situation.


The existing instability comes almost entirely from Phobos, not from the 
compiler. Historically there have been very few instances where you 
might want to choose an older compiler in preference to the latest release.


As I've said before, the one time when a language change caused 
*massive* instability was in the attempt to move AAs from language to 
library -- even though that wasn't even supposed to affect existing code 
in any way. The other thing that typically causes regressions is fixes 
to forward-reference bugs.


Historically, addition of new language features has NOT caused 
instability. What has been true is that features have been added to the 
compiler before they were really usable, but they have not broken 
existing code. Fairly obviously the 64 bit compiler was quite buggy when 
initially released. But even that massive change wasn't much more buggy 
than the library AAs! So I am not sure that you can correctly guess 
where instability will come from.


In summary -- I would not expect your stable DMD to be very different 
from the normal DMD. Phobos is where the instability issue is.


Re: Purity in D – new article

2012-05-30 Thread Don Clugston

On 29/05/12 19:35, David Nadlinger wrote:

On Tuesday, 29 May 2012 at 12:08:08 UTC, Don Clugston wrote:

And to set the record straight -- the relaxed purity ideas were not my
idea.
I forget who first said them, but it wasn't me. I just championed them.


Unfortunately, I don't quite remember either – was it Bruno Medeiros? In
any case, if somebody can help my memory here, I'd be glad to give
credit to the one who came up with the original proposal in the article
as well.

David


The successful proposal, using weakly pure/strongly pure (Sep 21 2010):

http://www.digitalmars.com/d/archives/digitalmars/D/Proposal_Relax_rules_for_pure_117735.html

It's basically the same as this one by Bruno (Apr 29 2008), which uses 
"partially pure" and mentions an earlier post by me:


http://www.digitalmars.com/d/archives/digitalmars/D/Idea_partially_pure_functions_70762.html#N70762

And the earliest reference I could find is by me (Apr 5 2008), where I 
called it an "amoral function".


http://www.digitalmars.com/d/archives/digitalmars/D/Grafting_Functional_Support_on_Top_of_an_Imperative_Language_69253.html

The first compiler release with pure function attributes (though not 
implemented) was on Apr 22, 2008, and the first with pure as a 
keyword was Jan 20, 2008.

So surely this is close to the original.

So now I'm confused, maybe it *was* me after all!
Then formalized by Bruno, and later championed by me?



Re: Purity in D – new article

2012-05-29 Thread Don Clugston

On 27/05/12 22:56, David Nadlinger wrote:

Some of you might remember that I have been meaning to write a
comprehensive introduction to design and use of purity for quite some
while now – I finally got around to do so:

http://klickverbot.at/blog/2012/05/purity-in-d/

Feedback and criticism of all kinds very welcome!

David


For the part about floating-point calculations:

As this would be an impractical restriction, in D pure functions are 
allowed to read and write floating point flags
+ (ie, the floating point state is regarded as a variable implicitly 
passed to every pure function).



And to set the record straight -- the relaxed purity ideas were not my idea.
I forget who first said them, but it wasn't me. I just championed them.



Re: Visual D 0.3.32 maintenance release

2012-05-24 Thread Don Clugston

On 13/05/12 21:28, Walter Bright wrote:

On 5/13/2012 5:31 AM, Rainer Schuetze wrote:

With the workflow of bugzilla/svn it was just copying and pasting the diff
into the bug report. I understand it is easier on Walter's side, though.


Yes, it is definitely easier on my side.

But consider that the number of contributions to dmd has increased by at
least a factor of 10 since we moved to github; that means that, in general,
contributors find it easier, too.


As I've said before -- that is true for DMD but not for Phobos.
Rate of contributions to Phobos is basically the same as when it was in svn.
Would be good to know why.


Re: Pull requests processing issue

2012-04-23 Thread Don Clugston

On 19/04/12 16:58, David Nadlinger wrote:

On Wednesday, 18 April 2012 at 10:39:26 UTC, Don Clugston wrote:

One problem is github. IMHO github's pull requests are quite
ridiculous, there is no way to prioritize them.


You can't blame GitHub for something we are not using it for – pull
request, as far as we are using them, are just a tool to keep patches
close to the source so that they can conveniently be reviewed and
merged.


If that is so, then this thread is invalid -- the number of open pull 
requests is not a useful metric.


Issue tracking, prioritization, etc. all happens on Bugzilla,

and every pull request should have an accompanying »pull«-tagged
Bugzilla entry.

The infrastructure is already there,


Is it? I can't see any method for dealing with patches that aren't 
merged immediately.
We have bugzilla as a priority system for bugs without patches, github 
for submitting patches, and a great infrastructure for testing the 
compiler after patches are merged (thanks Brad!) -- but the patch 
evaluation step (specifically, the code review part) is missing.


Re: Pull requests processing issue

2012-04-18 Thread Don Clugston

On 18/04/12 12:19, Alex Rønne Petersen wrote:

On 18-04-2012 11:00, Trass3r wrote:

I think the problem of ~100 open pull requests needs to be faced
better. People that see their patches rot in that list probably don't
feel rewarded enough to submit more patches.


So true. I won't do any further work if it's in vain anyway.
Also I regularly have to rebase my one pull request because of conflicts, 
which is annoying.

I really wonder what Walter's doing. Is he still running the whole
testsuite instead of relying on the autotester?


Just looking at the auto tester, there seems to be tons of stuff that
can readily be merged...



One problem is github. IMHO github's pull requests are quite ridiculous, 
there is no way to prioritize them.
There are quite a lot of pull requests in there which are doubtful, 
high-risk, or require a lot of time to evaluate. Currently, we don't 
have a way to deal with them.


But, the announce list is not the appropriate place for this discussion.
Please move to the main list if you want to comment further.


Re: UFCS for D

2012-04-02 Thread Don Clugston

On 30/03/12 12:22, Walter Bright wrote:

On 3/30/2012 2:15 AM, Nick Sabalausky wrote:

Andrei and I have talked about it, and we think it is because of
difficulties in breaking a module up into submodules of a package.
We think it's something we need to address.


Eh? Other people have voiced concerns over that since waaay back in even
pre-D1 times. In particular, many people have argued for allowing modules
with the same name as a package. Ie: you could have both module foo and
module foo.bar. The reasons they gave for wanting this are right
along the
lines of what you're talking about here. Eventually they got the message
that it wasn't gonna happen and they gave up asking for it.

Or is there a separate problem you're refering to?


No, that's it. What brings it to the fore is, as I said, the
kitchen-sink module that is becoming prevalent.



To be brutally honest, I don't think that's got much to do with the 
language. It's got to do with Phobos adopting the Big Ball Of Mud design 
pattern. There's no reason for the existing modules to be so huge. Eg, I 
created std.internal.math so that the math modules would stay small.

Not only are the modules huge, they import everything.

I'd like to see some attempt to fix the problem within the language as 
it is right now, before jumping straight into language changes.




Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-27 Thread Don Clugston

On 23/03/12 16:25, Andrei Alexandrescu wrote:

On 3/23/12 12:51 AM, Manfred Nowak wrote:

Andrei Alexandrescu wrote:


You may want to also print the mode of the distribution,
nontrivial but informative


In case of this implementation and according to the given link: trivial
and noninformative, because

| For samples, if it is known that they are drawn from a symmetric
| distribution, the sample mean can be used as an estimate of the
| population mode.

and the program computes the variance as if the values of the sample
follow a normal distribution, which is symmetric.

Therefore the mode of the sample is of interest only, when the variance
is calculated wrongly.


Again, benchmarks I've seen are always asymmetric. Not sure why those
shown here are symmetric. The mode should be very close to the minimum
(and in fact I think taking the minimum is a pretty good approximation
of the sought-after time).

Andrei


Agreed, I think situations where you would get a normal distribution are 
rare in benchmarking code.
Small sections of code always have a best-case scenario, where there are 
no cache misses.

If there are task switches, the best case is zero task switches.

If you use the CPU performance counters, you can identify the *cause* of 
performance variations. When I've done this, I've always been able to 
get very stable numbers.
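
A minimal sketch of the take-the-minimum approach endorsed above 
(assuming a current core.time; the helper name is hypothetical):

import core.time : Duration, MonoTime;

// The best case over many runs approximates the no-cache-miss,
// no-task-switch time being discussed.
Duration bestOf(size_t runs, scope void delegate() work)
{
    auto best = Duration.max;
    foreach (i; 0 .. runs)
    {
        immutable start = MonoTime.currTime;
        work();
        immutable elapsed = MonoTime.currTime - start;
        if (elapsed < best)
            best = elapsed;
    }
    return best;
}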


Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-23 Thread Don Clugston

On 23/03/12 09:37, Juan Manuel Cabo wrote:

On Friday, 23 March 2012 at 05:51:40 UTC, Manfred Nowak wrote:


| For samples, if it is known that they are drawn from a symmetric
| distribution, the sample mean can be used as an estimate of the
| population mode.


I'm not printing the population mode, I'm printing the 'sample mode'.
It has a very clear meaning: most frequent value. To have frequency,
I group into 'bins' by precision: 12.345 and 12.3111 will both
go to the 12.3 bin.



and the program computes the variance as if the values of the sample
follow a normal distribution, which is symmetric.


This program doesn't compute the variance. Maybe you are talking
about another program. This program computes the standard deviation
of the sample. The sample doesn't need to be of any distribution
to have a standard deviation. It is not a distribution parameter,
it is a statistic.


Therefore the mode of the sample is of interest only, when the variance
is calculated wrongly.


???

The 'sample mode', 'median' and 'average' can quickly tell you
something about the shape of the histogram, without
looking at it.
If the three coincide, then maybe you are in normal distribution land.

The only place where I assume normal distribution is for the
confidence intervals. And it's in the usage help.

If you want to support estimating weird probability
distributions parameters, forking and pull requests are
welcome. Rewrites too. Good luck detecting distribution
shapes ;-)




-manfred


PS: I should use the Student's t to make the confidence intervals,
and for computing that I should use the sample standard
deviation (/n-1), but that is a completely different story.
The z normal with n>30 approximation is quite good.
(I would have to embed a table for the Student's t tail factors,
pull reqs welcome).


No, it's easy. Student t is in std.mathspecial.




PS2: I now fixed the confusion with the confidence interval
of the variable and the confidence interval of the mu average,
I simply now show both. (release 0.4).

PS3: Statistics estimate distribution parameters.

--jm







Re: avgtime - Small D util for your everyday benchmarking needs

2012-03-23 Thread Don Clugston

On 23/03/12 11:20, Don Clugston wrote:

On 23/03/12 09:37, Juan Manuel Cabo wrote:

On Friday, 23 March 2012 at 05:51:40 UTC, Manfred Nowak wrote:


| For samples, if it is known that they are drawn from a symmetric
| distribution, the sample mean can be used as an estimate of the
| population mode.


I'm not printing the population mode, I'm printing the 'sample mode'.
It has a very clear meaning: most frequent value. To have frequency,
I group into 'bins' by precision: 12.345 and 12.3111 will both
go to the 12.3 bin.



and the program computes the variance as if the values of the sample
follow a normal distribution, which is symmetric.


This program doesn't compute the variance. Maybe you are talking
about another program. This program computes the standard deviation
of the sample. The sample doesn't need to be of any distribution
to have a standard deviation. It is not a distribution parameter,
it is a statistic.


Therefore the mode of the sample is of interest only, when the variance
is calculated wrongly.


???

The 'sample mode', 'median' and 'average' can quickly tell you
something about the shape of the histogram, without
looking at it.
If the three coincide, then maybe you are in normal distribution land.

The only place where I assume normal distribution is for the
confidence intervals. And it's in the usage help.

If you want to support estimating weird probability
distributions parameters, forking and pull requests are
welcome. Rewrites too. Good luck detecting distribution
shapes ;-)




-manfred


PS: I should use the Student's t to make the confidence intervals,
and for computing that I should use the sample standard
deviation (/n-1), but that is a completely different story.
The z normal with n>30 approximation is quite good.
(I would have to embed a table for the Student's t tail factors,
pull reqs welcome).


No, it's easy. Student t is in std.mathspecial.


Aargh, I didn't get around to copying it in. But this should do it.

/** Inverse of Student's t distribution
 *
 * Given probability p and degrees of freedom nu,
 * finds the argument t such that the one-sided
 * studentsDistribution(nu,t) is equal to p.
 *
 * Params:
 * nu = degrees of freedom. Must be >1
 * p  = probability. 0 < p < 1
 */
real studentsTDistributionInv(int nu, real p )
in {
   assert(nu > 0);
   assert(p >= 0.0L && p <= 1.0L);
}
body
{
    if (p == 0) return -real.infinity;
    if (p == 1) return real.infinity;

    real rk, z;
    rk = nu;

    if ( p > 0.25L && p < 0.75L ) {
        // central region: invert via the symmetric incomplete beta
        if ( p == 0.5L ) return 0;
        z = 1.0L - 2.0L * p;
        z = betaIncompleteInv( 0.5L, 0.5L*rk, fabs(z) );
        real t = sqrt( rk*z/(1.0L-z) );
        if ( p < 0.5L )
            t = -t;
        return t;
    }
    int rflg = -1; // sign of the result
    if (p >= 0.5L) {
        p = 1.0L - p;
        rflg = 1;
    }
    z = betaIncompleteInv( 0.5L*rk, 0.5L, 2.0L*p );

    if (z < 0) return rflg * real.infinity;
    return rflg * sqrt( rk/z - rk );
}
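
For reference, a hedged usage sketch (assuming a betaIncompleteInv 
implementation is available): the two-sided 95% interval multiplier for 
a sample of 30 would be

// two-sided 95% confidence: put 2.5% in each tail, nu = n - 1 = 29
real t = studentsTDistributionInv(29, 0.975L);
// interval: mean +/- t * sampleStdDev / sqrt(30.0)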


Re: Tango for D2: All user modules ported

2012-02-02 Thread Don Clugston

On 01/02/12 05:59, SiegeLord wrote:

Hello everyone,

Just wanted to put out an announcement with a progress report on the porting 
effort of Tango.

Through the heroic efforts of Igor Stepanov, the initial porting was completed 
ahead of schedule. All the user modules are now ported

 (save for tango.math.BigInt, which right now is aliased to 
std.bigint... this might change in the future)


Please don't change that! That's the way it should be. I tried very
hard, and unsuccessfully, to avoid that code being duplicated in Phobos 
and Tango; it's fantastic that you've finally achieved it.


Re: DVM - D Version Manager 0.4.0

2012-01-13 Thread Don Clugston

On 09/01/12 13:09, Jacob Carlborg wrote:

On 2012-01-09 10:30, Don Clugston wrote:

On 06/01/12 22:29, Jacob Carlborg wrote:

I just released a new version of DVM, 0.4.0. The only thing new in this
release is the compile command. This allows you to compile DMD, druntime
and Phobos from github. Create a folder, clone DMD, druntime and Phobos
into the newly created folder, and run "dvm compile <folder>" to compile
everything. The compiler is placed in the DMD directory.

For installation instructions see: https://bitbucket.org/doob/dvm

Changelog:

Version 0.4.0
New/Change Features
* Added a compile command for compiling DMD, druntime and Phobos from
github


I found that I needed to do:
cd .dvm
mkdir bin
before dvm install would work.


Hmm, that's strange. On which platform? Is it when installing DVM itself
or compilers?


Linux64 (Ubuntu). When installing compilers.




Re: DVM - D Version Manager 0.4.0

2012-01-09 Thread Don Clugston

On 06/01/12 22:29, Jacob Carlborg wrote:

I just released a new version of DVM, 0.4.0. The only thing new in this
release is the compile command. This allows you to compile DMD, druntime
and Phobos from github. Create a folder, clone DMD, druntime and Phobos
into the newly created folder, and run "dvm compile <folder>" to compile
everything. The compiler is placed in the DMD directory.

For installation instructions see: https://bitbucket.org/doob/dvm

Changelog:

Version 0.4.0
New/Change Features
* Added a compile command for compiling DMD, druntime and Phobos from
github


I found that I needed to do:
cd .dvm
mkdir bin
before dvm install would work.


Re: dmd 2.057 release

2011-12-15 Thread Don

On 15.12.2011 21:34, Jacob Carlborg wrote:

On 2011-12-15 20:25, Walter Bright wrote:

On 12/15/2011 4:16 AM, Jacob Carlborg wrote:

I wonder if we can list breaking changes in a separate sections in the
changelog.


Any bug fix is a breaking change - code can and does depend on bugs
(often inadvertently).


In this particular case it could just have been a design decision that
has changed. And BTW, deprecating typedef can't be considered a bug fix;
that would be perfect for a list of breaking changes.

Deprecating typedef is in the list of changed features, not in the list 
of fixed bugs. Choosing which list things go in is sometimes a bit 
arbitrary, though.
Occasionally in the changelog, major breaking changes were shown in red. 
That hasn't happened for ages.


Re: dmd 1.071 and 2.056 release

2011-10-31 Thread Don

On 27.10.2011 08:48, Jacob Carlborg wrote:

On 2011-10-26 20:34, Walter Bright wrote:

100 bugs fixed!

http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.071.zip

http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.056.zip


Impressive as always. I noticed there seem to be a couple of D2 related
fixes in the D1 changelog:

Bugzilla 6073: Cannot pass __traits(parent, ...) as a template parameter
if it is a module

Then there are a couple of fixes related to regressions for D2, don't
know if they apply to D1 as well, just look for Regression(2.0xy).


They do apply. In every case, some code was modified on the D1 compiler. 
Not all of the test cases apply to D1 though (sometimes there are bugs 
in the compiler internals, where we don't have a D1 test case that 
triggers them).




Re: dmd 1.069 and 2.054 release

2011-07-21 Thread Don

Walter Bright wrote:

On 7/20/2011 2:29 PM, Don wrote:

The new CTFE docs got left out somehow.


Not sure what you're referring to?


Sorry, it seems something went wrong with my repository. When I pushed, 
it didn't push to anything...

I'll redo it.


Re: TDPL is an Amazon Kindle bestseller

2011-06-20 Thread Don

Jonathan M Davis wrote:

On 2011-06-19 13:26, Walter Bright wrote:

On 6/19/2011 12:29 PM, Jonathan M Davis wrote:

Well, I'm still not buying a Kindle. Death to e-books! ;)

I just bought a Kindle and I'm running my unread paperbacks through the
scanner and then trashing them!


I _much_ prefer reading actual, solid, paper books. I don't particularly like 
reading books in electronic form at all. It works well for documentation and 
searchability, but beyond that, I don't see it as an advantage at all. And in 
those cases, I'd be reading them on the computer, not an e-book reader. And of 
course, then there's the issue of DRM and all that


So, I don't own an e-book reader and I hope that e-books never become so 
prominent that I'm forced to.


- Jonathan M Davis


There's a solution:

http://smellofbooks.com/


Re: TDPL is an Amazon Kindle bestseller

2011-06-20 Thread Don

Walter Bright wrote:

On 6/20/2011 12:11 AM, Don wrote:

There's a solution:

http://smellofbooks.com/


It says it's a "new book" smell. I actually like the old book smell.


Check the full product list. There's an old book smell as well. And 
"Eau, You Have Cats" -- you have cats. Make sure you read the warnings.


Re: dmd 1.068 and 2.053 release

2011-05-16 Thread Don

Jonathan M Davis wrote:

On 2011-05-15 03:50, Joel Christensen wrote:

Looks like enum's are tighter (eg. 'enum last = media[ $ - 1 ];' doesn't
work now). It was working in 52. I had heard it might be relaxed, not
tightened. I get the error, 'cannot be read at compile time'.

Also immutable imstr = "test"; printf( toStringz( imstr ) ); wasn't
working at first, but works now for some reason.

Good to have an update though.


A lot of CTFE stuff was rewritten. What all of the implications of that are, I 
don't know, but according to Don (who did the rewrite), there are cases which 
compiled before but didn't generate correct code. I don't know if there were 
any cases which compiled which were supposed to be illegal.


There are VERY MANY cases which compiled before, which were supposed to 
be illegal. The compiler used to accept a variable where it needed a 
compile-time constant!


 Regardless,
because there was a major rewrite for CTFE, the risk of CTFE bugs or 
behavioral changes is higher than is the case for most releases.


To clarify:
Two massive fixes were made, which are independent of each other:
(1) CONSTANT FOLDING: any case where a compile-time value is required 
now MUST be a compile-time value. If a compile-time value is not 
required, there is no attempt to interpret it. This fixed many 
accepts-invalid bugs.
(2) CTFE: array literals no longer use copy-on-write (which gave totally 
wrong semantics). This fixed many wrong-code bugs.


Fixing (2) also allowed a huge number of CTFE bugs to be fixed.

This particular example is a consequence of (1), and has nothing to do 
with the CTFE changes.
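
A two-line sketch of what change (1) means in practice (hedged; error 
message wording approximate):

void main()
{
    int[] media = [1, 2, 3];       // a run-time variable
    // enum last = media[$ - 1];   // now rejected: 'media' cannot be read
    //                             // at compile time
    enum last = [1, 2, 3][$ - 1];  // a genuine compile-time expression: fine
    static assert(last == 3);
}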


Re: [Article Context, First Draft] Concurrency, Parallelism and D

2011-04-11 Thread Don

Andrei Alexandrescu wrote:

On 04/10/2011 06:29 PM, Don wrote:

Andrei Alexandrescu wrote:

On 04/09/2011 09:27 PM, dsimcha wrote:

On 4/9/2011 10:22 PM, Andrei Alexandrescu wrote:

On 04/09/2011 08:31 PM, dsimcha wrote:

On 4/9/2011 7:56 PM, Andrei Alexandrescu wrote:

I think the article's title is missing a comma btw.

Andrei


Where?


Where could it ever be? After parallelism.

Andrei


Actually, I specifically remember learning about this grammar rule in
middle school. When listing stuff, the comma before the and is
optional. Putting it and not putting it are both correct.


I see. I go by Bugs in Writing (awesome book)


Ugh. I have a profound hatred for that book. Rule of thumb: if any style
guide warns against split infinitives, burn it.


You may want to reconsider. This is one book that most everybody who is 
in the writing business in any capacity agrees with: my editor, 
heavyweight technical writers, my advisor and a few other professors...


My experience is quite different. Maybe it's different in the US (I 
encountered the book from an American colleague, I've never seen it used 
by anyone else).



Besides, you can't discount the book on account of one item you disagree 
with. The book has hundreds of items, and it is near inevitable one will 
find an issue with a couple of them.


Andrei


For sure, but it was not the only item. The recommendation to use 'that' 
vs 'which' was an even more offensive item. There were several 
recommendations in that book which I thought were dreadful. I also read 
a couple of scathing criticisms of that book. (I think one was in Bill 
Bryson's excellent 'Mother Tongue'.)
In fairness, it had a few good examples, but in general I could not 
stomach the snobbish pedantry in that book. I've read too much 
functional grammar to take arbitrary normative rules seriously, when 
they are not backed up by an extensive corpus. (Which is why I recommend 
'split infinitives' as a good litmus test -- if they say don't do it, 
they haven't used a corpus).




Re: [Article Context, First Draft] Concurrency, Parallelism and D

2011-04-10 Thread Don

Andrei Alexandrescu wrote:

On 04/09/2011 09:27 PM, dsimcha wrote:

On 4/9/2011 10:22 PM, Andrei Alexandrescu wrote:

On 04/09/2011 08:31 PM, dsimcha wrote:

On 4/9/2011 7:56 PM, Andrei Alexandrescu wrote:

I think the article's title is missing a comma btw.

Andrei


Where?


Where could it ever be? After parallelism.

Andrei


Actually, I specifically remember learning about this grammar rule in
middle school. When listing stuff, the comma before the and is
optional. Putting it and not putting it are both correct.


I see. I go by Bugs in Writing (awesome book)


Ugh. I have a profound hatred for that book. Rule of thumb: if any style 
guide warns against split infinitives, burn it.




Re: [Article Context, First Draft] Concurrency, Parallelism and D

2011-04-10 Thread Don

dsimcha wrote:

On 4/10/2011 7:29 PM, Don wrote:

Andrei Alexandrescu wrote:

On 04/09/2011 09:27 PM, dsimcha wrote:

On 4/9/2011 10:22 PM, Andrei Alexandrescu wrote:

On 04/09/2011 08:31 PM, dsimcha wrote:

On 4/9/2011 7:56 PM, Andrei Alexandrescu wrote:

I think the article's title is missing a comma btw.

Andrei


Where?


Where could it ever be? After parallelism.

Andrei


Actually, I specifically remember learning about this grammar rule in
middle school. When listing stuff, the comma before the and is
optional. Putting it and not putting it are both correct.


I see. I go by Bugs in Writing (awesome book)


Ugh. I have a profound hatred for that book. Rule of thumb: if any style
guide warns against split infinitives, burn it.



Another of my memories from my middle school education.  I specifically 
remember being told not to use split infinitives.  Then, a few weeks 
later we were watching the daily news video that was part of the middle 
school curriculum at the time and it was mentioned that the Oxford 
dictionary had voted to consider split infinitives proper grammar. (This 
was in either late 1998 or early 1999.)  All this happened with the 
teacher in the room watching.


Bill Bryson's 'Mother Tongue' contains an excellent diatribe against 
that and other silly rules. He asks the question: who originally came 
up with these rules? And the answer is, hobbyists. It's quite incredible 
where some of them originate.

Is there a split infinitive in the first sentence below?
We must boldly go where none have gone before.
We have to boldly go where none have gone before.


Re: dmd 1.067 and 2.052 release

2011-02-22 Thread Don

phobophile wrote:

dsimcha Wrote:


== Quote from Don (nos...@nospam.com)'s article

Walter Bright wrote:

Now with 64 bit Linux support! (Though expect problems with it, it's
brand new.)


http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.067.zip

http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.052.zip

Eleven man-months to implement a 64-bit backend is pretty impressive, I
reckon. Congratulations, Walter!
BTW despite the emphasis on D2, this release has one of the highest
number of D1 bugfixes, ever.

Since when was it even 11?  I thought the first 64-bit commits weren't until June of
last year.


The guy has been promising 64 bits since over a year ago. WTF is wrong with 
you? Not that impressive anymore.


The first commit was on 21 June 2010. (All that first commit was, was 
defining the 64 bit register set -- it was really the very beginning of 
implementation). So it's 8 months today.




Re: dmd 1.067 and 2.052 release

2011-02-18 Thread Don

Walter Bright wrote:
Now with 64 bit Linux support! (Though expect problems with it, it's 
brand new.)



http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.067.zip

http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.052.zip


Eleven man-months to implement a 64-bit backend is pretty impressive, I 
reckon. Congratulations, Walter!


BTW despite the emphasis on D2, this release has one of the highest 
number of D1 bugfixes, ever.


Re: D Programming Language source (dmd, phobos,etc.) has moved to github

2011-01-27 Thread Don

Vladimir Panteleev wrote:

On Wed, 26 Jan 2011 23:22:34 +0200, Don nos...@nospam.com wrote:


Vladimir Panteleev wrote:

On Wed, 26 Jan 2011 06:33:35 +0200, Don nos...@nospam.com wrote:


I think this is a fallacy. It only applies if you
(1) *completely disallow* any centralisation -- which I don't think 
ever happens in practice!
 What about the Linux kernel? There's Linus's git repo, and lots of 
repos maintained by others (e.g. Linux distros). The other distros 
are not a superset of Linus's repo, they have their own branches with 
various project-specific patches and backports. Git was written for 
this specifically.


Yes, but each distro has a trunk, in which all commits are ordered by 
time. There's always an official version of every branch.


Ordered by time of what? Time of merging into the branch? That's not 
very useful, is it? They can't be ordered by time of authorship, for 
certain.


"Official" is technically meaningless in a DVCS, because no repository 
is holy by design (otherwise it wouldn't be really distributed).


Yes, in theory that's true. In practice, I don't believe it.
Just because you're using a DVCS doesn't mean you have no project 
organisation whatsoever. There's always going to be a repository that 
the release is made from.


If the 
maintainer of a repository becomes MIA, anyone can take over without any 
problems.


and (2) demand that cloning a repository be an entirely read-only 
operation (so that the repository doesn't know how many times it has 
been cloned)
and (3) demand that the revision numbers behave exactly as they do 
in svn.
 Then you're suggesting that the commit identifiers basically contain 
the clone history?


Yes, I think it could be done that way. Identifier would be composed 
of clonenumber+commitnumber. Where it is the location of the original 
change. Yes, there are difficulties with this scheme, but I think they 
are the same challenges as for implementing merges on a centralised 
VCS such as Subversion. I don't think there's anything insurmountable.


Then a clone of a clone of a clone of a clone needs four clone numbers, 
plus a revision number. It'd look something like 5:1:2:1:1056.


No. Just one repository number, and one revision number. You just need 
to be sensible in how the clone numbers are assigned. That's easy.

Basically every repository has a number of clone numbers it can assign.
Every clone gets a subset of that range. Dealing with the situation when 
the range has run out is a bit complicated, but quite doable, and there 
are steps you can take to make it a very rare occurrence.


I have almost zero interest in this stuff, so I won't say any 
more. I'm really just commenting that it's not difficult to envisage an 
algorithm which makes exposing a random hash unnecessary.
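
For what it's worth, a hedged sketch of the range-splitting idea (all 
names hypothetical): each repository owns a half-open interval of the 
clone-number space, uses its lower bound as its own number, and hands 
half of the remainder to each new clone.

struct CloneRange
{
    uint lo, hi;   // ids this repository may delegate: [lo, hi)

    CloneRange spawn()
    {
        assert(hi - lo >= 2, "range exhausted -- the rare recovery case");
        immutable mid = lo + (hi - lo) / 2;
        auto child = CloneRange(mid, hi);
        hi = mid;  // keep the lower half for ourselves
        return child;
    }
}

unittest
{
    auto root = CloneRange(0, 1 << 20);
    auto clone = root.spawn();
    // a changeset id is then (repo.lo, commitNumber): short and unambiguous
    assert(root.lo != clone.lo);
}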









Re: D Programming Language source (dmd, phobos,etc.) has moved to github

2011-01-26 Thread Don

Vladimir Panteleev wrote:

On Wed, 26 Jan 2011 06:33:35 +0200, Don nos...@nospam.com wrote:


I think this is a fallacy. It only applies if you
(1) *completely disallow* any centralisation -- which I don't think 
ever happens in practice!


What about the Linux kernel? There's Linus's git repo, and lots of repos 
maintained by others (e.g. Linux distros). The other distros are not a 
superset of Linus's repo, they have their own branches with various 
project-specific patches and backports. Git was written for this 
specifically.


Yes, but each distro has a trunk, in which all commits are ordered by 
time. There's always an official version of every branch.




and (2) demand that cloning a repository be an entirely read-only 
operation (so that the repository doesn't know how many times it has 
been cloned)
and (3) demand that the revision numbers behave exactly as they do in 
svn.


Then you're suggesting that the commit identifiers basically contain the 
clone history?


Yes, I think it could be done that way. Identifier would be composed of 
clonenumber+commitnumber. Where it is the location of the original 
change. Yes, there are difficulties with this scheme, but I think they 
are the same challenges as for implementing merges on a centralised VCS 
such as Subversion. I don't think there's anything insurmountable.




Re: D Programming Language source (dmd, phobos,etc.) has moved to github

2011-01-25 Thread Don

Vladimir Panteleev wrote:

On Tue, 25 Jan 2011 23:08:13 +0200, Nick Sabalausky a@a.a wrote:


Browsing through http://hginit.com/index.html, it looks like with Hg,
everything works just as well as with SVN, the only difference being that
you need to remember to specify which repository you're talking about
whenever you give a number.


Not just what repository, but what clone of the repository! It's 
explained in http://hginit.com/05.html. The number only makes sense for 
the clone of the repository you're working on right now - basically you 
can't tell that number to anyone, because it might mean something 
entirely different for them.


Obviously I'm not saying DMD should have gone Hg, I'm just kinda shocked 
by how horrid Git's approach is for referring to changesets. (Personally, 
that alone would be enough to get me to use Hg instead of Git for my own 
projects. Heck, I've become pretty much sold on the idea of DVCS, but 
because of this I think I'd actually sooner use SVN for a new project than 
Git.)


I think you need to take some time and think about it. It's impossible 
to use a global incrementing revision number with any DVCS!


I think this is a fallacy. It only applies if you
(1) *completely disallow* any centralisation -- which I don't think ever 
happens in practice!
and (2) demand that cloning a repository be an entirely read-only 
operation (so that the repository doesn't know how many times it has 
been cloned)

and (3) demand that the revision numbers behave exactly as they do in svn.

The SHA1 hashes are how many bits??? Enough for one commit from every 
person on earth, every few minutes, for hundreds of years. That's a 
ridiculously inefficient method of identifying changesets.
Looks like a strawman argument to me. "It can't be done" -- but only 
because unnecessary requirements have been added.


Re: Phobos unit testing uncovers a CPU bug

2010-11-27 Thread Don

Kagamin wrote:

Don Wrote:

The great tragedy was that an early AMD processor gave much more accurate sin 
and cos than the 387. But, people complained that it was different from 
Intel! So, their next processor duplicated Intel's hopelessly wrong trig 
functions.


The same question goes to you. Why do you call this a bug?


The Intel CPU gives the correct answer, but AMD's is wrong. They should 
both give the correct result.


Phobos unit testing uncovers a CPU bug

2010-11-26 Thread Don
The code below compiles to a single machine instruction, yet the results 
are CPU manufacturer-dependent.


import std.math;

void main()
{
 assert( yl2x(0x1.0076fc5cc7933866p+40L, LN2)
== 0x1.bba4a9f774f49d0ap+4L); // Passes on Intel, fails on AMD
}

The results for yl2x(0x1.0076fc5cc7933866p+40L, LN2) are:

Intel:  0x1.bba4a9f774f49d0ap+4L
AMD:0x1.bba4a9f774f49d0cp+4L

The least significant bit is different. This corresponds only to a 
fraction of a bit (that is, it's hardly important for accuracy. For 
comparison, sin and cos on x86 lose nearly sixty bits of accuracy in 
some cases!). Its importance is only that it is an undocumented 
difference between manufacturers.


The difference was discovered through the unit tests for the 
mathematical Special Functions which will be included in the next 
compiler release. Discovery of the discrepancy happened only because of 
several features of D:


- built-in unit tests (encourages tests to be run on many machines)

- built-in code coverage (the tests include extreme cases, simply 
because I was trying to increase the code coverage to high values)


- D supports the hex format for floats. Without this feature, the 
discrepancy would have been blamed on differences in the floating-point 
conversion functions in the C standard library.


This experience reinforces my belief that D is an excellent language for 
scientific computing.


Thanks to David Simcha and Dmitry Olshansky for help in tracking this down.


Re: Phobos unit testing uncovers a CPU bug

2010-11-26 Thread Don

Walter Bright wrote:

Don wrote:
The code below compiles to a single machine instruction, yet the 
results are CPU manufacturer-dependent.


This is awesome work, Don. Kudos to you, David and Dmitry.

BTW, I've read that fine-grained CPU detection can be done, beyond what 
CPUID gives, by examining slight differences in FPU results. I expect 
that *, +, -, / should all give exactly the same answers. But the 
transcendentals, and obviously yl2x, vary.


I believe that would have once been possible, I doubt it's true any more.
Basic arithmetic and sqrt all give correctly rounded results, so they're 
identical on all processors. The 387 gives greatly improved accuracy, 
compared to the 287. But AFAIK there have not been intentional changes 
since then.


The great tragedy was that an early AMD processor gave much more accurate sin 
and cos than the 387. But, people complained that it was different from 
Intel! So, their next processor duplicated Intel's hopelessly wrong trig 
functions.
I haven't seen any examples of values which are calculated differently 
between the processors. I only found one vague reference in a paper from 
CERN.


Re: Utah Valley University teaches D (using TDPL)

2010-11-18 Thread Don

Jonathan M Davis wrote:

On Tuesday, November 16, 2010 13:33:54 bearophile wrote:

Jonathan M Davis:

Most of the rest (if not all of it) could indeed be done in a library.

I am not sure it could be done nicely too :-)


That would depend on what you're trying to do. Printing test success or failure 
is as simple as adding the approprate scope statement to the beginning of each 
unittest block. A bit tedious perhaps, but not hard.



Right now
unit tests follow the unix convention of saying nothing on success,

That's a usability failure. Humans expect feedback, because you can't tell
apart unittests run and succeed from unittests not even run. That Unix
convention is bad here. And Unix commands sometimes have a -v (verbose)
command that gives feedback, while D unittests don't have this option.


I'm afraid that I have to disagree there. Having all of the successes print out 
would, in many cases, just be useless output flooding the console. I have no 
problem with making it possible for unit tests to report success, but I wouldn't 
want that to be the default. It's quite clear when a test fails, and that's what 
is necessary in order to fix test failures.


I can see why a beginner might want the positive feedback that a test has 
succeeded, but personally, I would just find it annoying. The only real advantage 
would be that it would indicate where in the unit tests the program was, and 
that's only particularly important if you have a _lot_ of them and they take a 
long time to run.


I think:   "%d unit tests passed in %d modules"
would be enough.


Re: dmd 1.065 and 2.050 release

2010-11-03 Thread Don

Stephan wrote:

On 03.11.2010 13:29, Lars T. Kyllingstad wrote:

On Fri, 29 Oct 2010 10:35:27 -0700, Walter Bright wrote:


This is primarily a bug fix release.

http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.065.zip

http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.050.zip



Thanks to all contributors for yet another good release.  My personal
favourites this time must be the relaxed purity rules (yay, now I can
actually use pure), the improvements to Tuple (finally, proper
indexing!), and the fixing of bug 4465 (which may seem trivial, but which
I've been running into several times a day for a long time).

-Lars


Speaking of fancy pure: when will the std lib actually start to use 
it? I tried to use pure lately, but as soon as I used Phobos methods I 
hit a wall. E.g. why is std.string.format not pure? I did not look into 
it, but in my pov it does not change any state and just returns a 
value depending on the given arguments.


Most development of Phobos is done with the last released version of 
DMD, not the version under development. So you'll almost never see 
Phobos using features from the compiler it is released with.


Re: Vibrant 1.5

2010-09-21 Thread Don

bearophile wrote:

ponce:


Vibrant has been open source'd (again):
http://bitbucket.org/ponce/vibrant


Very good. I have seen 2D vectors implemented something like ten times in D 
code, so I think it's time to stop this. They need to go in the standard 
library:
http://bitbucket.org/ponce/vibrant/src/tip/trunk/common2/math/vec2.d
http://bitbucket.org/ponce/vibrant/src/tip/trunk/common2/math/vec3.d
http://bitbucket.org/ponce/vibrant/src/tip/trunk/common2/math/vec4.d
http://bitbucket.org/ponce/vibrant/src/tip/trunk/common2/math/vectorop.d


Definitely we need vectors in Phobos.



Useful math, fast too:
http://bitbucket.org/ponce/vibrant/src/tip/trunk/common2/math/common.d
http://bitbucket.org/ponce/vibrant/src/tip/trunk/common2/math/rounding.d


Actually, I don't see any functions which are faster than std.math. 
std.math.exp2() is *much* faster than common.pow2() (one slow 
instruction, vs five!!!) And exp2 sets the flags correctly.

expi() is faster than sincos().


Half floats, I don't know if they are better than user defined floats of 
Phobos2:
http://bitbucket.org/ponce/vibrant/src/tip/trunk/common2/math/half.d

Quaternions:
http://bitbucket.org/ponce/vibrant/src/tip/trunk/common2/math/quat.d

A color module is useful:
http://bitbucket.org/ponce/vibrant/src/tip/trunk/common2/misc/colors.d
Python std lib has colorsys:
http://docs.python.org/library/colorsys.html

More useful general matrix code:
http://bitbucket.org/ponce/vibrant/src/tip/trunk/common2/math/mat4.d

Some very basic geometry code fit for a std.geometry module:
http://bitbucket.org/ponce/vibrant/src/tip/trunk/common2/math/math2d.d

I think all those things (maybe with a little more docs, improvements, 
unittests, contracts) are fit to be added to Phobos, because:
- they are basic things that are commonly useful;
- they aren't a lot of code;
- they will be useful ten years from now too, they will not become obsolete;
- I have seen them implemented in user D code several times;
- Some of them are present in my dlibs1 and I have used them several times.


I agree. There's some useful stuff here.


Re: Vibrant 1.5

2010-09-21 Thread Don

#ponce wrote:

Half floats, I don't know if they are better than user defined floats of 
Phobos2:
http://bitbucket.org/ponce/vibrant/src/tip/trunk/common2/math/half.d


Half floats are useful for 3D because they save bandwidth with the graphics 
card; less so for other purposes.
I think common2 allows vec3!(half), but the lack of implicit conversions makes 
it less useful than the same design in C++.


My guess is that what you really want is pack-and-unpack routines for 
whole arrays: half[0..n*2] <-> float[0..n]


It's a fascinating problem. I bet it can be done very efficiently.
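
For the unpack direction, a hedged scalar sketch (plain IEEE 754 
binary16 to binary32 bit manipulation; the function name is 
hypothetical, and this is not the SSE2 version linked below):

float halfToFloat(ushort h)
{
    immutable uint sign = (h >> 15) & 1;
    immutable uint exp  = (h >> 10) & 0x1F;
    uint mant = h & 0x3FF;
    uint bits;

    if (exp == 0x1F)              // infinity or NaN: widen the payload
        bits = (sign << 31) | 0x7F80_0000 | (mant << 13);
    else if (exp != 0)            // normal number: rebias exponent 15 -> 127
        bits = (sign << 31) | ((exp + 112) << 23) | (mant << 13);
    else if (mant == 0)           // signed zero
        bits = sign << 31;
    else                          // subnormal half: renormalize
    {
        uint e = 0;
        while ((mant & 0x400) == 0) { mant <<= 1; ++e; }
        mant &= 0x3FF;            // drop the hidden bit
        bits = (sign << 31) | ((113 - e) << 23) | (mant << 13);
    }
    return *cast(float*)&bits;
}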


Re: Vibrant 1.5

2010-09-21 Thread Don

#ponce wrote:

vec3h a = vec3h(cast(half)1.f, cast(half)2.f, cast(half)3.f);


In C++ I can make the half type and create vec3h with

vec3h a = vec3h(1, 2.f, 3);

because so much thing are implicit.


I heard stories of half-float -> float conversions being the bottleneck


I meant float -> half-float



My friend Google found some SSE2 code which does float->half in 3.5 
cycles. Not too bad.


http://www.devmaster.net/forums/showthread.php?t=10924


Re: dmd 1.064 and 2.049 release

2010-09-20 Thread Don

bearophile wrote:

Walter Bright:


This is primarily a bug fix release.


I was away. Thank you for all the work.
For the close future I suggest to focus a bit more on fixing language/design 
corner cases, instead of just on normal dmd/Phobos/druntime bugs (as done in 
this release).


Sorry bearophile, regressions and wrong-code bugs will ALWAYS have top 
priority. There will be no apology for fixing bugs like 3996, 4681, and 
4009. <g>


Re: dmd 1.063 and 2.048 release

2010-08-16 Thread Don

Brad Roberts wrote:

On 8/15/2010 12:54 PM, Walter Bright wrote:

Nick Sabalausky wrote:

This may be a pain to do, but you could narrow it down from the other
direction: recompile DMD from various trunk revisions between 2.046 and 2.047
and see which actual commit created the problem.

Try mixing/matching the compiler  Phobos to see which one of those caused the
issue.


While I agree that it's worth trying a bisection -- it's generally really quick
and easy to do (the compiler and libraries build rather fast -- about a minute
for me).  It can be a very useful technique for finding where bugs were 
introduced.

That said, it's likely to be rather difficult for this release due to the number
of fixes in the compiler that the library requires and for the periods during
which the two didn't work together.

Do try it.. worst case is you've wasted a little bit of time.  Best case you've
found the cause of the bug.

Later,
Brad


The latest compiler should work with the old Phobos, except that it will 
complain about the a<b==c bugs. That's simple to do; it just involves 
copying the 2.048 compiler onto the 2.047 release. Knowing if it is the 
compiler or Phobos/druntime would be an enormous help.


Re: dflplot 0.01

2010-07-11 Thread Don

dsimcha wrote:

In the spirit of making D2 a first-rate scientific computing language, I have
just uploaded the first usable version of my DFL-based dflplot  plotting
library to Scrapple.

Right now dflplot is still a work in progress, but it's good enough to be
useful for basic exploratory plotting in a scientific or statistical computing
context, especially in conjunction with other scientific libs like SciD and
dstats.  I'm sure of this because I've been eating my own dogfood with
pre-release versions for the past few days and am amazed at how much more I
like plotting stuff when I can do it w/o having to write stuff out to a text
file and read it back in Python or Octave and instead can plot it directly from 
D.


This is great stuff, and really valuable for D. *Ditches own plotting
library*


Re: dflplot 0.01

2010-07-11 Thread Don

dsimcha wrote:

== Quote from Don (nos...@nospam.com)'s article

dsimcha wrote:

snip

This is great stuff, and really valuable for D. *Ditches own plotting
library*


Since when did you ever have a plotting library?  Or was it not of releasable 
quality?


Personal use only, never intended for release.


Re: Bug fix week

2010-05-27 Thread Don

Stewart Gordon wrote:

Don wrote:
snip

IMHO, one of the most important bugs to fix is actually a spec bug:

4056 Template instantiation with bare parameter not documented

snip

Why single out that one?


Because it's a feature that is used in almost every non-trivial D2 
program, and the spec gives no hint that it even exists. Without it, you 
can't even make sense of many of the Phobos docs. It's an absolute 
disaster for anyone taking a first look at the language -- something 
which we expect to happen frequently in the next few weeks.
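
For anyone reading this later: the undocumented feature in question is the
bare-parameter shorthand, which lets a single-token template argument omit
its parentheses. A minimal sketch (my example, not taken from the bug report):

import std.conv : to;

void main()
{
    // The fully parenthesised form, which the spec did document:
    int a = to!(int)("42");

    // The bare-parameter shorthand at issue in bug 4056: a template
    // instantiated with a single token may drop the parentheses.
    int b = to!int("42");

    assert(a == b && a == 42);
}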


Re: dmd 1.061 and 2.046 release

2010-05-15 Thread Don

Walter Bright wrote:

Leandro Lucarella wrote:

I saw the patches, and having it all hardcoded in the compiler doesn't seem
like a good idea =/


I know the hardcoding is probably not the best, but I wanted to try it 
out to see if it was a good feature before committing a lot of work to it.


The alternative is to use some sort of configuration file for it. The 
problem, though, is that the hints are for newbies, and newbies probably 
aren't going to get a configuration file properly set up, especially if 
there are multiple such files.


I think the only purpose of such a feature is to increase the chance
that a newbie's hello world compiles successfully. The importance of
that can't be overstated, I think. First impressions matter.


Re: Can we all please stop overreacting?

2010-04-30 Thread Don

Daniel Keep wrote:


lurker wrote:

The Tango developers could have handed over all copyrights to Walter or Phobos. 
This would solve the licensing problems if anything needs to change later.


I don't know how many times this has to be explained.

To quote myself:

Thirdly, the Tango maintainers have *ALREADY TRIED* to change Tango's
license.  They wanted to move to just Apache 2.0 on the basis that it
was similar enough to the AFL to allow this without too much trouble.

The problem was that of the 50-odd contributors, there are people who
they simply couldn't get in contact with.  Without express permission,
they *CANNOT* legally change the license to something incompatible.


That's true, but largely irrelevant. Individual developers can make
agreements about relicensing their personal contributions, and state
that they're happy with their code being used in Phobos. Sean,
Steven, and I did. AFAIK the other Tango developers have not.
Everything's in version control; you can see who's contributed to which
components. Sure, there'll be places where a dozen uncontactable people
have been involved. But that shouldn't be an argument for making no
progress.


It seems very clear to me that there are Tango developers who do not 
want any of their code to be used in Phobos. Which is fine, that's their 
choice. But I wish they'd have the decency to say so, so that the 
community stops wasting time on the issue.


I've tried for the past two years to make tiny steps towards unity. But 
Tango does not seem to be interested.


Please tell me I'm wrong.


Re: Can we all please stop overreacting?

2010-04-30 Thread Don

FeepingCreature wrote:

On 30.04.2010 17:10, Don wrote:

It seems very clear to me that there are Tango developers who do not
want any of their code to be used in Phobos. Which is fine, that's their
choice. But I wish they'd have the decency to say so, so that the
community stops wasting time on the issue.



So what you're saying is, you have this knowledge despite the relevant Tango 
devs not actually saying anything in that direction.


Yes. The silence is deafening.


Could you maybe explain how you came to that conclusion, please?


Essentially, two years of trying to prove that it is false, and failing, 
despite heavy involvement in both Tango and Phobos. I have not come to 
that conclusion lightly.


Re: Can we all please stop overreacting?

2010-04-30 Thread Don

FeepingCreature wrote:

 The quality-of-code metric seems to be universally acknowledged - 
after all, druntime itself is a fork of tango core.


"We think you suck, so we'll base our new standard library on your work."

You seem to be unaware of the history, and this may be leading you to 
misunderstand the situation.


Sean Kelly wrote Ares as a replacement for Phobos. Tango began as a 
merger of Ares with Mango. Tango core is Ares. Druntime is also Ares. 
The primary author has never changed, and it's an unbroken continuation 
of development on a single code base. Ditto with tango.math, (which was 
written by me, originally in a project called 'mathextra').


Re: Can we all please stop overreacting?

2010-04-30 Thread Don

Nick Sabalausky wrote:
another lurker lur...@lurk.urk wrote in message 
news:hrfcfi$1ea...@digitalmars.com...

== Quote from Don (nos...@nospam.com)'s article

snip
Thank you Sean Kelly, Don and Steve Schveighoffer for leaving Tango and
coming to Phobos. It means a lot to everybody.


Don just said in the message you're replying to that they didn't leave 
Tango. 


My most recent svn commit to Tango was only a month ago, so I still have 
a toe in both camps. But actually I've spent almost all of my time 
working on the compiler.

I have not yet decided on how I will respond to this situation.


Re: Masahiro Nakagawa and SHOO invited to join Phobos developers

2010-04-29 Thread Don

Moritz Warning wrote:

On Thu, 29 Apr 2010 09:24:22 -0700, Walter Bright wrote:


Moritz Warning wrote:

[..]

Maybe you can talk to the Tango devs to clear up this matter?

I suggest that the Tango modules that can get full agreement from their
respective devs be converted to the Boost license. The Boost license is
free of the legal problems that BSD has, and is compatible with the
Phobos license.


As far as I have heard, Tango changed its license to be compatible with
Phobos in the first place. But Phobos then changed its license, and now
it's incompatible again.


That is 100% incorrect. Tango always used a more restrictive license 
than Phobos. Tango has always been able to use Phobos code, but the 
reverse does not apply.



What were the reasons for Phobos to change the license?

Phobos was mostly public domain, which has legal problems (e.g. in Japan).
The Boost license is the closest equivalent to public domain.


Re: dmd 1.058 and 2.043 release

2010-04-10 Thread Don

bearophile wrote:

Now I have tested this release a little better, and it seems to work well. I
have two things to say:

1) Bug 3911 was a mix of two different bugs. Don has fixed one of them, so
I have closed bug 3911 and opened a new, cleaned-up bug report, number
4075:


Please don't clutter the announce newsgroup with bug reports.
And in general, do NOT put multiple bugs in one report. In particular, 
it's worth saying this to everyone: if something causes a compiler 
segfault or an internal compiler error, ALWAYS put it in its own report. 
In >95% of cases it's a different bug, even if it looks the same as
something else.


Re: dmd 2.042 release

2010-03-20 Thread Don

Walter Bright wrote:
This is necessary to fix a memory corruption problem with arrays 
introduced in 2.041.



http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.042.zip

Thanks to the many people who contributed to this update!


This is actually a great release, not just an emergency bug fix.

One very minor issue: if you attempt to build Phobos on Win32 and you
don't have masm386 installed, you need to touch
src\druntime\src\rt\minit.obj.
This is because the date of minit.asm was changed, even though the file
itself was unchanged.


Re: dmd 1.057 and 2.041 release

2010-03-09 Thread Don

Steven Schveighoffer wrote:

On Tue, 09 Mar 2010 14:54:11 -0500, David Gileadi f...@bar.com wrote:


On 3/9/2010 12:44 PM, Steven Schveighoffer wrote:

On Tue, 09 Mar 2010 14:36:41 -0500, Michal Minich
michal.min...@gmail.com wrote:


assumeNoArrayReference does not express that there can be references to
the original array before the slice start. A name expressing it better,
if rather long, could be

Actually, you can have valid references up until the slice end.


assumeNoOriginArrayReferencesPastSliceEnd
assumeNoOriginArrayReferencesAfter


These are too long. As much as this is an unsafe to-be-used-with-care
function, we don't want to torture those who need it :) I prefer a name
with 1-3 terms in it.


or probably something like this:
unsafeDeletePastSlice


also a little long, and I don't like the term delete; it's not actually
deleting the memory.


As long as we're bikeshedding, maybe assumeUnreferencedAfter?


This is exactly the semantic meaning we are going for.  I'd like it to 
be shorter...


synonyms for unreferenced?

assumeUnusedAfter

Any others ideas?

-Steve

assumeSafeExpand ?
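
For the record, the name that eventually shipped in druntime for this
operation is assumeSafeAppend. A minimal sketch of the semantics being
bikeshedded here:

void main()
{
    int[] a = [1, 2, 3, 4];
    int[] b = a[0 .. 2];

    // Appending to b would normally reallocate, so that a[2 .. 4] is
    // not stomped. assumeSafeAppend asserts that nothing references
    // the elements past b's end, allowing the append to reuse the
    // original block in place.
    b.assumeSafeAppend();
    b ~= 99; // may now overwrite what was a[2]

    assert(b == [1, 2, 99]);
}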


Re: dmd 1.057 and 2.041 release

2010-03-08 Thread Don

Walter Bright wrote:
Lots of meat and potatoes here, and a cookie! (spelling checker for 
error messages)


http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.057.zip


http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.041.zip

Thanks to the many people who contributed to this update!



Bug 1914 (Array initialisation from const array yields memory trample)
was fixed, in D2 only. Can we get this into D1 as well?

To show what a huge difference this bug makes, try this test case for 
large values of N:


Executable size:

N      D 2.040    D 2.041
-----  ---------  ---------
10     266 Kb     241 Kb
100    306 Kb     241 Kb
2000   16151 Kb   257 Kb
10K    locks up   321 Kb

enum : int { N = 1000 }

struct S {
    const float[N] BIGINIT = [7];
    float[N] a = BIGINIT;
}

void main() {}


Re: dmd 1.056 and 2.040 release

2010-01-31 Thread Don

strtr wrote:

Walter Bright Wrote:


http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.056.zip


http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.040.zip

Thanks to the many people who contributed to this update!


Do you ever find new bugs while fixing others?


Yes. It's a big problem with forward references, because they can affect 
unrelated parts of the compiler. I think that's the reason that Walter's 
been slow to apply patches for forward reference bugs.

Fortunately, most other bugs aren't like that. The progress is real.


Re: dmd 1.054 and 2.038 release

2010-01-02 Thread Don

Sönke Ludwig wrote:

Am 31.12.2009 19:48, schrieb Walter Bright:

Happy New Year!

http://www.digitalmars.com/d/1.0/changelog.html
http://ftp.digitalmars.com/dmd.1.054.zip


http://www.digitalmars.com/d/2.0/changelog.html
http://ftp.digitalmars.com/dmd.2.038.zip

Many thanks to the numerous people who contributed to this update.


Great to see so many fixes that make the language much more hassle-free
to use -- especially for newcomers who hit such things for the first
time. However, I have at least one blocker problem in this release:

Because struct initializers are now disallowed for structs with
constructors (bug 3476), there is no way to use those structs as static
immutable values, as the constructors are not CTFE-processable.

(-> Error: cannot evaluate ((X __ctmp2;
) , __ctmp2).this() at compile time)

This problem has existed since struct constructors were introduced. A
quick search on Bugzilla did not return a matching bug report, only some
other issues related to struct constructors. I'll file a bug report if
no one knows of an existing one (technically this would be an
'improvement', but I think it is a really important issue).


Bug 3535.
There are still several bugs related to struct constructors.
Workaround is to use static opCall instead of a constructor.
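
A minimal sketch of that workaround (my own example, with a hypothetical
Point struct):

struct Point
{
    int x, y;

    // Instead of `this(int x, int y)`, which was not CTFE-able at the
    // time, a static opCall provides the same construction syntax:
    static Point opCall(int x, int y)
    {
        Point p;
        p.x = x;
        p.y = y;
        return p;
    }
}

// Now usable as a static immutable value, evaluated at compile time.
immutable Point origin = Point(0, 0);

void main()
{
    assert(origin.x == 0 && origin.y == 0);
}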


Re: dmd 1.054 and 2.038 release

2010-01-01 Thread Don

Moritz Warning wrote:

On Thu, 31 Dec 2009 21:22:58 +0100, grauzone wrote:


bearophile wrote:

grauzone:

But I have a problem: the compiler is either extremely slow for me, or
is stuck in an endless loop. All it does is to slowly allocate memory.
I aborted the compilation after ~ 20 minutes and 2 GB RAM allocation.
This wasn't the case with dmd 1.053, where it only took 5-10 seconds
to compile. Can anyone confirm this?

Show the code!

I was going to say "but it's hundreds of modules", but then I tried to
compile some other big hog of code: Tango.

And I found that compiling this file hangs:
http://dsource.org/projects/tango/browser/trunk/tango/core/tools/Demangler.d?rev=5248

The exact command line for this was:
dmd -c -I../tango/core -I.. -I../tango/core/vendor -release
-oftango-core-tools-Demangler-release.o ../tango/core/tools/Demangler.d

Again, could anyone confirm this?

Anyway, no time for this anymore, it's going to be 2010 soon here.


Bye,
bearophile

Someone reported the regression already:

http://d.puremagic.com/issues/show_bug.cgi?id=3663


It's caused by the patch for bug 400.


Re: dmd 1.054 and 2.038 release

2010-01-01 Thread Don

Moritz Warning wrote:

On Fri, 01 Jan 2010 22:35:12 +0000, Moritz Warning wrote:


On Fri, 01 Jan 2010 19:31:49 +0100, Don wrote:


snip

Someone reported the regression already:

http://d.puremagic.com/issues/show_bug.cgi?id=3663

It's caused by the patch for bug 400.

Thanks, that fixed it.

But now there is another problem/regression:

tango/net/device/Berkeley.d(1065): Error: enum member
tango.net.device.Berkeley.IPv4Address.ADDR_ANY conflicts with enum
member tango.net.device.Berkeley.IPv4Address.ADDR_ANY at
tango/net/device/Berkeley.d(1065)
tango/net/device/Berkeley.d(1066): Error: enum member
tango.net.device.Berkeley.IPv4Address.ADDR_NONE conflicts with enum
member tango.net.device.Berkeley.IPv4Address.ADDR_NONE at
tango/net/device/Berkeley.d(1066)
tango/net/device/Berkeley.d(1067): Error: enum member
tango.net.device.Berkeley.IPv4Address.PORT_ANY conflicts with enum
member tango.net.device.Berkeley.IPv4Address.PORT_ANY at
tango/net/device/Berkeley.d(1067)


I've made a ticket:
http://d.puremagic.com/issues/show_bug.cgi?id=3664

(tested with original dmd 1.054)

That's also caused by the other half of the patch for 400, in class.c.


Re: dmd 1.054 and 2.038 release

2009-12-31 Thread Don

bearophile wrote:

Walter Bright:

Happy New Year!


Happy end of the year to you too!
Is this the last release of 2009? ;-)

This is funny:
min(x, y) = 10;  // sets x to 10

This looks by far like the most useful improvement/change of this DMD release.
I've already tried it and I like it a lot; thanks to Don and to you!

Bugzilla 2816: Sudden-death static assert is not very useful


I can't take credit for that. It comes from the LDC guys; I just
enhanced it slightly.
There are 26 Bugzilla votes fixed in this release, which is probably a
record. (I'm assuming bug 1961 ('scoped const') is considered to be fixed.)
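
The min(x, y) = 10 trick mentioned above relies on D2's ref returns; a
minimal sketch (my own illustration, not the actual Phobos min):

import std.stdio;

// With a ref return, the call expression is an lvalue aliasing
// whichever argument was smaller, so it can be assigned to.
ref int min(ref int a, ref int b)
{
    return a < b ? a : b;
}

void main()
{
    int x = 1, y = 2;
    min(x, y) = 10;     // x is the smaller one, so x becomes 10
    writeln(x, " ", y); // prints: 10 2
}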


Re: D in the ix magazine about programming today

2009-12-30 Thread Don

dsimcha wrote:

== Quote from retard (r...@tard.com.invalid)'s article

Quite a few young Haskell experts started with Haskell when they were 9-12
years old. Having English as your native language and academically
educated parents has a tremendous effect on e.g. vocabulary at that
age. Some slumdog might only know ~3,000 words at that age; a child of a
highly educated family, perhaps 25,000 words.
I'm not saying that everyone should learn Haskell, but I know it's
possible to learn stuff like the Curry-Howard isomorphism, hylomorphisms,
monads, monad transformers, comonads, and analysing the amortized costs of
algorithms at that age. It's just dumb to assume that young people can't
learn something as complex as static types!
I remember when I was that young, I used to play with QBasic. I knew very
well why 'DEFINT A-Z' made all programs faster, and I knew what IEEE
floating point looked like at the bit level (well, at least mostly). I knew
how to do blits in graphics programming, since I had already done them in
assembly on the C-64. Had Haskell and all the modern tools been available
then as they are today, I would probably have spent more time on them.


Yes, but you were probably exceptionally talented and/or motivated.  From
experiences I have had getting friends through programming 101, I believe that,
when people teach programming, they tend to take for granted some very basic
concepts such as variable assignment, flow control and nesting.  The first
programming language should be one that strikes a balance between allowing the
teaching of these basic concepts on the one hand and not being a completely
useless toy language on the other.

IMHO even Python's strong but dynamic typing is too complex for someone who has
literally never programmed before.  I think weak typing a la PHP or Visual 
Basic,
so that the student doesn't even have to think about types until he/she
understands variable assignment and flow control and has actually experienced 
the
feeling of writing simple but useful programs, is the best way to start off.  
Good
programming practices are useless if you end up totally lost on the variables 
and
flow control level.  Furthermore, I don't think good practices and well 
structured
code can truly be appreciated until you've done it wrong first.  Lastly, to most
absolute beginners automatic conversion, e.g. from strings to numbers, probably
seems like the least surprising behavior, since that is how it works in Excel, 
etc.


Both Pascal and the original BASIC were strongly typed, and widely used 
for beginners.

