Re: Question re specific implicit type coercion

2021-10-18 Thread Don Allen via Digitalmars-d-learn

On Monday, 18 October 2021 at 15:58:35 UTC, Don Allen wrote:

On Monday, 18 October 2021 at 15:34:45 UTC, Paul Backus wrote:

On Monday, 18 October 2021 at 15:04:11 UTC, Don Allen wrote:
Section 12.17 of the Language Reference does not indicate any 
circumstance in which a dynamic array, which the literal is, 
is implicitly coerced to a pointer.


This is a special case for string literals, covered in 
[section 10.23.7][1].


[1]: https://dlang.org/spec/expression.html#string_literals


Yes. Thank you.


I do think it would be helpful, even essential, to have a link in 
Section 12.17 to the section you cite. The Language Reference is 
just that, a reference, not a tutorial, and it should not be a 
requirement to read it from cover to cover. Strings and string 
literals are arrays and 12.17 purports to cover the implicit type 
conversions among the various array types, but currently leaves 
out this special case and should not, in my opinion. I will file 
an issue making this suggestion.


Re: Question re specific implicit type coercion

2021-10-18 Thread Don Allen via Digitalmars-d-learn

On Monday, 18 October 2021 at 15:34:45 UTC, Paul Backus wrote:

On Monday, 18 October 2021 at 15:04:11 UTC, Don Allen wrote:
Section 12.17 of the Language Reference does not indicate any 
circumstance in which a dynamic array, which the literal is, 
is implicitly coerced to a pointer.


This is a special case for string literals, covered in [section 
10.23.7][1].


[1]: https://dlang.org/spec/expression.html#string_literals


Yes. Thank you.


Question re specific implicit type coercion

2021-10-18 Thread Don Allen via Digitalmars-d-learn

I am calling a C function, described in D as

extern (C) GtkWidget* gtk_menu_item_new_with_label(immutable(char)*);


I call it like this

accounts_menu_item = gtk_menu_item_new_with_label("New account (Ctrl-n)");


The type of the string literal in the call is immutable(char)[]. 
The type of the parameter is a pointer to immutable characters.


This setup works.

My question is about the documentation vs. what is really going 
on here.


Section 12.17 of the Language Reference does not indicate any 
circumstance in which a dynamic array, which the literal is, is 
implicitly coerced to a pointer. I realize that this is a special 
case -- the callee is a C function -- that might not be covered 
by this section. So I looked at Section 32.3, Chapter 32 being 
"Interfacing to C". 32.3 contains a table of correspondences 
between D and C types. The entry for the D type T[] indicates no 
correspondence in C.


So at least my reading of the documentation suggests that my code 
should not compile. And yet it does and works, which leads me to 
believe there is a documentation error that ought to be reported.


Before doing so, I decided to send this message so those of you 
who know D a lot better than I do can tell me what I've missed or 
gotten wrong, if that is the case.


Thanks --
/Don
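[The answer upthread points to the string-literal special case in section 10.23.7 of the spec. A minimal sketch of that behavior, using core.stdc.string.strlen as a stand-in for the C function:]

```d
import core.stdc.string : strlen;
import std.string : toStringz;

void main()
{
    // A string *literal* implicitly converts to immutable(char)*;
    // the compiler guarantees a trailing '\0', so C functions accept it.
    immutable(char)* p = "New account (Ctrl-n)";
    assert(strlen(p) == 20);

    // An ordinary string (dynamic array) does NOT convert implicitly;
    // use std.string.toStringz to obtain a zero-terminated pointer.
    string s = "hello";
    immutable(char)* q = toStringz(s);
    assert(strlen(q) == 5);
}
```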


Re: Metaprogramming with D

2020-06-06 Thread Don via Digitalmars-d-learn

On Sunday, 7 June 2020 at 00:45:37 UTC, Ali Çehreli wrote:
False. And again, even if so, that's not because of D, but 
because of humans. Can you imagine a CTO, say, in Silicon 
Valley to have guts to bring D instead of C++? With C++, the 
CTO will never be blamed; but D, he or she can easily be blamed 
upon failure. Not because of the technologies but because of 
politics.


In other words, "Nobody ever got fired for choosing [C++]."



Re: What does this compile-time error mean?

2013-02-25 Thread Don
On Sunday, 24 February 2013 at 19:13:47 UTC, Jonathan M Davis 
wrote:

On Sunday, February 24, 2013 14:05:26 Timon Gehr wrote:

On 02/24/2013 01:59 PM, Maxim Fomin wrote:
> On Sunday, 24 February 2013 at 09:00:17 UTC, Jonathan M Davis wrote:
>> Because main_cont is a module-level variable, it must be initialized
>> with a value at compile time. Classes can be used at compile time (at
>> least some of the time), but they cannot stick around between compile
>> time and runtime, meaning that you could potentially use them in a
>> CTFE function, but you can't initialize a module-level or static
>> variable (or enum) with them, and you're attempting to initialize
>> main_cont with a RedBlackTree, which is a class. It won't work.
>>
>> - Jonathan M Davis
> 
> I would say that it is arbitrary restriction

>
>...

Yes, IIRC Don once stated that it is a problem with DMD's 
architecture.


Most restrictions in CTFE are a limitation of the current 
implementation
rather than being something that's literally impossible to do 
(though some
like the lack of source code aren't). In the case of a class, I 
believe that
it would have to basically serialize the class at compile time 
and then
deserialize it at runtime to do it. That's certainly possible 
to do in theory,
but it's well beyond what the current CTFE implementation can 
do.


- Jonathan M Davis


Yeah, this one is a limitation of the runtime, not of CTFE. CTFE 
has no problem with it.
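[A sketch of the restriction being discussed, as it stood at the time (class C is hypothetical):]

```d
// Classes work fine *inside* CTFE...
class C { int x = 7; }

int viaCtfe()
{
    auto c = new C;    // allocating a class during CTFE is fine
    return c.x;        // ...as long as only a plain value crosses
}                      // over to runtime, not the object itself

enum ok = viaCtfe();
static assert(ok == 7);

// What the thread says could NOT be done: initializing a module-level
// variable with the object itself, since the instance cannot persist
// from compile time to runtime.
// immutable C global = new C;   // rejected by compilers of that era

void main() {}
```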




Re: Why is the 'protected' attribute considered useless?

2013-01-30 Thread Don

On Wednesday, 30 January 2013 at 03:38:39 UTC, Chad Joan wrote:
I've read more than once now that 'protected' is considered 
useless in D.  Why is this?


I've never heard that before. Where have you read that?
Several people, including me, have said that 'package' is useless 
-- could that be what you mean? (There are many discussions in 
the Java community about how useless it is in Java, D copied the 
same failed design).


It's also true that 'protected' is in practice not very useful in 
D, as long as bug 314 remains unfixed. But I wouldn't describe it 
as useless.


Re: Why is null lowercase?

2013-01-25 Thread Don

On Friday, 25 January 2013 at 01:17:44 UTC, Ali Çehreli wrote:

On 01/24/2013 12:42 PM, Matthew Caron wrote:

>> for not null checks
>>
>> if ( ptr !is null) ...
>
> And too much perl has me wanting to write:
>
> if (ptr is not null)

IIRC, the !is operator is thanks to bearophile.


No, it's from 2002 (well, it was !==, renamed to !is in 2005).
Bearophile only joined us about the time D2 began, in late 2007.
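[For readers unfamiliar with the operator under discussion, a small sketch:]

```d
void main()
{
    int x = 5;
    int* p = null;
    assert(p is null);      // identity comparison
    p = &x;
    assert(p !is null);     // the !is operator (spelled !== before 2005)

    // For class references, 'is' compares identity; '==' calls opEquals.
    Object a = new Object;
    Object b = a;
    assert(a is b);
}
```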


Re: Compilable Recursive Data Structure ( was: Recursive data structure using template won't compile)

2012-11-13 Thread Don Clugston

On 13/11/12 06:51, Rob T wrote:

On Monday, 12 November 2012 at 14:28:53 UTC, Andrej Mitrovic wrote:

On 11/12/12, Andrej Mitrovic  wrote:

On 11/12/12, Don Clugston  wrote:

Yeah. Though note that 1000 bug reports are from bearophile.


Actually only around 300 remain open:
http://d.puremagic.com/issues/buglist.cgi?query_format=advanced&emailreporter2=1&emailtype2=substring&order=Importance&bug_status=UNCONFIRMED&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&bug_status=VERIFIED&email2=bearophile




Oh wait, that's only for DMD. It's 559 in total:
http://d.puremagic.com/issues/buglist.cgi?query_format=advanced&emailreporter2=1&emailtype2=substring&order=Importance&bug_status=UNCONFIRMED&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&bug_status=VERIFIED&email2=bearophile



Issue 8990 and 6969, both related to this thread and unresolved, are not
in the list, so I suspect there's a lot more missing too.

PS: I could not figure out how to make a useful report using that bug
report tool either.

--rt


I recommend deskzilla lite. D is on its list of supported open-source 
projects. It maintains a local copy of the entire bugzilla database, so 
you're not restricted to the slow and horrible html interface.







Re: Compilable Recursive Data Structure ( was: Recursive data structure using template won't compile)

2012-11-12 Thread Don Clugston

On 10/11/12 08:53, Rob T wrote:

On Saturday, 10 November 2012 at 06:09:41 UTC, Nick Sabalausky wrote:

I've gone ahead and filed a minimized test case, and also included your
workaround:
http://d.puremagic.com/issues/show_bug.cgi?id=8990

I didn't make that one struct nested since that's not needed to
reproduce the error.


Thanks for filing it.

Looking at the bug reports, there's 2000+ unresolved? Yikes.

--rt


Yeah. Though note that 1000 bug reports are from bearophile.


Re: vestigial delete in language spec

2012-11-02 Thread Don Clugston

On 01/11/12 22:21, Dan wrote:

TDPL states
--
However, unlike in C++, clear does not dispose of the object’s
own memory and there is no delete operator. (D used to have a
delete operator, but it was deprecated.) You still can free
memory manually if you really, really know what you’re doing by
calling the function GC.free() found in the module core.memory.
--
The language spec has this example from the section on Struct
Postblits:
--
struct S {
int[] a;// array is privately owned by this instance
this(this) {
  a = a.dup;
}
~this() {
  delete a;
}
}
--

Is the delete call, then per TDPL not necessary? Is it harmful or
harmless?

Also, are there any guidelines for using and interpreting the output of
valgrind on a D executable?

Thanks
Dan


You'll probably have trouble getting much out of valgrind, because it 
doesn't support 80-bit floating instructions, unfortunately.
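[To answer the question in modern terms: the delete in that spec example is unnecessary (the GC owns the array) and, with delete deprecated, should simply be dropped. A hedged sketch of the corrected struct:]

```d
struct S
{
    int[] a;                  // array privately owned by this instance
    this(this) { a = a.dup; }
    // No destructor needed: the GC reclaims the array.
    // If you really must free manually -- and are sure no other slice
    // references the memory -- the modern spelling is:
    //     import core.memory : GC;
    //     GC.free(a.ptr);
}

void main()
{
    auto s1 = S([1, 2, 3]);
    auto s2 = s1;             // postblit dups the array
    s2.a[0] = 99;
    assert(s1.a[0] == 1);     // copies stay independent
}
```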






Re: Copying with immutable arrays

2012-10-30 Thread Don Clugston

On 29/10/12 07:19, Ali Çehreli wrote:

On 10/28/2012 02:37 AM, Tobias Pankrath wrote:
 > the struct
 > SwA from above does neither correspond to SA nor to SB, it's imo more
 > like SC:
 >
 > struct SC {
 > immutable(int)* i;
 > }

Just to confirm, the above indeed works:

struct SC {
 immutable(int)* i;
}

void main()
{
 immutable(SC)[] arr1;
 SC[] arr2 = arr1.dup;// compiles
}

 > Now a copy would do no harm, everything that was immutable in the source
 > and is still accessable from the copy is immutable, too. Any reason
 > (despite of implemenational issues) that this is not how it works?

Getting back to the original code, the issue boils down to whether we
can copy imutable(string[]) to string[]:

import std.stdio;

struct SwA {
 string[] strings;
}

void main()
{
 immutable(SwA)[] arr1;
 writeln(typeid(arr1[0].strings));
}

The program prints the following:

immutable(immutable(immutable(char)[])[])

Translating the innermost definition as string:

immutable(immutable(string)[])

Let's remove the struct and look at a variable the same type as the member:

 immutable(string[]) imm = [ "a", "b" ];
 writeln(typeid(imm));

The typeid is the same:

immutable(immutable(immutable(char)[])[])

So we can concentrate on 'imm' for this exercise. This is the same
compilation error:

 immutable(string[]) imm = [ "aaa", "bbb" ];
 string[] mut = imm;   // <-- compilation ERROR

If that compiled, then both 'imm' and 'mut' would be providing access to
the same set of strings. But the problem is, 'mut' could replace those
strings (note that it could not modify the characters of those strings,
but it could replace the whole string):

 mut[0] = "hello";

That would affect 'imm' as well. ('imm' is the equivalent of SwA.strings
from your original code.)

Ali


Awesome answer, it's these kinds of responses that make the D community 
so great.






Re: __traits(compiles,...) <=> ? is(typeof(...))

2012-10-30 Thread Don Clugston

On 29/10/12 12:03, Jonathan M Davis wrote:

On Monday, October 29, 2012 11:42:59 Zhenya wrote:

Hi!

Tell me please,in this code first and second static if,are these
equivalent?
with arg = 1, __traits(compiles,"check(arg);") = true,
is(typeof(check(arg))) = false.


In principle, is(typeof(code)) checks whether the code in there is
syntatically and semantically valid but does _not_ check whether the code
actually compiles. For instance, it checks for the existence of the symbols
that you use in it, but it doesn't check whether you can actually use the
symbol (e.g. it's private in another module).


Not even that. It checks if the expression has a type. That's all.
The expression has to be syntactically valid or it won't compile at all. 
But inside is(typeof()) it is allowed to be semantically invalid.


I started using is(typeof()) as a check for compilability, and other 
people used the idea as well. But it only works if you can convert 
"compilable" into "has a type".


Re: long compile time question

2012-10-24 Thread Don Clugston

On 24/10/12 17:39, thedeemon wrote:

On Wednesday, 24 October 2012 at 03:50:47 UTC, Dan wrote:

The following takes nearly three minutes to compile.
The culprit is the line bar ~= B();
What is wrong with this?

Thanks,
Dan

struct B {
  const size_t SIZE = 1024*64;
  int[SIZE] x;
}

void main() {
  B[] barr;
  barr ~= B();
}
-


The code DMD generates for initializing the struct does not use loops,
so it's
xor ecx, ecx
mov [eax], ecx
mov [eax+4], ecx
mov [eax+8], ecx
mov [eax+0Ch], ecx
mov [eax+10h], ecx
mov [eax+14h], ecx
mov [eax+18h], ecx
mov [eax+1Ch], ecx
mov [eax+20h], ecx
mov [eax+24h], ecx
mov [eax+28h], ecx
mov [eax+2Ch], ecx
mov [eax+30h], ecx
mov [eax+34h], ecx
mov [eax+38h], ecx
...

So your code creates a lot of work for the compiler.


That's incredibly horrible, please add to bugzilla.




Re: Why are scope variables being deprecated?

2012-10-11 Thread Don Clugston

On 11/10/12 02:30, Piotr Szturmaj wrote:

Jonathan M Davis wrote:

On Thursday, October 11, 2012 01:24:40 Piotr Szturmaj wrote:

Could you give me an example of preventing closure allocation? I think I
knew one but I don't remember now...


Any time that a delegate parameter is marked as scope, the compiler
will skip
allocating a closure. Otherwise, it has to copy the stack from the
caller onto
the heap to create a closure so that the delegate will continue to
work once
the caller has completed (e.g. if the delegate were saved for a
callback and
then called way later in the program). Otherwise, it would refer to an
invalid
stack and really nasty things would happen when the delegate was
called later.


By marking the delegate as scope, you're telling the compiler that it
will not
escape the function that it's being passed to, so the compiler then
knows that
the stack that it refers to will be valid for the duration of that
delegate's
existence, so it knows that a closure is not required, so it doesn't
allocate
it, gaining you efficiency.


Thanks, that's clear now, but I found a bug:

__gshared void delegate() global;

void dgtest(scope void delegate() dg)
{
 global = dg; // compiles
}

void dguse()
{
 int i;
 dgtest({ writeln(i++); });
}

I guess it's a known one.


Looks like bug 5270?
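[A minimal sketch of the allocation-avoiding pattern Jonathan describes (the each function is hypothetical):]

```d
// 'scope' promises the delegate won't outlive the call, so the
// compiler can skip heap-allocating a closure for the caller's stack.
void each(scope void delegate(int) dg)
{
    foreach (i; 0 .. 3)
        dg(i);
}

void main()
{
    int sum;                      // captured from the stack, no closure
    each((int i) { sum += i; });
    assert(sum == 3);             // 0 + 1 + 2
}
```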





Re: "Best" way of passing in a big struct to a function?

2012-10-10 Thread Don Clugston

On 10/10/12 09:12, thedeemon wrote:

On Wednesday, 10 October 2012 at 07:28:55 UTC, Jonathan M Davis wrote:

Making sure that the aa has been properly initialized before passing
it to a function (which would mean giving it at least one value) would
make the ref completely unnecessary.

- Jonathan M Davis


Ah, thanks a lot! This behavior of a fresh AA being null and then
silently converted to a non-null when being filled confused me.


Yes, it's confusing and annoying.
This is something in the language that we keep talking about fixing, but 
to date it hasn't happened.


Re: Workarounds for forward reference bugs

2012-10-05 Thread Don Clugston

On 05/10/12 16:33, simendsjo wrote:

Are there any known workarounds for forward reference bugs?
I see 80 bugs is filed, but I don't want to read all of them to find any
workarounds. I cannot find anything on the wiki regarding this.

So.. Any good ideas how to get around this?


Those "forward reference" bugs have very little in common with one 
another, other than the name. It's like "regressions".

You'll need to be a little more specific.


Re: Is it possible to force CTFE?

2012-10-01 Thread Don Clugston

On 27/09/12 15:01, bearophile wrote:

Tommi:


2) Is it possible to specialize a function based on whether or not the
parameter that was passed in is a compile time constant?


I am interested in this since some years. I think it's useful, but I
don't know if it can be implemented. I don't remember people discussing
about this much.

Bye,
bearophile


It has been discussed very often, especially around the time that CTFE 
was first introduced. We never came up with a solution.




Re: About std.ascii.toLower

2012-09-27 Thread Don Clugston

On 20/09/12 18:57, Jonathan M Davis wrote:

On Thursday, September 20, 2012 18:35:21 bearophile wrote:

monarch_dodra:

It's not, it only *operates* on ASCII, but non ascii is still a



legal arg:

Then maybe std.ascii.toLower needs a precondition that constrains it 
to just ASCII inputs, so it's free to return a char.


Goodness no.

1. Operating on a char is almost always the wrong thing to do. If you really
want to do that, then cast. It should _not_ be encouraged.

2. It would be disastrous if std.ascii's functions didn't work on unicode.
Right now, you can use them with ranges on strings which are unicode, which
can be very useful.

I grant you that that's more obvious with something like isDigit than 
toLower, but regardless, std.ascii is designed such that its functions 
will all operate on unicode strings. It just doesn't alter unicode 
characters and returns false for them with any of the query functions.


Are there any use cases of toLower() on non-ASCII strings?
Seriously? I think it's _always_ a bug.

At the very least that function should have a name like 
toLowerIgnoringNonAscii() to indicate that it is performing a really, 
really foul operation.


The fact that toLower("Ü") doesn't generate an error, but doesn't return 
"ü" is a wrong-code bug IMHO. It isn't any better than if it returned a 
random garbage character (eg, it's OK in my opinion for ASCII toLower to 
consider only the lower 7 bits).


OTOH I can see some value in a cased ASCII vs unicode comparison.
ie, given an ASCII string and a unicode string, do a case-insensitive 
comparison, eg look for

"" inside "öähaøſ€đ@ſŋħŋ€ł¶"


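[The behavior being debated, in code (std.uni supplies the real Unicode mapping):]

```d
import std.ascii : asciiLower = toLower;
import std.uni : uniLower = toLower;

void main()
{
    assert(asciiLower('A') == 'a');

    // std.ascii.toLower passes non-ASCII through unchanged --
    // the silent behavior Don objects to:
    assert(asciiLower('Ü') == 'Ü');

    // Proper Unicode case mapping lives in std.uni:
    assert(uniLower("Ü") == "ü");
}
```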

Re: object.error: Privileged Instruction

2012-09-26 Thread Don Clugston

On 22/09/12 21:49, Jonathan M Davis wrote:

On Saturday, September 22, 2012 21:19:27 Maxim Fomin wrote:

Privilege instruction is an assembly instruction which can be
executed only at a certain executive process context, typically
os kernel. AFAIK assert(false) was claimed to be implemented by
dmd as a halt instruction, which is privileged one.

However, compiled code shows that dmd generates int 3 instruction
for assert(false) statement and 61_6F_65_75 which is binary
representation of "aoeu" for assert(false, "aoeu") statement and
the latter is interpreted as privileged i/o instruction.


It's a normal assertion without -release. With -release, it's a halt
instruction on Linux but IIRC it's something slightly different (albeit
similar) on Windows, though it might be halt there too.

- Jonathan M Davis



I implemented the code runtime code that does it, at least on Windows. 
You get much better diagnostics on Windows.
IMHO it is a Linux misfeature, they conflate a couple of unrelated 
hardware exceptions together into one signal, making it hard to identify 
which it was.




Re: function is not function

2012-09-26 Thread Don Clugston

On 21/09/12 21:59, Ellery Newcomer wrote:

solution is to use std.traits, but can someone explain this to me?

import std.stdio;

void main() {
 auto a = {
 writeln("hi");
 };
 pragma(msg, typeof(a)); // void function()
 pragma(msg, is(typeof(a) == delegate)); // nope!
 pragma(msg, is(typeof(a) == function)); // nope!
}



The 'function' keyword is an ugly wart in the language.  In

void function()

'function' means 'function pointer'. But in

is (X == function)

'function' means 'function'.

Which is actually pretty much useless. You always want 'function 
pointer'. This is the only case where "function type" still exists in 
the language.
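[The wart in code form, a sketch:]

```d
void main()
{
    auto a = { return 42; };   // no captures: a function *pointer*

    static assert(!is(typeof(a) == delegate));
    static assert(!is(typeof(a) == function));  // pointer, not function type

    // Dereferencing the pointer reaches the actual function type --
    // the one place the bare "function type" still appears:
    static assert(is(typeof(*a) == function));
}
```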





Re: filter out compile error messages involving _error_

2012-09-10 Thread Don Clugston

On 10/09/12 02:31, Jonathan M Davis wrote:

On Monday, September 10, 2012 02:16:19 Timon Gehr wrote:

Don has expressed the desire to weed those out completely.


If he can do it in a way that leaves in all of the necessary information, then
great, but you need to be able to know what the instantiation chain was.

- Jonathan M Davis


Yes, that's the idea. It's pretty much working for CTFE now (you get a 
complete call stack, with no spurious error messages).




Re: bigint <-> python long

2012-09-06 Thread Don Clugston

On 05/09/12 21:23, Paul D. Anderson wrote:

On Wednesday, 5 September 2012 at 18:13:40 UTC, Ellery Newcomer wrote:

Hey.

Investigating the possibility of providing this conversion in pyd.

Python provides an api for accessing the underlying bytes.

std.bigint seemingly doesn't. Am I missing anything?


No, I don't believe so. AFAIK there is no public access to the
underlying array, but I think it is a good idea.

I suspect the reason for not disclosing the details is to disallow
anyone putting the data into an invalid state. But read-only access
would be safe.


No, it's just not disclosed because I didn't know the best way to do it.
I didn't want to put something in unless I was sure it was correct.
(And a key part of that, is what is required to implement BigFloat).



Re: How to have strongly typed numerical values?

2012-09-05 Thread Don Clugston

On 05/09/12 03:42, bearophile wrote:

Nicholas Londey:


for example degrees west and kilograms such that they cannot be
accidentally mixed in an expression.


Using the static typing to avoid similar bugs is the smart thing to do :-)


I'd be interested to know if that idea is ever used in real code. I 
mean, it's a classic trendy template toy, but does anyone actually use it?


I say this because I've done a lot of physics calculation involving 
multiple complicated units, but never seen a use for this sort of thing.
In my experience, problems involving units are easily detectable (two 
test cases will catch them all).
The most insidious bugs are things like when you have used constants at 
25'C instead of 20'C, or when you have a sign wrong.
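[For concreteness, the "trendy template toy" in question looks roughly like this (a sketch; all names hypothetical):]

```d
// One wrapper type per unit: mixing units becomes a type error.
struct Quantity(string unit)
{
    double value;
    Quantity opBinary(string op : "+")(Quantity rhs)
    {
        return Quantity(value + rhs.value);
    }
}

alias Kilograms   = Quantity!"kg";
alias DegreesWest = Quantity!"degW";

void main()
{
    auto mass = Kilograms(70) + Kilograms(5);
    assert(mass.value == 75);

    // Adding kilograms to degrees does not compile:
    static assert(!__traits(compiles, Kilograms(1) + DegreesWest(2)));
}
```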


Re: CTFE question

2012-09-03 Thread Don Clugston

On 28/08/12 19:40, Philippe Sigaud wrote:

On Tue, Aug 28, 2012 at 2:07 PM, Chris Cain  wrote:

On Tuesday, 28 August 2012 at 11:39:20 UTC, Danny Arends wrote:


Ahhh I understand...

As a follow up, is it then possible to 'track' filling a
large enum / immutable on compile time by outputting a msg
every for ?

I'm generating rotation matrices for yaw, pitch and roll
at compile time which can take a long time depending on
how fine grained I create them.



I'm pretty sure there isn't. However, if you're just trying to develop/test
your algorithm, you could write a program that runs it as a normal function
(and just use writeln) as you develop it. After it's done, you remove the
writelns, mark the function as pure and it should work exactly the same in
CTFE.


Good advice, except beware of using ++ and --: they don't work at
compile-time. I'm regularly caught unaware by this, particularly while
looping.


Really? That's scary. Is there a bug report for this?


Re: BitArray/BitFields - Review

2012-07-31 Thread Don Clugston

On 29/07/12 23:36, bearophile wrote:

Era Scarecrow:


>>> Another commonly needed operation is a very fast bit count. There 
are very refined algorithms to do this.



 Likely similar to the hamming weight table mentioned in TDPL.
Combined with the canUseBulk I think I could make it fairly fast.


There is lot of literature about implementing this operation
efficiently. For the first implementation a moderately fast (and short)
code is probably enough. Later faster versions of this operation will go
in Phobos, coming from papers.


See bug 4717. On x86, even on 32 bits you can get close to 1 cycle per 
byte, on 64 bits you can do much better.


Re: FYI my experience with D' version

2012-07-30 Thread Don Clugston

On 30/07/12 14:32, Jacob Carlborg wrote:

On 2012-07-30 12:30, torhu wrote:


version is good for global options that you set with -version on the
command line.  And can also be used internally in a module, but doesn't
work across modules.  But it seems you have discovered this the hard way
already.

I think there was a discussion about this a few years ago, Walter did it
this way on purpose.  Can't remember the details, though.


He probably wants to avoid the C macro hell.




IIRC it's because version identifiers are global.

__
module b;
version = CoolStuff;
__

module a;
import b;
version (X86)
{
   version = CoolStuff;
}

version(CoolStuff)
{
   // Looks as though this is only true on X86.
   // But because module b used the same name, it's actually true always.
}
__

These types of problems would be avoided if we used the one-definition 
rule for version statements, bugzilla 7417.



Re: cast from void[] to ubyte[] in ctfe

2012-07-16 Thread Don Clugston

On 13/07/12 12:52, Johannes Pfau wrote:

Am Fri, 13 Jul 2012 11:53:07 +0200
schrieb Don Clugston :


On 13/07/12 11:16, Johannes Pfau wrote:

Casting from void[] to ubyte[] is currently not allowed in CTFE. Is
there a special reason for this? I don't see how this cast can be
dangerous?


CTFE doesn't allow ANY form of reinterpret cast, apart from
signed<->unsigned. In particular, you can't do anything in CTFE which
exposes endianness.

It might let you cast from ubyte[] to void[] and then back to ubyte[]
or byte[], but that would be all.


So that's a deliberate decision and won't change?
I guess it's a safety measure as the ctfe and runtime endianness could
differ?


Yes.



Anyway, I can understand that reasoning but it also means that
the new std.hash could only be used with raw ubyte[] arrays and it
wouldn't be possible to generate the CRC/SHA1/MD5 etc sum of e.g. a
string in ctfe. (Which might make sense for most types as the result
could really differ depending on endianness, but it shouldn't matter for
UTF8 strings, right?)

Maybe I can special case CTFE so that at least UTF8 strings work.

BTW: casting from void[][] to ubyte[][] seems to work. I guess this is
only an oversight and nothing I could use as a workaround?


Probably a bug.
But you can convert from char[] to byte[]/ubyte[]. That's OK, it doesn't 
depend on endianness.
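[Don's char[]-to-ubyte[] escape hatch, sketched:]

```d
// char[] -> ubyte[] involves no endianness, so CTFE permits it,
// unlike the rejected void[] -> ubyte[] reinterpret cast.
ubyte sumBytes(string s)
{
    auto bytes = cast(immutable(ubyte)[]) s;
    ubyte total = 0;
    foreach (b; bytes)
        total += b;
    return total;
}

enum checksum = sumBytes("ab");   // runs during compilation
static assert(checksum == 195);   // 'a' (97) + 'b' (98)

void main() {}
```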


Re: cast from void[] to ubyte[] in ctfe

2012-07-13 Thread Don Clugston

On 13/07/12 11:16, Johannes Pfau wrote:

Casting from void[] to ubyte[] is currently not allowed in CTFE. Is
there a special reason for this? I don't see how this cast can be
dangerous?


CTFE doesn't allow ANY form of reinterpret cast, apart from 
signed<->unsigned. In particular, you can't do anything in CTFE which 
exposes endianness.


It might let you cast from ubyte[] to void[] and then back to ubyte[] or 
byte[], but that would be all.


Re: stdarg x86_64 problems...

2012-07-12 Thread Don Clugston

On 12/07/12 11:12, John Colvin wrote:

When I compile the following code with -m32 and -m64 i get a totally
different result, the documentation suggests that they should be the
same...

import core.stdc.stdarg, std.stdio;

void main() {
 foo(0,5,4,3);
}

void foo(int dummy, ...) {
 va_list ap;

 for(int i; i<10; ++i) {
 version(X86_64) {
 va_start(ap, __va_argsave);
 }
 else version(X86) {
 va_start(ap, dummy);
 }
 else
 static assert(false, "Unsupported platform");

 int tmp;
 va_arg!(int)(ap,tmp);
 writeln(ap," ", tmp);
 }
}

when compiled with -m32 I get:

FF960278 5
FF960278 5
FF960278 5
FF960278 5
FF960278 5

and with -m64 I get:

7FFFCDF941D0 5
7FFFCDF941D0 4
7FFFCDF941D0 3
7FFFCDF941D0 38
7FFFCDF941D0 -839302560

(the end stuff is garbage, different every time)

I'm uncertain, even after looking over the stdarg src, why this would
happen. The correct output is all 5s obviously.


Known bug, already fixed in git for a couple of months.




Re: A little story

2012-06-26 Thread Don Clugston

On 25/06/12 14:24, bearophile wrote:

Dmitry Olshansky:


Except for the fact, that someone has to implement it.


I am not seeing one of the posts of this thread. So I'll answer here.

The good thing regarding the run-time overflow integral tests is that
they are already implemented and available as efficient compiler
intrinsics in both GCC and LLVM back-ends. It's just a matter of using
them (and introducing the compiler switch and some kind of pragma syntax).

Bye,
bearophile


Bearophile, haven't you ever read that paper on integer overflow, which 
you keep posting to the newsgroup???


It clearly demonstrates that it is NOT POSSIBLE to implement integer 
overflow checking in a C-family language. Valid, correct, code which 
depends on integer overflow is very, very common (when overflow occurs, 
it's more likely to be correct than incorrect).


I don't think you could do it without introducing a no-overflow integer 
type. The compiler just doesn't have enough information.




Re: floats default to NaN... why?

2012-06-05 Thread Don Clugston

On 14/04/12 16:52, F i L wrote:

On Saturday, 14 April 2012 at 10:38:45 UTC, Silveri wrote:

On Saturday, 14 April 2012 at 07:52:51 UTC, F i L wrote:

On Saturday, 14 April 2012 at 06:43:11 UTC, Manfred Nowak wrote:

F i L wrote:

4) use hardware signalling to overcome some of the limitations
imposed by 3).


4) I have no idea what you just said... :)


On Saturday, 14 April 2012 at 07:58:44 UTC, F i L wrote:

That's interesting, but what effect does appending an invalid char to
a valid one have? Does the resulting string end up being "NaS" (Not a
String)? Cause if not, I'm not sure that's a fair comparison.


The initialization values chosen are also determined by the underlying
hardware implementation of the type. Signalling NANs
(http://en.wikipedia.org/wiki/NaN#Signaling_NaN) can be used with
floats because they are implemented by the CPU, but in the case of
integers or strings their aren't really equivalent values.


I'm sure the hardware can just as easily signal zeros.


It can't.


Re: pure functions calling impure functions at compile-time

2012-05-24 Thread Don Clugston

On 23/05/12 11:41, bearophile wrote:

Simen Kjaeraas:


Should this be filed as a bug, or is the plan that only pure functions be
ctfe-able? (or has someone already filed it, perhaps)


It's already in Bugzilla, see issue 7994 and 6169.


It's just happening because the purity checking is currently being done 
in a very unsophisticated way.



But I think there is a semantic hole in some of the discussions about
this problem. Is a future compile-time JIT allowed to perform
purity-derived optimizations in CTFE?


Some, definitely. Eg.  foo(n) + foo(n)

can be changed to 2*foo(n), where n is an integer, regardless of what 
foo does.


It does need to be a bit conservative, but I think the issues aren't 
CTFE specific. Eg, something like this currently gives an assert at runtime:


void check(int n) pure nothrow
{
  assert(n == 4);
}

void main()
{
check(3);
}

even though check() can do nothing other than cause an error, it still 
cannot be optimized away. But you can get rid of all subsequent calls to 
it, because they'll produce the same error every time.


Re: Limit number of compiler error messages

2012-05-22 Thread Don Clugston

On 20/05/12 00:38, cal wrote:

Is there a way to limit the dmd compiler to outputting just the first
few errors it comes across?


No, but the intention of DMD is to generate only one error per bug in 
your code.
If you are seeing a large number of useless errors, please report it in 
bugzilla.

http://d.puremagic.com/issues/show_bug.cgi?id=8082 is a good example.


Re: Strange measurements when reproducing issue 5650

2012-04-25 Thread Don Clugston

On 25/04/12 10:34, SomeDude wrote:

Discussion here: http://d.puremagic.com/issues/show_bug.cgi?id=5650

On my Windows box, the following program

import std.stdio, std.container, std.range;

void main() {
enum int range = 100;
enum int n = 1_000_000;

auto t = redBlackTree!int(0);

for (int i = 0; i < n; i++) {
if (i > range)
t.removeFront();
t.insert(i);
}

writeln(walkLength(t[]));
//writeln(t[]);
}

runs in about 1793 ms.
The strange thing is, if I comment out the writeln line, runtimes are in
average *slower* by about 20 ms, with timings varying a little bit more
than when the writeln is included.

How can this be ?


Very strange.
Maybe there is some std library cleanup which is slower if nothing got 
written?


Re: Why has base class protection been deprecated?

2012-04-24 Thread Don Clugston

On 24/04/12 15:29, David Bryant wrote:


Because it doesn't make sense. All classes are derived from Object. That
_has_ to be public, otherwise things like == wouldn't work.



Does the same apply for interfaces? I'm specifically implementing an
interface with non-public visibility. This shouldn't affect the
visibility of the implicit Object base-class.


Right. Only classes are affected.


Re: Why has base class protection been deprecated?

2012-04-24 Thread Don Clugston

On 24/04/12 14:22, David Bryant wrote:

With the dmd 2.059 I have started getting the error 'use of base class
protection is deprecated' when I try to implement an interface with
private visibility, ie:

interface Interface { }

class Class : private Interface { }

$ dmd test.d
test.d(4): use of base class protection is deprecated

This bothers me for two reasons: firstly it's not a base class, and
secondly, it's a standard OO pattern of mine.

What's up with this?

Thanks,
Dave


Because it doesn't make sense. All classes are derived from Object. That 
_has_ to be public, otherwise things like == wouldn't work.


Previously, the compiler used to allow base class protection, but it 
ignored it.


Re: log2 buggy or is a real thing?

2012-04-04 Thread Don Clugston

On 04/04/12 13:46, bearophile wrote:

Do you know why is this program:

import std.stdio;
void main() {
 real r = 9223372036854775808UL;
 writefln("%1.19f", r);
}

Printing:
9223372036854775807.8000000000000000000

Instead of this?
9223372036854775808.0000000000000000000

Bye,
bearophile


Poor float->decimal conversion in the C library ftoa() function.


Re: log2 buggy or is a real thing?

2012-04-04 Thread Don Clugston

On 04/04/12 18:53, Timon Gehr wrote:

On 04/04/2012 05:15 PM, Don Clugston wrote:


I don't think so. For 80-bit reals, every long can be represented
exactly in an 80 bit real, as can every ulong from 0 up to and including
ulong.max - 1. The only non-representable built-in integer is ulong.max,
which (depending on rounding mode) gets rounded up to ulong.max+1.



?

assert(0xFFFFFFFFFFFFFFFFp0L == ulong.max);
assert(0xFFFFFFFFFFFFFFFEp0L == ulong.max-1);


Ah, you're right. I forgot about the implicit bit. 80-bit reals are 
equivalent to 65-bit signed integers.


It's ulong.max+2 which is the smallest unrepresentable integer.

Conclusion: you cannot blame ANYTHING on the limited precision of reals.


Re: log2 buggy or is a real thing?

2012-04-04 Thread Don Clugston

On 04/04/12 13:40, bearophile wrote:

Jonathan M Davis:


This progam:

import std.math;
import std.stdio;
import std.typetuple;

ulong log2(ulong n)
{
 return n == 1 ? 0
   : 1 + log2(n / 2);
}

void print(ulong value)
{
 writefln("%s: %s %s", value, log2(value), std.math.log2(value));
}

void main()
{
 foreach(T; TypeTuple!(byte, ubyte, short, ushort, int, uint, long, ulong))
 print(T.max);
}


prints out

127: 6 6.98868
255: 7 7.99435
32767: 14 15
65535: 15 16
2147483647: 30 31
4294967295: 31 32
9223372036854775807: 62 63
18446744073709551615: 63 64


So, the question is: Is std.math.log2 buggy, or is it just an issue with the
fact that std.math.log2 is using reals, or am I completely misunderstanding
something here?


What is the problem you are perceiving?

The values you see are badly rounded, this is why I said that the Python 
floating point printing function is better than the D one :-)

I print the third result with more decimal digits using %1.20f, to avoid that 
bad rounding:


import std.stdio, std.math, std.typetuple;

ulong mlog2(in ulong n) pure nothrow {
 return (n == 1) ? 0 : (1 + mlog2(n / 2));
}

void print(in ulong x) {
 writefln("%s: %s %1.20f", x, mlog2(x), std.math.log2(x));
}

void main() {
 foreach (T; TypeTuple!(byte, ubyte, short, ushort, int, uint, long, ulong))
 print(T.max);
 print(9223372036854775808UL);
 real r = 9223372036854775808UL;
 writefln("%1.20f", r);
}


The output is:

127: 6 6.98868468677216585300
255: 7 7.99435343685885793770
32767: 14 14.99995597176955822200
65535: 15 15.99997798605273604400
2147483647: 30 30.99999999932819277100
4294967295: 31 31.99999999966409638400
9223372036854775807: 62 63.00000000000000000100
18446744073709551615: 63 64.00000000000000000000
9223372036854775808: 63 63.00000000000000000100
9223372036854775807.80000000000000000000

And it seems essentially correct, the only significant (but very little) 
problem I am seeing is
log2(9223372036854775807) that returns a value > 63, while of course in truth 
it's strictly less than 63 (found with Wolfram Alpha):
62.9984358269024341355489972678233
The cause of that small error is the limited integer representation precision 
of reals.


I don't think so. For 80-bit reals, every long can be represented 
exactly in an 80 bit real, as can every ulong from 0 up to and including 
ulong.max - 1. The only non-representable built-in integer is ulong.max, 
which (depending on rounding mode) gets rounded up to ulong.max+1.


The decimal floating point output for DMC, which DMD uses, is not very 
accurate. I suspect the value is actually <= 63.



In Python there is a routine to compute precisely the logarithm of large 
integer numbers. It will be useful to have in D too, for BigInts too.

Bye,
bearophile




Re: Comparison issue

2012-03-20 Thread Don Clugston

On 19/03/12 15:45, H. S. Teoh wrote:

On Mon, Mar 19, 2012 at 08:50:02AM -0400, bearophile wrote:

James Miller:


 writeln(v1 == 1); //false
 writeln(v1 == 1.0); //false
 writeln(v1 == 1.0f); //false
 writeln(v1+1 == 2.0f); //true





Maybe I'd like to deprecate and then statically forbid the use of ==
among floating point values, and replace it with a library-defined
function.

[...]

I agree. Using == for any floating point values is pretty much never
right. Either we should change the definition of == for floats to use
abs(y-x)

No, no, no. That's nonsense.

For starters, note that ANY integer expression which is exact, is also 
exact in floating point.

Another important case is that
if (f == 0)
is nearly always correct.


> Using == to compare floating point values is wrong. Due to the nature of
> floating point computation, there's always a possibility of roundoff
> error. Therefore, the correct way to compare floats is:
>
>immutable real epsilon = 1.0e-12; // adjustable accuracy here
>    if (abs(y-x) < epsilon) {
>// approximately equal
>} else {
>// not equal
>}

And this is wrong, if y and x are both small, or both large. Your 
epsilon value is arbitrary.

Absolute tolerance works for few functions like sin(), but not in general.

See std.math.feqrel for a method which gives tolerance in terms of 
roundoff error, which is nearly always what you want.


To summarize:

For scientific/mathematical programming:
* Usually you want relative tolerance
* Sometimes you want exact equality.
* Occasionally you want absolute tolerance

But it depends on your application. For graphics programming you 
probably want absolute tolerance in most cases.
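A small sketch of the relative-tolerance approach with `std.math.feqrel` (the bit counts chosen here are illustrative):

```d
import std.math : feqrel;
import std.stdio;

void main()
{
    double a = 0.1 + 0.2;
    double b = 0.3;

    writeln(a == b); // false: roundoff makes exact equality fail here

    // feqrel counts how many leading mantissa bits the two values share;
    // requiring most of double's 53 bits is a relative-tolerance test.
    writeln(feqrel(a, b) >= 50); // true: a and b agree to at least 50 bits
}
```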


Re: Vector Swizzling in D

2012-03-15 Thread Don Clugston

On 14/03/12 18:46, Boscop wrote:

On Wednesday, 14 March 2012 at 17:35:06 UTC, Don Clugston wrote:

In the last bit of code, why not use CTFE for valid(string s) instead
of templates?

bool valid(string s)
{
foreach(c; s)
{
if (c < 'w' || c > 'z') return false;
}
return true;
}

In fact you can use CTFE for the other template functions as well.


In the original version I actually did this, but even with -O -inline
-release the opDispatchs call didn't get inlined. I thought it was
caused by CTFE-code that prevented the inlining.
FWIW, this was the original code using CTFE:
---
import std.algorithm: reduce;
struct Vec {
double[4] v;
@property auto X() {return v[0];}
@property auto Y() {return v[1];}
@property auto Z() {return v[2];}
@property auto W() {return v[3];}
this(double x, double y, double z, double w) {v = [x,y,z,w];}
@property auto opDispatch(string s)() if(s.length <= 4 &&
reduce!((s,c)=>s && 'w' <= c && c <= 'z')(true, s)) {
char[] p = s.dup;


This won't be CTFEd, because it's not forced to be a compile-time constant.


foreach(i; s.length .. 4) p ~= p[$-1];

This too.

But, you can do something like:
enum p = extend(s);
since p is an enum, it must use CTFE.


int i(char c) {return [3,0,1,2][c-'w'];}

this isn't forced to be CTFE either.


return Vec(v[i(p[0])], v[i(p[1])], v[i(p[2])], v[i(p[3])]);
}



---
(I was using reduce here only to demonstrate D's functional features and
nice lambda syntax. Maybe that's what prevented inlining?)
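A minimal sketch of the `enum` trick Don describes; `extend` is a hypothetical helper modeled on the swizzle code above, not part of the original post:

```d
// Pads a swizzle string out to length 4 by repeating its last character,
// e.g. "wx" -> "wxxx".
string extend(string s)
{
    char[] p = s.dup;
    foreach (i; s.length .. 4)
        p ~= p[$ - 1];
    return p.idup;
}

void main()
{
    enum p = extend("wx");     // enum initializer => evaluation is forced to CTFE
    static assert(p == "wxxx");
}
```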




Re: Vector Swizzling in D

2012-03-14 Thread Don Clugston

On 14/03/12 15:57, Boscop wrote:

Hi everyone,

I wrote a blog post for people who know a bit of D and want to dig
deeper, it shows different approaches to get vector swizzling syntax in D:

http://boscop.tk/blog/?p=1

There is nothing revolutionary involved but it might still be useful to
someone.
Comments and criticism are welcome.

In the last bit of code, why not use CTFE for valid(string s) instead of 
templates?


bool valid(string s)
{
   foreach(c; s)
   {
 if (c < 'w' || c > 'z') return false;
   }
   return true;
}

In fact you can use CTFE for the other template functions as well.



Re: About CTFE and pointers

2012-02-24 Thread Don Clugston

On 24/02/12 15:18, Alex Rønne Petersen wrote:

On 24-02-2012 15:08, bearophile wrote:

I have seen this C++11 program:
http://kaizer.se/wiki/log/post/C++_constexpr/

I have translated it to this D code:


bool notEnd(const char *s, const int n) {
return s && s[n];
}
bool strPrefix(const char *s, const char *t, const int ns, const int nt) {
return (s == t) ||
!t[nt] ||
(s[ns] == t[nt] && (strPrefix(s, t, ns+1, nt+1)));
}
bool contains(const char *s, const char *needle, const int n=0) {
// Works only with C-style 0-terminated strings
return notEnd(s, n) &&
(strPrefix(s, needle, n, 0) || contains(s, needle, n+1));
}
enum int x = contains("froogler", "oogle");
void main() {
// assert(contains("froogler", "oogle"));
}


If I run the version of the code with the run-time, it generates no
errors.

If I run the version with enum with the latest dmd it gives:

test.d(6): Error: string index 5 is out of bounds [0 .. 5]
test.d(7): called from here: strPrefix(s,t,ns + 1,nt + 1)
test.d(4): 5 recursive calls to function strPrefix
test.d(12): called from here: strPrefix(s,needle,n,0)
test.d(12): called from here: contains(s,needle,n + 1)
test.d(12): called from here: contains(s,needle,n + 1)
test.d(14): called from here: contains("froogler","oogle",0)


At first sight it looks like a CTFE bug, but studying the code a
little it seems there is a off-by-one bug in the code
(http://en.wikipedia.org/wiki/Off-by-one_error ). A quick translation
to D arrays confirms it:


bool notEnd(in char[] s, in int n) {
return s && s[n];
}
bool strPrefix(in char[] s, in char[] t, in int ns, in int nt) {
return (s == t) ||
!t[nt] ||
(s[ns] == t[nt] && (strPrefix(s, t, ns+1, nt+1)));
}
bool contains(in char[] s, in char[] needle, in int n=0) {
// Works only with C-style 0-terminated strings
return notEnd(s, n) &&
(strPrefix(s, needle, n, 0) || contains(s, needle, n+1));
}
//enum int x = contains("froogler", "oogle");
void main() {
assert(contains("froogler", "oogle"));
}


It gives at run-time:

core.exception.RangeError@test(6): Range violation
----
\test.d(6): bool test.strPrefix(const(char[]), const(char[]),
const(int), const(int))




So it seems that Don, when he has implemented the last parts of the
CTFE interpreter, has done something curious, because in some cases it
seems able to find out of bounds even when you use just raw pointers :-)

Bye,
bearophile


It's not at all unlikely that the CTFE interpreter represents blocks of
memory as a pointer+length pair internally.



Yes, that's exactly what it does. That's how it's able to implement 
pointers safely.


That's a nice story. Thanks, bearophile.



Re: Everything on the Stack

2012-02-21 Thread Don Clugston

On 21/02/12 12:12, Timon Gehr wrote:

On 02/21/2012 11:27 AM, Daniel Murphy wrote:

scope/scoped isn't broken, they're just not safe. It's better to have an
unsafe library feature than an unsafe language feature.




scope is broken because it is not enforced by the means of
flow-analysis. As a result, it is not safe. Copying it to the library
and temporarily disabling 'scope' for classes is a good move however,
because this means we will be able to fix it at an arbitrary point in
the future without additional code breakage.


Does the library solution actually work the same as the language solution?


Re: Flushing denormals to zero

2012-02-17 Thread Don Clugston

On 17/02/12 09:09, Mantis wrote:

On 17.02.2012 4:30, bearophile wrote:

After seeing this interesting thread:
http://stackoverflow.com/questions/9314534/why-does-changing-0-1f-to-0-slow-down-performance-by-10x


Do you know if there's a simple way to perform _MM_SET_FLUSH_ZERO_MODE
in D?

According to Agner that operation is not needed on Sandy Bridge
processors, but most CPUs around are not that good:
http://www.agner.org/optimize/blog/read.php?i=142

Bye,
bearophile


I could expect this to be adjustable in std.math.FloatingPointControl,
but it isn't.


That's because the x87 doesn't support flush-to-zero, and 32-bit x86 
doesn't necessarily have SSE.

But it should probably be added for 64 bit.

Anyway, the assembly code to change the FPU control word is pretty tiny:
http://software.intel.com/en-us/articles/x87-and-sse-floating-point-assists-in-ia-32-flush-to-zero-ftz-and-denormals-are-zero-daz/







Re: Hex floats

2012-02-17 Thread Don Clugston

On 16/02/12 17:36, Timon Gehr wrote:

On 02/16/2012 05:06 PM, Don Clugston wrote:

On 16/02/12 13:28, Stewart Gordon wrote:

On 16/02/2012 12:04, Don Clugston wrote:

On 15/02/12 22:24, H. S. Teoh wrote:

What's the original rationale for requiring that hex float literals
must
always have an exponent? For example, 0xFFi obviously must be float,
not
integer, so why does the compiler (and the spec) require an exponent?


The syntax comes from C99.


Do you mean the syntax has just been copied straight from C99 without
any thought about making it more lenient?

Stewart.


Yes. There would need to be a good reason to do so.

For the case in question, note that mathematically, imaginary integers
are perfectly valid. Would an imaginary integer literal be an idouble, an
ifloat, or an ireal? I don't think it could be any:

foor(float x)
foor(double x)
fooi(ifloat x)
fooi(idouble x)

foor(7); //ambiguous, doesn't compile
fooi(7i); // by symmetry, this shouldn't compile either


static assert(is(typeof(7i)==idouble));


Ooh, that's bad.



Re: Hex floats

2012-02-16 Thread Don Clugston

On 16/02/12 13:28, Stewart Gordon wrote:

On 16/02/2012 12:04, Don Clugston wrote:

On 15/02/12 22:24, H. S. Teoh wrote:

What's the original rationale for requiring that hex float literals must
always have an exponent? For example, 0xFFi obviously must be float, not
integer, so why does the compiler (and the spec) require an exponent?


The syntax comes from C99.


Do you mean the syntax has just been copied straight from C99 without
any thought about making it more lenient?

Stewart.


Yes. There would need to be a good reason to do so.

For the case in question, note that mathematically, imaginary integers 
are perfectly valid. Would an imaginary integer literal be an idouble, an 
ifloat, or an ireal? I don't think it could be any:


foor(float x)
foor(double x)
fooi(ifloat x)
fooi(idouble x)

foor(7); //ambiguous, doesn't compile
fooi(7i); // by symmetry, this shouldn't compile either


Re: Hex floats

2012-02-16 Thread Don Clugston

On 15/02/12 22:24, H. S. Teoh wrote:

What's the original rationale for requiring that hex float literals must
always have an exponent? For example, 0xFFi obviously must be float, not
integer, so why does the compiler (and the spec) require an exponent?


The syntax comes from C99.


Re: Compiler error with static vars/functions

2012-02-10 Thread Don Clugston

On 10/02/12 16:08, Artur Skawina wrote:

On 02/10/12 15:18, Don Clugston wrote:

On 09/02/12 23:03, Jonathan M Davis wrote:

On Thursday, February 09, 2012 14:45:43 bearophile wrote:

Jonathan M Davis:

Normally, it's considered good practice to give modules names which are
all lowercase (particularly since some OSes aren't case-sensitive for
file operations).


That's just a Walter thing, and it's bollocks. There's no need to use all lower 
case.


No, having non-lower case filenames would just lead to problems. Like different
modules being imported depending on the filesystem being used, or the user's
locale settings.


I don't think it's possible without deliberate sabotage. You can't have 
two files with names differing only in case on Windows. And module 
declarations act as a check anyway.


The D community has a *lot* of experience with mixed case names. No 
problems have ever been reported.




Re: Compiler error with static vars/functions

2012-02-10 Thread Don Clugston

On 09/02/12 23:03, Jonathan M Davis wrote:

On Thursday, February 09, 2012 14:45:43 bearophile wrote:

Jonathan M Davis:

Normally, it's considered good practice to give modules names which are
all lowercase (particularly since some OSes aren't case-sensitive for
file operations).


That's just a Walter thing, and it's bollocks. There's no need to use 
all lower case.




That's just a fragile work-around for a module system design problem that I
didn't like from the first day I've seen D. I'll take a look in Bugzilla if
there is already something on this.


What design problem? The only design problem I see is the fact that some OSes
were badly designed to be case insensitive when dealing with files, and that's
not a D issue.

- Jonathan M Davis


You can argue this either way. There are not many use cases for having 
two files differing only in case.




Re: dmd & gdc

2012-01-27 Thread Don Clugston

On 26/01/12 18:59, xancorreu wrote:

On 26/01/12 18:43, H. S. Teoh wrote:

On Thu, Jan 26, 2012 at 06:06:38PM +0100, xancorreu wrote:
[...]

I note that gdc is completely free software but dmd runtime is not.

You mean free as in freedom, not as in price.


Yes, both


I don't know what that means. The DMD backend license is not compatible with 
the GPL, so it could never be bundled with a *nix distro. Otherwise, I 
don't think there are any consequences. It has nothing else in common 
with non-free products.


The complete source is on github, and you're allowed to freely download 
it, compile it, and make your own clone on github. What else would you want?




Re: Constant function/delegate literal

2012-01-17 Thread Don Clugston

On 15/01/12 20:35, Timon Gehr wrote:

On 01/14/2012 07:13 PM, Vladimir Matveev wrote:

Hi,

Is there a reason why I cannot compile the following code:

module test;

struct Test {
int delegate(int) f;
}

Test s = Test((int x) { return x + 1; });

void main(string[] args) {
return;
}

dmd 2.057 says:

test.d(7): Error: non-constant expression cast(int
delegate(int))delegate pure
nothrow @safe int(int x)
{
return x + 1;
}

?


The 'this' pointer in the delegate:
Test((int x) { return x + 1; });

is a pointer to the struct literal, which isn't constant. You actually 
want it to point to variable s.


OTOH if it were a function, it should work, but it currently doesn't.



This is simple example; what I want to do is to create a global variable
containing a structure with some ad-hoc defined functions. The compiler
complains that it "cannot evaluate  at compile time". I think this
could
be solved by defining a function returning needed structure, but I
think this
is cumbersome and inconvenient.

Best regards,
Vladimir Matveev.


I think it should work. I have filed a bug report:
http://d.puremagic.com/issues/show_bug.cgi?id=7298
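Until that issue is fixed, one workaround (a sketch, not from the thread) is to fill in the global from a module constructor, where a runtime delegate is fine:

```d
struct Test {
    int delegate(int) f;
}

Test s;            // default-initialized at compile time

static this()      // runs at program startup, so a delegate literal is allowed
{
    s = Test((int x) { return x + 1; });
}

void main()
{
    assert(s.f(2) == 3);
}
```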






Re: CTFE and cast

2012-01-13 Thread Don Clugston
On 13/01/12 10:01, k2 wrote:
> When replace typedef to enum, it became impossible to compile a certain
> portion.
> 
> dmd v2.057 Windows
> 
> enum HANDLE : void* {init = (void*).init}
> 
> pure HANDLE int_to_HANDLE(int x)
> {
>  return cast(HANDLE)x;
> }
> 
> void bar()
> {
>  HANDLE a = cast(HANDLE)1;// ok
>  HANDLE b = int_to_HANDLE(2);// ok
> }
> 
> HANDLE c = cast(HANDLE)3;// ok
> HANDLE d = int_to_HANDLE(4);// NG
> 
> foo.d(17): Error: cannot implicitly convert expression (cast(void*)4u)
> of type void* to HANDLE

It's a problem. Casting integers to pointers is a very unsafe operation,
and is disallowed in CTFE. There is a special hack, specifically for
Windows HANDLES, which allows you to cast integers to pointers at
compile time, but only after they've left CTFE.





Re: C++ vs D aggregates

2011-12-03 Thread Don

On 03.12.2011 20:14, Dejan Lekic wrote:

I recently stumbled on this thread: http://stackoverflow.com/
questions/5666321/what-is-assignment-via-curly-braces-called-and-can-it-
be-controlled

The important part is this:

 8<  - begin -
The Standard says in section §8.5.1/1,

An aggregate is an array or a class (clause 9) with no user-declared
constructors (12.1), no private or protected non-static data members
(clause 11), no base classes (clause 10), and no virtual functions (10.3).

And then it says in §8.5.1/2 that,

When an aggregate is initialized the initializer can contain an
initializer-clause consisting of a brace-enclosed, comma-separated list
of initializer-clauses for the members of the aggregate, written in
increasing subscript or member order. If the aggregate contains
subaggregates, this rule applies recursively to the members of the
subaggregate.
>8 - end -

Do D2 aggregates behave the same, or are there notable differences?


Yes, struct static initializers are the same in D as in C++.
Differences are:
* D also has struct literals, which can be used in contexts other than 
initialization;
* There are no static initializers for classes. (D's classes are never 
'aggregates' in the C++ sense);

* Static initializers for unions are currently very buggy in DMD.
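A minimal illustration of the first difference (the names are made up):

```d
struct Point { int x, y; }

Point origin = { 0, 1 };        // C-style static initializer

void main()
{
    auto p = Point(2, 3);       // struct literal: valid in any expression
    assert(p.x + origin.y == 3);
}
```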


Re: -release compiler switch and associative arrays

2011-10-11 Thread Don

On 09.10.2011 13:24, Graham Cole wrote:

I understand from the documentation that the "-release" compiler switch turns off 
"array bounds checking for system and trusted functions".

Is it correct that the following code should seg fault when compiled with 
"-release" ?

string[string] h;
h["abc"] = "def";
string s = h["aaa"];

I.e. retrieving from an associative array for a non existent key.

I would have thought that an exception should be generated in this case when 
compiled with
"-release" (as it is when compiled without).

This code behaves the same when compiled by both D1 and D2.

Should I report this as a bug ?


In the compiler source (e2ir.c, IndexExp::toElem), AA checking is 
generated only when bounds checking is enabled. So it's intentional.


What really happens is that h[xxx] returns a pointer to the element, or 
null if none.
If it's null, then a bounds error is thrown. But this check is removed 
for -release.
Then string s= h["aaa"] dereferences the null pointer, and you get a 
segfault.


Here's a funny side-effect: the code below works fine with -release. p 
is null. But you get an array bounds error if not using -release.


 string[string] h;
 h["abc"] = "def";
 string *p = &h["aaa"];
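Given that behaviour, a lookup that must survive -release can make the null check explicit with the `in` operator (a sketch):

```d
import std.stdio;

void main()
{
    string[string] h;
    h["abc"] = "def";

    // `key in aa` yields a pointer to the value, or null if the key is
    // absent, so no hidden bounds check is relied upon.
    if (auto p = "aaa" in h)
        writeln(*p);
    else
        writeln("no such key");   // taken here
}
```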



How to get WinDbg to show a call stack on Windows 7?

2011-09-12 Thread Don
I've set up a Windows 7 machine for working on DMD but I can't get 
windbg to work properly. Specifically, when I try to debug DMD itself, I 
don't get a call stack; I only see the current function.

Everything else seems OK.

Has anyone else experienced this? Any ideas?


Re: How do "pure" member functions work?

2011-08-24 Thread Don

Simen Kjaeraas wrote:

On Mon, 22 Aug 2011 22:19:50 +0200, Don  wrote:

BTW: The whole "weak pure"/"strong pure" naming was just something I 
came up with, to convince Walter to relax the purity rules. I'd rather 
those names disappeared, they aren't very helpful.


The concepts are useful, but better names might be worth it. But what
short word eloquently conveys 'accesses no mutable global state'? :p


@nostatic  ?
stateless ?


What we call strongly pure is what in other languages is simply called
'pure', and that is likely the word that should be used for it.




Weakly pure is a somewhat different beast, and the 'best' solution would
likely be for it to be the default (But as we all know, this would
require changing the language too much. Perhaps in D3...). Noglobal
might be the best we have. My favorite thus far is 'conditionally pure'.
It conveys that the function is pure in certain circumstances, and not
in others. However, it might be somewhat diluted by the addition of pure
inference in newer versions of DMD - that definitely is conditionally
pure.

Const pure is not a concept I'm particularly familiar with. Is this the
special case of calling a conditionally pure function with only
const/immutable parameters, with arguments that are immutable in the
calling context, and that it in those cases can be considered
strongly pure?


No, it's where the signature contains const parameters, rather than 
immutable ones.
Two calls to a const-pure function, with the same parameters, may give 
different results. You'd need to do a deep inspection of the parameters,

to see if they changed.

Consider:

x = foo(y);
x = foo(y);
where foo is const-pure.
Perhaps y contains a pointer to x. In that case, foo could depend on x.
Or, there might be a mutable pointer to y somewhere, and x might have an 
opAssign which modifies y.

In each case, the second y is different to the first one.

But, if foo is immutable-pure, it will return the same value both times, 
so one of the calls can be optimized away.
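A sketch of why a const parameter is weaker than an immutable one (hypothetical types, not from the thread):

```d
struct S { int* p; }

// Weakly pure: reads only through its parameter, but the data it reaches
// may be mutated between calls via other references.
pure int readThrough(const ref S s) { return *s.p; }

void main()
{
    int v = 1;
    auto s = S(&v);
    int a = readThrough(s);
    v = 2;                      // mutates data reachable from s
    int b = readThrough(s);     // legitimately different result
    assert(a == 1 && b == 2);   // so the second call cannot be elided
}
```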


Re: How do "pure" member functions work?

2011-08-22 Thread Don

Timon Gehr wrote:

On 08/21/2011 09:10 PM, Don wrote:

bearophile wrote:

Sean Eskapp:


Oh, I see, thanks! This isn't documented in the function documentation!


D purity implementation looks like a simple thing, but it's not
simple, it has several parts that in the last months have been added to
the language and compiler, and we are not done yet, there are few more
things to add (like implicit conversion to immutable of the results of
strongly pure functions). It will need several book pages to document
all such necessary design details.

Bye,
bearophile


It is actually very simple: a function marked as 'pure' is not allowed
to explicitly access any static variables.
Everything else is just compiler optimisation, and the programmer
shouldn't need to worry about it.


It can be of value to know that a function is pure as in mathematics if 
it is strongly pure, but can have restricted side-effects if it is 
weakly pure.


Well, from the compiler's point of view, it's more complicated than 
that. There are const-pure as well as immutable-pure functions. A 
const-pure function has no side-effects, but cannot be optimised as 
strongly as an immutable-pure function.


BTW: The whole "weak pure"/"strong pure" naming was just something I 
came up with, to convince Walter to relax the purity rules. I'd rather 
those names disappeared, they aren't very helpful.


But the basic point is, that knowing that there are no static variables 
is hugely significant for reasoning about code. The strong pure/weak 
pure distinction is not very interesting (you can distinguish them

using only the function signature).


Re: How do "pure" member functions work?

2011-08-21 Thread Don

bearophile wrote:

Sean Eskapp:


Oh, I see, thanks! This isn't documented in the function documentation!


D purity implementation looks like a simple thing, but it's not simple, it has 
several parts that in the last months have been added to the language and 
compiler, and we are not done yet, there are few more things to add (like 
implicit conversion to immutable of the results of strongly pure functions). It 
will need several book pages to document all such necessary design details.

Bye,
bearophile


It is actually very simple: a function marked as 'pure' is not allowed 
to explicitly access any static variables.
Everything else is just compiler optimisation, and the programmer 
shouldn't need to worry about it.
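That rule in miniature (illustrative names):

```d
int global;                              // mutable static state

pure int next(int x) { return x + 1; }   // fine: touches only its arguments

// pure int bad(int x) { return x + global; }
// => compile error: a pure function may not access mutable static data

void main() { assert(next(1) == 2); }
```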


Re: (int << ulong) == int ?

2011-08-09 Thread Don

Jonathan M Davis wrote:

On Tuesday 09 August 2011 09:32:41 Don wrote:

Jonathan M Davis wrote:

On Monday 08 August 2011 00:33:31 Dmitry Olshansky wrote:

Just lost the best part of an hour figuring the cause of this small
problem, consider:

void main()
{

 uint j = 42;
 ulong k = 1<
I would not expect the type of integer being used to give the number of
bits to shift to affect the type of integer being shifted. It doesn't
generally make much sense to shift more than the size of the integer
type being shifted, and that can always fit in a byte. So, why would
the type of the integer being used to give the number of bits to shift
matter? It's like an index. The index doesn't affect the type of what's
being indexed. It just gives you an index.

You have to deal with integer promotions and all that when doing
arithmetic, because arithmetic needs to be done with a like number of
bits on both sides of the operation. But with shifting, all you're doing
is asking it to shift some number of bits. The type which holds the
number of bits shouldn't really matter. I wouldn't expect _any_ integer
promotions to occur in a shift expression. If you want to affect what's
being shifted, then cast what's being shifted.

Your intuition is wrong!

expression.html explicitly states the operands to shifts undergo
integral promotions. But they don't get arithmetic conversions.

I think this is terrible.

short x = -1;
x >>>= 1;

Guess what x is...


That's just downright weird. Why would any integral promotions occur with a 
shift? And given your example, the current behavior seems like a bad design. 
Is this some weird hold-over from C/C++?


I think it must be, some silly idea of 'simplifying' things by applying 
the same promotion rules to all binary operators. Given that 256-bit 
integers still don't seem to be coming any time soon, even shifts by a 
signed byte shouldn't require promotion.
Incidentally, the x86 instruction set doesn't use integers, you can only 
shift a 64-bit number by an 8-bit value. So x << y always compiles to

x << cast(ubyte)y. Shift should not need a common type.

This is one of those things that can cause problems, but never helps.
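The surprise in miniature, plus an explicit-width workaround (a sketch):

```d
void main()
{
    short x = -1;
    x >>>= 1;            // promoted to int, shifted, truncated back to short
    assert(x == -1);     // not the 0x7FFF one might expect

    // Doing the unsigned shift in the intended width avoids the trap:
    ushort y = 0xFFFF;
    y = cast(ushort)(y >> 1);
    assert(y == 0x7FFF);
}
```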



Re: (int << ulong) == int ?

2011-08-09 Thread Don

Jonathan M Davis wrote:

On Monday 08 August 2011 00:33:31 Dmitry Olshansky wrote:

Just lost the best part of an hour figuring the cause of this small
problem, consider:

void main()
{
 uint j = 42;
 ulong k = 1<

I would not expect the type of integer being used to give the number of bits 
to shift to affect the type of integer being shifted. It doesn't generally 
make much sense to shift more than the size of the integer type being shifted, 
and that can always fit in a byte. So, why would the type of the integer being 
used to give the number of bits to shift matter? It's like an index. The index 
doesn't affect the type of what's being indexed. It just gives you an index.


You have to deal with integer promotions and all that when doing arithmetic, 
because arithmetic needs to be done with a like number of bits on both sides 
of the operation. But with shifting, all you're doing is asking it to shift some 
number of bits. The type which holds the number of bits shouldn't really 
matter. I wouldn't expect _any_ integer promotions to occur in a shift 
expression. If you want to affect what's being shifted, then cast what's being 
shifted.


Your intuition is wrong!

expression.html explicitly states the operands to shifts undergo 
integral promotions. But they don't get arithmetic conversions.


I think this is terrible.

short x = -1;
x >>>= 1;

Guess what x is...



Re: Need OMF MySQL lib that actually f^*&^ works...

2011-07-22 Thread Don

Nick Sabalausky wrote:
Anyone have a known-working Windows OMF library for MySQL? Static or 
dynamic, I don't care. I've tried fucking everything and I can't get the 
dang thing to work. Static was a total no-go. With dynamic, using implib I 
got it to link, but calling any of it resulted in an Access Violation. Using 
coffimplib, best I could do had one linker error, a missing 
"_mysql_real_connect". This was all with "Connector/C" v6.0.2. I'd have 
tried an older version, 5.x, but 6.0.2 is the only version that seems to 
still exist.



I have one that works (dates from 2009). I tried to attach it, but 
Thunderbird died on me.


Re: This seems like what could be a common cause of bugs

2011-07-22 Thread Don

Andrej Mitrovic wrote:

This is just an observation, not a question or anything.

void main()
{
enum width = 100;
double step = 1 / width;

writeln(step);  // 0
}

I've just had this bug in my code. I forgot to make either width or 1
a floating-point type. IOW, I didn't do this:

void main()
{
enum width = 100.0;
double step = 1 / width;   // or .1

writeln(step);  // now 0.01
}

This seems like a very easy mistake to make. But I know the compiler
is probably powerless to do anything about it. It has an expression
resulting in an int at the RHS, and it can easily cast this to a
double since there's no obvious loss of precision when casting
int->double.

Where's those lint tools for D.. :/


It is indeed a common bug. I made a proposal for fixing it, which was 
accepted, but I still haven't got around to making the patch. It's a 
simple addition to range propagation: integer operations need to record 
if there is a potentially non-zero fractional part in the result.

Non-zero fractional parts are created by /, %, and ^^.
Certain operations (bitwise operations, casts) force the fractional part 
to zero, enabling it to be convertible to float.


Re: Constructor call must be in a constructor

2011-07-07 Thread Don

Jesse Phillips wrote:

Loopback Wrote:


Hi!

While implementing and overloading several different operators for my
structure I've got stuck with an error.

As noticed in the attachment, in my opBinaryRight function I mimic the
opBinary (left) operator by instantiating the structure itself to avoid
implementing duplicates of the binary operator overloads.

The opBinaryRight operator is defined as following:

DVector2 opBinaryRight(string op, T)(T lhs) if(Accepts!T)
{
	// Error: template instance vector.DVector2.__ctor!(DVector2) error 
instantiating

return DVector2(lhs).opBinary!op(this);
}

I create an additional DVector2 structure and then calls the opBinary
operator. When creating this DVector2 structure the following
constructor gets called:

this(T)(T arg) if(Accepts!T)
{
static if(isScalar!T)
this(arg, arg);
else
// Error: constructor call must be in a constructor
this(arg.tupleof);
}

[Blah, blah, blah]


This is probably related to a question recently asked on SO, of which you 
might even be the author. But for synergy: 
http://stackoverflow.com/questions/6553950/how-to-use-template-constructors-in-d


Bizarrely, the bit about template constructors was added to the docs as 
part of the fix to bug 2616.

Yet, bug 435 "Constructors should be templatized" is still open.
In view of this confusion, I would expect that compiler bugs related to 
this feature are very likely. Please submit a bug report.


Re: A different vector op

2011-07-06 Thread Don

bearophile wrote:

Currently this is not allowed, but do you desire a feature like this?


struct Foo {
int x, y;
int[100] array;
}
void main() {
auto foos = new Foo[100];
foos[].y += 10; // ***
}

Bye,
bearophile


An interesting use case:

void main()
{
   cdouble[100] foos;
   foos[].re = 5.0;
}


Re: + operators

2011-06-12 Thread Don

Jonathan M Davis wrote:

On 2011-06-12 02:37, bearophile wrote:

Jonathan M Davis:

Certainly, once range propagation has been fully implemented, this
particular will work without needing any casts, but as soon as the
compiler doesn't know what the values of x and y are, I believe that it
would still end up complaining.

I am not sure D range propagation is supposed to work across different
lines of code (I think not).


I'm pretty sure that it is (it wouldn't be worth much if it didn't IMHO), but 
I'd have to look up discussions on it to be sure.


- Jonathan M Davis
Bearophile is right. It only applies to single expressions. (Using more 
than one statement requires flow analysis).


Re: DMD Backend: Deciding instructions to use/avoid?

2011-06-10 Thread Don

Nick Sabalausky wrote:
"Nick Sabalausky"  wrote in message 
news:isoltk$1ehd$1...@digitalmars.com...
"Don"  wrote in message 
news:isoh6c$15jb$1...@digitalmars.com...

Nick Sabalausky wrote:
So my main question: Does DMD do anything like, say, detecting the CPU 
at compile time and then enabling instructions only available on that 
CPU and up? Or does it do anything like always assuming the target CPU 
has SSE2? Anything like that that could cause differences between our 
CPUs to result in object code that will work on one of the CPUs, but not 
the other? FWIW, the CPU on my linux box is i686, so it's not like I'm 
on some super-ultra-old i586, or anything like that.
DMD itself doesn't, but the array operations do. The DMD backend is 
ancient, and generates code for original Pentiums (+ 64 bit equivalents 
of the same instructions).
Is there any way to force the array operations down to a certain level? Or 
even better yet, have them detect the CPU at startup and then use the 
appropriate version?


The runtime detects the CPU type at startup, and uses SSE/SSE2 if 
available, otherwise falls back to x87.

The 64-bit DMD doesn't use SSE2 at all, yet.

It would be a bad thing if we can't use an SSE2 CPU to compile a binary 
that'll work on a non-SSE2 machine.

Indeed.


Re: Is it reasonable to learn D

2011-06-08 Thread Don

Trass3r wrote:

- The D compiler has only bad code optimization
Yep, but there is LDC and GDC which use LLVM and GCC as backends 
respectively.



- There are no maintained GUI libraries
I wouldn't agree with that. Some people are still working on GtkD, QtD 
and DWT.



- The development of the compiler is very slow
More and more people are contributing patches so development has 
definitely become faster.
Also Don has more or less taken over development of the CTFE 
functionality. Nice trend.



- Only a small community
  => no real German community

There is no separate German community but there are plenty of Germans here.
You just don't always spot them right away ^^


Don, for example, lives in Germany.


Re: DMD Backend: Deciding instructions to use/avoid?

2011-06-08 Thread Don

Nick Sabalausky wrote:
So my main question: Does DMD do anything like, say, detecting the CPU at 
compile time and then enabling instructions only available on that CPU and 
up? Or does it do anything like always assuming the target CPU has SSE2? 
Anything like that that could cause differences between our CPUs to result 
in object code that will work on one of the CPUs, but not the other? FWIW, 
the CPU on my linux box is i686, so it's not like I'm on some 
super-ultra-old i586, or anything like that. 


DMD itself doesn't, but the array operations do. The DMD backend is 
ancient, and generates code for original Pentiums (+ 64 bit equivalents 
of the same instructions).


Re: how to get the local?

2011-06-01 Thread Don

Lloyd Dupont wrote:

I tried to add that to my D file
===
public import std.c.windows.windows;

extern(Windows)
{
   int GetUserDefaultLocaleName(LPWSTR lpLocaleName, int cchLocaleName);
}
===

Try:
extern(Windows)
{
   int GetUserDefaultLocaleNameW(LPWSTR lpLocaleName, int cchLocaleName);
}


and compile and link to kernel32.lib

But I got the following compile error:
Error1Error 42: Symbol Undefined _GetUserDefaultLocaleName@8 
C:\Dev\DTest\DTest1\Dexperiment\


Any clues?


"Lloyd Dupont"  wrote in message news:is5gm7$1a8u$1...@digitalmars.com...

I'm on a windows PC in Australia
I'd like to get the string "en-AU" and "en" from Windows
How do I do that please?


Re: github: What to do when unittests fail?

2011-05-29 Thread Don

Dmitry Olshansky wrote:

On 24.05.2011 1:33, Andrej Mitrovic wrote:
I've cloned Phobos just a few minutes ago, and I've tried to build it 
with unittests, I'm getting these:


Warning: AutoImplement!(C_6) ignored variadic arguments to the 
constructor C_6(...)

  --- std.socket(316) broken test ---
  --- std.regex(3671) broken test ---


Windows build is polluted by these messages that meant something 
sometime ago (usually there was a test that failed, and it was commented 
out for now).


Those messages are still meaningful. They are to remind people that the 
buggy tests haven't been fixed.

Still it builds quite easily, though I skipped a couple of recent commits.

Yes, that's the idea.


So what's the procedure now? Do I have to first revert to some earlier 
version of Phobos that has unittests that pass, before doing any 
edits? I want to edit an unrelated module and change some code 
(actually change a unittest), and make a pull request. This is for an 
already reported bug in bugzilla.

Check auto-tester first ;)
http://d.puremagic.com/test-results/



Re: Cannot interpret struct at compile time

2011-05-09 Thread Don

Robert Clipsham wrote:

Hey all,

I was wondering if anyone could enlighten me as to why the following 
code does not compile (dmd2, latest release or the beta):


Added as bug 5969.


Re: expression templates

2011-05-02 Thread Don

Mr enuhtac wrote:

Hello everyone,

I'm new to D and this list (although I've had a look onto D a few years ago). I 
hope you guys can help me with my questions.

At the moment I'm trying to implement some expression template stuff. My first 
goal is to encode an expression into a type representing that expression 
without any additional functionality (like the possibility to evaluate that 
expression). Actually this is very simple and short in D. This is my approach:

struct OpBinary( string Op, R1, R2 )
{
alias typeof( mixin( "R1.EvalT.init" ~ Op ~ "R2.EvalT.init" ) ) EvalT;

enum string Operator = Op;
};

struct Constant( T, T v )
{
alias T EvalT;

enum T value = v;
};

struct Expr( R )
{
auto opBinary( string Op, R2 )( Expr!R2 )
{
return Expr!( OpBinary!( Op, R, R2 ) )();
}

auto opBinary( string Op, T )( T v ) if( isNumeric!T )
{
return Expr!( OpBinary!( Op, R, Constant!( T, v ) ) )();
}

auto opBinaryRight( string Op, T )( T v ) if( isNumeric!T )
{
return Expr!( OpBinary!( Op, Constant!( T, v ), R ) )();
}
};

But I cannot figure out how to implement expression templates for comparison 
operators, which is crucial for my purpose. The opCmp function is great for 
implementing comparison functionality, but when building an expression template 
tree the information on the actual comparison operator is needed. opCmp just 
knows that a comparison is going on, the actual type of comparison is unknown.
What I would like to have is something like this:

auto opCmp( string Op, R2 )( Expr!R2 )
{
return Expr!( OpBinary!( Op, R, R2 ) )();
}

So opCmp knows about the actual operator and would just use my OpBinary struct 
to encode it. But this is not possible.

The only workaround for I this problem I can imagine is using my own comparison 
functions instead of the comparison operators:
op!"<"( a, b ) instead of a < b.
Another possibility would be to call opCmp explicitly:
a.opCmp!"<"( b )
In this case I would not even have to write additional code.

But these workarounds are ugly, if would greatly prefer the normal comparison 
operators.
Does anyone has an idea how to use them?

Regards,
enuhtac


In the present language I don't think it's possible to do expression 
templates involving opCmp or opEquals, since you have no control over 
the return type.


One other solution I did: I had some code where the result of the 
expression template was used in a mixin, so I moved the mixin outside 
the expression. Then I changed all the >, <, == into something else 
before converting it into an expression template.

But this is ugly as well.


Re: opDollar()

2011-05-02 Thread Don

Dmitry Olshansky wrote:

On 26.03.2011 11:03, Caligo wrote:

"In the expression a[e1, ..., ek], if $ occurs in ei, it is rewritten as a.opDollar!(i)()."  -- TDPL, pg 380

Is that correct? if so, could some one give an example code?  I don't
understand the need for the parameter.

Also, what is the signature for opDollar() in a struct.  I'm getting
errors trying to implement this.
That parameter is the number of the dimension. When implementing some kind of 
multidimensional array (e.g. an 2D raster Image) you'd have:
img[$-1, $-1] = lastValue; // the first dollar should resolve to 
"width", the second to "height"


Now speaking of its implementation - it's quite broken.


It's not broken -- it's not implemented at all!

The relevant bug report is 
http://d.puremagic.com/issues/show_bug.cgi?id=3474 (vote up!)
Still it's not considered to be a critical one, since you can work around 
it by:

img[img.width-1,img.height-1] = lastValue;



Re: Matrix creation quiz

2011-04-30 Thread Don

bearophile wrote:

Pedro Rodrigues:

The fact that 'i' and 'j' are deduced to type 'uint' in the second 
version. That's the kind of bug that would keep me up at night.


Almost right answer. i and j are size_t, that is not uint in 64 bit 
compilations. Unsigned numbers cause the (i-j) sub-expression to give wrong 
results.



Moritz Warning:


I wonder, can there be done smth. on behalf of the language to prevent
this kind of bug?


Two possible solutions, both refused by Walter:
- Dmd may use a signed word for array indexes and lengths.

Yes -- but see below.


- dmd may introduce runtime overflows.
That would not fix this problem. You're doing arithmetic on unsigned 
values, where overflow doesn't happen.



Solution 3:
Dmd could use a special size_t type internally, defined as an integer of 
range equal to the address space. Internally, the compiler would view it 
as a long of range 0..cast(long)uint.max.
Thus, although it would implicitly convert to uint, it would not have 
uint semantics (size_t*size_t would no longer convert to uint).
But it wouldn't be an int, either. ( int a; if (a>b.length).. would be a 
signed/unsigned mismatch).


Incidentally a size_t type would allow us to catch bugs like:

uint n = a.length;
-- which compiles happily on 32 bits, but won't compile on a 64 bit 
system. I think it should be rejected on all systems.




Re: Which import lib holds lrintf()?

2011-04-28 Thread Don

Andrej Mitrovic wrote:

I have a hunch that this function is only available on Linux. If
that's so it should maybe be put in a version(linux) statement.

But I just found lrint is in std.math as well so I can actually use that.


There should be no reason to use anything from c.math.


Re: Next Release

2011-04-20 Thread Don

Jonathan M Davis wrote:

Greetings All

It has been 2 months since we had release 2.052. Just wondering when is the
2.053 release planned?




There isn't really a release schedule. A release kind of just happens when 
Walter decides that it's time or when someone else on the dev team (like Don) 
pushes for one.


I'm about to push for one. Normally, it's every two months.



Given the major changes that Don is currently making to CTFE, I wouldn't 
expect a release for a while. When he's likely to be done and those changes 
stable enough to be released, I don't know, but until they are, I wouldn't 
expect a release.


I'm only fixing regressions now. I've just made a pull request for the 
last ones which are open.


 I would guess that it'll be within the next month, but I
don't know. I don't know how far along Don is. I do know that there are 
definite issues with the current state of dmd though, thanks to the fact that 
he's in the midst of making his changes.


I have another round of CTFE improvements planned, but I won't work on 
them until the next release. They won't be as disruptive to stability as 
these ones were. Basically, most of the infrastructure is in place now.


Re: Assertion failure: '!vthis->csym' on line 703 in file 'glue.c'

2011-03-28 Thread Don

nrgyzer wrote:

Hey guys,

I got "Assertion failure: '!vthis->csym' on line 703 in file 'glue.c'"
after I add "LinkList!(uint) myList;" to my source file. I figured out
that the same bug was already reported on http://lists.puremagic.com/
pipermail/digitalmars-d-bugs/2010-October/019237.html
Ticket 4129 describes a bug in glue.c but for DMD 1.x and I'm using DMD
2.052. 


It's still the same bug. The source code for the backend is the same for 
 all versions of DMD. The reported line number might change slightly 
(eg, might be line 691 on an old DMD version).


Re: Want to help DMD bugfixing? Write a simple utility.

2011-03-25 Thread Don

spir wrote:

On 03/25/2011 12:08 PM, Regan Heath wrote:
On Wed, 23 Mar 2011 21:16:02 -, Jonathan M Davis 
 wrote:
There are tasks for which you need to be able to lex and parse D 
code. To

100% correctly remove unit tests would be one such task.


Is that last bit true? You definitely need to be able to lex it, but 
instead of
actually parsing it you just count { and } and remove 'unittest' plus 
{ plus }

plus everything in between right?


At first sight, you're both wrong: you'd need to count { } levels. Also, 
I think true lexing is not really needed: you'd only need to put apart 
strings and comments that could hold non-code { & }.

(But these are only very superficial notes.)

Denis


Yes, exactly: you just need to lex strings (including q{}), comments 
(which you remove), unittest, and count levels of {.
You need to worry about backslashes in comments, but that's about it.

I even did this in a CTFE function once, I know it isn't complicated.
Should be possible in < 50 lines of code.
I just didn't want to have to do it myself.

In fact, it would be adequate to replace:
unittest
{
   blah...
}
with:
unittest{}

Then you don't need to worry about special cases like:

version(XXX)
unittest
{
...
}


Re: Want to help DMD bugfixing? Write a simple utility.

2011-03-20 Thread Don

Jonathan M Davis wrote:

Jonathan M Davis wrote:

On Saturday 19 March 2011 18:04:57 Don wrote:

Jonathan M Davis wrote:

On Saturday 19 March 2011 17:11:56 Don wrote:

Here's the task:
Given a .d source file, strip out all of the unittest {} blocks,
including everything inside them.
Strip out all comments as well.
Print out the resulting file.

Motivation: Bug reports frequently come with very large test cases.
Even ones which look small often import from Phobos.
Reducing the test case is the first step in fixing the bug, and it's
frequently ~30% of the total time required. Stripping out the unit
tests is the most time-consuming and error-prone part of reducing the
test case.

This should be a good task if you're relatively new to D but would
like to do something really useful.

Unfortunately, to do that 100% correctly, you need to actually have a
working D lexer (and possibly parser). You might be able to get
something close enough to work in most cases, but it doesn't take all
that much to throw off a basic implementation of this sort of thing if
you don't lex/parse it with something which properly understands D.

- Jonathan M Davis

I didn't say it needs 100% accuracy. You can assume, for example, that
"unittest" always occurs at the start of a line. The only other things
you need to lex are {}, string literals, and comments.

BTW, the immediate motivation for this is std.datetime in Phobos. The
sheer number of unittests in there is an absolute catastrophe for
tracking down bugs. It makes a tool like this MANDATORY.

I tried to create a similar tool before and gave up because I couldn't
make it 100% accurate and was running into problems with it. If someone
wants to take a shot at it though, that's fine.

As for the unit tests in std.datetime making it hard to track down bugs,
that only makes sense to me if you're trying to look at the whole thing
at once and track down a compiler bug which happens _somewhere_ in the
code, but you don't know where. Other than a problem like that, I don't
really see how the unit tests get in the way of tracking down bugs. Is
it that you need to compile in a version of std.datetime which doesn't
have any unit tests compiled in but you still need to compile with
-unittest for other stuff?

No. All you know is that there's a bug being triggered somewhere in
Phobos (with -unittest). It's probably not in std.datetime.
But Phobos is a horrible ball of mud where everything imports everything
else, and std.datetime is near the centre of that ball. What you have to
do is reduce the amount of code, and especially the number of modules,
as rapidly as possible; this means getting rid of imports.

To do this, you need to remove large chunks of code from the files. This
is pretty simple; comment out half of the file, if it still works, then
delete it. Normally this works well because typically only about a dozen
lines are actually being used. After doing this about three or four
times it's small enough that you can usually get rid of most of the
imports. Unittests foul this up because they use functions/classes from
inside the file.

In the case of std.datetime it's even worse because the signal-to-noise
ratio is so incredibly poor; it's really difficult to find the few lines
of code that are actually being used by other Phobos modules.

My experience (obviously only over the last month or so) has been that
if the reduction of a bug is non-obvious, more than 10% of the total
time taken to fix that bug is the time taken to cut down std.datetime.


Hmmm. I really don't know what could be done to fix that (other than making it 
easier to rip out the unittest blocks). And enough of std.datetime depends on 
other parts of std.datetime that trimming it down isn't (and can't be) exactly 
easy. In general, SysTime is the most likely type to be used, and it depends 
on Date, TimeOfDay, and DateTime, and all 4 of those depend on most of the 
free functions in the module. It's not exactly designed in a manner which 
allows you to cut out large chunks and still have it compile. And I don't 
think that it _could_ be designed that way and still have the functionality 
that it has.


The problem is purely the large fraction of the module which is devoted 
to unit tests. That's all.




I guess that this sort of problem is one that would pop up mainly when dealing 
with compiler bugs. I have a hard time seeing it popping up with your typical 
bug in Phobos itself. So, I guess that this is the sort of thing that you'd 
run into and I likely wouldn't.


Yes.

I really don't know how the situation could be improved though other than 
making it easier to cut out the unit tests.


- Jonathan M Davis


Hence the motivation for this utility. The problem exists in all 
modules, but in std.datetime it's such an obvious time-waster that I 
can't keep ignoring it.


Re: Want to help DMD bugfixing? Write a simple utility.

2011-03-19 Thread Don

Jonathan M Davis wrote:

On Saturday 19 March 2011 18:04:57 Don wrote:

Jonathan M Davis wrote:

On Saturday 19 March 2011 17:11:56 Don wrote:

Here's the task:
Given a .d source file, strip out all of the unittest {} blocks,
including everything inside them.
Strip out all comments as well.
Print out the resulting file.

Motivation: Bug reports frequently come with very large test cases.
Even ones which look small often import from Phobos.
Reducing the test case is the first step in fixing the bug, and it's
frequently ~30% of the total time required. Stripping out the unit tests
is the most time-consuming and error-prone part of reducing the test
case.

This should be a good task if you're relatively new to D but would like
to do something really useful.

Unfortunately, to do that 100% correctly, you need to actually have a
working D lexer (and possibly parser). You might be able to get
something close enough to work in most cases, but it doesn't take all
that much to throw off a basic implementation of this sort of thing if
you don't lex/parse it with something which properly understands D.

- Jonathan M Davis

I didn't say it needs 100% accuracy. You can assume, for example, that
"unittest" always occurs at the start of a line. The only other things
you need to lex are {}, string literals, and comments.

BTW, the immediate motivation for this is std.datetime in Phobos. The
sheer number of unittests in there is an absolute catastrophe for
tracking down bugs. It makes a tool like this MANDATORY.


I tried to create a similar tool before and gave up because I couldn't make it 
100% accurate and was running into problems with it. If someone wants to take a 
shot at it though, that's fine.


As for the unit tests in std.datetime making it hard to track down bugs, that 
only makes sense to me if you're trying to look at the whole thing at once and 
track down a compiler bug which happens _somewhere_ in the code, but you don't 
know where. Other than a problem like that, I don't really see how the unit 
tests get in the way of tracking down bugs. Is it that you need to compile in a 
version of std.datetime which doesn't have any unit tests compiled in but you 
still need to compile with -unittest for other stuff?


No. All you know is that there's a bug being triggered somewhere in 
Phobos (with -unittest). It's probably not in std.datetime.
But Phobos is a horrible ball of mud where everything imports everything 
else, and std.datetime is near the centre of that ball. What you have to 
do is reduce the amount of code, and especially the number of modules, 
as rapidly as possible; this means getting rid of imports.


To do this, you need to remove large chunks of code from the files. This 
is pretty simple; comment out half of the file, if it still works, then 
delete it. Normally this works well because typically only about a dozen 
lines are actually being used. After doing this about three or four 
times it's small enough that you can usually get rid of most of the imports.
Unittests foul this up because they use functions/classes from inside 
the file.


In the case of std.datetime it's even worse because the signal-to-noise 
ratio is so incredibly poor; it's really difficult to find the few lines 
of code that are actually being used by other Phobos modules.


My experience (obviously only over the last month or so) has been that 
if the reduction of a bug is non-obvious, more than 10% of the total 
time taken to fix that bug is the time taken to cut down std.datetime.


Re: Want to help DMD bugfixing? Write a simple utility.

2011-03-19 Thread Don

Jonathan M Davis wrote:

On Saturday 19 March 2011 17:11:56 Don wrote:

Here's the task:
Given a .d source file, strip out all of the unittest {} blocks,
including everything inside them.
Strip out all comments as well.
Print out the resulting file.

Motivation: Bug reports frequently come with very large test cases.
Even ones which look small often import from Phobos.
Reducing the test case is the first step in fixing the bug, and it's
frequently ~30% of the total time required. Stripping out the unit tests
is the most time-consuming and error-prone part of reducing the test case.

This should be a good task if you're relatively new to D but would like
to do something really useful.


Unfortunately, to do that 100% correctly, you need to actually have a working D 
lexer (and possibly parser). You might be able to get something close enough to 
work in most cases, but it doesn't take all that much to throw off a basic 
implementation of this sort of thing if you don't lex/parse it with something 
which properly understands D.


- Jonathan M Davis


I didn't say it needs 100% accuracy. You can assume, for example, that 
"unittest" always occurs at the start of a line. The only other things 
you need to lex are {}, string literals, and comments.


BTW, the immediate motivation for this is std.datetime in Phobos. The 
sheer number of unittests in there is an absolute catastrophe for 
tracking down bugs. It makes a tool like this MANDATORY.




Want to help DMD bugfixing? Write a simple utility.

2011-03-19 Thread Don

Here's the task:
Given a .d source file, strip out all of the unittest {} blocks,
including everything inside them.
Strip out all comments as well.
Print out the resulting file.

Motivation: Bug reports frequently come with very large test cases.
Even ones which look small often import from Phobos.
Reducing the test case is the first step in fixing the bug, and it's 
frequently ~30% of the total time required. Stripping out the unit tests 
is the most time-consuming and error-prone part of reducing the test case.


This should be a good task if you're relatively new to D but would like 
to do something really useful.

-Don


Re: BigInt problem

2011-02-18 Thread Don

tsukikage wrote:

Please see source in attachment.
The output is

 M2 M3 M5 M7 M13 M17 M19 M31 M61 M89 M107 M127 M521 M607 M1279 M2203 
M2281 M3217 M4253 M4423

*** M9689***
 M9941 M11213 M19937
*** M21701***
 M23209

It missed 2 Mersenne Primes 9689 & 21701.
Is it my program bug or bigint broken?
It seems subtle.

Thank you!



That's quite a terrible bug in Bigint. It's slipped through testing 
because of a fault in the win32 Phobos makefile (someone disabled 
asserts!). If Phobos is compiled in debug mode, your code causes an 
assert failure inside the bigint code.


It's a straightforward problem with the fast recursive division 
implementation, failing to properly normalize intermediate quotients.
Unfortunately the fix won't make the next release, which happens in a 
few hours.


Re: github syntax hilighting

2011-01-28 Thread Don

Jacob Carlborg wrote:

On 2011-01-26 20:30, Jonathan M Davis wrote:

On Wednesday, January 26, 2011 11:21:55 Brad Roberts wrote:

On 1/26/2011 7:13 AM, Steven Schveighoffer wrote:

Anyone have any clue why this file is properly syntax-aware:

https://github.com/D-Programming-Language/druntime/blob/master/src/rt/lif 


etime.d

but this file isn't

https://github.com/D-Programming-Language/druntime/blob/master/src/core/t 


hread.d

I'm still not familiar at all with git or github...

-Steve


I'm going to guess it's filesize.  Totally a guess, but consider that
adding highlighting costs additional markup.  The file that's not
highlighted is over 100k, the file that is is only(!) 62k.


LOL. It won't even _show_ std.datetime. You may be on to something there.

- Jonathan M Davis


If github won't even show the file, you clearly have too much in one file. 
More than 34k lines of code (looking at the latest svn), are you kidding 
me. That's insane, std.datetime should clearly be a package. I don't 
know why Andrei and Walter keep insisting on having so much code in one 
file.


It takes about 10 seconds to get syntax highlighting at the bottom of 
the file in TextMate.


You can compile the whole of Phobos in that time... 


Re: array of elements of various sybtypes

2011-01-28 Thread Don

spir wrote:

Hello,

This fails:

class T0 {}
class T1 : T0 {}
class T2 : T0 {}

unittest {
auto t1 = new T1();
auto t2 = new T2();
T0[] ts = [t1, t2];
}

Error: cannot implicitly convert expression (t1) of type __trials__.T0 
to __trials__.T2
Error: cannot implicitly convert expression ([(__error),t2]) of type 
T2[] to T0[]


I guess it should be accepted due to the explicit typing 'T0[]'. What do 
you think? D first determines the type of the last element (always the 
last one), here T2. Then, /ignoring/ the array's defined type, tries to 
cast other elements to the same type T2. 


No, it's not supposed to do that. What is supposed to happen is, to use 
?: to determine the common type of the array. This will give a T0[] array.

Then, it attempts to implicitly cast to the defined type.

Your code should work. You've just hit an implementation bug.
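Until the bug is fixed, a commonly suggested workaround (a sketch, not the compiler's intended behaviour) is to cast one element to the base class so the literal's common type is unambiguous:

```d
class T0 {}
class T1 : T0 {}
class T2 : T0 {}

void main()
{
    auto t1 = new T1();
    auto t2 = new T2();
    // Workaround while the bug stands: cast one element to the base
    // class so the array literal's common type becomes T0.
    T0[] ts = [cast(T0) t1, t2];
    assert(ts.length == 2);
}
```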


Re: Assertion failure: '!cases' on line 2620 in file 'statement.c'

2011-01-14 Thread Don

%u wrote:

== Quote from Don (nos...@nospam.com)'s article

Yay for first time compiling dmd :)

Sorry you had to do that!


Had to learn that once anyway :)
Maybe I'll even be able to take a stab at fixing bugs someday..



Added your bug as:

http://d.puremagic.com/issues/show_bug.cgi?id=5453


Re: Assertion failure: '!cases' on line 2620 in file 'statement.c'

2011-01-13 Thread Don

%u wrote:

== Quote from Don (nos...@nospam.com)'s article

It's in a switch statement somewhere.
It sounds as though this is a bug which involves multiple files, so
it'll be difficult to reduce it.
If you're able to compile DMD, change this line in statement.c line 2620:
Statement *SwitchStatement::semantic(Scope *sc)
{
 //printf("SwitchStatement::semantic(%p)\n", this);
 tf = sc->tf;
+   if (cases) error("xxx");
 assert(!cases); // ensure semantic() is only run once
and then you'll get the line number where the error is.


Yay for first time compiling dmd :)


Sorry you had to do that!




If you can provide the function which contains the switch statement,
there's a chance I could reproduce it.


I've got something better.. a minimal version :)
Which even crashes through bud.


Awesome!
I'll track this sucker down so it doesn't hit anyone else.




module main;

enum E { A = 0 };

struct S{
  int i;

  static S func( S s, E e ){
switch( e ) //<-- here
{
  default:return s;
}
  }

  static const S s_def = { 1 };
  //static const S A = func(s_def, E.A ); // forward reference error + crash
  //static const S[1] ARR = [ E.A : func(s_def, E.A )]; // error : xxx + crash
}

void main(){}

To test all this I switched from 1.065 to 1.066, just to make sure it hadn't 
already been fixed.
And now my project won't compile any more, even though bud+1.065 will happily do 
so..
bud+1.066 gives me the following (no crash though):

Max # of fixups = 89
Max # of fixups = 4
Max # of fixups = 112
Max # of fixups = 17
Max # of fixups = 2871
Max # of fixups = 233
Max # of fixups = 138
Max # of fixups = 7
Max # of fixups = 353
Max # of fixups = 446
Max # of fixups = 5
Max # of fixups = 4117
Max # of fixups = 37
Max # of fixups = 288
Max # of fixups = 330
Max # of fixups = 338
Max # of fixups = 144
Max # of fixups = 660
Max # of fixups = 51
Max # of fixups = 4
Max # of fixups = 220
Max # of fixups = 2765
Max # of fixups = 12
Max # of fixups = 5
Max # of fixups = 5564
Max # of fixups = 2714
Internal error: backend\cgobj.c 2424

What does that mean?


The DMD makefile compiles in debug mode by default. It prints some 
useless 'fixups' junk. You need to compile with make -fwin32.mak release 
or make -flinux.mak release to build a release compiler. But it's not 
important; you can go back to using an official compiler again. 
The internal error means it's now crashing in the backend, rather than 
the frontend. Not an improvement!


Re: Assertion failure: '!cases' on line 2620 in file 'statement.c'

2011-01-13 Thread Don

%u wrote:

== Quote from Don (nos...@nospam.com)'s article

%u wrote:

Should I post it as a bug, even though I have no code to accompany it?
I have no clue as to where to start my directed search for a minimal case.

Can you post the entire source code?
It's important that it be reproducible. It doesn't need to be minimal -
someone else can reduce it.


I crashed it again :)

Sorry, can't share the code..


Bummer.


I don't really have any time atm, look at it again tomorrow.


It's in a switch statement somewhere.
It sounds as though this is a bug which involves multiple files, so 
it'll be difficult to reduce it.


If you're able to compile DMD, change this line in statement.c line 2620:

Statement *SwitchStatement::semantic(Scope *sc)
{
//printf("SwitchStatement::semantic(%p)\n", this);
tf = sc->tf;
+   if (cases) error("xxx");
assert(!cases); // ensure semantic() is only run once


and then you'll get the line number where the error is.
If you can provide the function which contains the switch statement, 
there's a chance I could reproduce it.


Re: Assertion failure: '!cases' on line 2620 in file 'statement.c'

2011-01-12 Thread Don

%u wrote:

Should I post it as a bug, even though I have no code to accompany it?
I have no clue as to where to start my directed search for a minimal case.


Can you post the entire source code?
It's important that it be reproducible. It doesn't need to be minimal - 
someone else can reduce it.


Re: DMD2 out parameters

2010-12-23 Thread Don

Pete wrote:

Ok, i've done some more investigating and it appears that in DMD2 a float NaN is
0x7FE0 (in dword format) but when it initialises a float 'out' parameter it
initialises it with 0x7FA0H. This causes an FPU trap which is where the time
is going. This looks like a bug to me. Can anyone confirm?

Thanks.


Yes, it sounds like a NaN-related performance issue. Note, though, that 
the slowdown you experience is processor-model specific. It's a penalty 
of ~250 cycles on a Pentium 4 with x87 instructions, but zero cycles on 
many other processors. (in fact, it's also zero cycles with SSE on 
Pentium 4!).
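A minimal sketch of the usual mitigation, assuming the penalty comes from operating on the NaN default: since 'out' floating-point parameters are default-initialized to NaN on entry, assigning a concrete value immediately sidesteps any x87 NaN slowdown (the exact cost is processor-specific, as noted above).

```d
void compute(out float result)
{
    // 'out' parameters arrive default-initialized to float.nan;
    // overwrite the NaN right away so no arithmetic touches it.
    result = 0.0f;
    result = 42.0f;  // stand-in for the actual computation
}

void main()
{
    float r;
    compute(r);
    assert(r == 42.0f);
}
```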


Re: double -> double[]... | feature or bug?

2010-12-23 Thread Don

bearophile wrote:

spir:


While I understand some may consider this a nice feature, for me this is an 
enormous bug. A great way toward code obfuscation. I like D among other reasons 
because it's rather clear compared to other languages of the family.


The main problem here is that I have never felt the need of that feature, so 
for me it's useless. Has Walter added it after a good number of people have 
asked for this feature? Has any one here needed it?

Bye,
bearophile


I'm almost certain that the behaviour is not intentional.


Re: Why does this floating point comparison fail?

2010-12-16 Thread Don

Jonathan M Davis wrote:
Maybe I'm just totally missing something, but this seems really wrong to me. 
This program:


import std.stdio;

void main()
{
writeln(2.1);
writeln(2.1 == 2.1);
writeln(3 * .7);
writeln(2.1 == 3 * .7);

auto a = 2.1;
auto b = 3 * .7;
writeln(a);
writeln(b);
writeln(a == b);
}



prints out

2.1
true
2.1
false
2.1
2.1
true


How on earth is 2.1 not equal to 3 * .7?


0.7 is not exactly representable in binary floating point, nor is 2.1.
(they are the binary equivalent of a recurring decimal like 0.333...)


 Adding extra parens doesn't help, so
it's not an operator precedence issue or anything like that. For some reason, 
the result of 3 * .7 is not considered to be equal to the literal 2.1. What's 
the deal? As I understand it, floating point operations at compile time do not 
necessarily match those done at runtime, but 2.1 should be plenty exact for both 
compile time and runtime. It's not like there are a lot of digits in the number, 
and it prints out 2.1 whether you're dealing with a variable or a literal. 
What's the deal here? Is this a bug? Or am I just totally misunderstanding 
something?


Constant folding is done at real precision. So 3 * .7 is an 80-bit number.
But, 'a' has type double. So it's been truncated to 64 bit precision.

When something confusing like this happens, I recommend using the "%a" 
format, since it never lies.

writefln("%a %a", a, 3*.7);
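To make the suggestion concrete, here is a small sketch of the comparison using "%a" (hexadecimal float format, which shows the exact bits rather than a rounded decimal). The printed values and the comparison result depend on the compiler's constant-folding precision, so no specific output is assumed:

```d
import std.stdio;

void main()
{
    auto a = 2.1;            // a double: truncated to 64-bit precision
    writefln("%a", a);       // hex-float form never rounds for display
    writefln("%a", 3 * .7);  // literal folded by the compiler, possibly at real precision
    writeln(a == 3 * .7);    // result depends on the folding precision
}
```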


Re: Ddoc doesn't support documentation for mixined code?

2010-12-10 Thread Don

Alex Moroz wrote:

Hi,

As of present Ddoc doesn't seem to process documenatation for code that 
is inserted with mixins.


Bug 2440?


Re: casting int to uint and vice versa - performance

2010-12-03 Thread Don

spir wrote:

On Fri, 03 Dec 2010 17:49:47 -0500
"Steven Schveighoffer"  wrote:


just that i + i generates different instructions than ui + ui where int
i; uint ui;

Not really important, because I'm currently interested in cast only.

Thank you.  
That is odd, I would think that i+i generates the same instructions as  
ui+ui.  As far as I know, adding unsigned and signed is the same  
instruction, but unsigned add just ignores the carry bit.


That used to be true (at least on some machines, a long time ago).

Denis
-- -- -- -- -- -- --
vit esse estrany ☣

spir.wikidot.com



It's only multiply, divide, and right shift where the instructions are 
different.
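A small example of that difference (a sketch using right shift and division, where signed and unsigned code paths genuinely diverge):

```d
void main()
{
    int  i  = -8;
    uint ui = cast(uint) -8;         // 0xFFFFFFF8
    assert(i >> 1 == -4);            // arithmetic shift: sign bit copied in
    assert(ui >> 1 == 0x7FFF_FFFC);  // logical shift: zero filled in
    assert(i / 2 == -4);             // signed division
    assert(ui / 2 == 0x7FFF_FFFC);   // unsigned division
}
```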


Re: Crash in struct opAssign

2010-12-02 Thread Don

Adam Burton wrote:

olivier wrote:


Hi,

This program, compiled with dmd 1.065 under Linux, crashes:

void main() {
Num a = 1;
}

struct Num {
int[] d;

Num opAssign( int v ) {
d.length = 1;
d[0] = v;
return *this;
}
}

It looks like d is not initialized before opAssign() is called.
It doesn't crash if I replace "Num a = 1" with:
Num a;
a = 1;

Is this normal behaviour?
I don't believe the failing code uses opAssign. I believe that is 
initialization. In D2 "Num a = 1;" would be initialization or call a 
constructor if appropriate and "Num a; a = 1;" would use opAssign. Based on 
http://www.digitalmars.com/d/1.0/struct.html I am thinking it does the same 
(minus the constructors). So I think this might be a bug with struct 
initialization in D1, I don't think you should be able to just assign 1 to 
it.

My guess is that it's doing:
a.d[] = 1;
and since d.length==0, that's a no-op.
It shouldn't compile.
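A hypothetical D1-style workaround (not the fix for the bug itself) is to construct the struct explicitly through a static opCall, so the array member is allocated before any value is stored:

```d
struct Num {
    int[] d;

    // Explicit construction: d is allocated before it is written to.
    static Num opCall(int v) {
        Num n;
        n.d = new int[1];
        n.d[0] = v;
        return n;
    }
}

void main() {
    auto a = Num(1);   // avoids the buggy 'Num a = 1;' initialization path
    assert(a.d[0] == 1);
}
```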


Re: bigint

2010-11-29 Thread Don

Kagamin wrote:

Don Wrote:


Why are they templated to begin with? Just for the heck of it?

bool opEquals(ref const BigInt y) const
bool opEquals(long y) const

No, because then it fails for ulong.
It's those bloody C implicit conversions.


hmm... works for me:
---
struct A
{
bool opEquals(ref const A y) const
{
return false;
}
bool opEquals(long y) const
{
return true;
}
}

int main()
{
A a;
ulong b=42;
assert(a==b);
return 0;
}
---

Yes, but the code is incorrect if b is larger than long.max.
The problem is that values in the range long.max+1..ulong.max
get turned into values in the range -1 .. -long.max-1

How can you distinguish them?
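The wraparound can be seen directly in a small sketch: any ulong above long.max reinterprets as a negative long, so an opEquals(long) overload would receive the wrong value.

```d
void main()
{
    ulong big = long.max + 1UL;     // 2^63, just past long.max
    long  asLong = cast(long) big;  // wraps into the negative range
    assert(asLong == long.min);
    // An opEquals(long) overload sees this negative value and
    // cannot distinguish 'big' from long.min.
}
```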


Re: bigint

2010-11-28 Thread Don

Kagamin wrote:

Matthias Walter Wrote:


bool opEquals(Tdummy=void)(ref const BigInt y) const
bool opEquals(T: int)(T y) const

The only working sigature for array-of-structs-comparison to work is

bool opEquals(ref const BigInt y) const

But this removes the ability to compare against ints.


Why are they templated to begin with? Just for the heck of it?

bool opEquals(ref const BigInt y) const
bool opEquals(long y) const


No, because then it fails for ulong.
It's those bloody C implicit conversions.


Re: Internal error: e2ir.c 4629

2010-11-27 Thread Don

Trass3r wrote:

If this isn't in bugzilla, please file a bug report.
http://d.puremagic.com/issues/query.cgi


It probably has the same root cause as bug 4066.


Re: public imports and template functions

2010-11-24 Thread Don

Jonathan M Davis wrote:
Am I correct in my understanding that, if you wish a template function which is 
imported from another module to compile correctly without requiring other 
imports in the module that you're using the function in, then the module with the 
template function needs to publicly import all of the functions and types that 
it needs?


That's only true of mixins. (And for that reason, you should generally 
use fully qualified names in mixins).

