Re: Compilable Recursive Data Structure ( was: Recursive data structure using template won't compile)

2012-11-13 Thread Don Clugston

On 13/11/12 06:51, Rob T wrote:

On Monday, 12 November 2012 at 14:28:53 UTC, Andrej Mitrovic wrote:

On 11/12/12, Andrej Mitrovic andrej.mitrov...@gmail.com wrote:

On 11/12/12, Don Clugston d...@nospam.com wrote:

Yeah. Though note that 1000 bug reports are from bearophile.


Actually only around 300 remain open:
http://d.puremagic.com/issues/buglist.cgi?query_format=advanced&emailreporter2=1&emailtype2=substring&order=Importance&bug_status=UNCONFIRMED&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&bug_status=VERIFIED&email2=bearophile




Oh wait, that's only for DMD. It's 559 in total:
http://d.puremagic.com/issues/buglist.cgi?query_format=advanced&emailreporter2=1&emailtype2=substring&order=Importance&bug_status=UNCONFIRMED&bug_status=NEW&bug_status=ASSIGNED&bug_status=REOPENED&bug_status=VERIFIED&email2=bearophile



Issue 8990 and 6969, both related to this thread and unresolved, are not
in the list, so I suspect there's a lot more missing too.

PS: I could not figure out how to make a useful report using that bug
report tool either.

--rt


I recommend deskzilla lite. D is on its list of supported open-source 
projects. It maintains a local copy of the entire bugzilla database, so 
you're not restricted to the slow and horrible html interface.







Re: Compilable Recursive Data Structure ( was: Recursive data structure using template won't compile)

2012-11-12 Thread Don Clugston

On 10/11/12 08:53, Rob T wrote:

On Saturday, 10 November 2012 at 06:09:41 UTC, Nick Sabalausky wrote:

I've gone ahead and filed a minimized test case, and also included your
workaround:
http://d.puremagic.com/issues/show_bug.cgi?id=8990

I didn't make that one struct nested since that's not needed to
reproduce the error.


Thanks for filing it.

Looking at the bug reports, there's 2000+ unresolved? Yikes.

--rt


Yeah. Though note that 1000 bug reports are from bearophile.


Re: vestigial delete in language spec

2012-11-02 Thread Don Clugston

On 01/11/12 22:21, Dan wrote:

TDPL states
--
However, unlike in C++, clear does not dispose of the object’s
own memory and there is no delete operator. (D used to have a
delete operator, but it was deprecated.) You still can free
memory manually if you really, really know what you’re doing by
calling the function GC.free() found in the module core.memory.
--
The language spec has this example from the section on Struct
Postblits:
--
struct S {
int[] a;// array is privately owned by this instance
this(this) {
  a = a.dup;
}
~this() {
  delete a;
}
}
--

Per TDPL, then, is the delete call unnecessary? Is it harmful or
harmless?

Also, are there any guidelines for using and interpreting the output of
valgrind on a D executable?

Thanks
Dan


You'll probably have trouble getting much out of valgrind, because it 
doesn't support 80-bit floating-point instructions, unfortunately.
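For reference, a minimal sketch of what the TDPL passage suggests in place of the deprecated `delete a;` (assuming core.memory is available; whether eager freeing is appropriate depends entirely on ownership):

```d
import core.memory : GC;

struct S {
    int[] a;                  // array privately owned by this instance
    this(this) { a = a.dup; }
    ~this() {
        // Instead of the deprecated `delete a;`:
        GC.free(a.ptr);       // only safe if nothing else references the array
        a = null;
    }
}
```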






Re: __traits(compiles,...) = ? is(typeof(...))

2012-10-30 Thread Don Clugston

On 29/10/12 12:03, Jonathan M Davis wrote:

On Monday, October 29, 2012 11:42:59 Zhenya wrote:

Hi!

Tell me please: in this code, are the first and second static ifs
equivalent? With arg = 1, __traits(compiles, check(arg);) evaluates to
true, but is(typeof(check(arg))) evaluates to false.


In principle, is(typeof(code)) checks whether the code in there is
syntatically and semantically valid but does _not_ check whether the code
actually compiles. For instance, it checks for the existence of the symbols
that you use in it, but it doesn't check whether you can actually use the
symbol (e.g. it's private in another module).


Not even that. It checks if the expression has a type. That's all.
The expression has to be syntactically valid or it won't compile at all. 
But inside is(typeof()) it is allowed to be semantically invalid.


I started using is(typeof()) as a check for compilability, and other 
people used the idea as well. But it only works if you can convert 
"compilable" into "has a type".
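A small sketch of the distinction (hypothetical example; the two checks agree here, but they diverge on code that has a type yet is rejected for other reasons, such as private symbols in another module):

```d
int twice(int n) { return 2 * n; }

void main() {
    int arg = 1;
    // A valid call: it has a type, and it compiles.
    static assert(is(typeof(twice(arg))));
    static assert(__traits(compiles, twice(arg)));
    // A semantically invalid call: no type, doesn't compile.
    static assert(!is(typeof(twice("x"))));
    static assert(!__traits(compiles, twice("x")));
}
```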


Re: Copying with immutable arrays

2012-10-30 Thread Don Clugston

On 29/10/12 07:19, Ali Çehreli wrote:

On 10/28/2012 02:37 AM, Tobias Pankrath wrote:
  the struct
  SwA from above does neither correspond to SA nor to SB, it's imo more
  like SC:
 
  struct SC {
  immutable(int)* i;
  }

Just to confirm, the above indeed works:

struct SC {
 immutable(int)* i;
}

void main()
{
 immutable(SC)[] arr1;
 SC[] arr2 = arr1.dup;// compiles
}

  Now a copy would do no harm: everything that was immutable in the source
  and is still accessible from the copy is immutable, too. Any reason
  (apart from implementational issues) that this is not how it works?

Getting back to the original code, the issue boils down to whether we
can copy immutable(string[]) to string[]:

import std.stdio;

struct SwA {
 string[] strings;
}

void main()
{
 immutable(SwA)[] arr1;
 writeln(typeid(arr1[0].strings));
}

The program prints the following:

immutable(immutable(immutable(char)[])[])

Translating the innermost definition as string:

immutable(immutable(string)[])

Let's remove the struct and look at a variable the same type as the member:

 immutable(string[]) imm = [ "a", "b" ];
 writeln(typeid(imm));

The typeid is the same:

immutable(immutable(immutable(char)[])[])

So we can concentrate on 'imm' for this exercise. This is the same
compilation error:

 immutable(string[]) imm = [ "aaa", "bbb" ];
 string[] mut = imm;   // -- compilation ERROR

If that compiled, then both 'imm' and 'mut' would be providing access to
the same set of strings. But the problem is, 'mut' could replace those
strings (note that it could not modify the characters of those strings,
but it could replace the whole string):

 mut[0] = "hello";

That would affect 'imm' as well. ('imm' is the equivalent of SwA.strings
from your original code.)

Ali


Awesome answer, it's these kinds of responses that make the D community 
so great.






Re: long compile time question

2012-10-24 Thread Don Clugston

On 24/10/12 17:39, thedeemon wrote:

On Wednesday, 24 October 2012 at 03:50:47 UTC, Dan wrote:

The following takes nearly three minutes to compile.
The culprit is the line bar ~= B();
What is wrong with this?

Thanks,
Dan

struct B {
  const size_t SIZE = 1024*64;
  int[SIZE] x;
}

void main() {
  B[] barr;
  barr ~= B();
}
-


The code DMD generates for initializing the struct does not use loops,
so it's
xor ecx, ecx
mov [eax], ecx
mov [eax+4], ecx
mov [eax+8], ecx
mov [eax+0Ch], ecx
mov [eax+10h], ecx
mov [eax+14h], ecx
mov [eax+18h], ecx
mov [eax+1Ch], ecx
mov [eax+20h], ecx
mov [eax+24h], ecx
mov [eax+28h], ecx
mov [eax+2Ch], ecx
mov [eax+30h], ecx
mov [eax+34h], ecx
mov [eax+38h], ecx
...

So your code creates a lot of work for the compiler.


That's incredibly horrible, please add to bugzilla.
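One hypothetical workaround (untested against that DMD version): grow the array through druntime instead of appending a struct literal, so the zero-initialization is done by the runtime rather than by thousands of generated store instructions:

```d
struct B {
    enum size_t SIZE = 1024 * 64;
    int[SIZE] x;
}

void main() {
    B[] barr;
    barr.length += 1;   // default-initializes the new element via druntime
}
```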




Re: Why are scope variables being deprecated?

2012-10-11 Thread Don Clugston

On 11/10/12 02:30, Piotr Szturmaj wrote:

Jonathan M Davis wrote:

On Thursday, October 11, 2012 01:24:40 Piotr Szturmaj wrote:

Could you give me an example of preventing closure allocation? I think I
knew one but I don't remember now...


Any time that a delegate parameter is marked as scope, the compiler
will skip
allocating a closure. Otherwise, it has to copy the stack from the
caller onto
the heap to create a closure so that the delegate will continue to
work once
the caller has completed (e.g. if the delegate were saved for a
callback and
then called way later in the program). Otherwise, it would refer to an
invalid
stack and really nasty things would happen when the delegate was
called later.

 

By marking the delegate as scope, you're telling the compiler that it
will not
escape the function that it's being passed to, so the compiler then
knows that
the stack that it refers to will be valid for the duration of that
delegate's
existence, so it knows that a closure is not required, so it doesn't
allocate
it, gaining you efficiency.


Thanks, that's clear now, but I found a bug:

__gshared void delegate() global;

void dgtest(scope void delegate() dg)
{
 global = dg; // compiles
}

void dguse()
{
 int i;
 dgtest({ writeln(i++); });
}

I guess it's a known one.


Looks like bug 5270?





Re: Best way of passing in a big struct to a function?

2012-10-10 Thread Don Clugston

On 10/10/12 09:12, thedeemon wrote:

On Wednesday, 10 October 2012 at 07:28:55 UTC, Jonathan M Davis wrote:

Making sure that the aa has been properly initialized before passing
it to a function (which would mean giving it at least one value) would
make the ref completely unnecessary.

- Jonathan M Davis


Ah, thanks a lot! This behavior of a fresh AA being null and then
silently converted to a non-null when being filled confused me.


Yes, it's confusing and annoying.
This is something in the language that we keep talking about fixing, but 
to date it hasn't happened.
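The confusing behavior, as a self-contained example (names are illustrative):

```d
void addTo(int[string] aa) {
    aa["key"] = 1;   // if aa was null, this allocates a fresh AA locally
}

void main() {
    int[string] empty;                  // null until the first insertion
    addTo(empty);
    assert("key" !in empty);            // the caller never sees the change

    int[string] seeded = ["seed": 0];   // already non-null
    addTo(seeded);
    assert("key" in seeded);            // now the insertion is visible
}
```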


Re: Is it possible to force CTFE?

2012-10-01 Thread Don Clugston

On 27/09/12 15:01, bearophile wrote:

Tommi:


2) Is it possible to specialize a function based on whether or not the
parameter that was passed in is a compile time constant?


I am interested in this since some years. I think it's useful, but I
don't know if it can be implemented. I don't remember people discussing
about this much.

Bye,
bearophile


It has been discussed very often, especially around the time that CTFE 
was first introduced. We never came up with a solution.
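To be clear about what is and isn't possible: CTFE itself can be forced by requiring a compile-time constant; the part with no known solution is dispatching on whether an *argument* is a constant. A sketch of the former:

```d
int square(int n) { return n * n; }

void main() {
    enum c = square(21);       // enum initializer: square() runs via CTFE
    static assert(c == 441);

    int n = 21;
    auto r = square(n);        // same function, ordinary runtime call
    assert(r == 441);
}
```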




Re: About std.ascii.toLower

2012-09-27 Thread Don Clugston

On 20/09/12 18:57, Jonathan M Davis wrote:

On Thursday, September 20, 2012 18:35:21 bearophile wrote:

monarch_dodra:

It's not, it only *operates* on ASCII, but non-ASCII is still a
legal arg:

Then maybe std.ascii.toLower needs a pre-condition that
constrains it to just ASCII inputs, so it's free to return a
char.


Goodness no.

1. Operating on a char is almost always the wrong thing to do. If you really
want to do that, then cast. It should _not_ be encouraged.

2. It would be disastrous if std.ascii's functions didn't work on unicode.
Right now, you can use them with ranges on strings which are unicode, which
can be very useful.

 I grant you that that's more obvious with something like

isDigit than toLower, but regardless, std.ascii is designed such that its
functions will all operate on unicode strings. It just doesn't alter unicode
characters and returns false for them with any of the query functions.


Are there any use cases of toLower() on non-ASCII strings?
Seriously? I think it's _always_ a bug.

At the very least that function should have a name like 
toLowerIgnoringNonAscii() to indicate that it is performing a really, 
really foul operation.


The fact that toLower("Ü") doesn't generate an error, but doesn't return 
"ü", is a wrong-code bug IMHO. It isn't any better than if it returned a 
random garbage character (e.g., it's OK in my opinion for ASCII toLower to 
consider only the lower 7 bits).
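For reference, what std.ascii.toLower actually does with non-ASCII input: it passes the character through unchanged rather than erroring or converting it.

```d
import std.ascii : toLower;

void main() {
    assert(toLower('A') == 'a');   // ASCII: converted
    assert(toLower('Ü') == 'Ü');   // non-ASCII: returned unchanged
}
```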


OTOH I can see some value in a cased ASCII vs Unicode comparison,
i.e., given an ASCII string and a Unicode string, do a case-insensitive 
comparison, e.g. look for

"HTML" inside "öähaøſ€đ@htmlſŋħŋ€ł¶"



Re: function is not function

2012-09-26 Thread Don Clugston

On 21/09/12 21:59, Ellery Newcomer wrote:

solution is to use std.traits, but can someone explain this to me?

import std.stdio;

void main() {
 auto a = {
     writeln("hi");
 };
 pragma(msg, typeof(a)); // void function()
 pragma(msg, is(typeof(a) == delegate)); // nope!
 pragma(msg, is(typeof(a) == function)); // nope!
}



The 'function' keyword is an ugly wart in the language.  In

void function()

'function' means 'function pointer'. But in

is (X == function)

'function' means 'function'.

Which is actually pretty much useless. You always want "function 
pointer". This is the only case where the "function" type still exists in 
the language.
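A sketch of the two meanings (as I understand the behavior described above):

```d
void f() {}

void main() {
    auto p = &f;   // p is a function *pointer*
    static assert(is(typeof(p) == void function()));   // 'function' = pointer type
    static assert(!is(typeof(p) == function));         // a pointer is not a function type
    static assert(is(typeof(f) == function));          // the bare symbol has function type
}
```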





Re: object.error: Privileged Instruction

2012-09-26 Thread Don Clugston

On 22/09/12 21:49, Jonathan M Davis wrote:

On Saturday, September 22, 2012 21:19:27 Maxim Fomin wrote:

A privileged instruction is an assembly instruction which can be
executed only in a certain execution context, typically the
os kernel. AFAIK assert(false) was claimed to be implemented by
dmd as a halt instruction, which is a privileged one.

However, compiled code shows that dmd generates an int 3 instruction
for the assert(false) statement, and 61_6F_65_75 (the byte
representation of "aoeu") for the assert(false, "aoeu") statement;
the latter is interpreted as a privileged i/o instruction.


It's a normal assertion without -release. With -release, it's a halt
instruction on Linux but IIRC it's something slightly different (albeit
similar) on Windows, though it might be halt there too.

- Jonathan M Davis



I implemented the runtime code that does it, at least on Windows. 
You get much better diagnostics on Windows.
IMHO it is a Linux misfeature, they conflate a couple of unrelated 
hardware exceptions together into one signal, making it hard to identify 
which it was.




Re: filter out compile error messages involving _error_

2012-09-10 Thread Don Clugston

On 10/09/12 02:31, Jonathan M Davis wrote:

On Monday, September 10, 2012 02:16:19 Timon Gehr wrote:

Don has expressed the desire to weed those out completely.


If he can do it in a way that leaves in all of the necessary information, then
great, but you need to be able to know what the instantiation chain was.

- Jonathan M Davis


Yes, that's the idea. It's pretty much working for CTFE now (you get a 
complete call stack, with no spurious error messages).




Re: bigint - python long

2012-09-06 Thread Don Clugston

On 05/09/12 21:23, Paul D. Anderson wrote:

On Wednesday, 5 September 2012 at 18:13:40 UTC, Ellery Newcomer wrote:

Hey.

Investigating the possibility of providing this conversion in pyd.

Python provides an api for accessing the underlying bytes.

std.bigint seemingly doesn't. Am I missing anything?


No, I don't believe so. AFAIK there is no public access to the
underlying array, but I think it is a good idea.

I suspect the reason for not disclosing the details is to disallow
anyone putting the data into an invalid state. But read-only access
would be safe.


No, it's just not disclosed because I didn't know the best way to do it.
I didn't want to put something in unless I was sure it was correct.
(And a key part of that, is what is required to implement BigFloat).



Re: How to have strongly typed numerical values?

2012-09-05 Thread Don Clugston

On 05/09/12 03:42, bearophile wrote:

Nicholas Londey:


for example degrees west and kilograms such that they cannot be
accidentally mixed in an expression.


Using the static typing to avoid similar bugs is the smart thing to do :-)


I'd be interested to know if that idea is ever used in real code. I 
mean, it's a classic trendy template toy, but does anyone actually use it?


I say this because I've done a lot of physics calculation involving 
multiple complicated units, but never seen a use for this sort of thing.
In my experience, problems involving units are easily detectable (two 
test cases will catch them all).
The most insidious bugs are things like when you have used constants at 
25'C instead of 20'C, or when you have a sign wrong.
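For readers wondering what the idea under discussion looks like, a minimal sketch (hypothetical names; not an endorsement):

```d
// Each unit gets a distinct type, so mixing units is a compile error.
struct Quantity(string unit) {
    double value;
    Quantity opBinary(string op : "+")(Quantity rhs) {
        return Quantity(value + rhs.value);   // same-unit addition only
    }
}
alias Kilograms = Quantity!"kg";
alias DegreesWest = Quantity!"degW";

void main() {
    auto total = Kilograms(2.0) + Kilograms(3.0);
    assert(total.value == 5.0);
    // Kilograms(2.0) + DegreesWest(1.0);  // would not compile: distinct types
}
```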


Re: CTFE question

2012-09-03 Thread Don Clugston

On 28/08/12 19:40, Philippe Sigaud wrote:

On Tue, Aug 28, 2012 at 2:07 PM, Chris Cain clc...@uncg.edu wrote:

On Tuesday, 28 August 2012 at 11:39:20 UTC, Danny Arends wrote:


Ahhh I understand...

As a follow up, is it then possible to 'track' filling a
large enum / immutable on compile time by outputting a msg
every for ?

I'm generating rotation matrices for yaw, pitch and roll
at compile time which can take a long time depending on
how fine grained I create them.



I'm pretty sure there isn't. However, if you're just trying to develop/test
your algorithm, you could write a program that runs it as a normal function
(and just use writeln) as you develop it. After it's done, you remove the
writelns, mark the function as pure and it should work exactly the same in
CTFE.


Good advice, except beware of using ++ and --: they don't work at
compile-time. I'm regularly caught unaware by this, particularly while
looping.


Really? That's scary. Is there a bug report for this?


Re: BitArray/BitFields - Review

2012-07-31 Thread Don Clugston

On 29/07/12 23:36, bearophile wrote:

Era Scarecrow:


 Another commonly needed operation is a very fast bit count. There 
are very refined algorithms to do this.



 Likely similar to the hamming weight table mentioned in TDPL.
Combined with the canUseBulk I think I could make it fairly fast.


There is lot of literature about implementing this operation
efficiently. For the first implementation a moderately fast (and short)
code is probably enough. Later faster versions of this operation will go
in Phobos, coming from papers.


See bug 4717. On x86, even on 32 bits you can get close to 1 cycle per 
byte, on 64 bits you can do much better.
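A simple version using the core.bitop intrinsic (the refined algorithms in bug 4717 go well beyond this):

```d
import core.bitop : popcnt;

size_t countBits(const(uint)[] words) {
    size_t total = 0;
    foreach (w; words)
        total += popcnt(w);   // maps to a hardware instruction where available
    return total;
}

void main() {
    assert(countBits([0xFF, 0x0F]) == 12);
}
```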


Re: FYI my experience with D' version

2012-07-30 Thread Don Clugston

On 30/07/12 14:32, Jacob Carlborg wrote:

On 2012-07-30 12:30, torhu wrote:


version is good for global options that you set with -version on the
command line.  And can also be used internally in a module, but doesn't
work across modules.  But it seems you have discovered this the hard way
already.

I think there was a discussion about this a few years ago, Walter did it
this way on purpose.  Can't remember the details, though.


He probably wants to avoid the C macro hell.




IIRC it's because version identifiers are global.

__
module b;
version = CoolStuff;
__

module a;
import b;
version (X86)
{
   version = CoolStuff;
}

version(CoolStuff)
{
   // Looks as though this is only true on X86.
   // But because module b used the same name, it's actually true always.
}
__

These types of problems would be avoided if we used the one-definition 
rule for version statements, bugzilla 7417.

















Re: cast from void[] to ubyte[] in ctfe

2012-07-16 Thread Don Clugston

On 13/07/12 12:52, Johannes Pfau wrote:

Am Fri, 13 Jul 2012 11:53:07 +0200
schrieb Don Clugston d...@nospam.com:


On 13/07/12 11:16, Johannes Pfau wrote:

Casting from void[] to ubyte[] is currently not allowed in CTFE. Is
there a special reason for this? I don't see how this cast can be
dangerous?


CTFE doesn't allow ANY form of reinterpret cast, apart from
signed-unsigned. In particular, you can't do anything in CTFE which
exposes endianness.

It might let you cast from ubyte[] to void[] and then back to ubyte[]
or byte[], but that would be all.


So that's a deliberate decision and won't change?
I guess it's a safety measure as the ctfe and runtime endianness could
differ?


Yes.



Anyway, I can understand that reasoning but it also means that
the new std.hash could only be used with raw ubyte[] arrays and it
wouldn't be possible to generate the CRC/SHA1/MD5 etc sum of e.g. a
string in ctfe. (Which might make sense for most types as the result
could really differ depending on endianness, but it shouldn't matter for
UTF8 strings, right?)

Maybe I can special case CTFE so that at least UTF8 strings work.

BTW: casting from void[][] to ubyte[][] seems to work. I guess this is
only an oversight and nothing I could use as a workaround?


Probably a bug.
But you can convert from char[] to byte[]/ubyte[]. That's OK, it doesn't 
depend on endianness.
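A sketch of an endianness-free conversion that should therefore be CTFE-legal: convert one UTF-8 code unit at a time instead of reinterpreting the array.

```d
ubyte[] bytesOf(string s) pure {
    ubyte[] result;
    foreach (char c; s)
        result ~= cast(ubyte) c;   // per-code-unit cast, no reinterpretation
    return result;
}

enum b = bytesOf("abc");
static assert(b == [0x61, 0x62, 0x63]);
```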


Re: cast from void[] to ubyte[] in ctfe

2012-07-13 Thread Don Clugston

On 13/07/12 11:16, Johannes Pfau wrote:

Casting from void[] to ubyte[] is currently not allowed in CTFE. Is
there a special reason for this? I don't see how this cast can be
dangerous?


CTFE doesn't allow ANY form of reinterpret cast, apart from 
signed-unsigned. In particular, you can't do anything in CTFE which 
exposes endianness.


It might let you cast from ubyte[] to void[] and then back to ubyte[] or 
byte[], but that would be all.


Re: stdarg x86_64 problems...

2012-07-12 Thread Don Clugston

On 12/07/12 11:12, John Colvin wrote:

When I compile the following code with -m32 and -m64 i get a totally
different result, the documentation suggests that they should be the
same...

import core.stdc.stdarg, std.stdio;

void main() {
 foo(0,5,4,3);
}

void foo(int dummy, ...) {
 va_list ap;

     for(int i; i < 10; ++i) {
 version(X86_64) {
 va_start(ap, __va_argsave);
 }
 else version(X86) {
 va_start(ap, dummy);
 }
 else
     static assert(false, "Unsupported platform");

 int tmp;
 va_arg!(int)(ap,tmp);
     writeln(ap, " ", tmp);
 }
}

when compiled with -m32 I get:

FF960278 5
FF960278 5
FF960278 5
FF960278 5
FF960278 5

and with -m64 I get:

7FFFCDF941D0 5
7FFFCDF941D0 4
7FFFCDF941D0 3
7FFFCDF941D0 38
7FFFCDF941D0 -839302560

(the end stuff is garbage, different every time)

I'm uncertain, even after looking over the stdarg src, why this would
happen. The correct output is all 5s obviously.


Known bug, already fixed in git for a couple of months.




Re: A little story

2012-06-26 Thread Don Clugston

On 25/06/12 14:24, bearophile wrote:

Dmitry Olshansky:


Except for the fact, that someone has to implement it.


I am not seeing one of the posts of this thread. So I'll answer here.

The good thing regarding the run-time overflow integral tests is that
they are already implemented and available as efficient compiler
intrinsics in both GCC and LLVM back-ends. It's just a matter of using
them (and introducing the compiler switch and some kind of pragma syntax).

Bye,
bearophile


Bearophile, haven't you ever read that paper on integer overflow, which 
you keep posting to the newsgroup???


It clearly demonstrates that it is NOT POSSIBLE to implement integer 
overflow checking in a C-family language. Valid, correct code which 
depends on integer overflow is very, very common (when overflow occurs, 
it's more likely to be correct than incorrect).


I don't think you could do it without introducing a no-overflow integer 
type. The compiler just doesn't have enough information.
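A typical example of the kind of code meant here, where wraparound is intentional and correct (FNV-1a hash):

```d
uint fnv1a(string s) {
    uint h = 2166136261u;
    foreach (char c; s) {
        h ^= c;
        h *= 16777619u;   // intentionally wraps modulo 2^32
    }
    return h;
}

void main() {
    assert(fnv1a("abc") == fnv1a("abc"));   // deterministic despite overflow
    assert(fnv1a("abc") != fnv1a("abd"));
}
```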




Re: floats default to NaN... why?

2012-06-05 Thread Don Clugston

On 14/04/12 16:52, F i L wrote:

On Saturday, 14 April 2012 at 10:38:45 UTC, Silveri wrote:

On Saturday, 14 April 2012 at 07:52:51 UTC, F i L wrote:

On Saturday, 14 April 2012 at 06:43:11 UTC, Manfred Nowak wrote:

F i L wrote:

4) use hardware signalling to overcome some of the limitations
impressed by 3).


4) I have no idea what you just said... :)


On Saturday, 14 April 2012 at 07:58:44 UTC, F i L wrote:

That's interesting, but what effect does appending an invalid char to
a valid one have? Does the resulting string end up being NaS (Not a
String)? Cause if not, I'm not sure that's a fair comparison.


The initialization values chosen are also determined by the underlying
hardware implementation of the type. Signalling NANs
(http://en.wikipedia.org/wiki/NaN#Signaling_NaN) can be used with
floats because they are implemented by the CPU, but in the case of
integers or strings there aren't really equivalent values.


I'm sure the hardware can just as easily signal zeros.


It can't.
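The asymmetry in a nutshell: floats have a hardware "invalid" value, integers don't.

```d
void main() {
    float f;          // default-initialized to NaN, not 0
    assert(f != f);   // NaN is the only value unequal to itself
    int i;            // integers have no NaN-like value; default is 0
    assert(i == 0);
}
```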


Re: pure functions calling impure functions at compile-time

2012-05-24 Thread Don Clugston

On 23/05/12 11:41, bearophile wrote:

Simen Kjaeraas:


Should this be filed as a bug, or is the plan that only pure functions be
ctfe-able? (or has someone already filed it, perhaps)


It's already in Bugzilla, see issue 7994 and 6169.


It's just happening because the purity checking is currently being done 
in a very unsophisticated way.



But I think there is a semantic hole in some of the discussions about
this problem. Is a future compile-time JIT allowed to perform
purity-derived optimizations in CTFE?


Some, definitely. Eg.  foo(n) + foo(n)

can be changed to 2*foo(n), where n is an integer, regardless of what 
foo does.


It does need to be a bit conservative, but I think the issues aren't 
CTFE specific. Eg, something like this currently gives an assert at runtime:


void check(int n) pure nothrow
{
  assert(n == 4);
}

void main()
{
check(3);
}

even though check() can do nothing other than cause an error, it still 
cannot be optimized away. But you can get rid of all subsequent calls to 
it, because they'll produce the same error every time.


Re: Limit number of compiler error messages

2012-05-22 Thread Don Clugston

On 20/05/12 00:38, cal wrote:

Is there a way to limit the dmd compiler to outputting just the first
few errors it comes across?


No, but the intention of DMD is to generate only one error per bug in 
your code.
If you are seeing a large number of useless errors, please report it in 
bugzilla.

http://d.puremagic.com/issues/show_bug.cgi?id=8082 is a good example.


Re: Strange measurements when reproducing issue 5650

2012-04-25 Thread Don Clugston

On 25/04/12 10:34, SomeDude wrote:

Discussion here: http://d.puremagic.com/issues/show_bug.cgi?id=5650

On my Windows box, the following program

import std.stdio, std.container, std.range;

void main() {
enum int range = 100;
enum int n = 1_000_000;

auto t = redBlackTree!int(0);

for (int i = 0; i < n; i++) {
if (i > range)
t.removeFront();
t.insert(i);
}

writeln(walkLength(t[]));
//writeln(t[]);
}

runs in about 1793 ms.
The strange thing is, if I comment out the writeln line, runtimes are in
average *slower* by about 20 ms, with timings varying a little bit more
than when the writeln is included.

How can this be ?


Very strange.
Maybe there is some std library cleanup which is slower if nothing got 
written?


Re: Why has base class protection been deprecated?

2012-04-24 Thread Don Clugston

On 24/04/12 14:22, David Bryant wrote:

With the dmd 2.059 I have started getting the error 'use of base class
protection is deprecated' when I try to implement an interface with
private visibility, ie:

interface Interface { }

class Class : private Interface { }

$ dmd test.d
test.d(4): use of base class protection is deprecated

This bothers me for two reasons: firstly it's not a base class, and
secondly, it's a standard OO pattern of mine.

What's up with this?

Thanks,
Dave


Because it doesn't make sense. All classes are derived from Object. That 
_has_ to be public, otherwise things like == wouldn't work.


Previously, the compiler used to allow base class protection, but it 
ignored it.


Re: Why has base class protection been deprecated?

2012-04-24 Thread Don Clugston

On 24/04/12 15:29, David Bryant wrote:


Because it doesn't make sense. All classes are derived from Object. That
_has_ to be public, otherwise things like == wouldn't work.



Does the same apply for interfaces? I'm specifically implementing an
interface with non-public visibility. This shouldn't affect the
visibility of the implicit Object base-class.


Right. Only classes are affected.


Re: log2 buggy or is a real thing?

2012-04-04 Thread Don Clugston

On 04/04/12 13:40, bearophile wrote:

Jonathan M Davis:


This progam:

import std.math;
import std.stdio;
import std.typetuple;

ulong log2(ulong n)
{
 return n == 1 ? 0
   : 1 + log2(n / 2);
}

void print(ulong value)
{
 writefln("%s: %s %s", value, log2(value), std.math.log2(value));
}

void main()
{
 foreach(T; TypeTuple!(byte, ubyte, short, ushort, int, uint, long, ulong))
 print(T.max);
}


prints out

127: 6 6.98868
255: 7 7.99435
32767: 14 15
65535: 15 16
2147483647: 30 31
4294967295: 31 32
9223372036854775807: 62 63
18446744073709551615: 63 64


So, the question is: Is std.math.log2 buggy, or is it just an issue with the
fact that std.math.log2 is using reals, or am I completely misunderstanding
something here?


What is the problem you are perceiving?

The values you see are badly rounded, this is why I said that the Python 
floating point printing function is better than the D one :-)

I print the third result with more decimal digits using %1.20f, to avoid that 
bad rounding:


import std.stdio, std.math, std.typetuple;

ulong mlog2(in ulong n) pure nothrow {
 return (n == 1) ? 0 : (1 + mlog2(n / 2));
}

void print(in ulong x) {
 writefln("%s: %s %1.20f", x, mlog2(x), std.math.log2(x));
}

void main() {
 foreach (T; TypeTuple!(byte, ubyte, short, ushort, int, uint, long, ulong))
 print(T.max);
 print(9223372036854775808UL);
 real r = 9223372036854775808UL;
 writefln("%1.20f", r);
}


The output is:

127: 6 6.98868468677216585300
255: 7 7.99435343685885793770
32767: 14 14.99995597176955822200
65535: 15 15.99997798605273604400
2147483647: 30 30.99999999932819277100
4294967295: 31 31.99999999966409638400
9223372036854775807: 62 63.00000000000000000100
18446744073709551615: 63 64.00000000000000000000
9223372036854775808: 63 63.00000000000000000100
9223372036854775807.80000000000000000000

And it seems essentially correct; the only significant (but very small) 
problem I am seeing is
log2(9223372036854775807), which returns a value > 63, while of course in truth 
it's strictly less than 63 (found with Wolfram Alpha):
62.9984358269024341355489972678233
The cause of that small error is the limited integer representation precision 
of reals.


I don't think so. For 80-bit reals, every long can be represented 
exactly in an 80 bit real, as can every ulong from 0 up to and including 
ulong.max - 1. The only non-representable built-in integer is ulong.max, 
which (depending on rounding mode) gets rounded up to ulong.max+1.


The decimal floating point output for DMC, which DMD uses, is not very 
accurate. I suspect the value is actually == 63.



In Python there is a routine to compute precisely the logarithm of large 
integer numbers. It will be useful to have in D too, for BigInts too.

Bye,
bearophile




Re: log2 buggy or is a real thing?

2012-04-04 Thread Don Clugston

On 04/04/12 18:53, Timon Gehr wrote:

On 04/04/2012 05:15 PM, Don Clugston wrote:


I don't think so. For 80-bit reals, every long can be represented
exactly in an 80 bit real, as can every ulong from 0 up to and including
ulong.max - 1. The only non-representable built-in integer is ulong.max,
which (depending on rounding mode) gets rounded up to ulong.max+1.



?

assert(0xffffffffffffffffp0L == ulong.max);
assert(0xfffffffffffffffep0L == ulong.max-1);


Ah, you're right. I forgot about the implicit bit. 80-bit reals are 
equivalent to 65-bit signed integers.


It's ulong.max+2 which is the smallest unrepresentable integer.

Conclusion: you cannot blame ANYTHING on the limited precision of reals.


Re: log2 buggy or is a real thing?

2012-04-04 Thread Don Clugston

On 04/04/12 13:46, bearophile wrote:

Do you know why is this program:

import std.stdio;
void main() {
 real r = 9223372036854775808UL;
 writefln("%1.19f", r);
}

Printing:
9223372036854775807.8000000000000000000

Instead of this?
9223372036854775808.0000000000000000000

Bye,
bearophile


Poor float-decimal conversion in the C library ftoa() function.


Re: Comparison issue

2012-03-20 Thread Don Clugston

On 19/03/12 15:45, H. S. Teoh wrote:

On Mon, Mar 19, 2012 at 08:50:02AM -0400, bearophile wrote:

James Miller:


 writeln(v1 == 1); //false
 writeln(v1 == 1.0); //false
 writeln(v1 == 1.0f); //false
 writeln(v1+1 == 2.0f); //true





Maybe I'd like to deprecate and then statically forbid the use of ==
among floating point values, and replace it with a library-defined
function.

[...]

I agree. Using == for any floating point values is pretty much never
right. Either we should change the definition of == for floats to use
abs(y-x) < epsilon for some given epsilon value, or we should prohibit it
altogether, and force people to always write abs(y-x) < epsilon.


No, no, no. That's nonsense.

For starters, note that ANY integer expression which is exact, is also 
exact in floating point.

Another important case is that
if (f == 0)
is nearly always correct.


 Using == to compare floating point values is wrong. Due to the nature of
 floating point computation, there's always a possibility of roundoff
 error. Therefore, the correct way to compare floats is:

immutable real epsilon = 1.0e-12; // adjustable accuracy here
if (abs(y-x) < epsilon) {
// approximately equal
} else {
// not equal
}

And this is wrong, if y and x are both small, or both large. Your 
epsilon value is arbitrary.

Absolute tolerance works for few functions like sin(), but not in general.

See std.math.feqrel for a method which gives tolerance in terms of 
roundoff error, which is nearly always what you want.


To summarize:

For scientific/mathematical programming:
* Usually you want relative tolerance
* Sometimes you want exact equality.
* Occasionally you want absolute tolerance

But it depends on your application. For graphics programming you 
probably want absolute tolerance in most cases.


Re: Vector Swizzling in D

2012-03-15 Thread Don Clugston

On 14/03/12 18:46, Boscop wrote:

On Wednesday, 14 March 2012 at 17:35:06 UTC, Don Clugston wrote:

In the last bit of code, why not use CTFE for valid(string s) instead
of templates?

bool valid(string s)
{
foreach(c; s)
{
if (c < 'w' || c > 'z') return false;
}
return true;
}

In fact you can use CTFE for the other template functions as well.


In the original version I actually did this, but even with -O -inline
-release the opDispatchs call didn't get inlined. I thought it was
caused by CTFE-code that prevented the inlining.
FWIW, this was the original code using CTFE:
---
import std.algorithm: reduce;
struct Vec {
double[4] v;
@property auto X() {return v[0];}
@property auto Y() {return v[1];}
@property auto Z() {return v[2];}
@property auto W() {return v[3];}
this(double x, double y, double z, double w) {v = [x,y,z,w];}
@property auto opDispatch(string s)() if(s.length <= 4 &&
reduce!((s,c) => s && 'w' <= c && c <= 'z')(true, s)) {
char[] p = s.dup;


This won't be CTFEd, because it's not forced to be a compile-time constant.


foreach(i; s.length .. 4) p ~= p[$-1];

This too.

But, you can do something like:
enum p = extend(s);
since p is an enum, it must use CTFE.


int i(char c) {return [3,0,1,2][c-'w'];}

this isn't forced to be CTFE either.


return Vec(v[i(p[0])], v[i(p[1])], v[i(p[2])], v[i(p[3])]);
}



---
(I was using reduce here only to demonstrate D's functional features and
nice lambda syntax. Maybe that's what prevented inlining?)




Re: Vector Swizzling in D

2012-03-14 Thread Don Clugston

On 14/03/12 15:57, Boscop wrote:

Hi everyone,

I wrote a blog post for people who know a bit of D and want to dig
deeper, it shows different approaches to get vector swizzling syntax in D:

http://boscop.tk/blog/?p=1

There is nothing revolutionary involved but it might still be useful to
someone.
Comments and criticism are welcome.

In the last bit of code, why not use CTFE for valid(string s) instead of 
templates?


bool valid(string s)
{
   foreach(c; s)
   {
 if (c < 'w' || c > 'z') return false;
   }
   return true;
}

In fact you can use CTFE for the other template functions as well.



Re: About CTFE and pointers

2012-02-24 Thread Don Clugston

On 24/02/12 15:18, Alex Rønne Petersen wrote:

On 24-02-2012 15:08, bearophile wrote:

I have seen this C++11 program:
http://kaizer.se/wiki/log/post/C++_constexpr/

I have translated it to this D code:


bool notEnd(const char *s, const int n) {
return s && s[n];
}
bool strPrefix(const char *s, const char *t, const int ns, const int
nt) {
return (s == t) ||
!t[nt] ||
(s[ns] == t[nt] && (strPrefix(s, t, ns+1, nt+1)));
}
bool contains(const char *s, const char *needle, const int n=0) {
// Works only with C-style 0-terminated strings
return notEnd(s, n) &&
(strPrefix(s, needle, n, 0) || contains(s, needle, n+1));
}
enum int x = contains("froogler", "oogle");
void main() {
// assert(contains("froogler", "oogle"));
}


If I run the version of the code with the run-time, it generates no
errors.

If I run the version with enum with the latest dmd it gives:

test.d(6): Error: string index 5 is out of bounds [0 .. 5]
test.d(7): called from here: strPrefix(s,t,ns + 1,nt + 1)
test.d(4): 5 recursive calls to function strPrefix
test.d(12): called from here: strPrefix(s,needle,n,0)
test.d(12): called from here: contains(s,needle,n + 1)
test.d(12): called from here: contains(s,needle,n + 1)
test.d(14): called from here: contains("froogler", "oogle", 0)


At first sight it looks like a CTFE bug, but studying the code a
little it seems there is a off-by-one bug in the code
(http://en.wikipedia.org/wiki/Off-by-one_error ). A quick translation
to D arrays confirms it:


bool notEnd(in char[] s, in int n) {
return s && s[n];
}
bool strPrefix(in char[] s, in char[] t, in int ns, in int nt) {
return (s == t) ||
!t[nt] ||
(s[ns] == t[nt] && (strPrefix(s, t, ns+1, nt+1)));
}
bool contains(in char[] s, in char[] needle, in int n=0) {
// Works only with C-style 0-terminated strings
return notEnd(s, n) &&
(strPrefix(s, needle, n, 0) || contains(s, needle, n+1));
}
//enum int x = contains("froogler", "oogle");
void main() {
assert(contains("froogler", "oogle"));
}


It gives at run-time:

core.exception.RangeError@test(6): Range violation

\test.d(6): bool test.strPrefix(const(char[]), const(char[]),
const(int), const(int))




So it seems that Don, when he has implemented the last parts of the
CTFE interpreter, has done something curious, because in some cases it
seems able to find out of bounds even when you use just raw pointers :-)

Bye,
bearophile


It's not at all unlikely that the CTFE interpreter represents blocks of
memory as a pointer+length pair internally.



Yes, that's exactly what it does. That's how it's able to implement 
pointers safely.


That's a nice story. Thanks, bearophile.



Re: Everything on the Stack

2012-02-21 Thread Don Clugston

On 21/02/12 12:12, Timon Gehr wrote:

On 02/21/2012 11:27 AM, Daniel Murphy wrote:

scope/scoped isn't broken, they're just not safe. It's better to have an
unsafe library feature than an unsafe language feature.




scope is broken because it is not enforced by the means of
flow-analysis. As a result, it is not safe. Copying it to the library
and temporarily disabling 'scope' for classes is a good move however,
because this means we will be able to fix it at an arbitrary point in
the future without additional code breakage.


Does the library solution actually work the same as the language solution?


Re: Hex floats

2012-02-17 Thread Don Clugston

On 16/02/12 17:36, Timon Gehr wrote:

On 02/16/2012 05:06 PM, Don Clugston wrote:

On 16/02/12 13:28, Stewart Gordon wrote:

On 16/02/2012 12:04, Don Clugston wrote:

On 15/02/12 22:24, H. S. Teoh wrote:

What's the original rationale for requiring that hex float literals
must
always have an exponent? For example, 0xFFi obviously must be float,
not
integer, so why does the compiler (and the spec) require an exponent?


The syntax comes from C99.


Do you mean the syntax has just been copied straight from C99 without
any thought about making it more lenient?

Stewart.


Yes. There would need to be a good reason to do so.

For the case in question, note that mathematically, imaginary integers
are perfectly valid. Would an imaginary integer literal be an idouble, an
ifloat, or an ireal? I don't think it could be any:

foor(float x)
foor(double x)
fooi(ifloat x)
fooi(idouble x)

foor(7); //ambiguous, doesn't compile
fooi(7i); // by symmetry, this shouldn't compile either


static assert(is(typeof(7i)==idouble));


Ooh, that's bad.



Re: Flushing denormals to zero

2012-02-17 Thread Don Clugston

On 17/02/12 09:09, Mantis wrote:

On 17.02.2012 4:30, bearophile wrote:

After seeing this interesting thread:
http://stackoverflow.com/questions/9314534/why-does-changing-0-1f-to-0-slow-down-performance-by-10x


Do you know if there's a simple way to perform _MM_SET_FLUSH_ZERO_MODE
in D?

According to Agner that operation is not needed on Sandy Bridge
processors, but most CPUs around are not that good:
http://www.agner.org/optimize/blog/read.php?i=142

Bye,
bearophile


I could expect this to be adjustable in std.math.FloatingPointControl,
but it isn't.


That's because the x87 doesn't support flush-to-zero, and 32-bit x86 
doesn't necessarily have SSE.

But it should probably be added for 64 bit.

Anyway, the assembly code to change FPU control word is

pretty tiny:
http://software.intel.com/en-us/articles/x87-and-sse-floating-point-assists-in-ia-32-flush-to-zero-ftz-and-denormals-are-zero-daz/







Re: Hex floats

2012-02-16 Thread Don Clugston

On 15/02/12 22:24, H. S. Teoh wrote:

What's the original rationale for requiring that hex float literals must
always have an exponent? For example, 0xFFi obviously must be float, not
integer, so why does the compiler (and the spec) require an exponent?


The syntax comes from C99.


Re: Hex floats

2012-02-16 Thread Don Clugston

On 16/02/12 13:28, Stewart Gordon wrote:

On 16/02/2012 12:04, Don Clugston wrote:

On 15/02/12 22:24, H. S. Teoh wrote:

What's the original rationale for requiring that hex float literals must
always have an exponent? For example, 0xFFi obviously must be float, not
integer, so why does the compiler (and the spec) require an exponent?


The syntax comes from C99.


Do you mean the syntax has just been copied straight from C99 without
any thought about making it more lenient?

Stewart.


Yes. There would need to be a good reason to do so.

For the case in question, note that mathematically, imaginary integers 
are perfectly valid. Would an imaginary integer literal be an idouble, an
ifloat, or an ireal? I don't think it could be any:


foor(float x)
foor(double x)
fooi(ifloat x)
fooi(idouble x)

foor(7); //ambiguous, doesn't compile
fooi(7i); // by symmetry, this shouldn't compile either


Re: Compiler error with static vars/functions

2012-02-10 Thread Don Clugston

On 10/02/12 16:08, Artur Skawina wrote:

On 02/10/12 15:18, Don Clugston wrote:

On 09/02/12 23:03, Jonathan M Davis wrote:

On Thursday, February 09, 2012 14:45:43 bearophile wrote:

Jonathan M Davis:

Normally, it's considered good practice to give modules names which are
all lowercase (particularly since some OSes aren't case-sensitive for
file operations).


That's just a Walter thing, and it's bollocks. There's no need to use all lower 
case.


No, having non-lower case filenames would just lead to problems. Like different
modules being imported depending on the filesystem being used, or the user's
locale settings.


I don't think it's possible without deliberate sabotage. You can't have 
two files with names differing only in case on Windows. And module 
declarations acts as a check anyway.


The D community has a *lot* of experience with mixed case names. No 
problems have ever been reported.




Re: dmd gdc

2012-01-27 Thread Don Clugston

On 26/01/12 18:59, xancorreu wrote:

Al 26/01/12 18:43, En/na H. S. Teoh ha escrit:

On Thu, Jan 26, 2012 at 06:06:38PM +0100, xancorreu wrote:
[...]

I note that gdc is completely free software but dmd runtime is not.

You mean free as in freedom, not as in price.


Yes, both


I don't know what that means. The DMD backend license is not compatible with 
the GPL, so it could never be bundled with a *nix distro. Otherwise, I 
don't think there are any consequences. It has nothing else in common 
with non-free products.


The complete source is on github, and you're allowed to freely download 
it, compile it, and make your own clone on github. What else would you want?




Re: Constant function/delegate literal

2012-01-17 Thread Don Clugston

On 15/01/12 20:35, Timon Gehr wrote:

On 01/14/2012 07:13 PM, Vladimir Matveev wrote:

Hi,

Is there a reason why I cannot compile the following code:

module test;

struct Test {
int delegate(int) f;
}

Test s = Test((int x) { return x + 1; });

void main(string[] args) {
return;
}

dmd 2.057 says:

test.d(7): Error: non-constant expression cast(int
delegate(int))delegate pure
nothrow @safe int(int x)
{
return x + 1;
}

?


The 'this' pointer in the delegate:
Test((int x) { return x + 1; });

is a pointer to the struct literal, which isn't constant. You actually 
want it to point to variable s.


OTOH if it were a function, it should work, but it currently doesn't.



This is simple example; what I want to do is to create a global variable
containing a structure with some ad-hoc defined functions. The compiler
complains that it cannot evaluate  at compile time. I think this
could
be solved by defining a function returning needed structure, but I
think this
is cumbersome and inconvenient.

Best regards,
Vladimir Matveev.


I think it should work. I have filed a bug report:
http://d.puremagic.com/issues/show_bug.cgi?id=7298






Re: CTFE and cast

2012-01-13 Thread Don Clugston
On 13/01/12 10:01, k2 wrote:
 When I replaced typedef with enum, it became impossible to compile a
 certain portion.
 
 dmd v2.057 Windows
 
 enum HANDLE : void* {init = (void*).init}
 
 pure HANDLE int_to_HANDLE(int x)
 {
  return cast(HANDLE)x;
 }
 
 void bar()
 {
  HANDLE a = cast(HANDLE)1;// ok
  HANDLE b = int_to_HANDLE(2);// ok
 }
 
 HANDLE c = cast(HANDLE)3;// ok
 HANDLE d = int_to_HANDLE(4);// NG
 
 foo.d(17): Error: cannot implicitly convert expression (cast(void*)4u)
 of type void* to HANDLE

It's a problem. Casting integers to pointers is a very unsafe operation,
and is disallowed in CTFE. There is a special hack, specifically for
Windows HANDLES, which allows you to cast integers to pointers at
compile time, but only after they've left CTFE.