Re: Promotion rules ... why no float?

2016-09-06 Thread Daniel Kozak via Digitalmars-d

On 6.9.2016 at 22:51, deadalnix via Digitalmars-d wrote:


On Tuesday, 6 September 2016 at 07:52:47 UTC, Daniel Kozak wrote:
No, it is a really important rule. If there were automatic promotion to
float for auto, it would hurt performance in cases where you want int,
and it would break things.



Performance has nothing to do with it. In fact, float division is way
faster than integer division; try it. It is all about correctness.
Integer and floating-point division have different semantics.




You are right; on my PC the speed is the same, but I remember there were
some performance problems the last time I checked (something about only
having one FPU on my Bulldozer CPU).




Re: workspace-d 2.7.2 & code-d 0.10.14

2016-09-06 Thread Manu via Digitalmars-d-announce
On 7 September 2016 at 13:29, WebFreak001 via Digitalmars-d-announce wrote:
> On Wednesday, 7 September 2016 at 02:04:21 UTC, Manu wrote:
>>
>> Awesome work, thanks again!
>> Suggest getting the deb hosted in d-apt along with the other tools
>> already there, and set them as dependencies?
>
>
> Would probably be nice, but I have no idea how package maintenance for apt
> really works. I am not quite sure how to make an i386 package; I only made
> an amd64 one. The script for generating the apt file is in makedeb.d if you
> want to check it. I'm surprised it even works because I haven't tested it
> once.
>
> But I guess I can manage adding dependencies. Just not really sure if I also
> need to make an i386 package or other architectures.

So, the 'normal' way is to create a debian source package, which
effectively contains code and build instructions, and then generate a
matrix of binary deb's from that. *buntu users would just put that on
LaunchPad, which will populate the build matrix for your PPA
automatically.
d-apt is not a PPA though, so maybe it would be simplest for you to
contact the maintainer of d-apt and ask his advice. It might just
slide into his scripts without requiring any additional effort on your
part...?


Re: Template constraints for reference/value types?

2016-09-06 Thread Jon Degenhardt via Digitalmars-d-learn
On Wednesday, 7 September 2016 at 00:40:27 UTC, Jonathan M Davis 
wrote:
On Tuesday, September 06, 2016 21:16:05 Jon Degenhardt via 
Digitalmars-d-learn wrote:

On Tuesday, 6 September 2016 at 21:00:53 UTC, Lodovico Giaretta

wrote:
> On Tuesday, 6 September 2016 at 20:46:54 UTC, Jon Degenhardt
>
> wrote:
>> Is there a way to constrain template arguments to reference 
>> or value types? I'd like to do something like:

>>
>> T foo(T)(T x)
>>
>> if (isReferenceType!T)
>>
>> { ... }
>>
>> --Jon
>
> You can use `if(is(T : class) || is(T : interface))`.
>
> If you also need other types, std.traits contains a bunch of 
> useful templates: isArray, isAssociativeArray, isPointer, ...


Thanks. This looks like a practical approach.


It'll get you most of the way there, but I don't think that 
it's actually possible to test for reference types in the 
general case


[snip]

- Jonathan M Davis


Thanks, very helpful. I've concluded that what I wanted to do 
isn't worth pursuing at the moment (see the thread on associative 
arrays in the General forum). However, your description is 
helpful to understand the details involved.


Re: associative arrays: how to insert key and return pointer in 1 step to avoid searching twice

2016-09-06 Thread Jon Degenhardt via Digitalmars-d

On Tuesday, 6 September 2016 at 04:32:52 UTC, Daniel Kozak wrote:

On 6.9.2016 at 06:14, mogu via Digitalmars-d wrote:

On Tuesday, 6 September 2016 at 01:17:00 UTC, Timothee Cour 
wrote:

is there a way to do this efficiently with associative arrays:

aa[key]=value;
auto ptr=key in aa;

without suffering the cost of the 2nd search (compiler should 
know ptr during aa[key]=value but it's not exposed it seems)


auto pa = &(aa[key] = value);


Yep, but this is an implementation detail, so be careful.


My question as well. Occurs often when I use AAs. The above 
technique works in cases I've tried. However, to Daniel's point, 
from the spec I don't find it clear if it's expected to work. It 
would be useful to have better clarity on this. Anyone have more 
details?


Below is a test for simple class and struct cases; they work at 
present. The template is the type of helper I've wanted. I don't 
trust this particular template, but it'd be useful to know if 
there is a way to get something like this.


--Jon

/* Note: Not general template. Fails for nested classes (compile 
error). */

T* getOrInsertNew(T, K)(ref T[K] aa, K key)
if (is(T == class) || is(T == struct))
{
    T* p = (key in aa);
    static if (is(T == class))
        return (p !is null) ? p : &(aa[key] = new T());
    else static if (is(T == struct))
        return (p !is null) ? p : &(aa[key] = T());
    else
        static assert(0, "Invalid object type");
}

class  FooClass  { int x = 0; }
struct BarStruct { int x = 0; }

void main(string[] args)
{
    FooClass[string] aaFoo;
    BarStruct[string] aaBar;

    /* Class is a reference type. Pointer should be to the instance in the AA. */
    auto foo1 = aaFoo.getOrInsertNew("foo1");
    foo1.x = 100;
    auto foo1b = getOrInsertNew(aaFoo, "foo1");
    assert(foo1 == foo1b && foo1.x == foo1b.x && foo1b.x == 100);

    /* Struct is a value type. Will the pointer be to the instance in the AA? */
    auto bar1 = aaBar.getOrInsertNew("bar1");
    bar1.x = 100;
    auto bar1b = getOrInsertNew(aaBar, "bar1");
    assert(bar1 == bar1b && bar1.x == bar1b.x && bar1b.x == 100);

    import std.stdio;
    writeln("Success");
}


Re: workspace-d 2.7.2 & code-d 0.10.14

2016-09-06 Thread WebFreak001 via Digitalmars-d-announce

On Wednesday, 7 September 2016 at 02:04:21 UTC, Manu wrote:

Awesome work, thanks again!
Suggest getting the deb hosted in d-apt along with the other tools
already there, and set them as dependencies?


Would probably be nice, but I have no idea how package maintenance for
apt really works. I am not quite sure how to make an i386 package; I only
made an amd64 one. The script for generating the apt file is in makedeb.d
if you want to check it. I'm surprised it even works because I haven't
tested it once.


But I guess I can manage adding dependencies. Just not really sure if I
also need to make an i386 package or other architectures.


Re: [GSoC] Precise GC

2016-09-06 Thread Dsby via Digitalmars-d-announce

On Friday, 2 September 2016 at 03:25:33 UTC, Jeremy DeHaan wrote:

Hi everyone,

I know I'm super late to the party for this, and sorry for 
that. While my work on the precise GC didn't go as planned, it 
is closer than it was to getting merged.


[...]


On 32-bit Mac, the test does not pass.


Re: Taking pipeline processing to the next level

2016-09-06 Thread Manu via Digitalmars-d
On 7 September 2016 at 12:00, finalpatch via Digitalmars-d wrote:
> On Wednesday, 7 September 2016 at 01:38:47 UTC, Manu wrote:
>>
>> On 7 September 2016 at 11:04, finalpatch via Digitalmars-d wrote:
>>>
>>>
>>> It shouldn't be hard to have the framework look at the buffer size and
>>> choose the scalar version when the number of elements is small; it wasn't
>>> done that way simply because we didn't need it.
>>
>>
>> No, what's hard is working this into D's pipeline patterns seamlessly.
>
>
> The lesson I learned from this is that you need the user code to provide a
> lot of extra information about the algorithm at compile time for the
> templates to work out a way to fuse pipeline stages together efficiently.
>
> I believe it is possible to get something similar in D because D has more
> powerful templates than C++ and D also has some type introspection which C++
> lacks.  Unfortunately I'm not as good at D so I can only provide some ideas
> rather than actual working code.
>
> Once this problem is solved, the benefit is huge.  It allowed me to perform
> high level optimizations (streaming load/save, prefetching, dynamic
> dispatching depending on data alignment etc.) in the main loop which
> automatically benefits all kernels and pipelines.

Exactly!


Re: Taking pipeline processing to the next level

2016-09-06 Thread finalpatch via Digitalmars-d

On Wednesday, 7 September 2016 at 01:38:47 UTC, Manu wrote:
On 7 September 2016 at 11:04, finalpatch via Digitalmars-d wrote:


It shouldn't be hard to have the framework look at the buffer size and
choose the scalar version when the number of elements is small; it
wasn't done that way simply because we didn't need it.


No, what's hard is working this into D's pipeline patterns 
seamlessly.


The lesson I learned from this is that you need the user code to 
provide a lot of extra information about the algorithm at compile 
time for the  templates to work out a way to fuse pipeline stages 
together efficiently.


I believe it is possible to get something similar in D because D 
has more powerful templates than C++ and D also has some type 
introspection which C++ lacks.  Unfortunately I'm not as good at 
D so I can only provide some ideas rather than actual working 
code.


Once this problem is solved, the benefit is huge.  It allowed me 
to perform high level optimizations (streaming load/save, 
prefetching, dynamic dispatching depending on data alignment 
etc.) in the main loop which automatically benefits all kernels 
and pipelines.




Re: workspace-d 2.7.2 & code-d 0.10.14

2016-09-06 Thread Manu via Digitalmars-d-announce
Awesome work, thanks again!
Suggest getting the deb hosted in d-apt along with the other tools
already there, and set them as dependencies?

On 7 September 2016 at 07:05, WebFreak001 via Digitalmars-d-announce wrote:
> I just pushed a new release of workspace-d (bridge between DCD, DScanner,
> dfmt and dub with some utility stuff) and code-d (my vscode D extension
> using workspace-d).
>
> The latest update features several smaller additions such as better auto
> completion for DlangUI Markup Language and more configurability.
>
> As an addition I am starting to bundle .deb files and precompiled windows
> binaries with workspace-d releases, to make it easier for the users to
> install the latest version.
>
> You can get the latest workspace-d version from here:
> https://github.com/Pure-D/workspace-d/releases/tag/v2.7.2
>
> And to get the visual studio code extension, simply search for `code-d` in
> the extensions manager. It will pop up as `D Programming Language (code-d)`
> by webfreak.
>
> Also I recently started collecting some ideas for even more features &
> commands to integrate into workspace-d & code-d, if you want to take a look
> and submit more ideas:
>
> https://github.com/Pure-D/workspace-d/issues (commands & features for all
> IDEs/Text Editors which will support workspace-d)
>
> https://github.com/Pure-D/code-d/issues (features specific to the visual
> studio code plugin such as UI changes)
>
>
> A mostly complete list of all code-d/workspace-d features can be found here:
> https://github.com/Pure-D/code-d/wiki


Re: Taking pipeline processing to the next level

2016-09-06 Thread Manu via Digitalmars-d
On 7 September 2016 at 11:04, finalpatch via Digitalmars-d wrote:
>
> It shouldn't be hard to have the framework look at the buffer size and
> choose the scalar version when the number of elements is small; it wasn't
> done that way simply because we didn't need it.

No, what's hard is working this into D's pipeline patterns seamlessly.


Re: Taking pipeline processing to the next level

2016-09-06 Thread finalpatch via Digitalmars-d

On Wednesday, 7 September 2016 at 00:21:23 UTC, Manu wrote:
The end of a scan line is special-cased. If I need 12 pixels for the
last iteration but there are only 8 left, an instance of
Kernel::InputVector is allocated on the stack, the 8 remaining pixels
are memcpy'd into it and then sent to the kernel. Output from the kernel
is also assigned to a stack variable first, and then 8 pixels are
memcpy'd to the output buffer.


Right, and this is a classic problem with this sort of function; it is
only more efficient if numElements is suitably long.
See, I often wonder if it would be worth being able to provide both
functions, a scalar and an array version, and have the algorithms select
between them intelligently.


We normally process full HD or higher resolution images so the 
overhead of having to copy the last iteration was negligible.


It was fairly easy to put together a scalar version as they are much
easier to write than the SIMD ones.  In fact I had a scalar version for
every SIMD kernel, and used them for unit testing.


It shouldn't be hard to have the framework look at the buffer size and
choose the scalar version when the number of elements is small; it
wasn't done that way simply because we didn't need it.




Re: Template constraints for reference/value types?

2016-09-06 Thread Jonathan M Davis via Digitalmars-d-learn
On Tuesday, September 06, 2016 21:16:05 Jon Degenhardt via Digitalmars-d-learn 
wrote:
> On Tuesday, 6 September 2016 at 21:00:53 UTC, Lodovico Giaretta
>
> wrote:
> > On Tuesday, 6 September 2016 at 20:46:54 UTC, Jon Degenhardt
> >
> > wrote:
> >> Is there a way to constrain template arguments to reference or
> >> value types? I'd like to do something like:
> >>
> >> T foo(T)(T x)
> >>
> >> if (isReferenceType!T)
> >>
> >> { ... }
> >>
> >> --Jon
> >
> > You can use `if(is(T : class) || is(T : interface))`.
> >
> > If you also need other types, std.traits contains a bunch of
> > useful templates: isArray, isAssociativeArray, isPointer, ...
>
> Thanks. This looks like a practical approach.

It'll get you most of the way there, but I don't think that it's actually
possible to test for reference types in the general case - the problem being
structs with postblit constructors. In many cases, it's possible to detect
that a struct has to be a value type, and it's often possible to detect that
a struct is a reference type (by looking at the types of the member
variables in both cases), but as soon as postblit constructors get involved,
it's not possible anymore, because the exact semantics depend on the
implementation of the postblit constructor. For instance,

struct S
{
    int* i;
}

is clearly a reference type, but

struct S
{
    this(this)
    {
        ...
    }

    int* i;
}

may or may not be one. In this particular case, the postblit constructor
_probably_ does a deep copy of the member variables, but it might also just
print out that the postblit constructor was called or do some other
non-obvious thing that the person who wrote it thought that it should do.

So, there will be some types where you cannot determine whether they're
reference types or value types. And that doesn't even take into
consideration types like dynamic arrays or structs like this

struct S
{
    int* i;
    byte b;
}

which are pseudo-reference types, because part of their state is local and
part of it is on the heap.

So, depending on what you're trying to do, checking for classes, interfaces,
and pointers may get you what you're looking for, and you can get really
fancy with trying to determine whether a struct might be a value type or
reference type if that suits your purposes, but you're not going to
determine the copy semantics of all types. You can figure it out for a lot
of them though.

- Jonathan M Davis



Re: Taking pipeline processing to the next level

2016-09-06 Thread Manu via Digitalmars-d
On 7 September 2016 at 07:11, finalpatch via Digitalmars-d
 wrote:
> On Tuesday, 6 September 2016 at 14:47:21 UTC, Manu wrote:
>
>>> with a main loop that reads the source buffer in *12* pixels step, call
>>> MySimpleKernel 3 times, then call AnotherKernel 4 times.
>>
>>
>> These are interesting thoughts. What did you do when buffers weren't a
>> multiple of the kernels?
>
>
> The end of a scan line is special-cased. If I need 12 pixels for the last
> iteration but there are only 8 left, an instance of Kernel::InputVector is
> allocated on the stack, the 8 remaining pixels are memcpy'd into it and then
> sent to the kernel. Output from the kernel is also assigned to a stack
> variable first, and then 8 pixels are memcpy'd to the output buffer.

Right, and this is a classic problem with this sort of function; it is
only more efficient if numElements is suitably long.
See, I often wonder if it would be worth being able to provide both
functions, a scalar and an array version, and have the algorithms select
between them intelligently.


Re: Taking pipeline processing to the next level

2016-09-06 Thread Manu via Digitalmars-d
On 7 September 2016 at 01:54, Wyatt via Digitalmars-d wrote:
> On Monday, 5 September 2016 at 05:08:53 UTC, Manu wrote:
>>
>>
>> A central premise of performance-oriented programming which I've
>> employed my entire career, is "where there is one, there is probably
>> many", and if you do something to one, you should do it to many.
>
>
> From a conceptual standpoint, this sounds like the sort of thing array
> languages like APL and J thrive on, so there's solid precedent for the
> concept.  I might suggest looking into optimising compilers in that space
> for inspiration and such; APEX, for example:
> http://www.snakeisland.com/apexup.htm

Thanks, that's really interesting, I'll check it out.


> Of course, this comes with the caveat that this is (still!) some relatively
> heavily-academic stuff.  And I'm not sure to what extent that can help
> mitigate the problem of relaxing type requirements such that you can e.g.
> efficiently ,/⍉ your 4 2⍴"LR" vector for SIMD on modern processors.

That's not what I want though.
I intend to hand-write that function (I was just giving examples of
how auto-vectorisation almost always fails); the question here is how to
work that new array function into our pipelines transparently...



Re: Template constraints for reference/value types?

2016-09-06 Thread Jon Degenhardt via Digitalmars-d-learn
On Tuesday, 6 September 2016 at 21:00:53 UTC, Lodovico Giaretta 
wrote:
On Tuesday, 6 September 2016 at 20:46:54 UTC, Jon Degenhardt 
wrote:
Is there a way to constrain template arguments to reference or 
value types? I'd like to do something like:


T foo(T)(T x)
if (isReferenceType!T)
{ ... }

--Jon


You can use `if(is(T : class) || is(T : interface))`.

If you also need other types, std.traits contains a bunch of 
useful templates: isArray, isAssociativeArray, isPointer, ...


Thanks. This looks like a practical approach.


Re: Taking pipeline processing to the next level

2016-09-06 Thread finalpatch via Digitalmars-d

On Tuesday, 6 September 2016 at 14:47:21 UTC, Manu wrote:

with a main loop that reads the source buffer in *12* pixels step, call
MySimpleKernel 3 times, then call AnotherKernel 4 times.


These are interesting thoughts. What did you do when buffers weren't a
multiple of the kernels?


The end of a scan line is special-cased. If I need 12 pixels for the
last iteration but there are only 8 left, an instance of
Kernel::InputVector is allocated on the stack, the 8 remaining pixels
are memcpy'd into it and then sent to the kernel. Output from the kernel
is also assigned to a stack variable first, and then 8 pixels are
memcpy'd to the output buffer.


workspace-d 2.7.2 & code-d 0.10.14

2016-09-06 Thread WebFreak001 via Digitalmars-d-announce
I just pushed a new release of workspace-d (bridge between DCD, 
DScanner, dfmt and dub with some utility stuff) and code-d (my 
vscode D extension using workspace-d).


The latest update features several smaller additions such as 
better auto completion for DlangUI Markup Language and more 
configurability.


As an addition I am starting to bundle .deb files and precompiled 
windows binaries with workspace-d releases, to make it easier for 
the users to install the latest version.


You can get the latest workspace-d version from here:
https://github.com/Pure-D/workspace-d/releases/tag/v2.7.2

And to get the visual studio code extension, simply search for 
`code-d` in the extensions manager. It will pop up as `D 
Programming Language (code-d)` by webfreak.


Also I recently started collecting some ideas for even more 
features & commands to integrate into workspace-d & code-d, if 
you want to take a look and submit more ideas:


https://github.com/Pure-D/workspace-d/issues (commands & features 
for all IDEs/Text Editors which will support workspace-d)


https://github.com/Pure-D/code-d/issues (features specific to the 
visual studio code plugin such as UI changes)



A mostly complete list of all code-d/workspace-d features can be 
found here: https://github.com/Pure-D/code-d/wiki


Re: Template constraints for reference/value types?

2016-09-06 Thread Lodovico Giaretta via Digitalmars-d-learn
On Tuesday, 6 September 2016 at 20:46:54 UTC, Jon Degenhardt 
wrote:
Is there a way to constrain template arguments to reference or 
value types? I'd like to do something like:


T foo(T)(T x)
if (isReferenceType!T)
{ ... }

--Jon


You can use `if(is(T : class) || is(T : interface))`.

If you also need other types, std.traits contains a bunch of 
useful templates: isArray, isAssociativeArray, isPointer, ...
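A rough sketch of how those pieces can be combined (illustrative only:
`isReferenceType` is not a Phobos trait, and, as Jonathan M Davis explains
elsewhere in this digest, no such trait can be exact for structs with
postblit constructors):

// Sketch only; the name and exact set of checks are made up for the example.
enum bool isReferenceType(T) = is(T == class) || is(T == interface)
    || is(T == U*, U) || is(T == U[], U) || is(T == delegate)
    || __traits(isAssociativeArray, T);

T foo(T)(T x)
    if (isReferenceType!T)
{
    return x;
}

unittest
{
    static assert( isReferenceType!(int*));
    static assert( isReferenceType!(int[]));
    static assert( isReferenceType!(int[string]));
    static assert(!isReferenceType!int);
}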


Re: Promotion rules ... why no float?

2016-09-06 Thread deadalnix via Digitalmars-d

On Tuesday, 6 September 2016 at 07:52:47 UTC, Daniel Kozak wrote:
No, it is a really important rule. If there were automatic promotion to
float for auto, it would hurt performance in cases where you want int,
and it would break things.



Performance has nothing to do with it. In fact, float division is way
faster than integer division; try it. It is all about correctness.
Integer and floating-point division have different semantics.
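For readers new to the distinction, the semantic difference is easy to
demonstrate:

void main()
{
    import std.stdio : writeln;
    writeln(7 / 2);     // 3    -- integer division truncates
    writeln(7.0 / 2);   // 3.5  -- floating-point division
}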




Template constraints for reference/value types?

2016-09-06 Thread Jon Degenhardt via Digitalmars-d-learn
Is there a way to constrain template arguments to reference or 
value types? I'd like to do something like:


T foo(T)(T x)
if (isReferenceType!T)
{ ... }

--Jon


Re: Valid to assign to field of struct in union?

2016-09-06 Thread Johan Engelen via Digitalmars-d

On Tuesday, 6 September 2016 at 17:58:44 UTC, Timon Gehr wrote:


I don't think so (although your case could be made to work 
easily enough). This seems to be accepts-invalid.


What do you think of the original example [1] in the bug report that uses
`mixin Proxy!i;`?

[1] https://issues.dlang.org/show_bug.cgi?id=16471#c0


[Issue 16464] opCast doco is insufficient

2016-09-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16464

--- Comment #5 from github-bugzi...@puremagic.com ---
Commits pushed to master at https://github.com/dlang/dlang.org

https://github.com/dlang/dlang.org/commit/9fdba00fb38984e76ebe4c26f392d03692917b72
Fix issue 16464. Define more clearly opCast usage

https://github.com/dlang/dlang.org/commit/5af0f724bc85a06168ed7c7707a9d6249e8a582a
Merge pull request #1467 from schveiguy/patch-4

Fix issue 16464. Define more clearly opCast usage

--


Re: Unicode function name? ∩

2016-09-06 Thread Illuminati via Digitalmars-d-learn
On Tuesday, 6 September 2016 at 19:58:11 UTC, Jesper Tholstrup 
wrote:

On Tuesday, 6 September 2016 at 18:46:52 UTC, Illuminati wrote:

[...]


You are somewhat off topic here.


[...]


A lot of code is written by non-mathematicians and has to be 
maintained by non-mathematicians. Mathematicians will be 
confused when people start using the symbols incorrectly, 
simply because they try to be clever. Sure, some developers 
would probably mix up English words like 'intersect' and 
'union', but I think that it is less common.


[...]


OK, continue your game; I see you are quite involved with it.


Re: Unicode function name? ∩

2016-09-06 Thread Jesper Tholstrup via Digitalmars-d-learn

On Tuesday, 6 September 2016 at 18:46:52 UTC, Illuminati wrote:


Symbols are inherently meaningless so you are not making sense. 
Clever is what got humans out of the jungle; I think it is a good 
thing. No need to denigrate it when you benefit from people 
being clever. Of course, you could argue that staying in the 
jungle would have been the best thing...


You are somewhat off topic here.


Mathematicians don't seem to get confused by symbols.


A lot of code is written by non-mathematicians and has to be 
maintained by non-mathematicians. Mathematicians will be confused 
when people start using the symbols incorrectly, simply because 
they try to be clever. Sure, some developers would probably mix 
up English words like 'intersect' and 'union', but I think that 
it is less common.


You have simply memorized what those groups of symbols mean and 
you are too lazy to memorize what some other symbol means.


Personal... and wrong...

Once you realize the game you are playing with yourself then it 
becomes easier to break the bad habit and not get caught up in 
the nonsense.


Personal... and wrong... My argument goes towards code 
maintainability and it is still valid.




The reason why ∩ is better than intersect is because it is 
quicker to see and analyze(due to its "size"). True, it is more 
ambiguous, as ambiguity depends on size, but at least in this 
case it has been well established just as π and 1,2,3..., etc. 
But if you are so concerned about ambiguity then why not 
intersectsetofarithmeticintegerswithsetofarithmeticintegers? 
That is far less ambiguous than intersect.


As long as humans write software I think (personal estimate) that 
few would call 
'intersectsetofarithmeticintegerswithsetofarithmeticintegers' a 
readable symbol. I suppose that our invention of snake case, 
camel case, pascal case, etc. lends some support to my claim.




My point is, you are playing games. You might not realize it 
but that is what is going on. If you want to be truthful, it is 
better to say "I prefer to use my own personal standardized 
notation that I have already learned since it takes precious 
time away from my own life.".


Personal, again. No real content.

You do realize that 'my own personal standardized notation' 
encompasses >99% of all software thus far - right?


Your argument is exactly analogous to "I don't speak french! 
Use English you stupid french speaking person, french is for 
idiots anyways".


I'm not a native English speaker, so I'm not sure that your 
argument is valid here. I have learned different languages and 
various disciplines of natural science. I still don't think that 
methods/functions should contain e.g. math symbols.


The tone is irrelevant, the point is acting like one 
superficial system is better than some other superficial system 
simply because of perspective/laziness/arrogance/etc. The only 
issues are that either you are failing to see the systems as 
superficial or you are failing to see that your own personal 
center of the universe is not actually the center of the 
universe.


Personal, again... and it could easily be the other way around I 
think.


So just be real. The reason you don't like it is because it 
confuses you.


Personal, again... and not really

If it didn't, you wouldn't have a problem with it. If you could 
speak French and English then you, if you were that 
hypothetical person, wouldn't care what language was used.


So far off the topic.

 All I can say is that everyone is confused until they learn 
something. But don't confuse your confusion with some innate 
scale of better/worse, it only leads to more confusion. The 
solution to all confusion is familiarity. Become familiar with 
your confusion(e.g., using ∩) and it won't be confusing anymore.


Personal, again. I'm not really confused (I think).



The reason the mathematical symbols don't faze me is because I 
spent years using them. In my mind ∩ = intersection of 
sets(which I have a non-verbal meaning in my own mind on what 
that means). I see no difference between ∩ and intersect. Hence 
I am not confused.


Cool, the *years* of usage really paid off...

If someone comes along and uses ∩ to mean what I call union. 
Then it won't confuse me either. Because I realize they have 
just relabeled stuff.


Okay, that's quite a statement... I would argue that many developers,
not you of course, could overlook the incorrect symbol.


Sure I have to keep track, but as long as they are 
logical(consistent) then I'll get used to(familiar) with their 
system and it won't be a problem.


Okay.

I won't get angry or upset that they are trying to pull the rug 
out from underneath me. I'll realize that they just speak 
French and if I want to communicate with them I'll learn 
French. No big deal, I'll be more enlightened from doing so. 
Sure it takes some time, but what else do we humans have to do? 
Everything we do just superficial anyways.


Eh, okay...

You will win in terms of their usage, 

Re: @property Incorrectly Implemented?

2016-09-06 Thread Steven Schveighoffer via Digitalmars-d

On 9/6/16 3:18 PM, John wrote:


Currently it seems that @property isn't implemented correctly. For some
reason, when you try to get the pointer of a property it returns a
delegate. This just seems plain wrong; the whole purpose of a property
is for a function to behave as if it weren't a function. There are also
some inconsistencies in the behavior: "get" is implemented one way and
then "set" is implemented another.

http://ideone.com/ZgGT2G

&t.j         // returns "ref int delegate()"
&t.j()       // ok returns "int*", but defeats purpose of @property
&(t.j = 10)  // shouldn't this return "ref int delegate(int)"?

It would be nice to get this behavior fixed, so that it doesn't become
set in stone. I think returning a delegate pointer is not what people
would expect, nor is there really any use case for it.


Just FYI, at the moment I believe the only difference (aside from the 
defunct -property switch) between a @property function and a 
non-@property function is:


int foo1() { return 1; }
@property int foo2() { return 1; }

pragma(msg, typeof(foo1)); // int function()
pragma(msg, typeof(foo2)); // int

That's it. typeof returns a different thing. All @property functions act 
just like normal functions when used in all other cases, and property 
syntax (assignment and getting) work on non-@property functions.


This situation is less than ideal. But it's been battled about dozens of 
times on the forums (including the very reasonable points you are 
making). It hasn't been solved, and the cynic in me says it won't ever be.


-Steve


Re: @property Incorrectly Implemented?

2016-09-06 Thread Lodovico Giaretta via Digitalmars-d

On Tuesday, 6 September 2016 at 19:18:11 UTC, John wrote:


Currently it seems that @property isn't implemented correctly. For some
reason, when you try to get the pointer of a property it returns a
delegate. This just seems plain wrong; the whole purpose of a property
is for a function to behave as if it weren't a function. There are also
some inconsistencies in the behavior: "get" is implemented one way and
then "set" is implemented another.


http://ideone.com/ZgGT2G

&t.j         // returns "ref int delegate()"
&t.j()       // ok returns "int*", but defeats purpose of @property
&(t.j = 10)  // shouldn't this return "ref int delegate(int)"?


It would be nice to get this behavior fixed, so that it doesn't become
set in stone. I think returning a delegate pointer is not what people
would expect, nor is there really any use case for it.


With properties, the & operator is the only way to have the 
function itself and not its return value. The reason is that the 
return value of a function is not necessarily an lvalue, so 
taking its address is not always correct. Imagine this:


@property int x() { return 3; }

As 3 is an rvalue, you cannot take its address. That's the 
difference between a true field and a computed one.


The purpose of properties is the following:

struct S
{
    @property int x() { /* whatever */ }
    int y() { /* whatever */ }
}

writeln(typeof(S.x).stringof); // prints int
writeln(typeof(S.y).stringof); // prints int delegate()


Re: @property Incorrectly Implemented?

2016-09-06 Thread Jonathan M Davis via Digitalmars-d
On Tuesday, September 06, 2016 19:18:11 John via Digitalmars-d wrote:
> Currently it seems that @property isn't implemented correctly.
> For some reason, when you try to get the pointer of a property it
> returns a delegate. This just seems plain wrong; the whole purpose
> of a property is for a function to behave as if it weren't a function.
> There are also some inconsistencies in the behavior: "get" is
> implemented one way and then "set" is implemented another.
>
> http://ideone.com/ZgGT2G
>
> &t.j         // returns "ref int delegate()"
> &t.j()       // ok returns "int*", but defeats purpose of @property
> &(t.j = 10)  // shouldn't this return "ref int delegate(int)"?
>
> It would be nice to get this behavior fixed, so that it doesn't
> become set in stone. I think returning a delegate pointer is not
> what people would expect, nor is there really any use case for it.

Okay. How would it work to safely get a pointer to anything but the
@property function when taking its address? &t.j() is just going to give you
the address of the return value - which in most cases, is going to be a
temporary, so if that even compiles in most cases, it's a bit scary. Sure,
if the @property function returns by ref, then it can work, but an @property
function which returns by ref isn't worth much, since you're then giving
direct access to the member variable which is what an @property function is
usually meant to avoid. If you wanted to do that, you could just use a
public member variable.

So, given that most @property functions are not going to return by ref, how
does it make any sense at all for taking the address of an @property
function to do anything different than give you a delegate? Sure, that's not
what happens when you take the address of a variable, but you're not dealing
with a variable. You're dealing with an @property function which is just
trying to emulate a variable.

The reality of the matter is that an @property function is _not_ a variable.
It's just trying to emulate one, and that abstraction falls apart _really_
fast once you try and do much beyond getting and setting the value - e.g.
passing by ref falls flat on its face. We could do better than currently do
(e.g. making += lower to code that uses both the getter and the setter when
the getter doesn't return by ref), but there are some areas where a property
function simply can't act like a variable, because it isn't one. There isn't
even a guarantee that an @property function is backed by memory. It could be
a completely calculated value, in which case, expecting to get an address of
a variable when taking the address of the @property function makes even less
sense.

- Jonathan M Davis
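To make the `+=` remark concrete, a sketch of what such a lowering would
amount to for a by-value getter/setter pair (hypothetical: per this thread,
the compiler did not perform this rewrite at the time):

struct T
{
    private int _j;
    @property int  j() const { return _j; }   // getter, returns by value
    @property void j(int v)  { _j = v; }      // setter
}

void main()
{
    T t;
    // t.j += 1;      // would need the lowering below to work with a by-value getter
    t.j = t.j + 1;    // what such a lowering would expand to
    assert(t._j == 1);
}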



Re: @property Incorrectly Implemented?

2016-09-06 Thread ag0aep6g via Digitalmars-d

On 09/06/2016 09:18 PM, John wrote:

&(t.j = 10)  // shouldn't this return "ref int delegate(int)" ?


`&t.j` should and does. With `= 10`, it's definitely a call, just like 
`t.j()`.



It would be nice to get this behavior fixed, so that it doesn't become
set in stone.


Unfortunately, it already kinda is. Just flipping the switch would break 
pretty much all D code in existence. That's deemed unacceptable by the 
leadership, as far as I know.


If this can even be fixed, it must be done very carefully. The -property 
compiler switch is currently being deprecated. Maybe it can be 
repurposed later on to change behavior to your liking. But that's at 
least a couple releases in the future, i.e. months, maybe years.


Re: Usability of D for Visually Impaired Users

2016-09-06 Thread ag0aep6g via Digitalmars-d

On 09/06/2016 08:47 PM, Sai wrote:

1. The "Jump to" section at the top lists all the items available in
that module nicely, but the layout could be improved if it were listed
as a bunch of columns instead of one giant list with flow layout.


That's a solid idea. Unfortunately, variance in name length is large. 
Might be hard to find a good column width. But it's definitely worth 
exploring.



Even better if they are listed as the "cheat sheet" available in
algorithm module (which is lovely BTW). Can this cheat sheet be
automated for all modules?


The newer, DDOX-based version of the documentation has generated tables 
like that. You can find those docs here:


http://dlang.org/library-prerelease/index.html

It's supposed to become the main/default form of documentation 
soonishly. One thing we have to figure out is how to consolidate the 
hand-written cheat sheets with the generated ones. For example, 
std.algorithm modules currently have both. That's confusing for the reader.



2. I know red is the color of Mars, is there any way to change the theme
to blue or something soft?


Principally, we could offer an alternative stylesheet with another 
color. That's going to be forgotten during maintenance, though, 
especially if the demand for it is low.



Since we can download the documentation, is
there an easy way to do it myself maybe?


You can of course edit the stylesheet. In the zip, that's 
dmd2/html/d/css/style.css.


The color codes for the different reds are mentioned in a comment at the 
top. I hope they're up to date. A couple search/replace operations 
should take care of most of it.


The logo is colored independently. It's a relatively simple SVG file. 
Just edit the "background" color in dmd2/html/d/images/dlogo.svg.


I'm not sure if this qualifies as "easy".


@property Incorrectly Implemented?

2016-09-06 Thread John via Digitalmars-d


Currently it seems that @property isn't implemented correctly. For some
reason, when you try to get the pointer of a property it returns a
delegate. This just seems plain wrong; the whole purpose of a property
is for a function to behave as if it weren't a function. There are also
some inconsistencies in the behavior: "get" is implemented one way and
then "set" is implemented another.


http://ideone.com/ZgGT2G

&t.j         // returns "ref int delegate()"
&t.j()       // ok returns "int*", but defeats purpose of @property
&(t.j = 10)  // shouldn't this return "ref int delegate(int)"?


It would be nice to get this behavior fixed, so that it doesn't become
set in stone. I think returning a delegate pointer is not what people
would expect, nor is there really any use case for it.


Re: Usability of D for Visually Impaired Users

2016-09-06 Thread Sai via Digitalmars-d
I have a few suggestions; especially for people like me with migraines,
it could be a bit easier on the eyes and overall less stressful.


1. The "Jump to" section at the top lists all the items available 
in that module nicely, but the layout could be improved if it 
were listed as a bunch of columns instead of one giant list with 
flow layout.


Even better if they are listed as the "cheat sheet" available in 
algorithm module (which is lovely BTW). Can this cheat sheet be 
automated for all modules?



2. I know red is the color of Mars, is there any way to change 
the theme to blue or something soft? Since we can download the 
documentation, is there an easy way to do it myself maybe?



PS: As many people have already said, the documentation has improved
very, very much recently. Thank you to all the people working on it.





Re: Unicode function name? ∩

2016-09-06 Thread Illuminati via Digitalmars-d-learn
On Tuesday, 6 September 2016 at 13:41:22 UTC, Jesper Tholstrup 
wrote:

On Tuesday, 6 September 2016 at 02:22:50 UTC, Illuminati wrote:

It's concise and has a very specific meaning.


Well, only if we can agree on what the symbols mean. I'm not 
sure that every symbol is concise and specific across the 
fields of mathematics, statistics, and physics.


The worst part, however, is our (humans, that is) intrinsic 
desire to be "clever". There will be an endless incorrect use 
of symbols, which will render code very difficult to understand 
Friday afternoon when things break.


Symbols are inherently meaningless so you are not making sense. 
Clever is what got humans out of the jungle; I think it is a good 
thing. No need to denigrate it when you benefit from people being 
clever. Of course, you could argue that staying in the jungle 
would have been the best thing...


The whole point of symbols is to simplify, ∩ is more simple 
than intersect as the first requires 1 symbol and the second 
requires 8 symbols.


I don't buy that argument - fewer symbols is better? If so, I disagree;
it's a lot easier to make and a lot harder to catch a ∩ vs ∪ error
compared to an 'intersect()' vs 'union()' error.



How is Unicode normalization handled? It is my impression that 
certain symbols can be represented in more than one way. I 
could be wrong...


Mathematicians don't seem to get confused by symbols. intersect 
is a symbol. It is no different than ∩ or any other symbol. It is 
just chicken scratch. There is no inherent meaning in the wavy 
lines. If you think there is then you are deluding yourself. You 
have simply memorized what those groups of symbols mean and you 
are too lazy to memorize what some other symbol means. Once you 
realize the game you are playing with yourself then it becomes 
easier to break the bad habit and not get caught up in the 
nonsense.


The reason why ∩ is better than intersect is because it is 
quicker to see and analyze(due to its "size"). True, it is more 
ambiguous, as ambiguity depends on size, but at least in this 
case it has been well established just as π and 1,2,3..., etc. 
But if you are so concerned about ambiguity then why not 
intersectsetofarithmeticintegerswithsetofarithmeticintegers? That 
is far less ambiguous than intersect.


My point is, you are playing games. You might not realize it but 
that is what is going on. If you want to be truthful, it is 
better to say "I prefer to use my own personal standardized 
notation that I have already learned since it takes precious time 
away from my own life.". When you do this, I cannot argue with 
you, but then you also have to accept that you cannot argue with 
me(or anyone else). Because what makes sense or works for you 
might not work for me or someone else.


Your argument is exactly analogous to "I don't speak french! Use 
English you stupid french speaking person, french is for idiots 
anyways". The tone is irrelevant, the point is acting like one 
superficial system is better than some other superficial system 
simply because of perspective/laziness/arrogance/etc. The only 
issues are that either you are failing to see the systems as 
superficial or you are failing to see that your own personal 
center of the universe is not actually the center of the universe.


So just be real. The reason you don't like it is because it 
confuses you. If it didn't, you wouldn't have a problem with it. 
If you could speak French and English then you, if you were that 
hypothetical person, wouldn't care what language was used.  All I 
can say is that everyone is confused until they learn something. 
But don't confuse your confusion with some innate scale of 
better/worse, it only leads to more confusion. The solution to 
all confusion is familiarity. Become familiar with your 
confusion(e.g., using ∩) and it won't be confusing anymore.


The reason the mathematical symbols don't faze me is because I 
spent years using them. In my mind ∩ = intersection of sets(which 
I have a non-verbal meaning in my own mind on what that means). I 
see no difference between ∩ and intersect. Hence I am not 
confused. If someone comes along and uses ∩ to mean what I call 
union. Then it won't confuse me either. Because I realize they 
have just relabeled stuff. Sure I have to keep track, but as long 
as they are logical(consistent) then I'll get used to(familiar) 
with their system and it won't be a problem. I won't get angry or 
upset that they are trying to pull the rug out from underneath 
me. I'll realize that they just speak French and if I want to 
communicate with them I'll learn French. No big deal, I'll be 
more enlightened from doing so. Sure it takes some time, but what 
else do we humans have to do? Everything we do just superficial 
anyways.













Re: Valid to assign to field of struct in union?

2016-09-06 Thread Timon Gehr via Digitalmars-d

On 06.09.2016 14:56, Johan Engelen wrote:

Hi all,
  I have a question about the validity of this code:
```
void main()
{
    struct A {
        int i;
    }
    struct S
    {
        union U
        {
            A first;
            A second;
        }
        U u;

        this(A val)
        {
            u.second = val;
            assign(val);
        }

        void assign(A val)
        {
            u.first.i = val.i+1;
        }
    }
    enum a = S(A(1));

    assert(a.u.first.i == 2);
}
```

My question is: is it allowed to assign to a field of a struct inside a
union, without there having been an assignment to the (full) struct before?
...


I don't think so (although your case could be made to work easily 
enough). This seems to be accepts-invalid. Another case, perhaps 
demonstrating more clearly what is going on in the compiler:


float foo(){
    union U{
        int a;
        float b;
    }
    U u;
    u.b=1234;
    u.a=3;
    return u.b; // error
}
pragma(msg, foo());


float bar(){
    struct A{ int a; }
    struct B{ float b; }
    union U{
        A f;
        B s;
    }
    U u;
    u.s.b=1234;
    u.f.a=0;
    return u.s.b; // ok
}
pragma(msg, bar()); // 1234.00F


The compiler allows it, but it leads to a bug with CTFE of this code:
the assert fails.
(changing `enum` to `auto` moves the evaluation to runtime, and all
works fine)

Reported here: https://issues.dlang.org/show_bug.cgi?id=16471.





Re: CompileTime performance measurement

2016-09-06 Thread Stefan Koch via Digitalmars-d

On Tuesday, 6 September 2016 at 10:42:00 UTC, Martin Nowak wrote:

On Sunday, 4 September 2016 at 00:04:16 UTC, Stefan Koch wrote:

I recently implemented __ctfeWriteln.


Nice, is it only for your interpreter or can we move 
https://trello.com/c/6nU0lbl2/24-ctfewrite to done? I think 
__ctfeWrite would be a better primitive. And we could actually 
consider to specialize std.stdio.write* for CTFE.


It's only for the current engine and only for Strings!
See: https://github.com/dlang/druntime/pull/1643
and https://github.com/dlang/dmd/pull/6101


Re: DIP1001: DoExpression

2016-09-06 Thread Timon Gehr via Digitalmars-d

On 06.09.2016 17:23, Steven Schveighoffer wrote:

On 9/6/16 10:17 AM, Timon Gehr wrote:

On 06.09.2016 16:12, Steven Schveighoffer wrote:


I'm not sure I agree with the general principle of the DIP though. I've
never liked comma expressions, and this seems like a waste of syntax.
Won't tuples suffice here when they take over the syntax? e.g. (x, y,
z)[$-1]


(Does not work if x, y or z is of type 'void'.)


Let's first stipulate that z cannot be void here, as the context is you
want to evaluate to the result of some expression.

But why wouldn't a tuple of type (void, void, T) be valid?


Because 'void' is special. (Language design error imported from C.)

struct S{
    void x; // does not work.
}

There can be no field (or variables) of type 'void'. (void,void,T) has 
two fields of type 'void'.


Just fixing the limitations is also not really possible, as e.g. void* 
and void[] exploit that 'void' is special and have a non-compositional 
meaning.



It could also auto-reduce to just (T).

-Steve


That would fix the limitation, but it is also quite surprising behaviour.


Re: DIP1001: DoExpression

2016-09-06 Thread Steven Schveighoffer via Digitalmars-d

On 9/6/16 1:01 PM, Timon Gehr wrote:

On 06.09.2016 17:23, Steven Schveighoffer wrote:

On 9/6/16 10:17 AM, Timon Gehr wrote:

On 06.09.2016 16:12, Steven Schveighoffer wrote:


I'm not sure I agree with the general principle of the DIP though. I've
never liked comma expressions, and this seems like a waste of syntax.
Won't tuples suffice here when they take over the syntax? e.g. (x, y,
z)[$-1]


(Does not work if x, y or z is of type 'void'.)


Let's first stipulate that z cannot be void here, as the context is you
want to evaluate to the result of some expression.

But why wouldn't a tuple of type (void, void, T) be valid?


Because 'void' is special. (Language design error imported from C.)


builtin tuples can be special too...

-Steve


Re: Taking pipeline processing to the next level

2016-09-06 Thread Wyatt via Digitalmars-d

On Monday, 5 September 2016 at 05:08:53 UTC, Manu wrote:


A central premise of performance-oriented programming which I've
employed my entire career, is "where there is one, there is probably
many", and if you do something to one, you should do it to many.


From a conceptual standpoint, this sounds like the sort of thing 
array languages like APL and J thrive on, so there's solid 
precedent for the concept.  I might suggest looking into 
optimising compilers in that space for inspiration and such; 
APEX, for example: http://www.snakeisland.com/apexup.htm


Of course, this comes with the caveat that this is (still!) some 
relatively heavily-academic stuff.  And I'm not sure to what 
extent that can help mitigate the problem of relaxing type 
requirements such that you can e.g. efficiently ,/⍉ your 4 2⍴"LR" 
vector for SIMD on modern processors.


-Wyatt


Re: Promotion rules ... why no float?

2016-09-06 Thread Daniel Kozak via Digitalmars-d

On 6.9.2016 at 17:26, Steven Schveighoffer via Digitalmars-d wrote:


On 9/6/16 11:00 AM, Sai wrote:

Thanks for the replies.

I tend to use a lot of float math (robotics and automation) so I almost
always want float output in case of division. And once in a while I bump
into this issue.

I am wondering what are the best ways to work around it.

float c = a / b; // a and b could be integers.

Some solutions:

float c = cast!float(a) / b;


auto c = float(a) / b;

-Steve


Another solution is to use your own function for the division operation:

auto c = div(a,b);


Re: Promotion rules ... why no float?

2016-09-06 Thread Andrea Fontana via Digitalmars-d

On Tuesday, 6 September 2016 at 15:00:48 UTC, Sai wrote:

Thanks for the replies.

I tend to use a lot of float math (robotics and automation) so 
I almost always want float output in case of division. And once 
in a while I bump into this issue.


I am wondering what are the best ways to work around it.

float c = a / b; // a and b could be integers.

Some solutions:

float c = cast!float(a) / b;
float c = 1f * a / b;


Any less verbose ways to do it?

Another solution I am thinking is to write a user defined 
integer type with an overloaded division to return a float 
instead and use it everywhere in place of integers. I am 
curious how this will work out.


Exotic way:

import std.stdio;

float div(float a, float b) { return a / b; }

void main()
{
    auto c = 3.div(4);
    writeln(c);
}





Re: Promotion rules ... why no float?

2016-09-06 Thread Steven Schveighoffer via Digitalmars-d

On 9/6/16 11:00 AM, Sai wrote:

Thanks for the replies.

I tend to use a lot of float math (robotics and automation) so I almost
always want float output in case of division. And once in a while I bump
into this issue.

I am wondering what are the best ways to work around it.

float c = a / b; // a and b could be integers.

Some solutions:

float c = cast!float(a) / b;


auto c = float(a) / b;

-Steve


Re: Promotion rules ... why no float?

2016-09-06 Thread Daniel Kozak via Digitalmars-d

On 6.9.2016 at 17:00, Sai via Digitalmars-d wrote:


Thanks for the replies.

I tend to use a lot of float math (robotics and automation) so I 
almost always want float output in case of division. And once in a 
while I bump into this issue.


I am wondering what are the best ways to work around it.

float c = a / b; // a and b could be integers.

Some solutions:

float c = cast!float(a) / b;
float c = 1f * a / b;


Any less verbose ways to do it?

Another solution I am thinking is to write a user defined integer type 
with an overloaded division to return a float instead and use it 
everywhere in place of integers. I am curious how this will work out.
Because of alias this it works quite well for me in many cases. However,
one unpleasant situation is with method parameters.
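For illustration, a minimal sketch of the kind of wrapper being described
(the `Int` name and the details are made up for the example):

struct Int
{
    int value;
    alias value this;                     // behaves like an int in most expressions

    float opBinary(string op : "/")(int rhs) const
    {
        return cast(float) value / rhs;   // no truncation
    }
}

void main()
{
    import std.stdio : writeln;

    auto a = Int(3);
    float c = a / 4;                      // 0.75 instead of 0
    writeln(c);

    // One reading of the "method parameters" problem: a function expecting Int
    // will not accept a plain int implicitly, so call sites may need wrapping.
}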


Re: DIP1001: DoExpression

2016-09-06 Thread Steven Schveighoffer via Digitalmars-d

On 9/6/16 10:17 AM, Timon Gehr wrote:

On 06.09.2016 16:12, Steven Schveighoffer wrote:


I'm not sure I agree with the general principle of the DIP though. I've
never liked comma expressions, and this seems like a waste of syntax.
Won't tuples suffice here when they take over the syntax? e.g. (x, y,
z)[$-1]


(Does not work if x, y or z is of type 'void'.)


Let's first stipulate that z cannot be void here, as the context is you 
want to evaluate to the result of some expression.


But why wouldn't a tuple of type (void, void, T) be valid? It could also 
auto-reduce to just (T).


-Steve



Re: [GSoC] Precise GC

2016-09-06 Thread jmh530 via Digitalmars-d-announce

On Saturday, 3 September 2016 at 12:22:25 UTC, thedeemon wrote:


GC (and runtime in general) has no idea what code is safe and 
what code is system. GC works with data at run-time. All 
@safe-related stuff is about code (not data!) and happens at 
compile-time. They are completely orthogonal and independent, 
as I understand.


I don't see why you wouldn't be able to use compile-time 
information like __traits with the runtime.


In my head, I imagine that at compile-time you can figure out 
which unions are in @safe functions, add a UDA to each (so you're 
marking data, not code), and then read that information at 
run-time (like with __traits).
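A sketch of the general mechanism being alluded to, i.e. marking a
declaration with a UDA and reading it back via __traits (illustrative only;
the names are made up and this is not how the precise GC work is
implemented):

struct UnsafeUnion {}                 // a marker attribute type (hypothetical)

union U { int i; int* p; }

struct S
{
    @UnsafeUnion U u;
}

void main()
{
    import std.traits : hasUDA;
    static assert(hasUDA!(__traits(getMember, S, "u"), UnsafeUnion));
}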





Re: Promotion rules ... why no float?

2016-09-06 Thread Sai via Digitalmars-d

Thanks for the replies.

I tend to use a lot of float math (robotics and automation) so I 
almost always want float output in case of division. And once in 
a while I bump into this issue.


I am wondering what are the best ways to work around it.

float c = a / b; // a and b could be integers.

Some solutions:

float c = cast!float(a) / b;
float c = 1f * a / b;


Any less verbose ways to do it?

Another solution I am thinking is to write a user defined integer 
type with an overloaded division to return a float instead and 
use it everywhere in place of integers. I am curious how this 
will work out.





[Issue 16469] Segmentation fault in bigAlloc with negative size

2016-09-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16469

--- Comment #3 from Lodovico Giaretta  ---
(In reply to Cédric Picard from comment #2)
> Is it a duplicate? Judging only from gdb backtrace those are different
> issues. I haven't checked in druntime though.

As in the other issue, the problem is that a negative constant becomes a huge
size_t value, which should trigger an OutOfMemoryError, but segfaults instead.
So IMHO it's the same issue. It may well be that the druntime presents the
wrong code in two different places, but it is probably two copies of the same
logic, as enlarging (not in place) and allocating perform the same checks and
the same steps.

But of course anybody is free to reopen this if it's deemed necessary.

--
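For concreteness, a sketch of the conversion being described (a negative
length reinterpreted as size_t becomes an enormous request, which should
raise OutOfMemoryError rather than segfault):

void main()
{
    long n = -1;
    auto len = cast(size_t) n;        // 0xFFFF_FFFF_FFFF_FFFF, a huge value
    // auto a = new int[](len);       // the problematic allocation path
}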


[Issue 16469] Segmentation fault in bigAlloc with negative size

2016-09-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16469

--- Comment #2 from Cédric Picard  ---
Is it a duplicate? Judging only from gdb backtrace those are different issues.
I haven't checked in druntime though.

--


Re: Templates problem

2016-09-06 Thread ketmar via Digitalmars-d-learn
On Tuesday, 6 September 2016 at 14:50:17 UTC, Lodovico Giaretta 
wrote:
From a quick look, it looks like `results` is a 
`const(TickDuration[3])`, that is a fixed-length array. And 
fixed-length arrays aren't ranges. If you explicitly slice 
them, they become dynamic arrays, which are ranges.


So the solution is to call `map` with `results[]` instead of 
`results`.


Exactly. Static arrays don't have `popFront`, hence `isInputRange` fails.
Yet there is no way to tell that to the user, so one just has to learn
what those cryptic error messages really mean.


Or just get used to always slicing arrays; it's cheap. ;-)
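The point in miniature (a self-contained sketch, not the original
benchmark code):

import std.algorithm.iteration : map;
import std.range.primitives : isInputRange;
import std.stdio : writeln;

void main()
{
    const int[3] results = [1, 2, 3];                  // fixed-length (static) array
    static assert(!isInputRange!(typeof(results)));    // no popFront => not a range
    static assert( isInputRange!(int[]));              // a slice is a range

    auto doubled = results[].map!(x => x * 2);         // slice first, then map
    writeln(doubled);                                  // [2, 4, 6]
}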


Re: Templates problem

2016-09-06 Thread Lodovico Giaretta via Digitalmars-d-learn

On Tuesday, 6 September 2016 at 14:38:54 UTC, Russel Winder wrote:

The code fragment:

const results = benchmark!(run_mean, run_mode, run_stdDev)(1);
	const times = map!((TickDuration t) { return 
(to!Duration(t)).total!"seconds"; })(results);


seems entirely reasonable to me. However rdmd 20160627 begs to 
differ:


run_checks.d(20): Error: template 
std.algorithm.iteration.map!(function (TickDuration t) => 
to(t).total()).map cannot deduce function from argument types 
!()(const(TickDuration[3])), candidates are:

/usr/include/dmd/phobos/std/algorithm/iteration.d(450):
std.algorithm.iteration.map!(function (TickDuration t) => 
to(t).total()).map(Range)(Range r) if (isInputRange!(Unqual!Range))
Failed: ["dmd", "-v", "-o-", "run_checks.d", "-I."]

and I have no idea just now why it is complaining, nor what to 
do to fix it.


From a quick look, it looks like `results` is a 
`const(TickDuration[3])`, that is a fixed-length array. And 
fixed-length arrays aren't ranges. If you explicitly slice them, 
they become dynamic arrays, which are ranges.


So the solution is to call `map` with `results[]` instead of 
`results`.


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Ethan Watson via Digitalmars-d

On Tuesday, 6 September 2016 at 13:44:37 UTC, Ethan Watson wrote:

Suggestions?


Forgot to mention in OP that I had tried this( void* pArg = null 
); to no avail:


mutex.d(19): Deprecation: constructor mutex.Mutex.this all 
parameters have default arguments, but structs cannot have 
default constructors.


It's deprecated and the constructor doesn't get called. So no 
egregious sploits for me.


Re: Taking pipeline processing to the next level

2016-09-06 Thread Manu via Digitalmars-d
On 7 September 2016 at 00:26, finalpatch via Digitalmars-d
 wrote:
> On Tuesday, 6 September 2016 at 14:21:01 UTC, finalpatch wrote:
>>
>> Then some template magic will figure out the LCM of the 2 kernels' pixel
>> width is 3*4=12 and therefore they are fused together into a composite
>> kernel of pixel width 12.  The above line compiles down into a single
>> function invocation, with a main loop that reads the source buffer in 4
>> pixels step, call MySimpleKernel 3 times, then call AnotherKernel 4 times.
>
>
> Correction:
> with a main loop that reads the source buffer in *12* pixels step, call
> MySimpleKernel 3 times, then call AnotherKernel 4 times.

Interesting thoughts. What did you do when buffers weren't a
multiple of the kernel widths?


Re: Taking pipeline processing to the next level

2016-09-06 Thread finalpatch via Digitalmars-d

On Tuesday, 6 September 2016 at 14:26:22 UTC, finalpatch wrote:

On Tuesday, 6 September 2016 at 14:21:01 UTC, finalpatch wrote:
Then some template magic will figure out the LCM of the 2 
kernels' pixel width is 3*4=12 and therefore they are fused 
together into a composite kernel of pixel width 12.  The above 
line compiles down into a single function invocation, with a 
main loop that reads the source buffer in 4 pixels step, call 
MySimpleKernel 3 times, then call AnotherKernel 4 times.


Correction:
with a main loop that reads the source buffer in *12* pixels 
step, call MySimpleKernel 3 times, then call AnotherKernel 4 
times.


And of course the key to the speed is all function calls get 
inlined by the compiler.


Templates problem

2016-09-06 Thread Russel Winder via Digitalmars-d-learn
The code fragment:

const results = benchmark!(run_mean, run_mode, run_stdDev)(1);
const times = map!((TickDuration t) { return 
(to!Duration(t)).total!"seconds"; })(results);

seems entirely reasonable to me. However rdmd 20160627 begs to differ:

run_checks.d(20): Error: template std.algorithm.iteration.map!(function 
(TickDuration t) => to(t).total()).map cannot deduce function from argument 
types !()(const(TickDuration[3])), candidates are:
/usr/include/dmd/phobos/std/algorithm/iteration.d(450):
std.algorithm.iteration.map!(function (TickDuration t) => 
to(t).total()).map(Range)(Range r) if (isInputRange!(Unqual!Range))
Failed: ["dmd", "-v", "-o-", "run_checks.d", "-I."]

and I have no idea just now why it is complaining, nor what to do to
fix it.


-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder



Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Ethan Watson via Digitalmars-d
On Tuesday, 6 September 2016 at 14:27:49 UTC, Lodovico Giaretta 
wrote:
That's because it doesn't initialize (with static opCall) the 
fields of SomeOtherClass, right? I guess that could be solved 
once and for all with some template magic of the binding system.


Correct for the first part. The second part... not so much. Being 
all value types, there's nothing stopping you from instantiating the 
example Mutex on the stack in a function in D - and no way of 
forcing the user to go through a custom construction path 
either.


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Lodovico Giaretta via Digitalmars-d

On Tuesday, 6 September 2016 at 14:10:43 UTC, Ethan Watson wrote:
@disable this() will hide the static opCall and the compiler 
will throw an error.


Yes, I realized that. My bad.

As @disable this is not actually defining a ctor, it should not 
be signaled as hiding the opCall. To me, this looks like an 
oversight in the frontend that should be fixed.


static opCall doesn't work for the SomeOtherClass example 
listed in OP.


That's because it doesn't initialize (with static opCall) the 
fields of SomeOtherClass, right? I guess that could be solved 
once and for all with some template magic of the binding system.


Re: Taking pipeline processing to the next level

2016-09-06 Thread finalpatch via Digitalmars-d

On Tuesday, 6 September 2016 at 14:21:01 UTC, finalpatch wrote:
Then some template magic will figure out the LCM of the 2 
kernels' pixel width is 3*4=12 and therefore they are fused 
together into a composite kernel of pixel width 12.  The above 
line compiles down into a single function invocation, with a 
main loop that reads the source buffer in 4 pixels step, call 
MySimpleKernel 3 times, then call AnotherKernel 4 times.


Correction:
with a main loop that reads the source buffer in *12* pixels 
step, call MySimpleKernel 3 times, then call AnotherKernel 4 
times.




Re: Return type deduction

2016-09-06 Thread Steven Schveighoffer via Digitalmars-d

On 9/5/16 5:59 AM, Andrea Fontana wrote:

I asked this some time (years?) ago. Time for a second try :)

Consider this:

---

T simple(T)() { return T.init; }


void main()
{
int test = simple!int(); // it compiles
int test2 = simple();// it doesn't


  auto test3 = simple!int();

Granted, you are still typing "auto", but only specify the type once.

-Steve


Re: Taking pipeline processing to the next level

2016-09-06 Thread finalpatch via Digitalmars-d

On Tuesday, 6 September 2016 at 03:08:43 UTC, Manu wrote:

I still stand by this, and I listed some reasons above.
Auto-vectorisation is a nice opportunistic optimisation, but it 
can't
be relied on. The key reason is that scalar arithmetic 
semantics are

different than vector semantics, and auto-vectorisation tends to
produce a whole bunch of extra junk code to carefully (usually
pointlessly) preserve the scalar semantics that it's trying to
vectorise. This will never end well.
But the vectorisation isn't the interesting problem here, I'm 
really
just interested in how to work these batch-processing functions 
into
our nice modern pipeline statements without placing an 
unreasonable
burden on the end-user, who shouldn't be expected to go out of 
their

way. If they even have to start manually chunking, I think we've
already lost; they won't know optimal chunk-sizes, or anything 
about

alignment boundaries, cache, etc.


In a previous job I had successfully created a small C++ library 
to perform pipelined SIMD image processing. Not sure how relevant 
it is, but I thought I'd share the design here; perhaps it'll give you 
guys some ideas.


Basically the users of this library only need to write simple 
kernel classes, something like this:


// A kernel that processes 4 pixels at a time
struct MySimpleKernel : Kernel<4>
{
// Tell the library the input and output type
using InputVector  = Vector<__m128, 1>;
using OutputVector = Vector<__m128, 2>;

    template <typename T>
OutputVector apply(const T& src)
{
// T will be deduced to Vector<__m128, 1>
// which is an array of one __m128 element
// Awesome SIMD code goes here...
// And return the output vector
return OutputVector(...);
}
};

Of course the InputVector and OutputVector do not have to be 
__m128, they can totally be other types like int or float.


The cool thing is kernels can be chained together with >> 
operators.


So assume we have another kernel:

struct AnotherKernel : Kernel<3>
{
...
}

Then we can create a processing pipeline with these 2 kernels:

InputBuffer(...) >> MySimpleKernel() >> AnotherKernel() >> 
OutputBuffer(...);


Then some template magic will figure out the LCM of the 2 
kernels' pixel width is 3*4=12 and therefore they are fused 
together into a composite kernel of pixel width 12.  The above 
line compiles down into a single function invocation, with a main 
loop that reads the source buffer in 4 pixels step, call 
MySimpleKernel 3 times, then call AnotherKernel 4 times.


Any number of kernels can be chained together in this way, as 
long as your compiler doesn't explode.


At that time, my benchmarks showed pipelines generated in this 
way often rivals the speed of hand tuned loops.




Re: DIP1001: DoExpression

2016-09-06 Thread Timon Gehr via Digitalmars-d

On 06.09.2016 16:12, Steven Schveighoffer wrote:


I'm not sure I agree with the general principle of the DIP though. I've
never liked comma expressions, and this seems like a waste of syntax.
Won't tuples suffice here when they take over the syntax? e.g. (x, y,
z)[$-1]


(Does not work if x, y or z is of type 'void'.)


Re: DIP1001: DoExpression

2016-09-06 Thread Steven Schveighoffer via Digitalmars-d

On 9/3/16 12:03 PM, Jonathan M Davis via Digitalmars-d wrote:

On Saturday, September 03, 2016 14:42:34 Cauterite via Digitalmars-d wrote:

On Saturday, 3 September 2016 at 14:25:49 UTC, rikki cattermole

wrote:

I propose a slight change:
do(x, y, return z)


Hmm, I suppose I should mention one other motivation behind this
DIP:

I really like to avoid using the 'return' keyword inside
expressions, because I find it visually confusing - hear me out
here -
When you're reading a function and trying to understand its
control flow, one of the main elements you're searching for is
all the places the function can return from.
If the code has a lot of anonymous functions with return
statements this can really slow down the process as you have to
more carefully inspect every return to see if it's a 'real'
return or inside an anonymous function.

Also, in case it wasn't obvious, the do() syntax was inspired by
Clojure:
http://clojure.github.io/clojure/clojure.core-api.html#clojure.core/do


So, instead of having the return statement which everyone knows to look for
and is easy to grep for, you want to add a way to return _without_ a return
statement?


No, the amendment is to show that z is the "return" of the do 
expression. It doesn't return from the function the do expression is in.


I also think that a) we shouldn't have a requirement, or support for, 
return inside the expression -- return is not actually an expression, 
it's a statement. This would be very confusing. b) I like the idea of 
the semicolon to show that the last expression is different.


I'm not sure I agree with the general principle of the DIP though. I've 
never liked comma expressions, and this seems like a waste of syntax. 
Won't tuples suffice here when they take over the syntax? e.g. (x, y, 
z)[$-1]


-Steve


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Ethan Watson via Digitalmars-d
On Tuesday, 6 September 2016 at 13:57:27 UTC, Lodovico Giaretta 
wrote:
Of course I don't know which level of usability you want to 
achieve, but I think that in this case your bind system, when 
binding a default ctor, could use @disable this() and define a 
factory method (do static opCall work?) that calls the C++ ctor.


static opCall doesn't work for the SomeOtherClass example listed 
in OP. @disable this() will hide the static opCall and the 
compiler will throw an error.


Somewhat related: googling "factory method dlang" doesn't provide 
any kind of clarity on what exactly a factory method is. 
Documentation for factory methods/functions could probably be 
improved on this front.


Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Lodovico Giaretta via Digitalmars-d
On Tuesday, 6 September 2016 at 13:57:27 UTC, Lodovico Giaretta 
wrote:
On Tuesday, 6 September 2016 at 13:44:37 UTC, Ethan Watson 
wrote:

[...]
Suggestions?


Of course I don't know which level of usability you want to 
achieve, but I think that in this case your bind system, when 
binding a default ctor, could use @disable this() and define a 
factory method (do static opCall work?) that calls the C++ ctor.


It's not as good-looking as a true default ctor, but it doesn't 
provide any way to introduce bugs and it's not that bad (just a 
couple key strokes).


Correcting my answer. The following code compiles fine:

struct S
{
static S opCall()
{
S res = void;
// call C++ ctor
return res;
}
}

void main()
{
S s = S();
}

But this introduces the possibility of using the default ctor 
inadvertently.

Sadly, the following does not compile:

struct S
{
@disable this();
static S opCall()
{
S res = void;
// call C++ ctor
return res;
}
}

Making this compile would solve your issues.


[Issue 16474] New: CTFE pow

2016-09-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16474

  Issue ID: 16474
   Summary: CTFE pow
   Product: D
   Version: D2
  Hardware: All
OS: All
Status: NEW
  Severity: blocker
  Priority: P1
 Component: dmd
  Assignee: nob...@puremagic.com
  Reporter: turkey...@gmail.com

enum x = 2^^(1.0/10);

Please, please, can we please have '^^' work at CTFE.
It's been forever. I have so many stalled branches of various projects
depending on this eventually being fixed. It's constantly blocking my work.

--


D Boston September Meetup

2016-09-06 Thread Steven Schveighoffer via Digitalmars-d-announce
Posted here: 
https://www.meetup.com/Boston-area-D-Programming-Language-Meetup/events/233871852/


In general, I'm still looking to see if we can host meetups in a private 
setting. Anyone who has info on any companies in or around Boston that 
would be willing to host, or might know of a place for us to use for 
talks and discussion, please email me schvei...@yahoo.com


-Steve



Re: Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Lodovico Giaretta via Digitalmars-d

On Tuesday, 6 September 2016 at 13:44:37 UTC, Ethan Watson wrote:

[...]
Suggestions?


Of course I don't know which level of usability you want to 
achieve, but I think that in this case your bind system, when 
binding a default ctor, could use @disable this() and define a 
factory method (do static opCall work?) that calls the C++ ctor.


It's not as good-looking as a true default ctor, but it doesn't 
provide any way to introduce bugs and it's not that bad (just a 
couple key strokes).


Struct default constructor - need some kind of solution for C++ interop

2016-09-06 Thread Ethan Watson via Digitalmars-d
Alright, so now I've definitely come up across something with 
Binderoo that has no easy solution.


For the sake of this example, I'm going to use the class I'm 
binary-matching with a C++ class and importing functionality with 
C++ function pointers to create a 100% functional match - our 
Mutex class. It doesn't have to be a mutex, it just needs to be 
any C++ class where a default constructor is non-trivial.


In C++, it looks much like what you'd expect:

class Mutex
{
public:
  Mutex();
  ~Mutex();
  void lock();
  bool tryLock();
  void unlock();

private:
  CRITICAL_SECTION  m_criticalSection;
};

Cool. Those functions call the exact library functions you'd 
expect, the constructor does an InitializeCriticalSection and the 
destructor does a DeleteCriticalSection.


Now, with Binderoo aiming to provide complete C++ matching to the 
point where it doesn't matter whether a class was allocated in 
C++ or D, this means I've chosen to make every C++-matching class 
a value type rather than a reference type. The reasoning is 
pretty simple:


class SomeOtherClass
{
private:
  SomeVitalObject m_object;
  Mutex   m_mutex;
};

This is a pretty common pattern. Other C++ classes will embed 
mutex instances inside them. A reference type for matching in 
this case is right out of the question. Which then leads to a 
major conundrum - default constructing this object in D.


D structs have initialisers, but you're only allowed constructors 
if you pass arguments. With a Binderoo matching struct 
declaration, it would basically look like this:


struct Mutex
{
  @BindConstructor void __ctor();
  @BindDestructor void __dtor();

  @BindMethod void lock();
  @BindMethod bool tryLock();
  @BindMethod void unlock();

  private CRITICAL_SECTION m_criticalSection;
}

After mixin expansion, it would come out looking something 
like this:


struct Mutex
{
  pragma( inline ) this() { __methodTable.function0(); }
  pragma( inline ) ~this() { __methodTable.function1(); }

  pragma( inline ) void lock() { __methodTable.function2(); }
  pragma( inline ) bool tryLock() { return 
__methodTable.function3(); }

  pragma( inline ) void unlock() { __methodTable.function4(); }

  private CRITICAL_SECTION m_criticalSection;
}

(Imagine __methodTable is a gshared object with the relevant 
function pointers imported from C++.)


Of course, it won't compile. this() is not allowed for obvious 
reasons. But in this case, we need to call a corresponding 
non-trivial constructor in C++ code to get the functionality 
match.


Of course, given the simplicity of the class, I don't need to 
import C++ code to provide exact functionality at all. But I 
still need to call InitializeCriticalSection somehow whenever 
it's instantiated anywhere. This pattern of non-trivial default 
constructors is certainly not limited to mutexes, not in our 
codebase or wider C++ practices at all.


So now I'm in a bind. This is one struct I need to construct 
uniquely every time. And I also need to keep the usability up to 
not require calling some other function since this is matching a 
C++ class's functionality, including its ability to instantiate 
anywhere.


Suggestions?


Re: Unicode function name? ∩

2016-09-06 Thread Jesper Tholstrup via Digitalmars-d-learn

On Tuesday, 6 September 2016 at 02:22:50 UTC, Illuminati wrote:

It's concise and has a very specific meaning.


Well, only if we can agree on what the symbols mean. I'm not sure 
that every symbol is concise and specific across the fields of 
mathematics, statistics, and physics.


The worst part, however, is our (humans, that is) intrinsic 
desire to be "clever". There will be an endless incorrect use of 
symbols, which will render code very difficult to understand 
Friday afternoon when things break.


The whole point of symbols is to simplify, ∩ is more simple 
than intersect as the first requires 1 symbol and the second 
requires 8 symbols.


I don't buy that argument - is fewer symbols better? I disagree: it's 
a lot easier to make, and a lot harder to catch, a ∩ vs ∪ error 
compared to an 'intersect()' vs 'union()' error.



How is Unicode normalization handled? It is my impression that 
certain symbols can be represented in more than one way. I could 
be wrong...




[Issue 16473] operator overloading is broken

2016-09-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16473

Илья Ярошенко  changed:

   What|Removed |Added

Summary|operator overloading is |operator overloading is
   |brocken |broken

--


[Issue 16473] New: operator overloading is brocken

2016-09-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16473

  Issue ID: 16473
   Summary: operator overloading is brocken
   Product: D
   Version: D2
  Hardware: All
OS: All
Status: NEW
  Severity: major
  Priority: P1
 Component: dmd
  Assignee: nob...@puremagic.com
  Reporter: ilyayaroshe...@gmail.com

---
import std.experimental.ndslice;

void main()
{
auto sl = new double[10].sliced(2, 5);
auto d = -sl[0, 1];
}
---
/d921/f269.d(6): Error: template std.experimental.ndslice.slice.Slice!(2LU,
double*).Slice.opIndexUnary cannot deduce function from argument types
!("-")(int, int), candidates are:
/opt/compilers/dmd2/include/std/experimental/ndslice/slice.d(1980):   
std.experimental.ndslice.slice.Slice!(2LU, double*).Slice.opIndexUnary(string
op, Indexes...)(Indexes _indexes) if (isFullPureIndex!Indexes && (op == "++" ||
op == "--"))
/opt/compilers/dmd2/include/std/experimental/ndslice/slice.d(2008):   
std.experimental.ndslice.slice.Slice!(2LU, double*).Slice.opIndexUnary(string
op, Slices...)(Slices slices) if (isFullPureSlice!Slices && (op == "++" || op
== "--"))
-

There are two prototypes in Slice, both with the if (op == "++" || op == "--")
condition.
So, I don't think we need to add all opIndexUnary variants.
Instead, if the expression -sl[0, 1] cannot be expanded with opIndexUnary, then it
should be expanded with -(sl.opIndex(0, 1)).

--


Re: Valid to assign to field of struct in union?

2016-09-06 Thread Johan Engelen via Digitalmars-d

On Tuesday, 6 September 2016 at 12:56:24 UTC, Johan Engelen wrote:


The compiler allows it, but it leads to a bug with CTFE of this 
code: the assert fails.


Before someone smart tries it, yes the code works with LDC, but 
wait... swap the order of `first` and `second` in the union, and 
BOOM!
Internally, CTFE of the code leads to a corrupt union initializer 
array. LDC and DMD do things a little differently in codegen. 
Oversimplified: LDC will use the first member of the union, DMD 
the last.


Valid to assign to field of struct in union?

2016-09-06 Thread Johan Engelen via Digitalmars-d

Hi all,
  I have a question about the validity of this code:
```
void main()
{
struct A {
int i;
}
struct S
{
union U
{
A first;
A second;
}
U u;

this(A val)
{
u.second = val;
assign(val);
}

void assign(A val)
{
u.first.i = val.i+1;
}
}
enum a = S(A(1));

assert(a.u.first.i == 2);
}
```

My question is: is it allowed to assign to a field of a struct 
inside a union, without there having been an assignment to the 
(full) struct before?


The compiler allows it, but it leads to a bug with CTFE of this 
code: the assert fails.
(changing `enum` to `auto` moves the evaluation to runtime, and 
all works fine)


Reported here: https://issues.dlang.org/show_bug.cgi?id=16471.



Re: Taking pipeline processing to the next level

2016-09-06 Thread Jerry via Digitalmars-d

On Monday, 5 September 2016 at 05:08:53 UTC, Manu wrote:

I mostly code like this now:
  data.map!(x => transform(x)).copy(output);


So you basically want to make the lazy computation eager and store 
the result?


data.map!(x => transform(x)).array

Will allocate a new array and fill it with the result of map.
And if you want to recycle the buffer I guess writing a buffer 
function would be trivial.
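
A rough sketch of such a buffer-reusing helper (the name `into` and its exact 
shape are invented here; it assumes the destination buffer is at least as long 
as the source range):

import std.algorithm : map;
import std.range.primitives : isInputRange;
import std.stdio : writeln;

// Hypothetical helper: copy a range into a caller-supplied buffer and
// return the filled slice, so no new array is allocated per call.
T[] into(R, T)(R src, T[] buffer) if (isInputRange!R)
{
    size_t i;
    foreach (e; src)
        buffer[i++] = e;
    return buffer[0 .. i];
}

void main()
{
    int[] data = [1, 2, 3, 4];
    auto buffer = new int[data.length];   // reused across calls
    auto doubled = data.map!(x => x * 2).into(buffer);
    writeln(doubled);                     // [2, 4, 6, 8]
}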






[Issue 16471] [CTFE] Incorrect CTFE when assigning to union struct fields

2016-09-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16471

--- Comment #1 from Johan Engelen  ---
A simpler testcase:
```
void main()
{
struct A {
int i;
}
struct S
{
union U
{
A first;
A second;
}
U u;

this(A val)
{
u.second = val;
assign(val);
}

void assign(A val)
{
u.first.i = val.i+1;
}
}
enum a = S(A(1));

assert(a.u.first.i == 2);
}
```

--


[Issue 16469] Segmentation fault in bigAlloc with negative size

2016-09-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16469

Lodovico Giaretta  changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 CC||lodov...@giaretart.net
 Resolution|--- |DUPLICATE

--- Comment #1 from Lodovico Giaretta  ---


*** This issue has been marked as a duplicate of issue 16470 ***

--


Re: Why is this?

2016-09-06 Thread Manu via Digitalmars-d
On 6 September 2016 at 21:28, Timon Gehr via Digitalmars-d
 wrote:
> On 06.09.2016 08:07, Manu via Digitalmars-d wrote:
>>
>> I have weird thing:
>>
>> template E(F){
>> enum E {
>> K = F(1)
>> }
>> }
>>
>> struct S(F = float, alias e_ = E!double.K) {}
>> S!float x; // Error: E!double.K is used as a type
>>
>> alias T = E!double.K;
>> struct S2(F = float, alias e_ = T) {}
>> S2!float y; // alias makes it okay...
>>
>> struct S3(F = float, alias e_ = (E!double.K)) {}
>> S3!float z; // just putting parens make it okay as well... wat!?
>>
>>
>> This can't be right... right?
>>
>> No problem if E is not a template.
>>
>
> Bug.

https://issues.dlang.org/show_bug.cgi?id=16472


[Issue 16472] New: template alias parameter bug

2016-09-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16472

  Issue ID: 16472
   Summary: template alias parameter bug
   Product: D
   Version: D2
  Hardware: All
OS: All
Status: NEW
  Severity: normal
  Priority: P1
 Component: dmd
  Assignee: nob...@puremagic.com
  Reporter: turkey...@gmail.com

template E(F){
enum E {
K = F(1)
}
}

struct S(F = float, alias e_ = E!double.K) {}
S!float x; // Error: E!double.K is used as a type

alias T = E!double.K;
struct S2(F = float, alias e_ = T) {}
S2!float y; // alias makes it okay...

struct S3(F = float, alias e_ = (E!double.K)) {}
S3!float z; // just putting parens make it okay as well... wat!?


This can't be right... right?

No problem if E is not a template.

--


[Issue 16470] Segfault with negative array length

2016-09-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16470

--- Comment #2 from Lodovico Giaretta  ---
*** Issue 16469 has been marked as a duplicate of this issue. ***

--


[Issue 16467] templated function default argument take into account when not needed

2016-09-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16467

Jonathan M Davis  changed:

   What|Removed |Added

 CC||issues.dl...@jmdavisprog.co
   ||m

--- Comment #1 from Jonathan M Davis  ---
I suspect that this will be declared to be "not a bug." While I can understand
your reasoning, the problem is that when you pass a string to your identity
function, IFTI instantiates it with string. It's only called _after_ the
function has been compiled, and the default argument does not work with the
type in question.

Remember that templatizing a function directly is just shorthand for declaring
an eponymous template containing that function. So, this would be equivalent to
what you declared:

template identity(T)
{
T identity(T t = 0)
{
return t;
}
}

And if the template is instantiated with string, then the default argument is
not valid. Also consider the case where you do

auto result = identity!string();

It's exactly the same template instantiation as identity("hello"), but it would
need the default argument, which is the wrong type.

--


[Issue 16471] New: [CTFE] Incorrect CTFE when assigning to union struct fields

2016-09-06 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16471

  Issue ID: 16471
   Summary: [CTFE] Incorrect CTFE when assigning to union struct
fields
   Product: D
   Version: D2
  Hardware: All
OS: All
Status: NEW
  Severity: major
  Priority: P1
 Component: dmd
  Assignee: nob...@puremagic.com
  Reporter: jbc.enge...@gmail.com

The following code asserts with DMD 2.071:
```
struct StructA
{
this(int a)
{
i = a;
}

import std.typecons;
mixin Proxy!i;

int i;
}

struct TestValue
{
union UnionType
{
int first;
StructA second;
StructA third;
}

UnionType value;

this(StructA val)
{
assign(val);
}

void assign(StructA val)
{
value.third = val;
assignSecond(7);
}

void assignSecond(int val)
{
value.second = StructA(val);
}
}

void main()
{
enum ctObj = TestValue(StructA(1));
    // The last assignment in TestValue's construction is to value.second.
assert(ctObj.value.second.i == 7); //
// Note: assert(ctObj.value.third.i == 1) passes, but shouldn't.
}
```

When `enum` is changed to `auto` it all works fine.

The problem lies in a corrupt `StructLiteralExp` array of initializers
(`StructLiteralExp.elements`). In e2ir `toElemStructLit` there is a comment
saying:
"If a field has explicit initializer (*sle->elements)[i] != NULL), any other
overlapped fields won't have initializer."
This however, is not true. When assigning to union fields in `assign` and
`assignSecond`, multiple union fields will get an initializer
(`(*sle->elements)[i] != NULL`). Even without `assignSecond`, the `first` and
`second` fields will both have an initializer (this bugs LDC,
https://github.com/ldc-developers/ldc/issues/1324, where DMD's codegen
(accidentally) gets it right).

--


Re: Why is this?

2016-09-06 Thread Timon Gehr via Digitalmars-d

On 06.09.2016 08:07, Manu via Digitalmars-d wrote:

I have weird thing:

template E(F){
enum E {
K = F(1)
}
}

struct S(F = float, alias e_ = E!double.K) {}
S!float x; // Error: E!double.K is used as a type

alias T = E!double.K;
struct S2(F = float, alias e_ = T) {}
S2!float y; // alias makes it okay...

struct S3(F = float, alias e_ = (E!double.K)) {}
S3!float z; // just putting parens make it okay as well... wat!?


This can't be right... right?

No problem if E is not a template.



Bug.


Re: CompileTime performance measurement

2016-09-06 Thread Jonathan M Davis via Digitalmars-d
On Tuesday, September 06, 2016 10:46:11 Martin Nowak via Digitalmars-d wrote:
> On Sunday, 4 September 2016 at 00:04:16 UTC, Stefan Koch wrote:
> > Hi Guys.
> >
> > I recently implemented __ctfeWriteln.
> > Based on that experience I have now implemented another pseudo
> > function called __ctfeTicksMs.
> > That evaluates to a uint representing the number of
> > milliseconds elapsed between the start of dmd and the time of
> > semantic evaluation of this expression.
>
> For bigger CTFE programs it might be helpful.
> Milliseconds are a fairly low resolution; I would think hnsecs or so would
> make a better unit. Using core.time.TickDuration for that would make sense.

If you're going to do that, use core.time.Duration. TickDuration is slated to
be deprecated once the functionality in Phobos that uses it has been
deprecated. Duration replaces its functionality as a duration, and MonoTime
replaces its functionality as a timestamp of the monotonic clock.
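
For reference, a minimal sketch of the replacement pair in use (MonoTime for the
timestamp, Duration for the elapsed time):

import core.time : Duration, MonoTime;
import std.stdio : writefln;

void main()
{
    // MonoTime: a timestamp from the monotonic clock.
    immutable start = MonoTime.currTime;

    // ... the work being measured ...

    // Subtracting two MonoTime values yields a Duration.
    Duration elapsed = MonoTime.currTime - start;
    writefln("took %s ms", elapsed.total!"msecs");
}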

- Jonathan M Davis



Re: C# 7 Features - Tuples

2016-09-06 Thread Nick Treleaven via Digitalmars-d
On Monday, 5 September 2016 at 15:50:31 UTC, Lodovico Giaretta 
wrote:
On Monday, 5 September 2016 at 15:43:43 UTC, Nick Treleaven 
wrote:

We can already (almost) do that:


import std.stdio, std.typecons;

void unpack(T...)(Tuple!T tup, out T decls)
{
static if (tup.length > 0)
{
decls[0] = tup[0];
tuple(tup[1..$]).unpack(decls[1..$]);
}
}

void main()
{
auto t = tuple(1, "a", 3.0);
int i;
string s;
double d;
t.unpack(i, s, d);
writeln(i);
writeln(s);
writeln(d);
}


The main benefit of supporting tuple syntax is unpacking into new 
declarations (writing Tuple!(...) or tuple!(...) isn't that 
significant IMO). I was suggesting that out argument 
*declarations* actually provide this and are a more general 
feature.


Re: D Meetup in Hamburg?

2016-09-06 Thread Dgame via Digitalmars-d
On Tuesday, 6 September 2016 at 09:42:12 UTC, Martin Tschierschke 
wrote:

Hi All,
anybody interested to meet in Hamburg, Germany?

Time and location will be found!

Regards mt.


Yes, I would be interested.


Re: [OT] LLVM 3.9 released - you can try the release already with LDC!

2016-09-06 Thread eugene via Digitalmars-d-announce
On Tuesday, 6 September 2016 at 09:42:11 UTC, Lodovico Giaretta 
wrote:


There are a lot of projects using LLVM [1]. The fact that LDC is 
often cited in the release notes means that it's one of the 
best. This is free advertisement, as the LLVM release notes are 
read by PL people that may not know D. The fact that LDC is 
recognized as one of the most important LLVM projects also 
means that the LLVM folks will try to help the LDC folks when 
needed.


[1] http://llvm.org/ProjectsWithLLVM/


i don't think counting each time ldc and d are mentioned in the 
llvm community will help ldc and d become popular)))


Re: dlang-vscode

2016-09-06 Thread Andrej Mitrovic via Digitalmars-d
On 9/6/16, John Colvin via Digitalmars-d  wrote:
> I've used it a bit. See also:

VS code is pretty solid!

I'm too used to Sublime to start using it now, but the fact it's
open-source is a huge plus. Some of its addons are pretty great, for
example you can run an opengl shader and have its output display in
the editor.


Re: CompileTime performance measurement

2016-09-06 Thread Martin Nowak via Digitalmars-d

On Sunday, 4 September 2016 at 00:04:16 UTC, Stefan Koch wrote:

Hi Guys.

I recently implemented __ctfeWriteln.
Based on that experience I have now implemented another pseudo 
function called __ctfeTicksMs.
That evaluates to a uint representing the number of 
milliseconds elapsed between the start of dmd and the time of 
semantic evaluation of this expression.


For bigger CTFE programs it might be helpful.
Milliseconds are a fairly low resolution; I would think hnsecs or 
so would make a better unit. Using core.time.TickDuration for that 
would make sense.


Re: CompileTime performance measurement

2016-09-06 Thread Martin Nowak via Digitalmars-d

On Sunday, 4 September 2016 at 00:04:16 UTC, Stefan Koch wrote:

I recently implemented __ctfeWriteln.


Nice, is it only for your interpreter, or can we move 
https://trello.com/c/6nU0lbl2/24-ctfewrite to done? I think 
__ctfeWrite would be a better primitive. And we could actually 
consider specializing std.stdio.write* for CTFE.


Re: CompileTime performance measurement

2016-09-06 Thread Martin Nowak via Digitalmars-d
On Sunday, 4 September 2016 at 00:08:14 UTC, David Nadlinger 
wrote:

Please don't. This makes CTFE indeterministic.


Well we already have __TIMESTAMP__, though I think it doesn't 
change during compilation.




Re: CompileTime performance measurement

2016-09-06 Thread Martin Tschierschke via Digitalmars-d

On Sunday, 4 September 2016 at 19:36:16 UTC, Stefan Koch wrote:
On Sunday, 4 September 2016 at 12:38:05 UTC, Andrei 
Alexandrescu wrote:

On 9/4/16 6:14 AM, Stefan Koch wrote:
writeln and __ctfeWriteln are to be regarded as completely 
different

things.
__ctfeWriteln is a debugging tool only!
It should not be used in any production code.


Well I'm not sure how that would be reasonably enforced. -- 
Andrei


One could enforce it by defining it inside a version or debug 
block.
The reason I do not want to see this in production code is as 
follows:


In the engine I am working on, communication between it and the 
rest of dmd is kept to a minimum, because :


"The new CTFE engine abstracts away everything into bytecode,
there is no guarantee that the bytecode-evaluator is run in the 
same process or even on the same machine."


An alternative might be to save your CTFE values in a static 
array and output them on startup of the compiled program. The same 
idea is used in vibe.d to make caching of template evaluation 
possible. See: http://code.dlang.org/packages/diet-ng 
(experimental HTML template caching).
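
A toy sketch of that idea (all names here are made up): the compile-time 
computation records its debug messages instead of printing them, and the 
program replays them at run time:

import std.conv : to;
import std.stdio : writeln;

// Hypothetical: result plus the log gathered while computing it at CTFE.
struct CtfeResult
{
    int value;
    string[] log;
}

CtfeResult compute()
{
    CtfeResult r;
    r.value = 6 * 7;
    r.log ~= "ctfe: computed " ~ r.value.to!string;
    return r;
}

enum result = compute();       // forced CTFE; the log is baked into the binary

void main()
{
    foreach (line; result.log) // replayed on startup
        writeln(line);
    writeln(result.value);
}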





Simplifying conversion and formatting code in Phobos

2016-09-06 Thread Andrei Alexandrescu via Digitalmars-d
We've learned a lot about good D idioms since std.conv was initiated. 
And of course it was always the case that writing libraries is quite 
different from writing code to be used within one sole application. 
Consider:


* to!T(x) must work for virtually all types T and typeof(x) that are 
sensible. The natural way of doing so is to have several 
semi-specialized overloads.


* Along the way it makes sense to delegate work from the user-level 
convenient syntax to a more explicit but less convenient syntax. Hence 
the necessity of toImpl.


* The need to "please format all arguments as a string" was a natural 
necessity e.g. as a second argument to assert or enforce. Hence the 
necessity of text(x, y, z) as the concatenation of to!string(x), 
to!string(y), and to!string(z).


* FormattedWrite was necessary for writefln and related.

* All of these have similarities and distinctions so they may use one 
another opportunistically. The alternative is to write the same code in 
different parts for the sake of artificially separating things that in 
fact are related.


The drawback of this is the effort of taking it all in as a reader and maintainer. We 
have the 'text' template which calls the 'textImpl' template which calls 
the 'to' template which calls the 'toImpl' template which calls the 
'parse' template which calls the 'FormattedWrite' template which calls 
the 'to' template. Not easy to find where the work is ultimately done.


It is a challenge to find the right balance among everything. But I'm 
sure we can do better than what we have now because of the experience 
we've gained.


If anyone would like to take a fresh look at simplifying the code 
involved, it would be quite interesting. The metrics here are simpler 
code, fewer names, simpler documentation (both internal and external), 
and less code.



Andrei


Re: [OT] LLVM 3.9 released - you can try the release already with LDC!

2016-09-06 Thread Lodovico Giaretta via Digitalmars-d-announce

On Tuesday, 6 September 2016 at 09:21:06 UTC, eugene wrote:

On Sunday, 4 September 2016 at 17:18:10 UTC, Kai Nacke wrote:


This is the 9th time that LDC and D are mentioned in the LLVM 
release notes!




lol, how does it help?)))


There are a lot of projects using LLVM [1]. The fact that LDC is 
often cited in the release notes means that it's one of the best. 
This is free advertisement, as the LLVM release notes are read by 
PL people that may not know D. The fact that LDC is recognized as 
one of the most important LLVM projects also means that the LLVM 
folks will try to help the LDC folks when needed.


[1] http://llvm.org/ProjectsWithLLVM/


D Meetup in Hamburg?

2016-09-06 Thread Martin Tschierschke via Digitalmars-d

Hi All,
anybody interested to meet in Hamburg, Germany?

Time and location will be found!

Regards mt.


Re: dlang-vscode

2016-09-06 Thread mogu via Digitalmars-d

On Tuesday, 6 September 2016 at 05:38:28 UTC, Manu wrote:
On 6 September 2016 at 14:22, Daniel Kozak via Digitalmars-d 
 wrote:

Dne 6.9.2016 v 03:41 Manu via Digitalmars-d napsal(a):


On 6 September 2016 at 09:51, John Colvin via Digitalmars-d 
 wrote:


On Monday, 5 September 2016 at 22:17:59 UTC, Andrei 
Alexandrescu wrote:


Google Alerts just found this:

https://marketplace.visualstudio.com/items?itemName=dlang-vscode.dlang

Is anyone here familiar with that work?


Andrei



I've used it a bit. See also:


https://marketplace.visualstudio.com/search?term=Dlang=VSCode=Relevance


I used it, but then switched to code-d which seemed more 
mature (see

John's link above).
The problem with code-d is its dependency workspace-d has a 
really
painful installation process. It needs to be available in 
d-apt.


It is OK on ArchLinux ;)

pacaur -S workspace-d

or

packer -S workspace-d

or

yaourt -S workspace-d


Yup, it works well on my arch machine, but it doesn't work on 
my ubuntu machines at work. Ubuntu is the most popular distro; 
it really needs to work well ;)


Clone and use workspaced-installer in that repo; it worked for me on 
Ubuntu 16.04 LTS.


Re: Usability of D for Visually Impaired Users

2016-09-06 Thread Chris via Digitalmars-d

On Monday, 5 September 2016 at 20:59:46 UTC, Walter Bright wrote:

On 9/5/2016 2:14 AM, Chris wrote:
A blind user I worked with used D for a term paper and he 
could find his way
around on dlang.org. So it seems to be pretty ok already. We 
should only be

careful with new stuff like language tours and tutorials.


This is good to hear. But with constant changes to dlang.org, 
it can be very easy to slip away from that, especially with all 
the pressure to "modernize" the look-and-feel with crap. We'll 
need constant vigilance!


I agree, the webpage should comply with accessibility rules 
consistently and not fall foul of flashy design when adding to 
the homepage.


As regards the zoom: most browsers handle the zoom very well 
(Ctrl+'+') these days. Visually impaired users often use the zoom 
that comes with the screen reading software or the inbuilt, 
OS-specific zoom (cf. Apple's Voice Over; Windows also has a zoom as 
far as I know).


Re: [OT] LLVM 3.9 released - you can try the release already with LDC!

2016-09-06 Thread eugene via Digitalmars-d-announce

On Sunday, 4 September 2016 at 17:18:10 UTC, Kai Nacke wrote:


This is the 9th time that LDC and D are mentioned in the LLVM 
release notes!




lol, how does it help?)))



Re: Quality of errors in DMD

2016-09-06 Thread Laeeth Isharc via Digitalmars-d
On Monday, 5 September 2016 at 15:55:16 UTC, Dominikus Dittes 
Scherkl wrote:
On Sunday, 4 September 2016 at 20:14:37 UTC, Walter Bright 
wrote:

On 9/4/2016 10:56 AM, David Nadlinger wrote:
The bug report I need is the assert location, and a test case 
that causes it. Users do not need to supply any other 
information.


So, if we assume the user cannot debug if he hits a compiler 
bug, I as a compiler developer would at least like to receive a 
report containing a simple number, to identify which of the 830 
assert(0)'s in the code I deemed to be unreachable was 
actually hit.


Because even if I don't receive a reduced testcase, I have a 
strong hint what assumption I should re-think, now that I know 
that it is effectively NOT unreachable.


Could we agree so far?

So what problem would it be to give the assert(0)'s a number 
each and print out a message:

"Compiler bug: assert #xxx was hit, please send a bug report"
?


I wonder what people think of opt-in automatic statistics 
collecting.  Not a substitute for a bug report, as one doesn't 
want source code being shipped off, but suppose a central server 
at dlang.org tracks internal compiler errors for those who have 
opted in. At least it will be more obvious more quickly which 
parts of code seem to be asserting.




Re: ADL

2016-09-06 Thread Guillaume Boucher via Digitalmars-d

On Monday, 5 September 2016 at 23:50:33 UTC, Timon Gehr wrote:
One hacky way is to provide a mixin template to create a 
wrapper type within each module that needs it, with 
std.typecons.Proxy. Proxy picks up UFCS functions in addition 
to member functions and turns them into member functions. But 
this leads to a lot of template bloat, because callers that 
share the same added UFCS functions don't actually share the 
instantiation. Also, it only works one level deep and 
automatically generated Wrapper types are generally prone to be 
somewhat brittle.


I don't think cloning a type just to add functionality can 
possibly be the right way.


A C++-style way of customizing behavior is using traits. Those traits 
would be a compile-time argument to the algorithm function.  
Instead of arg.addone() one would use trait.addone(arg).  It is 
not hard to write a proxy that merges trait and arg into one 
entity, but this should be done by the callee.
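
A small sketch of what that might look like in D (DefaultTrait, DoubleTrait and 
addone are invented names, purely for illustration):

import std.stdio : writeln;

// Invented default trait: plain static function, no knowledge of the call site.
struct DefaultTrait
{
    static T addone(T)(T x) { return x + 1; }
}

// A user-supplied trait overriding the behaviour for a specific use.
struct DoubleTrait
{
    static int addone(int x) { return x + 2; }
}

// The algorithm takes the trait as a compile-time argument and calls
// trait.addone(arg) instead of relying on a lookup at the call site.
auto algorithm(alias trait = DefaultTrait, T)(T arg)
{
    return trait.addone(arg);
}

void main()
{
    writeln(algorithm(41));               // 42, via DefaultTrait
    writeln(algorithm!DoubleTrait(40));   // 42, via the custom trait
}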


The default trait would be type.addone_trait if it exists, or 
else some default trait that uses all available functions and 
member functions from the module of the type.  In most cases 
this is enough, but it enables adding traits to existing 
types and also different implementations of the same traits.


This gets really bloaty in C++, and that's why usually ADL is 
preferred, but D has the capability to reduce the overhead to a 
minimum.


It doesn't quite make it possible to separate the implementation 
of types, algorithms and traits (UFCS) into different modules 
such that they don't know each other.  Either the user has to 
specify the trait at each call, or either the type's module or the 
algorithm's module has to import the traits.


What I call traits is very similar to type classes in other 
languages where (among other features) the traits are 
automatically being attached to the type.  (Type classes are also 
what C++ concepts originally wanted to be.)


Re: Promotion rules ... why no float?

2016-09-06 Thread Jonathan M Davis via Digitalmars-d
On Tuesday, September 06, 2016 07:26:37 Andrea Fontana via Digitalmars-d 
wrote:
> On Tuesday, 6 September 2016 at 07:04:24 UTC, Sai wrote:
> > Consider this:
> >
> > import std.stdio;
> > void main()
> > {
> >
> > byte a = 6, b = 7;
> > auto c = a + b;
> > auto d = a / b;
> > writefln("%s, %s", typeof(c).stringof, c);
> > writefln("%s, %s", typeof(d).stringof, d);
> >
> > }
> >
> > Output :
> > int, 13
> > int, 0
> >
> > I really wish d gets promoted to a float. Besides C
> > compatibility, any reason why d got promoted only to int even
> > at the risk of serious bugs and loss of precision?
> >
> > I know I could have typed "auto a = 6.0" instead, but still it
> > feels like an half-baked promotion rules.
>
> Integer division and modulo are not bugs.

Indeed. There are a lot of situations where they are exactly what you want.
Floating point adds imprecision to calculations that you often want to
avoid. Also, if I understand correctly, floating point operations are more
expensive than the corresponding integer operations. So, it would be very
bad if dividing integers resulted in a floating point value by default. And
if you _do_ want a floating point value, then you can cast one of the
operands to the floating point type you want, and you'll get floating point
arithmetic. Personally, I find that I rarely need floating point values at
all.

Another thing to take into account is that you don't normally want stuff
like arithmetic operations to do type conversion. We already have enough
confusion about byte and short being converted to int when you do arithmetic
on them. Having other types suddenly start changing depending on the
operations involved would just make things worse.

- Jonathan M Davis



Re: Promotion rules ... why no float?

2016-09-06 Thread Daniel Kozak via Digitalmars-d


Dne 6.9.2016 v 09:04 Sai via Digitalmars-d napsal(a):

Consider this:

import std.stdio;
void main()
{
byte a = 6, b = 7;
auto c = a + b;
auto d = a / b;
writefln("%s, %s", typeof(c).stringof, c);
writefln("%s, %s", typeof(d).stringof, d);
}

Output :
int, 13
int, 0

I really wish d gets promoted to a float. Besides C compatibility, any 
reason why d got promoted only to int even at the risk of serious bugs 
and loss of precision?


I know I could have typed "auto a = 6.0" instead, but still it feels 
like half-baked promotion rules.
No, it is a really important rule. If there were automatic promotion to 
float for auto, it would hurt performance in cases where you want int, 
and it would break things.

But maybe in the case below it could make more sense:

float d = a / b; // or it could print a warning because there is a high 
probability this is an error


ok, maybe something like a linter could be used to find those places



Re: dependency analysis for makefile construction

2016-09-06 Thread Basile B. via Digitalmars-d-learn

On Monday, 5 September 2016 at 18:49:25 UTC, Basile B. wrote:

On Monday, 5 September 2016 at 18:22:08 UTC, ag0aep6g wrote:

On 09/04/2016 12:07 AM, dan wrote:
Are there any FOSS tools for doing dependency analysis of 
[...]

[...]
I'm not aware of a standalone tool that does something like 
this. If you want to write one, you could do like rdmd and use 
`dmd -deps`/`dmd -v`, or you could use a standalone D parser 
like libdparse.


http://code.dlang.org/packages/libdparse


I have one in dastworx, based on dparse:

https://github.com/BBasile/Coedit/blob/master/dastworx/src/imports.d#L64

It would be very easy to make it a standalone tool (dastworx is 
a standalone tool but its main() is specific to Coedit) or to 
add such an analyzer to Dscanner.


about 200 SLOCs not more.


Oops, big mouth syndrome here: it would actually be more complex, 
because a persistent associative array is needed to link 
filenames to modules, projects to filenames, date stamps to 
filenames, etc. Otherwise at each execution there is a lot of 
stuff to parse. dastworx does not implement these features 
because they are done in an IDE module (the "libman").


Re: Promotion rules ... why no float?

2016-09-06 Thread Andrea Fontana via Digitalmars-d

On Tuesday, 6 September 2016 at 07:04:24 UTC, Sai wrote:

Consider this:

import std.stdio;
void main()
{
byte a = 6, b = 7;
auto c = a + b;
auto d = a / b;
writefln("%s, %s", typeof(c).stringof, c);
writefln("%s, %s", typeof(d).stringof, d);
}

Output :
int, 13
int, 0

I really wish d gets promoted to a float. Besides C 
compatibility, any reason why d got promoted only to int even 
at the risk of serious bugs and loss of precision?


I know I could have typed "auto a = 6.0" instead, but still it 
feels like half-baked promotion rules.


Integer division and modulo are not bugs.

Andrea

