Re: Always false float comparisons

2016-05-13 Thread jmh530 via Digitalmars-d

On Saturday, 14 May 2016 at 01:26:18 UTC, Walter Bright wrote:


An anecdote: a colleague of mine was once doing a chained 
calculation. At every step, he rounded to 2 digits of precision 
after the decimal point, because 2 digits of precision was 
enough for anybody. I carried out the same calculation to the 
max precision of the calculator (10 digits). He simply could 
not understand why his result was off by a factor of 2, which 
was a couple hundred times his individual roundoff error.





I'm sympathetic to this. Some of my work deals with statistics 
and you see people try to use formulas that are faster but less 
accurate, and it can really get you into trouble. Var(X) = E(X^2) 
- E(X)^2 is only true for real numbers, not floating point 
arithmetic. It can also lead to weird results when dealing with 
matrix inverses.
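
To make the cancellation concrete, here is a minimal D sketch 
(illustrative numbers only): with a large mean and a tiny spread, 
the one-pass formula subtracts two nearly equal quantities and 
loses everything.

import std.algorithm.iteration : map, sum;
import std.stdio : writefln;

void main()
{
    // large mean, tiny spread: the worst case for E(X^2) - E(X)^2
    float[] xs = [1e4f + 0.1f, 1e4f + 0.2f, 1e4f + 0.3f];
    float mean = xs.sum / xs.length;

    // one-pass formula: catastrophic cancellation in float
    float naive = xs.map!(x => x * x).sum / xs.length - mean * mean;

    // two-pass formula: numerically stable
    float twoPass = xs.map!(x => (x - mean) * (x - mean)).sum / xs.length;

    writefln("one-pass: %s  two-pass: %s", naive, twoPass);
}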


I like the idea of a float type that is effectively the largest 
precision on your machine (the D real type). However, I could be 
convinced by the argument that you should have to opt in to this 
and that internal calculations should not implicitly use it, 
mainly because I'm sympathetic to the people who would prefer 
speed to precision. Not everybody needs all the precision all the 
time.


Re: Always false float comparisons

2016-05-13 Thread Ola Fosheim Grøstad via Digitalmars-d

On Saturday, 14 May 2016 at 01:26:18 UTC, Walter Bright wrote:
BTW, I once asked Prof Kahan about this. He flat out told me 
that the only reason to downgrade precision was if storage was 
tight or you needed it to run faster. I am not making this up.


He should have been aware of reproducibility, since people use 
fixed point to achieve it; if he wasn't, then shame on him.


In Java all compile time constants are done using strict settings 
and it provides a keyword «strictfp» to get strict behaviour for 
a particular class/function.


In C++, template parameters cannot be floating point; you use 
std::ratio to get exact rational numbers instead. This is to 
avoid inaccuracy problems in the type system.


In interval arithmetic you need to round up and down correctly 
on the bound computations to get correct results. (It is OK for 
the interval to be larger than the real result, but the opposite 
is a disaster.)
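
As a sketch of what that means in D (assuming std.math's 
FloatingPointControl is available on the target, and that the 
optimizer doesn't constant-fold the additions):

import std.math : FloatingPointControl;

struct Interval { double lo, hi; }

// Outward-rounded addition: the true sum is always contained
// in the result interval.
Interval add(Interval a, Interval b)
{
    FloatingPointControl fpctrl;  // restores the original mode on scope exit
    fpctrl.rounding = FloatingPointControl.roundDown;
    immutable lo = a.lo + b.lo;   // rounded toward -infinity
    fpctrl.rounding = FloatingPointControl.roundUp;
    immutable hi = a.hi + b.hi;   // rounded toward +infinity
    return Interval(lo, hi);
}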


With reproducible arithmetic you can do advanced, accurate static 
analysis of programs using floating point code.


With reproducible arithmetic you can sync nodes in a cluster 
based on "time" alone, saving exchanges of data in simulations.


There are lots of reasons to default to well-defined floating 
point arithmetic.




Re: Github names & avatars

2016-05-13 Thread H. S. Teoh via Digitalmars-d
On Sat, May 14, 2016 at 08:09:51AM +0300, Andrei Alexandrescu via Digitalmars-d 
wrote:
> On 5/14/16 12:01 AM, Meta wrote:
> >So many careers have been lost over some flippant tweet or Github
> >comment that complete anonymity is the only sane option, whenever
> >possible.
> 
> Could you bring some evidence or list a few anecdotes over the careers
> lost over a tweet or github comment? Thx! -- Andrei

Not sure how reliable this is, but a realtor friend of mine had a
colleague who got fired from the realtor company because of a remark
made IIRC on Facebook (or one of those social media things) about his
personal values that somebody in power in the company didn't agree with.

Not every employer cuts you slack the way we net-savvy people expect
reasonable people would. Personally, I think this kind of occurrence is
relatively rare, but still, it's very real.


T

-- 
The slower you ride, the further you'll get. (Russian proverb)


Re: Github names & avatars

2016-05-13 Thread Andrei Alexandrescu via Digitalmars-d

On 5/14/16 12:01 AM, Meta wrote:

So many careers have been lost over some flippant tweet or Github
comment that complete anonymity is the only sane option, whenever possible.


Could you bring some evidence or list a few anecdotes over the careers 
lost over a tweet or github comment? Thx! -- Andrei


[OT] Re: Github names & avatars

2016-05-13 Thread Andrei Alexandrescu via Digitalmars-d

On 5/13/16 11:54 PM, Xinok wrote:

On Friday, 13 May 2016 at 18:56:15 UTC, Walter Bright wrote:

If some company won't hire you because you contributed code to D, I'd
say you dodged a bullet working for such!


I've known a couple people who had to apply for over 200-300 positions
before they finally got a job in their field. Life isn't so convenient
that we can pick and choose which job we want. Sometimes, you've gotta
take what you can get. But suppose one of these people was a member of
the D community and they get turned down for every job they apply for
because the employer discovered something dumb they posted in this thread:

http://forum.dlang.org/thread/gpcyapiqlkpfahrzf...@forum.dlang.org

The internet never forgets so a little anonymity is a good thing.


I honestly think this concern is overrated, sometimes to the extent it 
becomes a fallacy. The converse benefits of anonymity are also 
exaggerated in my opinion. My own experience is evidence. A simple 
pattern I followed throughout is:


1. Do good work
2. Put your name next to it
3. Goto 1

I've written a large number of things by my name that I shouldn't have, 
the most epic being probably 
http://lists.boost.org/Archives/boost/2002/01/23189.php. But if the 
prevalent pattern is good work under your name, then you stand to gain a 
_lot_. People understand the occasional fluke - and this community is a 
prime example.


Your name is your brand. (In the US quite literally anybody can do 
business using their name as the company name with no extra paperwork.) 
You have the option to build your brand and walk into a room and just 
say it to earn instantly everyone's respect and attention. Or you can 
introduce yourself and then awkwardly list the various handles under 
which you might also be known. I was repeatedly surprised (this week most 
recently) at the brand power my name has in the most unexpected 
circumstances.



Andrei



Re: Command line parsing

2016-05-13 Thread Andrei Alexandrescu via Digitalmars-d

On 5/13/16 2:27 PM, Russel Winder via Digitalmars-d wrote:

On Thu, 2016-05-12 at 18:25 +, Jesse Phillips via Digitalmars-d
wrote:
[…]

unknown flags harder and displaying help challenging. So I'd like
to see getopt merge with another getopt


getopt is a 1970s C solution to the problem of command line parsing.
Most programming languages have moved on from getopt and created
language-idiomatic solutions to the problem. Indeed there are other,
better solutions in C now as well.


What are those and how are they better? -- Andrei



Re: Researcher question – what's the point of semicolons and curly braces?

2016-05-13 Thread Joe Duarte via Digitalmars-d

On Tuesday, 3 May 2016 at 12:47:42 UTC, qznc wrote:


The parser needs information about "blocks". Here is an example:

  if (x)
foo();
bar();

Is bar() always executed or only if (x) is true? In other 
words, is bar() part of the block, which is only entered 
conditionally?


There are three methods to communicate blocks to the compiler: 
curly braces, significant whitespace (Python, Haskell), or an 
"end" keyword (Ruby, Pascal). Which one you prefer is 
subjective.


You mention Facebook and face recognition. I have not seen 
anyone try machine learning for parsing. It would probably be a 
fun project, but not a practical one.


You wonder why understanding structured text is not a solved 
problem. It is: you need to use a formal language, which 
programming languages are. English, for example, is much less 
structured; ambiguities arise easily. For example:


  I saw a man on a hill with a telescope.

Who has the telescope? You or the man you saw? Who is on the 
hill?


As a programmer, I do not want to write ambiguous programs. We 
produce more than enough bugs without ambiguity.


Thanks for the example! So you laid out the three options for 
signifying blocks. Then you said which one you prefer is 
subjective, but that you don't want to write ambiguous programs. 
Do you think that the curly braces and semicolons help with that?


So in your example, I figure bar's status is language-defined, 
and programmers will be trained in the language in the same way 
they are now. I've been sketching out a new language, and there 
are a couple of ways I could see implementing this.


First, blocks of code are separated by one or more blank lines. 
No blank lines are allowed in a block. An if block would have to 
terminate in an else statement, so I think this example just 
wouldn't compile. Now if we wanted two things to happen on an if 
hit, we could leave it the way you gave where the two things are 
at the same level of indentation. That's probably what I'd settle 
on, contingent on a lot of research, including my own studies and 
other researchers', though this probably isn't one of the big 
issues. If we wanted to make the second thing conditional on 
success on the first task, then I would require another indent. 
Either way the block wouldn't compile without an else.


I've been going through a lot of Unicode, icon fonts, and the 
Noun Project, looking for clean and concise representations for 
program logic. One of the ideas I've been working with is to 
leverage Unicode arrows. In most cases it's trivial aesthetic 
clean-up, like → instead of ->, and a lot of it could be simple 
autoreplace/autocomplete in tools. For if logic, you can see an 
example of bent arrows, and how I'd express the alternatives for 
your example, here: 
http://i1376.photobucket.com/albums/ah13/DuartePhotos/if%20block%20with%20Unicode%20arrows_zpsnuigkkxz.png





Re: Always false float comparisons

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 5:49 PM, Timon Gehr wrote:

Nonsense. That might be true for your use cases. Others might actually depend on
IEE 754 semantics in non-trivial ways. Higher precision for temporaries does not
imply higher accuracy for the overall computation.


Of course it implies it.

An anecdote: a colleague of mine was once doing a chained calculation. At every 
step, he rounded to 2 digits of precision after the decimal point, because 2 
digits of precision was enough for anybody. I carried out the same calculation 
to the max precision of the calculator (10 digits). He simply could not 
understand why his result was off by a factor of 2, which was a couple hundred 
times his individual roundoff error.
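
A toy D version of that anecdote, for anyone who wants to watch the 
divergence happen (the numbers are illustrative, not the original 
calculation):

import std.math : round;
import std.stdio : writefln;

void main()
{
    double rounded = 1.0, full = 1.0;
    foreach (step; 0 .. 100)
    {
        // round every intermediate result to 2 digits after the decimal point
        rounded = round(rounded * 1.015 * 100.0) / 100.0;
        full *= 1.015;  // carry full double precision
    }
    writefln("2-digit chain: %s   full precision: %s", rounded, full);
}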




E.g., correctness of double-double arithmetic is crucially dependent on correct
rounding semantics for double:
https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format#Double-double_arithmetic


Double-double has its own peculiar issues, and is not relevant to this 
discussion.



Also, it seems to me that for e.g.
https://en.wikipedia.org/wiki/Kahan_summation_algorithm,
the result can actually be made less precise by adding casts to higher precision
and truncations back to lower precision at appropriate places in the code.


I don't see any support for your claim there.



And even if higher precision helps, what good is a "precision-boost" that e.g.
disappears on 64-bit builds and then creates inconsistent results?


That's why I was thinking of putting in 128 bit floats for the compiler 
internals.



Sometimes reproducibility/predictability is more important than maybe making
fewer rounding errors sometimes. This includes reproducibility between CTFE and
runtime.


A more accurate answer should never cause your algorithm to fail. It's like 
putting better parts in your car causing the car to fail.




Just actually comply to the IEEE floating point standard when using their
terminology. There are algorithms that are designed for it and that might stop
working if the language does not comply.


Conjecture. I've written FP algorithms (from Cody+Waite, for example), and none 
of them degraded when using more precision.



Consider that the 8087 has been operating at 80 bits precision by default for 30 
years. I've NEVER heard of anyone getting actual bad results from this. They 
have complained that test suites which tested for less accurate results 
broke. They have complained about the speed of x87. And Intel has been trying to 
get rid of the x87 forever. Sometimes I wonder if there's a disinformation 
campaign about more accuracy being bad, because it smacks of nonsense.


BTW, I once asked Prof Kahan about this. He flat out told me that the only 
reason to downgrade precision was if storage was tight or you needed it to run 
faster. I am not making this up.


Re: Request assistance converting C's #ifndef to D

2016-05-13 Thread Andrew Edwards via Digitalmars-d-learn

On 5/14/16 12:35 AM, Steven Schveighoffer wrote:

On 5/13/16 12:59 AM, Andrew Edwards wrote:

On 5/13/16 8:40 AM, Andrew Edwards wrote:

That seems wrong. You can't assign to an enum. Besides, doesn't your
declaration of MIN shadow whatever other definitions may be
currently in
effect?


Okay, got it. It seems I just hadn't hit that bug yet because of other
unresolved issues.


Perhaps what you meant is something like this?

static if (!is(typeof(MIN) : int))
enum MIN = 99;


This seems to do the trick.


But not exactly the way it's expected to. In the snippets below, C
outputs 10 while D outputs 100.

min.c
=
 #define MIN 10 // [1]

 #include "min.h"

 int main()
 {
 print();
 return 0;
 }

min.h
=
 #include <stdio.h>

 #ifndef MIN
 #define MIN 100
 #endif

 void print()
 {
 printf("%d\n", MIN);
 }

minA.d
=
 enum MIN = 10; // [1]

 import minB;

 void main()
 {
 print();
 }

minB.d
=
 static if (!is(typeof(MIN) : int))
 enum MIN = 100;

 void print()
 {
 import std.stdio: writeln;
 writeln(MIN);
 }

Is there a way to reproduce the same behavior? Are there reasons for
not allowing this functionality, or am I just misunderstanding and going
about things the wrong way?


Code like this is FUBAR.

I have seen abuse of pre-processor in many places, and it never
justifies the cleverness of how it is done.


This may be the case, but I am not yet at a level of understanding 
where I can discern what is justifiable or not. At the moment I'm simply 
trying to port over 15k LOC so that I can play with it in D and improve 
my understanding of what's going on.



Note that min.h is providing an inlined function. Essentially, min.h is
like a template with the definition of the template parameter defined by
the including file. But you can only ever include min.h ONCE in your
entire project, or you will get linker errors.


This was an extremely simplified example. There is far more going on 
than this. I'm just trying not to lose any of the functionality until I 
understand why things are done the way they are and how to better 
do it in D.



D will always compile a module without external configuration. That is,
print is compiled ONCE and only in the context that minA.d defines.
Inlining can replace the print call with inline functions, but it will
still be compiled according to the module's definitions, not external.

TL;DR: there isn't a good way to port this code, because it's shit code,
and D doesn't do that :)

-Steve




Re: Always false float comparisons

2016-05-13 Thread Timon Gehr via Digitalmars-d

On 14.05.2016 02:49, Timon Gehr wrote:

result can actually be made less precise


less accurate. I need to go to sleep.


Re: Always false float comparisons

2016-05-13 Thread Timon Gehr via Digitalmars-d

On 14.05.2016 02:49, Timon Gehr wrote:

IEE


IEEE.


Re: Always false float comparisons

2016-05-13 Thread Timon Gehr via Digitalmars-d

On 13.05.2016 23:35, Walter Bright wrote:

On 5/13/2016 12:48 PM, Timon Gehr wrote:

IMO the compiler should never be allowed to use a precision different
from the one specified.


I take it you've never been bitten by accumulated errors :-)
...


If that was the case it would be because I explicitly ask for high 
precision if I need it.


If the compiler using or not using a higher precision magically fixes an 
actual issue with accumulated errors, that means the correctness of the 
code is dependent on something hidden, that you are not aware of, and 
that could break any time, for example at a time when you really don't 
have time to track it down.



Reduced precision is only useful for storage formats and increasing
speed.  If a less accurate result is desired, your algorithm is wrong.


Nonsense. That might be true for your use cases. Others might actually 
depend on IEE 754 semantics in non-trivial ways. Higher precision for 
temporaries does not imply higher accuracy for the overall computation.


E.g., correctness of double-double arithmetic is crucially dependent on 
correct rounding semantics for double:

https://en.wikipedia.org/wiki/Quadruple-precision_floating-point_format#Double-double_arithmetic

Also, it seems to me that for e.g. 
https://en.wikipedia.org/wiki/Kahan_summation_algorithm,
the result can actually be made less precise by adding casts to higher 
precision and truncations back to lower precision at appropriate places 
in the code.
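
For reference, a minimal sketch of Kahan summation in D. The 
compensation step (t - sum) - y recovers the low-order bits lost 
in sum + y exactly only if every operation is rounded to double; 
if the compiler silently keeps the temporaries at 80-bit 
precision, the recovered error term is a different quantity.

// minimal sketch of Kahan (compensated) summation
double kahanSum(const double[] xs)
{
    double sum = 0.0;
    double c = 0.0;            // running compensation for lost low-order bits
    foreach (x; xs)
    {
        immutable y = x - c;
        immutable t = sum + y; // low-order bits of y are lost here...
        c = (t - sum) - y;     // ...and recovered here, assuming double rounding
        sum = t;
    }
    return sum;
}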


And even if higher precision helps, what good is a "precision-boost" 
that e.g. disappears on 64-bit builds and then creates inconsistent results?


Sometimes reproducibility/predictability is more important than maybe 
making fewer rounding errors sometimes. This includes reproducibility 
between CTFE and runtime.


Just actually comply to the IEEE floating point standard when using 
their terminology. There are algorithms that are designed for it and 
that might stop working if the language does not comply.


Then maybe add additional built-in types with a given storage size that 
additionally /guarantee/ a certain amount of additional scratch space 
when used for function-local computations.


Re: Always false float comparisons

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 2:42 PM, Ola Fosheim Grøstad wrote:

On Friday, 13 May 2016 at 21:36:52 UTC, Walter Bright wrote:

On 5/13/2016 1:57 PM, Ola Fosheim Grøstad wrote:

It should in C++ with the right strict-settings,


Consider what the C++ Standard says, not what the endless switches to tweak
the compiler do.


The C++ standard cannot even require IEEE754. Nobody relies only on what the C++
standard says in real projects. They rely on what the chosen compiler(s) on
concrete platform(s) do.



Nevertheless, C++ is what the Standard says it is. If Brand X compiler does 
something else, you should call it "Brand X C++".


Re: To all DConf speakers: please upload slides!

2016-05-13 Thread jmh530 via Digitalmars-d-announce
On Friday, 13 May 2016 at 15:24:22 UTC, Steven Schveighoffer 
wrote:


You only *need* the slides for when you can watch the talk. 
Having the slides beforehand may be confusing and unhelpful. 
This means there is no reason to provide the slides until the 
talk actually happens.


I almost want to say, if you need to see the presentation live, 
attend the conference :) In some past conferences, there were 
no live streams, and people survived. It's nice to have, but it 
somewhat removes the incentive to support the conference by 
attending.




Have you checked out some of the older DConf pages where the 
slides are no longer available? That's a little frustrating. In my 
opinion, it reflects poor organizational skills.


The best time to get the slides is before the presenter speaks. 
No slides, no speak. Also, if they need to make changes, they can 
always upload a newer version.


Re: Potential issue with DMD where the template constrains are not evaluated early enough to prevent type recursion

2016-05-13 Thread Timon Gehr via Digitalmars-d

On 13.05.2016 23:21, Georgi D wrote:

Hi,

I have the following code which should compile in my opinion:

struct Foo {}

import std.range.primitives;
import std.algorithm.iteration : map, joiner;

auto toChars(R)(R r) if (isInputRange!R)
{
return r.map!(toChars).joiner(", ");
}

auto toChars(Foo f)
{
import std.range : chain;
return chain("foo", "bar");
}

void main()
{
import std.range : repeat;
Foo f;
auto r = f.repeat(3);
auto chars = r.toChars();
}

But fails to compile with the following error:

Error: template instance std.algorithm.iteration.MapResult!(toChars,
Take!(Repeat!(Foo))) forward reference of function toChars

The reason it fails to compile in my opinion is that the template
constraint fails to remove the generic toChars from the list of possible
matches early enough so the compiler thinks there is a recursive call
and cannot deduce the return type.


It's tricky. The reason it fails to compile is that the template 
argument you are passing does not actually refer to the overload set.


return r.map!(.toChars).joiner(", "); works.



Consider:

int foo()(){
pragma(msg, typeof(&foo)); // int function()
return 2;
}
double foo(){
return foo!();
}

The reason for this behavior is that the first declaration is syntactic 
sugar for:


template foo(){
int foo(){
pragma(msg, typeof(&foo));
return 2;
}
}

Since template foo() introduces a scope, the inner 'int foo()' shadows 
the outer 'double foo()'. There are special cases in the compiler that 
reverse eponymous lookup before overload resolution (i.e. go from 
foo!().foo back to foo) in case some identifier appears in the context 
ident() or ident!(), so one does not usually run into this. This is not 
done for alias parameters.


The error message is bad though. Also, I think it is not unreasonable to 
expect the code to work. Maybe reversal of eponymous lookup should be 
done for alias parameters too.


[Issue 15999] Inline assembly incorrect sign extension instead of error

2016-05-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15999

safety0ff.bugz  changed:

   What|Removed |Added

   Keywords||pull

--- Comment #1 from safety0ff.bugz  ---
https://github.com/dlang/dmd/pull/5739

--


[Issue 13954] (D1 only) Compiler allows implementing float return method with a real return type

2016-05-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=13954

Stewart Gordon  changed:

   What|Removed |Added

   Keywords||accepts-invalid

--


[Issue 15780] [REG2.069] CTFE foreach fails with tuple

2016-05-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15780

Kenji Hara  changed:

   What|Removed |Added

Summary|CTFE foreach fails with |[REG2.069] CTFE foreach
   |tuple   |fails with tuple

--- Comment #2 from Kenji Hara  ---
Introduced in: https://github.com/dlang/dmd/pull/4797

--


[Issue 16022] [REG2.069] dmd assertion failure due to misplaced comma operator

2016-05-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16022

Kenji Hara  changed:

   What|Removed |Added

   Keywords||pull

--- Comment #2 from Kenji Hara  ---
https://github.com/dlang/dmd/pull/5774

--


[Issue 16024] New: More struct/class/interface introspection helpers

2016-05-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16024

  Issue ID: 16024
   Summary: More struct/class/interface introspection helpers
   Product: D
   Version: D2
  Hardware: x86
OS: Linux
Status: NEW
  Severity: enhancement
  Priority: P1
 Component: phobos
  Assignee: nob...@puremagic.com
  Reporter: erikas.aub...@gmail.com

Having std.traits.Fields and FieldNameTuple is a very nice thing for part of
the work in introspecting a struct or class, but a few new ones would be very
handy:

1) templates for listing all static or non-static member functions (see the
sketch after this list)
2) a template for listing static fields
3) templates for nested types
4) templates that can be used with std.meta templates to filter according to
protection level (ideally, ones that won't trip the new 2.071 deprecations)
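
For item (1), a rough sketch of the kind of helper being asked for, built
on __traits (overload sets and protection handling omitted; filtering out
statics could additionally use __traits(isStaticFunction, ...)):

template memberFunctionNames(T)
{
    import std.meta : Filter;
    import std.traits : isFunction;

    enum isFunc(string name) = isFunction!(__traits(getMember, T, name));

    // names of T's member functions, static and non-static
    alias memberFunctionNames = Filter!(isFunc, __traits(allMembers, T));
}

struct S { int x; void f() {} static void g() {} }
static assert([memberFunctionNames!S] == ["f", "g"]);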

--


[Issue 16021] Template constraint bug

2016-05-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16021

Kenji Hara  changed:

   What|Removed |Added

 Status|NEW |RESOLVED
 Resolution|--- |INVALID

--- Comment #1 from Kenji Hara  ---
This is expected behavior. While testing a template constraint, the template is
not yet instantiated.

When A!B is started to instantiation, its template constraint is(B : A!B) will
be tested. BUT the instance A!B is not yet instantiated, so compiler cannot
know it will be a class.

--


Re: Battle-plan for CTFE

2016-05-13 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, May 11, 2016 07:06:59 maik klein via Digitalmars-d-announce 
wrote:
> What is the current problem with ctfe?

The biggest problem is that it uses up a _lot_ of memory and is generally
slow. For instance, as I understand it, every time it mutates a variable, it
actually allocates a new one to hold the new state. This combined with the
fact that the compiler doesn't actually ever free memory (since it's
normally more efficient for it to work that way), and you risk running out
of memory while compiling. CTFE is a fantastic feature, but it evolved over
time rather than being designed up front, and it's suffered a lot because of
that. Don did a _lot_ of work to improve it, but he wasn't able to continue
working on it, and until now, no one has ever really stepped up to finish
the job. Don's post gives a good background on why CTFE is the way it is
and some of what he did to make it as solid as it is now:

http://forum.dlang.org/post/jmvsbhdpsjgeykpuk...@forum.dlang.org
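
As an illustration of the allocate-per-mutation cost, nothing exotic is
needed -- an ordinary loop forced through CTFE makes it visible (a sketch;
the exact blow-up depends on the compiler version):

int sumSquares(int n)
{
    int s = 0;
    foreach (i; 0 .. n)
        s += i * i;  // at compile time, each mutation allocates a fresh value
    return s;
}

// enum forces compile-time evaluation; crank n up and watch the
// compiler's memory usage grow with the iteration count
enum result = sumSquares(100_000);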

But having someone like Stefan reimplement it will be _huge_, and the D
community will be _very_ grateful.

- Jonathan M Davis



[Issue 16013] [REG2.072a] ICE with mutually dependent structs and alias this

2016-05-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16013

Kenji Hara  changed:

   What|Removed |Added

   Keywords||pull
   Hardware|x86_64  |All
 OS|Linux   |All

--- Comment #1 from Kenji Hara  ---
https://github.com/dlang/dmd/pull/5773

--


[Issue 16023] New: Add template or trait to find the importable symbol name for a type

2016-05-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16023

  Issue ID: 16023
   Summary: Add template or trait to find the importable symbol
name for a type
   Product: D
   Version: D2
  Hardware: x86
OS: Linux
Status: NEW
  Severity: enhancement
  Priority: P1
 Component: phobos
  Assignee: nob...@puremagic.com
  Reporter: erikas.aub...@gmail.com

DMD 2.071's new visibility rules make it so that if you're introspecting a type
from inside a template, you need to locally import the module that it comes
from; best practices would dictate you only import the specific symbol you're
attempting to introspect. Unfortunately, none of the ways I could come up
with worked in every case.

mixin("import " ~ moduleName!T ~ ": " ~ T.stringof ~ ";");

fails with templated types, as the instantiation causes the parsing to fail.

Substituting __traits(identifier, T) fails on nested types

Using a selective TemplateOf like so:

static if (__traits(compiles, TemplateOf!T)) {
private alias symb = TemplateOf!T;
} else {
private alias symb = T;
}

enum importableName = __traits(identifier, symb);

fails on non-templated classes that inherit from templated ones.

Basically, the long and short of it is that this seems like it should be a
common enough use case to warrant something to help us do this without having
to do a ton of CTFE string parsing and substitution.

--


[Issue 16012] [REG2.070] forward reference with alias this

2016-05-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16012

Kenji Hara  changed:

   What|Removed |Added

   Keywords||pull
   Hardware|x86_64  |All
 OS|Linux   |All

--- Comment #2 from Kenji Hara  ---
https://github.com/dlang/dmd/pull/5773

--


[Issue 16011] [REG2.068] recursive RefCounted used to work

2016-05-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16011

Kenji Hara  changed:

   What|Removed |Added

   Keywords||pull
  Component|phobos  |dmd
   Hardware|x86_64  |All
 OS|Linux   |All

--- Comment #2 from Kenji Hara  ---
I concluded this is a compiler issue.

https://github.com/dlang/dmd/pull/5773

--


Re: Defining member fuctions of a class or struct out side of the class/struct body?

2016-05-13 Thread Ali Çehreli via Digitalmars-d-learn

On 05/13/2016 11:41 AM, Jamal wrote:

Warning D newb here.

Is it possible to define a member function outside of the class/struct
like in C++;

class x {
 void foo(int* i);
};

void x::foo(int* i){
 (*i)++;
}

Or is it just D-like to define everything inside the class/struct body?


Also check out the feature called UFCS. It is not the exact answer to 
your question but it may be more applicable in some designs.


Ali



Re: Defining member fuctions of a class or struct out side of the class/struct body?

2016-05-13 Thread Steven Schveighoffer via Digitalmars-d-learn

On 5/13/16 2:41 PM, Jamal wrote:

Warning D newb here.

Is it possible to define a member function outside of the class/struct
like in C++;


Not within the same file.

You can have an "interface file", extension .di, which hides the bodies 
of functions. But inside the implementation file, you must repeat the 
class/struct definition. So it's a lot of extra work. The compiler can 
spit out a .di file based on your implementation with the bodies hidden, 
but it's not perfect.
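
A sketch of what that looks like (file names hypothetical):

// widget.di -- hand-written interface file; bodies hidden
module widget;
class Widget
{
    void draw();
}

// widget.d -- implementation; note the whole class is repeated
module widget;
class Widget
{
    void draw() { /* actual drawing code */ }
}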


You also can't hide template function bodies.

What is your use case, or is it just that you prefer doing it that way?

-Steve


[Issue 16022] [REG2.069] dmd assertion failure due to misplaced comma operator

2016-05-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16022

ag0ae...@gmail.com changed:

   What|Removed |Added

   Keywords||ice
 CC||ag0ae...@gmail.com
   Hardware|x86 |All
Summary|dmd assertion failure due   |[REG2.069] dmd assertion
   |to misplaced comma operator |failure due to misplaced
   ||comma operator
 OS|Mac OS X|All
   Severity|major   |regression

--- Comment #1 from ag0ae...@gmail.com ---
Complete test case:

enum Type {Colon, Comma}
Type type;

bool foo()
{
return type == Type.Colon, type == Type.Comma;
}


Compiles with 2.068. Fails with ICE since 2.069.

--


Defining member fuctions of a class or struct out side of the class/struct body?

2016-05-13 Thread Jamal via Digitalmars-d-learn

Warning D newb here.

Is it possible to define a member function outside of the 
class/struct like in C++;


class x {
void foo(int* i);
};

void x::foo(int* i){
(*i)++;
}

Or is it just D-like to define everything inside the class/struct 
body?


[Issue 16022] New: dmd assertion failure due to misplaced comma operator

2016-05-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=16022

  Issue ID: 16022
   Summary: dmd assertion failure due to misplaced comma operator
   Product: D
   Version: D2
  Hardware: x86
OS: Mac OS X
Status: NEW
  Severity: major
  Priority: P1
 Component: dmd
  Assignee: nob...@puremagic.com
  Reporter: m...@skoppe.eu

I changed something in my code to:

bool foo()
{
return (
token.type == Type.Colon, // Typo: wanted logical operator instead of comma
token.type == Type.Comma);
}

And I suddenly got this back from dmd:

linkage = 0
Assertion failed: (0), function visit, file tocsym.c, line 246.
dmd failed with exit code -6.

In the function foo I wanted to type || instead of the comma. Regardless, it
shouldn't fail with an assertion.

--


Re: To all DConf speakers: please upload slides!

2016-05-13 Thread Ali Çehreli via Digitalmars-d-announce

On 05/13/2016 08:24 AM, Steven Schveighoffer wrote:

> But then the slides provided online don't match the slides you are
> showing. That is not good.

Thanks to you, I've fixed my silly binary-tree mistake in one of my 
slides. My slides on dconf.org don't match the ones in the video. :)


Ali



Re: Github names & avatars

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 10:19 AM, Adam D. Ruppe wrote:

It is nice to have a consistent pseudonym for matching up forum posts with irc
with github etc., but let's not make this a requirement.


It's a suggestion, not a requirement. I respect that some people have good 
reasons for anonymity.


Re: Request assistance converting C's #ifndef to D

2016-05-13 Thread sanjayss via Digitalmars-d-learn

On Thursday, 12 May 2016 at 22:51:17 UTC, Andrew Edwards wrote:
The following preprocessor directives are frequently 
encountered in C code, providing a default constant value where 
the user of the code has not specified one:


#ifndef MIN
#define MIN 99
#endif

#ifndef MAX
#define MAX 999
#endif

I'm at a loss at how to properly convert it to D. I've tried 
the following:


enum MIN = 0;
static if(MIN <= 0)
{
MIN = 99;
}

it works as long as the static if is enclosed in a static 
this(), otherwise the compiler complains:


mo.d(493): Error: no identifier for declarator MIN
mo.d(493): Error: declaration expected, not '='

This, however, does not feel like the right way to do things, but 
I cannot find any documentation that provides an alternative. 
Is there a better way to do this?


Thanks,
Andrew


One thing you could try is compile the C code without the 
#ifndef's and see if it compiles without issues. If it does, 
simply use a D enum and don't worry about translating the 
#ifndef's.


(Alternately, check the C code to see if there really are 
differing definitions of MIN/MAX -- if there are, then the code 
is already messed up and you need to implement your D code taking 
into consideration the implications of that -- maybe use 
different variable names in each section of the code for the 
MIN/MAX that takes the different MIN/MAX values. I've typically 
seen this kind of ifdef'ing used to quick-fix compile issues in new 
code without having to rewrite a whole bunch of existing code or 
code structure, and it is never done to have differing values for the 
defines -- if differing values are used with the same name, then 
the code is bad and it's best to clean it up as you migrate to D.)


[Issue 13954] (D1 only) Compiler allows implementing float return method with a real return type

2016-05-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=13954

Walter Bright  changed:

   What|Removed |Added

 CC||bugzi...@digitalmars.com
   Hardware|x86_64  |All
 OS|Linux   |All

--- Comment #3 from Walter Bright  ---
https://github.com/dlang/dmd/pull/5187

--


[Issue 13954] (D1 only) Compiler allows implementing float return method with a real return type

2016-05-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=13954

--- Comment #2 from github-bugzi...@puremagic.com ---
Commits pushed to dmd-1.x at https://github.com/dlang/dmd

https://github.com/dlang/dmd/commit/8728fdd029d1f6765dd634fef2a7ed25b6539f4d
Fix Issue 13954 - Disallow non-covariant overrides

https://github.com/dlang/dmd/commit/2cce6488ade684c473316b3defa5f438bb4b0655
Merge pull request #5187 from AndrejMitrovic/fix-override

[D1] Issue 13954 - Disallow non-covariant overrides

--


Re: Github names & avatars

2016-05-13 Thread Adam D. Ruppe via Digitalmars-d

On Friday, 13 May 2016 at 17:02:20 UTC, Walter Bright wrote:
In today's surveillance state, the government already knows 
your name and what you look like, so being anonymous on github 
is a bit pointless, as if anyone cares that you are interested 
in D. I can understand if you're a celebrity or want nobody to 
know you're a dog, but that doesn't apply to most of us.


Actually, given the blatant misogyny frequently on display on 
this forum, about 51% of the world's population - literally most 
of us - have a perfectly understandable reason to maintain some 
level of anonymity in this community.


It is nice to have a consistent pseudonym for matching up forum 
posts with irc with github etc., but let's not make this a 
requirement.


Re: Github names & avatars

2016-05-13 Thread Steven Schveighoffer via Digitalmars-d

On 5/13/16 1:02 PM, Walter Bright wrote:

I'll ask again that the active Github users use their own name, and add
to that if you could have a selfie as your github image.


Sorry, this isn't going to happen :) @schveiguy is much better than 
@StevenSchveighoffer. Some of us are not so short-name-blessed. I 
actually don't mind if people call me schveiguy!


In fact, I have a counter-proposal. Instead of putting people's real 
names on their dconf name tags, let's just have their github handles :P.



It avoids when people who post as "Fred" on the newsgroup submit PRs as
"HorseWrangler" and get annoyed when I don't realize they are the same
person, and then I overlook them at the conference because I have no
idea what they look like.


Please don't make me learn @dicebot's real name :)

-Steve


Github names & avatars

2016-05-13 Thread Walter Bright via Digitalmars-d
I'll ask again that the active Github users use their own name, and add to that 
if you could have a selfie as your github image.


It avoids when people who post as "Fred" on the newsgroup submit PRs as 
"HorseWrangler" and get annoyed when I don't realize they are the same person, 
and then I overlook them at the conference because I have no idea what they look 
like.


In today's surveillance state, the government already knows your name and what 
you look like, so being anonymous on github is a bit pointless, as if anyone 
cares that you are interested in D. I can understand if you're a celebrity or 
want nobody to know you're a dog, but that doesn't apply to most of us.


https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you%27re_a_dog


Re: Reproducible builds of D compilers

2016-05-13 Thread Pjotr Prins via Digitalmars-d

On Saturday, 7 May 2016 at 17:56:07 UTC, Johan Engelen wrote:
On Saturday, 7 May 2016 at 16:22:34 UTC, Vladimir Panteleev 
wrote:


https://blog.thecybershadow.net/2015/05/05/is-d-slim-yet/


Thanks for repeating the link to that blog article. I was 
reminded of it at DConf. Would be great if results from GDC and 
LDC could be added to the graphs, plus more tests!


Yes, nice read!




Re: Version block "conditions" with logical operators

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 1:57 AM, Joakim wrote:

I'm trying, but Daniel seems against it, care to chip in?

https://github.com/dlang/dmd/pull/5772

Specifically, do you want the changes in that PR?  If so, do you prefer the use
of TARGET_POSIX as a runtime variable or listing each TARGET_OS separately?


I know there's some controversy in that thread, I guess I need to check in.


Re: Battle-plan for CTFE

2016-05-13 Thread Stefan Koch via Digitalmars-d-announce

On Friday, 13 May 2016 at 13:59:57 UTC, Don Clugston wrote:


I think I need to explain the history of CTFE.
Originally, we had constant-folding. Then constant-folding was 
extended to do things like slicing a string at compile time. 
Constant folding leaks memory like the Exxon Valdez leaks oil, 
but that's OK because it only ever happens once.
Then, the constant folding was extended to include function 
calls, for loops, etc. All using the existing constant-folding 
code. Now the crappy memory usage is a problem. But it's OK 
because the CTFE code was kind of proof-of-concept thing anyway.


[...]


Thanks for the explanation, and for doing so much work on CTFE.


I would like to work on a solution that does scale.
The problem is not making a bytecode interpreter.
That part is relatively easy. Currently I am trying to get a 
detailed understanding of dmd and its data structures (mainly 
its AST).


Generating the byte-code seems to be non-trivial.

I wonder to what extent the glue layer can be of help...



Re: imports && -run [Bug?]

2016-05-13 Thread zabruk70 via Digitalmars-d-learn

On Friday, 13 May 2016 at 06:33:40 UTC, Jacob Carlborg wrote:
Even better is to use "rdmd" which will automatically track and 
compile dependencies.


but I should warn about an annoying bug with local imports:
http://forum.dlang.org/post/mailman.1984.1373610213.13711.digitalmar...@puremagic.com
https://issues.dlang.org/show_bug.cgi?id=7016


Re: Request assistance converting C's #ifndef to D

2016-05-13 Thread Kagamin via Digitalmars-d-learn

On Thursday, 12 May 2016 at 22:51:17 UTC, Andrew Edwards wrote:
The following preprocessor directives are frequently 
encountered in C code, providing a default constant value where 
the user of the code has not specified one:


#ifndef MIN
#define MIN 99
#endif

#ifndef MAX
#define MAX 999
#endif


If you're OK with runtime values:

int MIN = 99, MAX = 999;

and let the user assign different values to them.


Re: The Case Against Autodecode

2016-05-13 Thread Steven Schveighoffer via Digitalmars-d

On 5/12/16 4:15 PM, Walter Bright wrote:


10. Autodecoded arrays cannot be RandomAccessRanges, losing a key
benefit of being arrays in the first place.


I'll repeat what I said in the other thread.

The problem isn't auto-decoding. The problem is hijacking the char[] and 
wchar[] (and variants) array type to mean autodecoding non-arrays.


If you think this code makes sense, then my definition of sane varies 
slightly from yours:


static assert(!hasLength!R && is(typeof(R.init.length)));
static assert(!is(ElementType!R == typeof(R.init[0])));
static assert(!isRandomAccessRange!R && is(typeof(R.init[0])) && 
is(typeof(R.init[0 .. $])));


I think D would be fine if string meant some auto-decoding struct with 
an immutable(char)[] array backing. I can accept and work with that. I 
can transform that into a char[] that makes sense if I have no use for 
auto-decoding. As of today, I have to use byCodePoint, or 
.representation, etc. and it's very unwieldy.
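
For the record, a sketch of what that looks like today, using 
std.utf.byCodeUnit and std.string.representation:

import std.range.primitives : hasLength, isRandomAccessRange;
import std.string : representation;
import std.utf : byCodeUnit;

void main()
{
    string s = "héllo";

    auto cu = s.byCodeUnit;       // range of char, no decoding
    static assert(isRandomAccessRange!(typeof(cu)) && hasLength!(typeof(cu)));

    immutable(ubyte)[] raw = s.representation;  // a true array
    assert(raw.length == 6);      // 'é' is two UTF-8 code units
}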


If I ran D, that's what I would do.

-Steve


Re: Battle-plan for CTFE

2016-05-13 Thread Timon Gehr via Digitalmars-d-announce

On 13.05.2016 15:59, Don Clugston wrote:

All that's needed is a very simple bytecode interpreter.


Here is the one I have hacked together:
https://github.com/tgehr/d-compiler/blob/master/interpret.d

This file does both constant folding and byte-code interpretation for 
most of the language. I still need to implement exception handling.


I'll let you know when it passes interpret3.d. :)


Re: Request assistance converting C's #ifndef to D

2016-05-13 Thread Steven Schveighoffer via Digitalmars-d-learn

On 5/13/16 12:59 AM, Andrew Edwards wrote:

On 5/13/16 8:40 AM, Andrew Edwards wrote:

That seems wrong. You can't assign to an enum. Besides, doesn't your
declaration of MIN shadow whatever other definitions may be currently in
effect?


Okay, got it. It seems I just hadn't hit that bug yet because of other
unresolved issues.


Perhaps what you meant is something like this?

static if (!is(typeof(MIN) : int))
enum MIN = 99;


This seems to do the trick.


But not exactly the way it's expected to. In the snippets below, C
outputs 10 while D outputs 100.

min.c
=
 #define MIN 10 // [1]

 #include "min.h"

 int main()
 {
 print();
 return 0;
 }

min.h
=
 #include <stdio.h>

 #ifndef MIN
 #define MIN 100
 #endif

 void print()
 {
 printf("%d\n", MIN);
 }

minA.d
=
 enum MIN = 10; // [1]

 import minB;

 void main()
 {
 print();
 }

minB.d
=
 static if (!is(typeof(MIN) : int))
 enum MIN = 100;

 void print()
 {
 import std.stdio: writeln;
 writeln(MIN);
 }

Is there a way to reproduce the same behavior? Are there reasons for
not allowing this functionality, or am I just misunderstanding and going
about things the wrong way?


Code like this is FUBAR.

I have seen abuse of pre-processor in many places, and it never 
justifies the cleverness of how it is done.


Note that min.h is providing an inlined function. Essentially, min.h is 
like a template with the definition of the template parameter defined by 
the including file. But you can only ever include min.h ONCE in your 
entire project, or you will get linker errors.


D will always compile a module without external configuration. That is, 
print is compiled ONCE and only in the context that minA.d defines. 
Inlining can replace the print call with inline functions, but it will 
still be compiled according to the module's definitions, not external.


TL;DR: there isn't a good way to port this code, because it's shit code, 
and D doesn't do that :)


-Steve


Re: To all DConf speakers: please upload slides!

2016-05-13 Thread Steven Schveighoffer via Digitalmars-d-announce

On 5/12/16 6:21 PM, Daniel Kozak via Digitalmars-d-announce wrote:

Dne 12.5.2016 v 22:55 Steven Schveighoffer via Digitalmars-d-announce
napsal(a):


On 5/12/16 4:13 PM, Seb wrote:

On Wednesday, 11 May 2016 at 09:17:54 UTC, Dicebot wrote:

To do the editing of HD videos we need presentation slides which are
currently scattered over different places. It would help a lot to have
them all in github.com/dlang/dlang.org repo - please submit pull
requests asap!


Just a minor complaint - would it be possible for the next dconf to have
the slides (or a link to them) on dconf.org before the talk starts?
Thanks for the great work!


I think it's better to not have the slides available until the talk
starts.


No, you are wrong. It would be really nice to have them. I would say it
was one of the biggest failures of dconf for people who can't attend.


You only *need* the slides for when you can watch the talk. Having the 
slides beforehand may be confusing and unhelpful. This means there is no 
reason to provide the slides until the talk actually happens.


I almost want to say, if you need to see the presentation live, attend 
the conference :) In some past conferences, there were no live streams, 
and people survived. It's nice to have, but it somewhat removes the 
incentive to support the conference by attending.



There may be jokes/surprises in the slides that you don't want to give
away before the talk happens :)



If there is anything you do not want to make available before talks. You
don't have to ;).


But then the slides provided online don't match the slides you are 
showing. That is not good.


-Steve


Re: To all DConf speakers: please upload slides!

2016-05-13 Thread Steven Schveighoffer via Digitalmars-d-announce

On 5/12/16 8:31 PM, Vladimir Panteleev wrote:

On Thursday, 12 May 2016 at 23:14:05 UTC, Leandro Lucarella wrote:

Steven Schveighoffer, el 12 de May a las 16:55 me escribiste:

On 5/12/16 4:13 PM, Seb wrote:
>On Wednesday, 11 May 2016 at 09:17:54 UTC, Dicebot wrote:
>>To do the editing of HD videos we need presentation slides >>which
are currently scattered over different places. It >>would help a lot
to have them all in >>github.com/dlang/dlang.org repo - please submit
pull >>requests asap!
>
>Just a minor complaint - would it be possible for the next >dconf to
have the slides (or a link to them) on dconf.org >before the talk
starts? Thanks for the great work!

I think it's better to not have the slides available until the talk
starts. There may be jokes/surprises in the slides that you don't
want to give away before the talk happens :)


Exactly, I would say it depends on the talk, for my talk I didn't want
to provide the slides beforehand ;-)


Here's a crazy idea: provide a simple/short URL to the slides as the
second slide of your talk (and speak out loud the URL just in case the
camera doesn't get it) :)


This is actually exactly what I did :)

-Steve


Re: The Case Against Autodecode

2016-05-13 Thread Chris via Digitalmars-d

On Friday, 13 May 2016 at 14:06:28 UTC, Vladimir Panteleev wrote:

On Friday, 13 May 2016 at 13:41:30 UTC, Chris wrote:
PS Why do I get a "StopForumSpam error" every time I post 
today? Has anyone else experienced the same problem:


"StopForumSpam error: Socket error: Lookup error: getaddrinfo 
error: Name or service not known. Please solve a CAPTCHA to 
continue."


https://twitter.com/StopForumSpam


I don't understand. Does that mean we have to solve CAPTCHAs 
every time we post? Annoying CAPTCHAs at that.


Re: The Case Against Autodecode

2016-05-13 Thread Vladimir Panteleev via Digitalmars-d

On Friday, 13 May 2016 at 13:41:30 UTC, Chris wrote:
PS Why do I get a "StopForumSpam error" every time I post 
today? Has anyone else experienced the same problem:


"StopForumSpam error: Socket error: Lookup error: getaddrinfo 
error: Name or service not known. Please solve a CAPTCHA to 
continue."


https://twitter.com/StopForumSpam


Re: Always false float comparisons

2016-05-13 Thread Iain Buclaw via Digitalmars-d
On 13 May 2016 at 07:12, Manu via Digitalmars-d
 wrote:
> On 13 May 2016 at 11:03, Walter Bright via Digitalmars-d
>  wrote:
>> On 5/12/2016 4:32 PM, Marco Leise wrote:
>>>
>>> - Unless CTFE uses soft-float implementation, depending on
>>>   compiler and flags used to compile a D compiler, resulting
>>>   executable produces different CTFE floating-point results
>>
>>
>> I've actually been thinking of writing a 128 bit float emulator, and then
>> using that in the compiler internals to do all FP computation with.
>
> No. Do not.
> I've worked on systems where the compiler and the runtime don't share
> floating point precisions before, and it was a nightmare.

I have some bad news for you about CTFE then. This already happens in
DMD even though float is not emulated.  :-o


Re: Battle-plan for CTFE

2016-05-13 Thread Don Clugston via Digitalmars-d-announce

On Monday, 9 May 2016 at 16:57:39 UTC, Stefan Koch wrote:

Hi Guys,

I have been looking into the DMD now to see what I can do about 
CTFE.

Unfortunately It is a pretty big mess to untangle.
Code responsible for CTFE is in at least 3 files.
[dinterpret.d, ctfeexpr.d, constfold.d]
I was shocked to discover that the PowExpression actually 
depends on phobos! (depending on the exact codePath it may or 
may not compile...)


Yes. This is because of lowering. Walter said in his DConf talk 
that lowering was a success; actually, it's a quick-and-dirty 
hack that inevitably leads to a disaster.

Lowering always needs to be reverted.

which led to me prematurely stating that it worked at CTFE 
[http://forum.dlang.org/thread/ukcoibejffinknrbz...@forum.dlang.org]


My Plan is as follows.

Add a new file for my ctfe-interpreter and update it gradually 
to take more and more of the cases the code in the files 
mentioned above was used for.


Do Dataflow analysis on the code that is to be ctfe'd so we can 
tell beforehand if we need to store state in the ctfe stack or 
not.


You don't need dataflow analysis. The CtfeCompile pass over the 
semantic tree was intended to determine how many variables are 
required by each function.


Or barring proper data-flow analysis: refcounting the variables 
on the CTFE stack could also be a solution.


I will post more details as soon as I dive deeper into the code.


The current implementation stores persistent state for every 
CTFE invocation, while caching nothing. Not even the compiled 
form of a function body.

Because it cannot relax purity.


No. Purity is not why it doesn't save the state. It's because of 
history.


I think I need to explain the history of CTFE.
Originally, we had constant-folding. Then constant-folding was 
extended to do things like slicing a string at compile time. 
Constant folding leaks memory like the Exxon Valdez leaks oil, 
but that's OK because it only ever happens once.
Then, the constant folding was extended to include function 
calls, for loops, etc. All using the existing constant-folding 
code. Now the crappy memory usage is a problem. But it's OK 
because the CTFE code was kind of proof-of-concept thing anyway.


Now, everyone asks, why doesn't it use some kind of byte-code 
interpreter or something?
Well, the reason is, it just wasn't possible. There was actually 
no single CTFE entry point. Instead, it was a complete mess. For 
example, with template arguments, the compiler would first try to 
run CTFE on the argument, with error messages suppressed. If that 
succeeded, it was a template value argument. If it generated 
errors, it would then see if it was a type. If that failed as well, 
it assumed it was a template alias argument.
The other big problem was that CTFE was also often called on a 
function which had semantic errors.


So, here is what I did with CTFE:
(1) Implement all the functionality, so that CTFE code can be 
developed. The valuable legacy of this, which I am immensely 
proud of, is the file "interpret3.d" in the test suite. It is 
very comprehensive. If an CTFE implementation passes the test 
suite, it's good to go.
The CTFE implementation itself is practically worthless. Its 
value was to get the test suite developed.


(2) Created a single entry point for CTFE. This involved working 
out rules for every place that CTFE is actually required, 
removing the horrid speculative execution of CTFE.
It made sure that functions had actually been semantically 
analyzed before they were executed (there were really horrific 
cases where the function had its semantic tree modified while it 
was being executed!!)
Getting to this point involved about 60 pull requests and six 
months of nearly full-time work. Finally it was possible to 
consider a byte-code interpreter or JITer.


We reached this point around Nov 2012.

(3) Added a 'ctfeCompile' step which runs over the semantic tree 
the first time the function is executed at compile time. Right 
now it does nothing much except check that the semantic tree 
is valid. This detected many errors in the rest of the compiler.


We reached this point around March 2013.

My intention was to extend the ctfeCompile step to a byte-code 
generator. But then I had to stop working on it and concentrate 
on looking after my family.


Looking at the code without knowing the history, you'll think, 
the obvious way to do this would be with a byte-code generator or 
JITer, and wonder why the implementation is so horrible. But for 
most of the history, that kind of implementation was just not 
possible.
People come up with all these elaborate schemes to speed up CTFE. 
It's totally not necessary. All that's needed is a very simple 
bytecode interpreter.






Re: Adventures in D Programming

2016-05-13 Thread Laeeth Isharc via Digitalmars-d-announce

On Thursday, 12 May 2016 at 21:08:58 UTC, Matthias Klumpp wrote:
To elaborate a bit more on the version incompatibilities thing: 
E.g. me as a new user reads about std.concurrency.Generator, 
wants to use it, and it turns out that the standard library 
doesn't contain it yet (in GDC). Same for 
std.experimental.logger.

Okay, means I can't use these.


I haven't tried myself for these, but it might turn out to be not 
so much work just to copy the relevant files over and clean up 
the rough edges if you want to use these in GDC/LDC.  But I know 
that it's harder and a nuisance if you aren't that familiar with 
the language and just want to get your job done.


Then, I want to use D-YAML, which depends on std.stream. But 
std.stream is completely deprecated, with no clear replacement 
path that I can see. That's really bad, and it also means I 
can't compile my code with uses of deprecated stuff treated as 
compilation failures.


https://github.com/DigitalMars/undeaD/blob/master/src/undead/stream.d

You might want to submit a pull request so D-YAML depends on this 
(where removed parts of Phobos go to live) rather than std.stream.


That was by far the most frustrating things I experienced in D. 
So ideally the docs would be split for different Phobos 
versions, that would already be a great help. Then, when 
deprecating stuff, showing a thing that replaces it or the 
proper way to write code using it would also be really nice.


I agree.

It would actually be really awesome if Phobos wasn't tied to a 
compiler, and all D compilers which are standard-compliant 
could compile it. Then, one could assume that people have the 
most recent Phobos. But it looks like it will take a longer 
time to get there, if at all.


A matter of maturity and resources.  It's quite astonishing the 
value that the small number of people working on LDC and GDC have 
been able to create.  (DMD too, but there are more people).  
Maybe there ought to be a way to express concrete appreciation 
for their work.





Re: The Case Against Autodecode

2016-05-13 Thread Chris via Digitalmars-d

On Friday, 13 May 2016 at 13:17:44 UTC, Walter Bright wrote:

On 5/13/2016 2:12 AM, Chris wrote:
If autodecode is killed, could we have a test version asap? 
I'd be willing to
test my programs with autodecode turned off and see what 
happens. Others should
do likewise and we could come up with a transition strategy 
based on what happened.


You can avoid autodecode by using .byChar


Hm. It would be difficult to make sure that my whole code base 
doesn't do something, somewhere, that triggers auto decode.


PS Why do I get a "StopForumSpam error" every time I post 
today? Has anyone else experienced the same problem:


"StopForumSpam error: Socket error: Lookup error: getaddrinfo 
error: Name or service not known. Please solve a CAPTCHA to 
continue."


Re: The Case Against Autodecode

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 3:43 AM, Marc Schütz wrote:

On Thursday, 12 May 2016 at 20:15:45 UTC, Walter Bright wrote:

7. Autodecode cannot be used with unicode path/filenames, because it is legal
(at least on Linux) to have invalid UTF-8 as filenames. It turns out in the
wild that pure Unicode is not universal - there's lots of dirty Unicode that
should remain unmolested, and autodecode does not play well with that.


This just means that filenames mustn't be represented as strings; it's unrelated
to auto decoding.


It means much more than that, filenames are just an example. I recently fixed 
MicroEmacs (my text editor) to assume the source is UTF-8, and display Unicode 
characters. But it still needs to work with dirty UTF-8 without throwing 
exceptions, modifying the text in-place, or other tantrums.
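
For reference, a minimal sketch of that kind of tolerant decoding 
with Phobos (assuming std.utf.decode's useReplacementDchar flag; 
the editor's actual code is not shown here):

import std.typecons : Yes;
import std.utf : decode;

void main()
{
    string dirty = "a\xFFb";   // invalid UTF-8 byte in the middle
    dstring cleaned;
    size_t i;
    while (i < dirty.length)
    {
        // Substitutes U+FFFD for the bad byte instead of throwing;
        // the original bytes stay untouched.
        cleaned ~= decode!(Yes.useReplacementDchar)(dirty, i);
    }
    assert(cleaned == "a\uFFFDb"d);
}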


Re: The Case Against Autodecode

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/12/2016 11:50 PM, Bill Hicks wrote:

And I get called a troll and
other names when I list half a dozen things wrong with D, my posts get
removed/censored, etc, all because I try to inform people not to waste time with
D because it's a broken and failed language.


Posts that engage in personal attacks and bring up personal issues about other 
forum members get removed.


You're welcome to post here in a reasonably professional manner.



Re: The Case Against Autodecode

2016-05-13 Thread Walter Bright via Digitalmars-d

On 5/13/2016 2:12 AM, Chris wrote:

If autodecode is killed, could we have a test version asap? I'd be willing to
test my programs with autodecode turned off and see what happens. Others should
do likewise and we could come up with a transition strategy based on what 
happened.


You can avoid autodecode by using .byChar
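
For example, a minimal sketch (std.utf.byChar iterates code units 
and never decodes, so it can't throw on bad UTF-8):

import std.range : walkLength;
import std.utf : byChar;

void main()
{
    string s = "héllo";
    assert(s.walkLength == 5);          // autodecoded: 5 code points
    assert(s.byChar.walkLength == 6);   // code units: 6, no decoding
}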


Re: The Case Against Autodecode

2016-05-13 Thread Kagamin via Digitalmars-d

On Friday, 13 May 2016 at 10:38:09 UTC, Jonathan M Davis wrote:
IIRC, Andrei talked in TDPL about how Java's choice to go with 
UTF-16 was worse than the choice to go with UTF-8, because it 
was correct in many more cases


UTF-16 was a migration from UCS-2, and UCS-2 was superior at the 
time.


Re: Killing the comma operator

2016-05-13 Thread Nick Treleaven via Digitalmars-d

On Thursday, 12 May 2016 at 02:51:33 UTC, Lionello Lunesu wrote:
I'm trying to think of a case where changing a single value 
into a tuple with 2 (or more) values would silently change the 
behavior, but I can't think of any. Seems to me it would always 
cause an error, iff the result of the comma operator gets used.


int x,y;
auto f() {return (x=4,y);}
...
auto z = f();
static if (!is(typeof(z) == int))
  voteForTrump();

;-)

In practice, this is more plausible with function overloading - 
i.e. z.overload() calling a different function. If the comma 
operator returns void, the `auto z` line and f().overload() both 
fail.
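
A self-contained sketch of that hazard (the overload names are 
invented for illustration; comma expressions were still legal when 
this was written):

import std.typecons : Tuple;
import std.stdio : writeln;

int x, y;
auto f() { return (x = 4, y); }  // comma expression: yields y, an int

void overload(int)              { writeln("int overload"); }
void overload(Tuple!(int, int)) { writeln("tuple overload"); }

void main()
{
    auto z = f();
    overload(z);  // would silently resolve to the other overload if
                  // (x = 4, y) ever started yielding a tuple
}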


[Issue 15662] Cannot move struct with defined opAssign due to @disabled post-blit

2016-05-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15662

--- Comment #12 from Martin Nowak  ---
(In reply to Martin Nowak from comment #11)
>   static if (!hasElaborateAssign!T && isAssignable!T)
> chunk = T.init;

That needs to be `value = T.init;`. Direct assignment is an optional
optimization over using memcpy.

>   else
>   {
> import core.stdc.string : memcpy;
> static immutable T init = T.init;
> memcpy(&chunk, &init, T.sizeof);
>   }

--


Re: Adventures in D Programming

2016-05-13 Thread Kagamin via Digitalmars-d-announce

On Thursday, 12 May 2016 at 22:01:04 UTC, David Nadlinger wrote:

Are there any bug reports for this, by the way? Thanks!


I believe it was the atomicOp bug. You can see the link to the 
discussion there.


Re: The Case Against Autodecode

2016-05-13 Thread Nick Treleaven via Digitalmars-d

On Friday, 13 May 2016 at 00:47:04 UTC, Jack Stouffer wrote:
If you're serious about removing auto-decoding, which I think 
you and others have shown has merits, you have to have THE 
SIMPLEST migration path ever, or you will kill D. I'm talking a 
simple press of a button.


char[] is always going to be unsafe for UTF-8. I don't think we 
can remove it or auto-decoding, only discourage use of it. We 
need a String struct IMO, without length or indexing. Its front 
can do autodecoding, and it has a ubyte[] raw() property too. 
(Possibly the byte length of front can be cached for use in 
popFront, assuming it was faster). This would be a gradual 
transition.
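
A rough sketch of what such a struct could look like (hypothetical, 
just to make the shape concrete; error handling elided):

import std.utf : decodeFront;

struct String
{
    private immutable(char)[] data;

    @property bool empty() const { return data.length == 0; }

    // front autodecodes one code point...
    @property dchar front() const
    {
        auto tmp = data;
        return decodeFront(tmp);
    }

    void popFront()
    {
        auto tmp = data;
        decodeFront(tmp);   // advance past exactly one code point
        data = tmp;
    }

    // ...while raw() exposes the undecoded bytes. No length, no indexing.
    @property immutable(ubyte)[] raw() const
    {
        return cast(immutable(ubyte)[]) data;
    }
}

void main()
{
    auto s = String("héllo");
    assert(s.front == 'h');
    assert(s.raw.length == 6);
}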


Re: imports && -run [Bug?]

2016-05-13 Thread Jacob Carlborg via Digitalmars-d-learn

On 2016-05-13 08:27, Andrew Edwards wrote:


I fail to see why the compiler would be less capable at this task than
rdmd. Since it is already built to accept multiple input files and knows
more about what's going on during compilation than rdmd will ever know,
it does not make sense that it should be inferior in this regard: yet rdmd
takes one input file and sorts out all dependencies.


There's no technical reason why the compiler is not doing this, as far 
as I know.


--
/Jacob Carlborg


Re: DMD flag -gs and -gx

2016-05-13 Thread Rene Zwanenburg via Digitalmars-d-learn

On Friday, 13 May 2016 at 10:19:04 UTC, Nordlöw wrote:

  -gs   always emit stack frame


IIRC, not emitting a stack frame is an optimization which 
confuses debuggers. So I think this can be used to make optimized 
builds a bit easier to debug.



  -gx   add stack stomp code


After a function returns the stack normally still contains the 
local variables of that function, but they can be overwritten at 
any time (which is why it's unsafe to escape references to stack 
variables from a function). Using this switch will cause the 
compiler to overwrite the stack with bogus values before 
returning, which will help with early detection of bugs like the 
above. It can also be useful in security contexts where a 
function operates on sensitive data.
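
A sketch of the bug class this catches (the escape goes through a 
global so the compiler can't reject it outright):

int* leak;

void saveAddress()
{
    int local = 42;
    leak = &local;   // dangling once saveAddress returns; legal in @system code
}

void main()
{
    saveAddress();
    // Without stack stomping the dead frame often still holds 42
    // and the bug hides; with -gx the frame is overwritten on return,
    // so the garbage value surfaces immediately.
    auto v = *leak;
}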


Re: Command line parsing

2016-05-13 Thread Russel Winder via Digitalmars-d
On Thu, 2016-05-12 at 18:25 +, Jesse Phillips via Digitalmars-d
wrote:
[…]
> unknown flags harder and displaying help challenging. So I'd like 
> to see getopt merge with another getopt

getopt is a 1970s C solution to the problem of command line parsing.
Most programming languages have moved on from getopt and created
language-idiomatic solutions to the problem. Indeed there are other,
better solutions in C now as well.

D should have one (or more maybe) D idiomatic command line processing
libraries *NOT* called getopt.
 
-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder



Re: How to split a string/array with multiple separators?

2016-05-13 Thread Thorsten Sommer via Digitalmars-d-learn

Wow, thanks Steve :)


Re: The Case Against Autodecode

2016-05-13 Thread Marc Schütz via Digitalmars-d

On Friday, 13 May 2016 at 10:38:09 UTC, Jonathan M Davis wrote:
Ideally, algorithms would be Unicode aware as appropriate, but 
the default would be to operate on code units with wrappers to 
handle decoding by code point or grapheme. Then it's easy to 
write fast code while still allowing for full correctness. 
Granted, it's not necessarily easy to get correct code that 
way, but anyone who wants full correctness without caring 
about efficiency can just use ranges of graphemes. Ranges of 
code points are rare regardless.


char[], wchar[] etc. can simply be made non-ranges, so that the 
user has to choose between .byCodePoint, .byCodeUnit (or 
.representation as it already exists), .byGrapheme, or even 
higher-level units like .byLine or .byWord. Ranges of char, wchar 
however stay as they are today. That way it's harder to 
accidentally get it wrong.
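
To make the three choices concrete (assuming the current 
std.utf/std.uni helpers):

import std.range : walkLength;
import std.uni : byGrapheme;
import std.utf : byCodePoint, byCodeUnit;

void main()
{
    string s = "noe\u0308l";               // "noël" with a combining diaeresis
    assert(s.byCodeUnit.walkLength == 6);  // UTF-8 code units
    assert(s.byCodePoint.walkLength == 5); // code points
    assert(s.byGrapheme.walkLength == 4);  // user-perceived characters
}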




Based on what I've seen in previous conversations on 
auto-decoding over the past few years (be it in the newsgroup, 
on github, or at dconf), most of the core devs think that 
auto-decoding was a major blunder that we continue to pay for. 
But unfortunately, even if we all agree that it was a huge 
mistake and want to fix it, the question remains of how to do 
that without breaking tons of code - though since AFAIK, Andrei 
is still in favor of auto-decoding, we'd have a hard time going 
forward with plans to get rid of it even if we had come up with 
a good way of doing so. But I would love it if we could get rid 
of auto-decoding and clean up string handling in D.


There is a simple deprecation path that's already been suggested. 
`isInputRange` and friends can output a helpful deprecation 
warning when they're called with a range that currently triggers 
auto-decoding.
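
A hypothetical sketch of that hook (simplified; this is not how 
Phobos defines isInputRange today):

import std.traits : isNarrowString;

template isInputRangeChecked(R)
{
    static if (isNarrowString!R)
    {
        deprecated("this range auto-decodes; consider .byCodeUnit or .byGrapheme")
        enum isInputRangeChecked = true;
    }
    else
    {
        enum isInputRangeChecked = true;  // stand-in for the real range check
    }
}

void main()
{
    static assert(isInputRangeChecked!string);  // compiles, but warns
}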


[Issue 15959] core.sys.windows modules should be modified for x64

2016-05-13 Thread via Digitalmars-d-bugs
https://issues.dlang.org/show_bug.cgi?id=15959

--- Comment #5 from j...@red.email.ne.jp ---
https://github.com/dlang/druntime/pull/1574

--


Re: The Case Against Autodecode

2016-05-13 Thread Marc Schütz via Digitalmars-d

On Thursday, 12 May 2016 at 23:16:23 UTC, H. S. Teoh wrote:
Therefore, autodecoding actually only produces intuitively 
correct results when your string has a 1-to-1 correspondence 
between grapheme and code point. In general, this is only true 
for a small subset of languages, mainly a few common European 
languages and a handful of others.  It doesn't work for Korean, 
and doesn't work for any language that uses combining 
diacritics or other modifiers.  You need byGrapheme to have the 
correct results.


In fact, even most European languages are affected if NFD 
normalization is used, which is the default on MacOS X.


And this is actually the main problem with it: it was introduced 
to make Unicode string handling correct. Well, it doesn't do that, 
and therefore it has no justification.


Re: The Case Against Autodecode

2016-05-13 Thread Chris via Digitalmars-d

On Friday, 13 May 2016 at 10:38:09 UTC, Jonathan M Davis wrote:


Based on what I've seen in previous conversations on 
auto-decoding over the past few years (be it in the newsgroup, 
on github, or at dconf), most of the core devs think that 
auto-decoding was a major blunder that we continue to pay for. 
But unfortunately, even if we all agree that it was a huge 
mistake and want to fix it, the question remains of how to do 
that without breaking tons of code - though since AFAIK, Andrei 
is still in favor of auto-decoding, we'd have a hard time going 
forward with plans to get rid of it even if we had come up with 
a good way of doing so. But I would love it if we could get rid 
of auto-decoding and clean up string handling in D.


- Jonathan M Davis


Why not just try it in a separate test release? Only then can we 
know to what extent it actually breaks code, and what remedies we 
could come up with.




Re: The Case Against Autodecode

2016-05-13 Thread Marc Schütz via Digitalmars-d

On Thursday, 12 May 2016 at 20:15:45 UTC, Walter Bright wrote:
7. Autodecode cannot be used with unicode path/filenames, 
because it is legal (at least on Linux) to have invalid UTF-8 
as filenames. It turns out in the wild that pure Unicode is not 
universal - there's lots of dirty Unicode that should remain 
unmolested, and autodecode does not play well with that.


This just means that filenames mustn't be represented as strings; 
it's unrelated to auto decoding.


Re: synchronized with multiple arguments

2016-05-13 Thread ZombineDev via Digitalmars-d

On Friday, 13 May 2016 at 09:17:06 UTC, Andrei Alexandrescu wrote:
A reader reminded me (thanks!) that in TDPL synchronized with 
multiple arguments does the right thing - locks objects in 
increasing order of address.


So now, to everyone's unpleasant surprise, the sample code in 
TDPL compiles and runs; it just has difficult-to-detect 
problems.


So regardless of the discussion of the comma operator, 
synchronized with multiple arguments should just work.


+1


Is synchronized being lowered to some function calls?


Here's the relevant code:
https://github.com/dlang/dmd/blob/master/src/statement.d#L4974

IIUC, the code assumes that there is a single object that needs 
to be locked. Which is definitely wrong.
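
For comparison, what the TDPL lowering would look like written by 
hand (a sketch, not actual compiler output):

void lockedPair(Object a, Object b, scope void delegate() critical)
{
    // TDPL rule: always take the lock at the lower address first,
    // so threads locking (a, b) and (b, a) cannot deadlock.
    if (cast(void*) a > cast(void*) b)
    {
        auto t = a; a = b; b = t;
    }
    synchronized (a) synchronized (b)
        critical();
}

void main()
{
    auto x = new Object, y = new Object;
    lockedPair(x, y, { /* critical section */ });
}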


BTW, should synchronized (obj) allow calling non-shared methods 
of obj inside the block?





Re: The Case Against Autodecode

2016-05-13 Thread Jonathan M Davis via Digitalmars-d
On Thursday, May 12, 2016 13:15:45 Walter Bright via Digitalmars-d wrote:
> On 5/12/2016 9:29 AM, Andrei Alexandrescu wrote:
>  > I am as unclear about the problems of autodecoding as I am about the
>  > necessity to remove curl. Whenever I ask I hear some arguments that work
>  > well emotionally but are scant on reason and engineering. Maybe it's
>  > time to rehash them? I just did so about curl, no solid argument seemed
>  > to come together. I'd be curious of a crisp list of grievances about
>  > autodecoding. -- Andrei
>
> Here are some that are not matters of opinion.
>
> 1. Ranges of characters do not autodecode, but arrays of characters do. This
> is a glaring inconsistency.
>
> 2. Every time one wants an algorithm to work with both strings and ranges,
> you wind up special casing the strings to defeat the autodecoding, or to
> decode the ranges. Having to constantly special case it makes for more
> special cases when plugging together components. These issues often escape
> detection when unittesting because it is convenient to unittest only with
> arrays.
>
> 3. Wrapping an array in a struct with an alias this to an array turns off
> autodecoding, another special case.
>
> 4. Autodecoding is slow and has no place in high speed string processing.
>
> 5. Very few algorithms require decoding.
>
> 6. Autodecoding has two choices when encountering invalid code units - throw
> or produce an error dchar. Currently, it throws, meaning no algorithms
> using autodecode can be made nothrow.
>
> 7. Autodecode cannot be used with unicode path/filenames, because it is
> legal (at least on Linux) to have invalid UTF-8 as filenames. It turns out
> in the wild that pure Unicode is not universal - there's lots of dirty
> Unicode that should remain unmolested, and autodecode does not play well
> with that.
>
> 8. In my work with UTF-8 streams, dealing with autodecode has caused me
> considerably extra work every time. A convenient timesaver it ain't.
>
> 9. Autodecode cannot be turned off, i.e. it isn't practical to avoid
> importing std.array one way or another, and then autodecode is there.
>
> 10. Autodecoded arrays cannot be RandomAccessRanges, losing a key benefit of
> being arrays in the first place.
>
> 11. Indexing an array produces different results than autodecoding, another
> glaring special case.

It also results in constantly special-casing algorithms for narrow strings
in order to avoid auto-decoding. Phobos does this all over the place. We
have a ridiculous amount of code in Phobos just to avoid auto-decoding, and
anyone who wants high performance will have to do the same.
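
The pattern looks something like this (a simplified sketch, not 
actual Phobos code). For an ASCII needle the decoded answer equals 
the code-unit answer, so the string branch can skip decoding 
entirely:

import std.range.primitives : empty, front, popFront;
import std.string : representation;
import std.traits : isNarrowString;

size_t countByte(R)(R haystack, char needle)
{
    assert(needle < 0x80);  // only valid for ASCII needles
    size_t n;
    static if (isNarrowString!R)
    {
        foreach (b; haystack.representation)  // ubyte[]: no autodecoding
            if (b == needle) ++n;
    }
    else
    {
        for (; !haystack.empty; haystack.popFront())
            if (haystack.front == needle) ++n;
    }
    return n;
}

void main()
{
    assert(countByte("héllo wörld", 'l') == 3);
}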

And it's not like auto-decoding is even correct. It would be one thing if
auto-decoding were fully correct but slow, but to be fully correct, it would
need to operate at the grapheme level, not the code point level. So, by
default, we get slower code without actually getting fully correct code.

So, we're neither fast nor correct. We _are_ correct in more cases than we'd
be if we simply acted like ASCII was all there was, but what we end up with
is the illusion that we're correct when we're not. IIRC, Andrei talked in
TDPL about how Java's choice to go with UTF-16 was worse than the choice to
go with UTF-8, because it was correct in many more cases to operate on the
code unit level as if a code unit were a character, and it was therefore
harder to realize that what you were doing was wrong, whereas with UTF-8,
it's obvious very quickly. We currently have that same problem with
auto-decoding except that it's treating UTF-32 code units as if they were
full characters rather than treating UTF-16 code units as if they were full
characters.

Ideally, algorithms would be Unicode aware as appropriate, but the default
would be to operate on code units with wrappers to handle decoding by code
point or grapheme. Then it's easy to write fast code while still allowing
for full correctness. Granted, it's not necessarily easy to get correct code
that way, but anyone who wants full correctness without caring about
efficiency can just use ranges of graphemes. Ranges of code points are rare
regardless.

Based on what I've seen in previous conversations on auto-decoding over the
past few years (be it in the newsgroup, on github, or at dconf), most of the
core devs think that auto-decoding was a major blunder that we continue to
pay for. But unfortunately, even if we all agree that it was a huge mistake
and want to fix it, the question remains of how to do that without breaking
tons of code - though since AFAIK, Andrei is still in favor of
auto-decoding, we'd have a hard time going forward with plans to get rid of
it even if we had come up with a good way of doing so. But I would love it
if we could get rid of auto-decoding and clean up string handling in D.

- Jonathan M Davis



DMD flag -gs and -gx

2016-05-13 Thread Nordlöw via Digitalmars-d-learn

What role does the DMD flags -gs and -gx play?

The documentation says

  -gs   always emit stack frame
  -gx   add stack stomp code

and I don't know what they mean.


Re: The Case Against Autodecode

2016-05-13 Thread Kagamin via Digitalmars-d

On Friday, 13 May 2016 at 06:50:49 UTC, Bill Hicks wrote:
not to waste time with D because it's a broken and failed 
language.


D is a better broken thing among all the broken things in this 
broken world, so it's only to be expected that people prefer to 
spend time on it.


Re: The backlash against scripting languages has begun

2016-05-13 Thread Chris via Digitalmars-d

On Friday, 13 May 2016 at 07:31:26 UTC, Joakim wrote:
He mentions Swift, Rust, and Go as his hopes at the end, too 
bad he doesn't include D:


https://medium.com/@deathdisco/today-i-accept-that-rails-is-yesterday-s-software-b5af35c9af39

He'd probably be happy with D, particularly given Walter's 
stance on the monkey-patching that guy now rues:


"Monkey-patching has, in Ruby, been popular and powerful. It 
has also turned out to be a disaster. It does not scale, and is 
not conducive to more than one person/team working on the code 
base."

http://forum.dlang.org/post/jsat48$ujt$1...@digitalmars.com

That blogger probably wishes he read that quote from Walter 
four years ago. ;)


"basing themselves on interpreted, slow languages that favoured 
‘easy to learn’ over ‘easy to maintain’."


Yep. Frustration kicks in sooner or later. I always tell people 
not to use scripting languages for bigger or real world projects.


Re: To all DConf speakers: please upload slides!

2016-05-13 Thread Rory McGuire via Digitalmars-d-announce
On Thu, May 12, 2016 at 10:55 PM, Steven Schveighoffer via
Digitalmars-d-announce  wrote:

> On 5/12/16 4:13 PM, Seb wrote:
>
>> On Wednesday, 11 May 2016 at 09:17:54 UTC, Dicebot wrote:
>>
>>> To do the editing of HD videos we need presentation slides which are
>>> currently scattered over different places. It would help a lot to have
>>> them all in github.com/dlang/dlang.org repo - please submit pull
>>> requests asap!
>>>
>>
>> Just a minor complaint - would it be possible for the next dconf to have
>> the slides (or a link to them) on dconf.org before the talk starts?
>> Thanks for the great work!
>>
>
> I think it's better to not have the slides available until the talk
> starts. There may be jokes/surprises in the slides that you don't want to
> give away before the talk happens :)
>
> -Steve
>

Surely we should make it a requirement that slides are uploaded on the
dconf site, then make sure they only show once the talk starts.

I'd imagine it's easier to get most of the admin stuff done before the
conference, because once it starts those involved are already busy with it.


Re: The Case Against Autodecode

2016-05-13 Thread Chris via Digitalmars-d

On Friday, 13 May 2016 at 06:50:49 UTC, Bill Hicks wrote:


Wow, that's eleven things wrong with just one tiny element of 
D, with the potential to cause problems, whether fixed or not.  
And I get called a troll and other names when I list half a 
dozen things wrong with D, my posts get removed/censored, etc, 
all because I try to inform people not to waste time with D 
because it's a broken and failed language.


*sigh*

Phobos, a piece of useless rock orbiting a dead planet ... the 
irony.


Is there any PL that doesn't have multiple issues? Look at Swift. 
They keep changing it, although it started out as _the_ big 
thing, because, you know, it's Apple. C#, Java, Go and of course 
the chronically ill C++. There is no such thing as the perfect 
PL, and as hardware is changing, PLs are outdated anyway and have 
to catch up. The question is not whether a language sucks or not, 
the question is which language sucks the least for the task at 
hand.


PS I wonder whether Bill Hicks knows you're using his name. But I 
guess he's lost interest in this planet and happily lives on Mars 
now.




synchronized with multiple arguments

2016-05-13 Thread Andrei Alexandrescu via Digitalmars-d
A reader reminded me (thanks!) that in TDPL synchronized with multiple 
arguments does the right thing - locks objects in increasing order of 
address.


So now, to everyone's unpleasant surprise, the sample code in TDPL 
compiles and runs; it just has difficult-to-detect problems.


So regardless of the discussion of the comma operator, synchronized with 
multiple arguments should just work.


Is synchronized being lowered to some function calls?


Andrei


Re: The Case Against Autodecode

2016-05-13 Thread Chris via Digitalmars-d

On Friday, 13 May 2016 at 01:00:54 UTC, Walter Bright wrote:

On 5/12/2016 5:47 PM, Jack Stouffer wrote:
D is much less popular now than was Python at the time, and 
Python 2 problems
were more straight forward than the auto-decoding problem.  
You'll need a very
clear migration path, years long deprecations, and automatic 
tools in order to
make the transition work, or else D's usage will be 
permanently damaged.


I agree, if it is possible at all.


I don't know to what extent my problems with string handling are 
related to autodecode. However, I had to write some utility 
functions to get around issues with code points, graphemes and 
the like. While it is not a huge issue in terms of programming 
time, it does slow down my program, because even simple 
operations may have to go through a utility function to make sure 
the result is correct (.length, for example). But that might be 
an issue related to Unicode in general (or D's handling of it).


If autodecode is killed, could we have a test version asap? I'd 
be willing to test my programs with autodecode turned off and see 
what happens. Others should do likewise and we could come up with 
a transition strategy based on what happened.


Re: Version block "conditions" with logical operators

2016-05-13 Thread Joakim via Digitalmars-d

On Thursday, 12 May 2016 at 01:58:33 UTC, Walter Bright wrote:

On 5/11/2016 6:52 PM, Joakim wrote:
That example is misleading, as that was translated from C++ and 
the host half of it was removed a couple months ago:

https://github.com/dlang/dmd/pull/5549/files

I'll submit a PR for the rest: I'm sick of this argument that 
"ddmd is using static if, so why shouldn't I?"


Please do. That code is an abomination.


I'm trying, but Daniel seems against it, care to chip in?

https://github.com/dlang/dmd/pull/5772

Specifically, do you want the changes in that PR?  If so, do you 
prefer the use of TARGET_POSIX as a runtime variable or listing 
each TARGET_OS separately?


Re: Command line parsing

2016-05-13 Thread Andrei Alexandrescu via Digitalmars-d

On 5/12/16 8:21 PM, Nick Sabalausky wrote:

You may want to ask Sonke about his specific reasons and experiences
with that design.


Yes please! -- Andrei


Re: Casting Pointers?

2016-05-13 Thread Rene Zwanenburg via Digitalmars-d

On Thursday, 12 May 2016 at 22:23:38 UTC, Marco Leise wrote:
The pointer cast solution is specifically supported at CTFE, 
because /unions/ don't work there. :p


Well that's a problem ^^

I remember a discussion quite a while ago where Walter stated D 
should have strict aliasing rules; let me see if I can find it... 
Ah, here:


http://forum.dlang.org/post/jg3f21$1jqa$1...@digitalmars.com

On Saturday, 27 July 2013 at 06:58:04 UTC, Walter Bright wrote:
Although it isn't in the spec, D should be "strict aliasing". 
This is because:


1. it enables better code generation

2. there are ways, such as unions, to get the other aliasing 
that doesn't break strict aliasing
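
For instance, a sketch of the union escape hatch (run-time only - 
at CTFE unions don't work, hence the pointer-cast carve-out 
mentioned above):

union FloatBits
{
    float f;
    uint  u;
}

void main()
{
    FloatBits fb;
    fb.f = 1.0f;
    assert(fb.u == 0x3f80_0000);  // IEEE 754 bit pattern of 1.0f
}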


On Saturday, 27 July 2013 at 08:59:54 UTC, Walter Bright wrote:

On 7/27/2013 1:57 AM, David Nadlinger wrote:

We need to carefully formalize this then, and quickly.

[...]

I agree. Want to do an enhancement request on bugzilla for it?


Re: To all DConf speakers: please upload slides!

2016-05-13 Thread Adrian Matoga via Digitalmars-d-announce

On Wednesday, 11 May 2016 at 17:31:32 UTC, dilkROM wrote:
And also, if anyone can identify all lightning speakers, that 
would be terrific. We do need their slides and their names / 
desired nicknames / contact info as well. :)


I didn't use slides, but only a few code examples, collected in a 
text file that Andrei pasted here: 
http://dpaste.dzfl.pl/36e4c089d0d6




Re: Always false float comparisons

2016-05-13 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 13 May 2016 at 05:12:14 UTC, Manu wrote:

No. Do not.
I've worked on systems where the compiler and the runtime don't 
share

floating point precisions before, and it was a nightmare.


Use reproducible cross platform IEEE754-2008 and use exact 
rational numbers. All other representations are just painful. 
Nothing wrong with supporting 16, 32, 64 and 128 bit, but stick 
to the reproducible standard. If people want "non-reproducible 
fast math", then they should specify it.




Re: The Case Against Autodecode

2016-05-13 Thread Ola Fosheim Grøstad via Digitalmars-d

On Friday, 13 May 2016 at 00:47:04 UTC, Jack Stouffer wrote:
D is much less popular now than was Python at the time, and 
Python 2 problems were more straight forward than the 
auto-decoding problem.  You'll need a very clear migration 
path, years long deprecations, and automatic tools in order to 
make the transition work, or else D's usage will be permanently 
damaged.


Python 2 is/was deployed at a much larger scale and with far more 
library dependencies, so I don't think it is comparable. It is 
easier for D to get away with breaking changes.


I am still using Python 2.7 exclusively, but now I use:
from __future__ import division, absolute_import, with_statement, 
unicode_literals


D can do something similar.

C++ is using a comparable solution. Use switches to turn on 
different compatibility levels.




The backlash against scripting languages has begun

2016-05-13 Thread Joakim via Digitalmars-d
He mentions Swift, Rust, and Go as his hopes at the end, too bad 
he doesn't include D:


https://medium.com/@deathdisco/today-i-accept-that-rails-is-yesterday-s-software-b5af35c9af39

He'd probably be happy with D, particularly given Walter's stance 
on the monkey-patching that guy now rues:


"Monkey-patching has, in Ruby, been popular and powerful. It has 
also turned out to be a disaster. It does not scale, and is not 
conducive to more than one person/team working on the code base."

http://forum.dlang.org/post/jsat48$ujt$1...@digitalmars.com

That blogger probably wishes he read that quote from Walter four 
years ago. ;)


Re: The Case Against Autodecode

2016-05-13 Thread poliklosio via Digitalmars-d

On Friday, 13 May 2016 at 06:50:49 UTC, Bill Hicks wrote:

On Thursday, 12 May 2016 at 20:15:45 UTC, Walter Bright wrote:

(...)
Wow, that's eleven things wrong with just one tiny element of 
D, with the potential to cause problems, whether fixed or not.  
And I get called a troll and other names when I list half a 
dozen things wrong with D, my posts get removed/censored, etc, 
all because I try to inform people not to waste time with D 
because it's a broken and failed language.


*sigh*

Phobos, a piece of useless rock orbiting a dead planet ... the 
irony.


You get banned because there is a difference between torpedoing a 
project and offering constructive criticism.
Also, you are missing the point by claiming that a technical 
problem is sure to kill D. Note that very successful languages 
like C++, Python and so on have also undergone heated discussions 
about various features, and often live with design mistakes for 
many years. The real reason why languages are successful is what 
they enable, not how many quirks they have.

Quirks are why they get replaced by others 20 years later. :)


Re: The Case Against Autodecode

2016-05-13 Thread Ethan Watson via Digitalmars-d

On Friday, 13 May 2016 at 06:50:49 UTC, Bill Hicks wrote:

*rant*


Actually, chap, it's the attitude that's the turn-off in your 
post there. Listing problems in order to improve them, and 
listing problems to convince people something is a waste of time 
are incompatible mindsets around here.


Re: The end of curl (in phobos)

2016-05-13 Thread Johannes Pfau via Digitalmars-d
Am Sun, 8 May 2016 11:33:07 +0300
schrieb Andrei Alexandrescu :

> On 5/8/16 11:05 AM, Jonathan M Davis via Digitalmars-d wrote:
> > On Sunday, May 08, 2016 02:44:48 Adam D. Ruppe via Digitalmars-d
> > wrote:  
> >> On Saturday, 7 May 2016 at 20:50:53 UTC, Jonas Drewsen wrote:  
> >>> But std.net.curl supports not just HTTP but also FTP etc. so i
> >>> guess that won't suffice.  
> >>
> >> We can always implement ftp too, it isn't that complicated of a
> >> protocol.
> >>
> >> Though, I suspect its users are a tiny minority and they might
> >> not mind depending on a separate curl package.  
> >
> > An alternative would be to move std.net.curl into a dub package.  
> 
> That would still be a breaking change, is that correct?
> 
> I'm unclear on what the reasons are for removing libcurl so I'd love
> to see them stated clearly. Walter's argumentation was vague - code
> that we don't control etc. There have been past reports of issues
> with libcurl on windows, have those not been permanently solved?
> 
> I even see a plus: dealing with libcurl is a good exercise in eating
> our dogfood regarding "interfacing with C libraries is trivial"
> stance. Having to deal with it is a reflection of what other projects
> have to do on an ongoing basis.
> 
> 
> Andrei
> 

The curl problems are more or less solved now, but they have caused
quite some trouble:

As long as we were statically linking curl:
 * We can't use curl when producing cross compilers for GDC as the
   minimal builds used by crosstool do not include curl. They do not
   even include zlib, we're just lucky that zlib is in GCC and we
   compile it statically into druntime. OTOH I'm not sure if we can get
   conflicts between our statically compiled zlib and libraries which
   link against zlib.
 * For static libraries, we don't need curl at link time, but for
   dynamic libraries we do need it.
 * There was the library versioning issue which made DMD builds
   unusable on some distributions.
 * http://bugzilla.gdcproject.org/show_bug.cgi?id=202 Even programs not
   using libcurl will sometimes require linking curl (This is because
   common templates such as std.conv.to might be instantiated in curl,
   so curl.o is pulled in, which depends on libcurl)
 * Library order when linking is important nowadays, so you need a way
   to specify -lcurl in the correct location relative to -lphobos

Still open issues:
 * Even when dynamically loading curl, it introduces a new dependency:
   libdl for dynamic loading. This is not an issue for shared
   libraries, but the list of libraries which need to be hard coded when
   linking a static libphobos is already quite long:
   -lc -lrt -lm -ldl -lz -lstdc++ -luuid -lws2_32
   In fact GDC doesn't link some of these yet and Iain doesn't want to
   add more special cases to our linking code
(https://github.com/D-Programming-GDC/GDC/pull/182
https://github.com/D-Programming-GDC/GDC/pull/181).


Additionally, the complete API, integration with D features and
performance are not really up to Phobos standards. This is because of
libcurl API limitations though, so there's nothing we can do about it.
As long as we don't have a D replacement though, it's still the best
HTTP client available...


Re: The Case Against Autodecode

2016-05-13 Thread Bill Hicks via Digitalmars-d

On Thursday, 12 May 2016 at 20:15:45 UTC, Walter Bright wrote:


Here are some that are not matters of opinion.

1. Ranges of characters do not autodecode, but arrays of 
characters do. This is a glaring inconsistency.


2. Every time one wants an algorithm to work with both strings 
and ranges, you wind up special casing the strings to defeat 
the autodecoding, or to decode the ranges. Having to constantly 
special case it makes for more special cases when plugging 
together components. These issues often escape detection when 
unittesting because it is convenient to unittest only with 
arrays.


3. Wrapping an array in a struct with an alias this to an array 
turns off autodecoding, another special case.


4. Autodecoding is slow and has no place in high speed string 
processing.


5. Very few algorithms require decoding.

6. Autodecoding has two choices when encountering invalid code 
units - throw or produce an error dchar. Currently, it throws, 
meaning no algorithms using autodecode can be made nothrow.


7. Autodecode cannot be used with unicode path/filenames, 
because it is legal (at least on Linux) to have invalid UTF-8 
as filenames. It turns out in the wild that pure Unicode is not 
universal - there's lots of dirty Unicode that should remain 
unmolested, and autodecode does not play well with that.


8. In my work with UTF-8 streams, dealing with autodecode has 
caused me considerably extra work every time. A convenient 
timesaver it ain't.


9. Autodecode cannot be turned off, i.e. it isn't practical to 
avoid importing std.array one way or another, and then 
autodecode is there.


10. Autodecoded arrays cannot be RandomAccessRanges, losing a 
key benefit of being arrays in the first place.


11. Indexing an array produces different results than 
autodecoding, another glaring special case.


Wow, that's eleven things wrong with just one tiny element of D, 
with the potential to cause problems, whether fixed or not.  And 
I get called a troll and other names when I list half a dozen 
things wrong with D, my posts get removed/censored, etc, all 
because I try to inform people not to waste time with D because 
it's a broken and failed language.


*sigh*

Phobos, a piece of useless rock orbiting a dead planet ... the 
irony.


Re: Request assistance converting C's #ifndef to D

2016-05-13 Thread Andrew Edwards via Digitalmars-d-learn

On 5/13/16 3:23 PM, tsbockman wrote:

On Friday, 13 May 2016 at 06:05:14 UTC, Andrew Edwards wrote:

Additionally, what's the best way to handle nested #ifdef's? Those
that appear inside structs, functions and the like... I know that
global #ifdef's are turned into version blocks, but version blocks
cannot be used inside classes, structs, functions, etc.


`static if` and `version()` can be nested, and both work just fine
inside classes, structs, functions, etc.:

module app;

version = withPrint;

struct A {
   version(withPrint) {
 class B {
   static if(size_t.sizeof == 4) {
 static void print() {
   import std.stdio : writeln;
   version(unittest) {
 writeln("Hello, 32-bit world of unit tests!");
   } else {
 writeln("Hello, 32-bit world!");
   }
 }
   } else {
 static void print() {
   import std.stdio : writeln;
   version(unittest) {
 writeln("Hello, presumably 64-bit world of unit tests!");
   } else {
 writeln("Hello, presumably 64-bit world!");
   }
 }
   }
 }
   }
}

void main() {
   A.B.print();
}


Not sure what I was doing wrong earlier. Works perfectly fine now. Glad 
I asked, because usually I just get frustrated, put it aside, and 
never return to it. Thanks for the assist.



(Try it on DPaste: https://dpaste.dzfl.pl/0fafe316f739)




Re: Command line parsing

2016-05-13 Thread Jacob Carlborg via Digitalmars-d

On 2016-05-02 14:52, Andrei Alexandrescu wrote:

I found this in https://peter.bourgon.org/go-best-practices-2016/:

"I said it in 2014 but I think it’s important enough to say again:
define and parse your flags in func main. Only func main has the right
to decide the flags that will be available to the user. If your library
code wants to parameterize its behavior, those parameters should be part
of type constructors. Moving configuration to package globals has the
illusion of convenience, but it’s a false economy: doing so breaks code
modularity, makes it more difficult for developers or future maintainers
to understand dependency relationships, and makes writing independent,
parallelizable tests much more difficult."

This is interesting because it's what std.getopt does but the opposite
of what GFLAGS (http://gflags.github.io/gflags/) does. GFLAGS allows any
module in a project to define flags. I was thinking of adding
GFLAGS-like capabilities to std.getopt but looks like there's no need
to... thoughts?


I can see it being useful for a tool like git which has sub 
commands/actions to parse the global flags in the main function and 
parse the sub command specific flags in the module handling the sub command.
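
For instance, a minimal sketch of that split using std.getopt (the 
layout is invented for illustration):

import std.getopt;
import std.stdio : writeln;

void main(string[] args)
{
    bool verbose;
    // Global flags are declared and parsed in main, and only in main.
    auto r = getopt(args,
        config.stopOnFirstNonOption,  // leave subcommand args alone
        "verbose|v", &verbose);

    if (r.helpWanted || args.length < 2)
    {
        defaultGetoptPrinter("usage: tool [--verbose] <command>", r.options);
        return;
    }

    // The subcommand handler receives its parameters explicitly,
    // not via globals.
    writeln("command: ", args[1], ", verbose: ", verbose);
}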


I've also built a library that does some of the boilerplate to set up a 
tool/application that parses generic flags (like "help" and "version") 
and allows adding application specific flags as well.


--
/Jacob Carlborg


Re: Command line parsing

2016-05-13 Thread Jacob Carlborg via Digitalmars-d

On 2016-05-12 19:21, Nick Sabalausky wrote:


Vibe.d uses a system (built on top of getopt, IIRC) that allows
different modules to define and handle their own flags. It seems to be
useful for framework-style libraries where there are certain common
flags automatically provided and handled by the framework, and then
individual app developers can add their own program-specific flags. You
may want to ask Sonke about his specific reasons and experiences with
that design.


I had to add several new flags when I worked with vibe.d, which I think 
should have been included from the start: the port to use, the address 
to bind, whether worker threads should be used, and the number of 
threads to use.


--
/Jacob Carlborg


Re: imports && -run [Bug?]

2016-05-13 Thread Jacob Carlborg via Digitalmars-d-learn

On 2016-05-13 08:10, tsbockman wrote:


According to the DMD compiler manual, the -run switch only accepts a
single source file:
 -run srcfile args...

After the first source file, any further arguments passed to DMD will be
interpreted as arguments to be passed to the program being run.


To elaborate: since the second file "inc" is not passed to the compiler, 
it will not compile that file, and therefore you get linker errors. The 
solution is to pass all extra files before the -run flag.


Even better is to use "rdmd" which will automatically track and compile 
dependencies. Using rdmd it's enough to pass a single file to compile 
all dependencies and run the resulting binary: "rdmd mod".



Have you tried using DUB? It has lots of convenient features, including
a `run` command that supports multiple source files:
 http://code.dlang.org/docs/commandline#run


Dub is a great tool when a project grows larger than a few files.

--
/Jacob Carlborg


Re: imports && -run [Bug?]

2016-05-13 Thread Andrew Edwards via Digitalmars-d-learn

On 5/13/16 3:10 PM, tsbockman wrote:

On Friday, 13 May 2016 at 01:16:36 UTC, Andrew Edwards wrote:

command: dmd -run mod inc

output:

Undefined symbols for architecture x86_64:
  "_D3inc5printFZv", referenced from:
  __Dmain in mod.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see
invocation)
--- errorlevel 1

None of the variations of imports work when compiled with the -run
switch but all work perfectly well without it.


According to the DMD compiler manual, the -run switch only accepts a
single source file:
 -run srcfile args...

After the first source file, any further arguments passed to DMD will be
interpreted as arguments to be passed to the program being run.


Thanks, I guess I just expected it to work the same as rdmd does and 
didn't even bother trying to look in the manual regarding this.


I fail to see why the compiler would be less capable at this task than 
rdmd. Since it is already built to accept multiple input files and knows 
more about what's going on during compilation than rdmd will ever know, 
it does not make sense that it should be inferior in this regard: yet rdmd 
takes one input file and sorts out all dependencies.



Have you tried using DUB? It has lots of convenient features, including
a `run` command that supports multiple source files:
 http://code.dlang.org/docs/commandline#run


I've used dub before but it is not desired for what I'm trying to do. 
rdmd does the trick. Thank you.


Re: Request assistance converting C's #ifndef to D

2016-05-13 Thread tsbockman via Digitalmars-d-learn

On Friday, 13 May 2016 at 06:05:14 UTC, Andrew Edwards wrote:
Additionally, what's the best way to handle nested #ifdef's? 
Those that appear inside structs, functions and the like... I 
know that global #ifdef's are turned into version blocks but 
version blocks cannot be used inside classes, structs, 
functions, etc.


`static if` and `version()` can be nested, and both work just 
fine inside classes, structs, functions, etc.:


module app;

version = withPrint;

struct A {
  version(withPrint) {
class B {
  static if(size_t.sizeof == 4) {
static void print() {
  import std.stdio : writeln;
  version(unittest) {
writeln("Hello, 32-bit world of unit tests!");
  } else {
writeln("Hello, 32-bit world!");
  }
}
  } else {
static void print() {
  import std.stdio : writeln;
  version(unittest) {
writeln("Hello, presumably 64-bit world of unit 
tests!");

  } else {
writeln("Hello, presumably 64-bit world!");
  }
}
  }
}
  }
}

void main() {
  A.B.print();
}

(Try it on DPaste: https://dpaste.dzfl.pl/0fafe316f739)


Re: imports && -run [Bug?]

2016-05-13 Thread tsbockman via Digitalmars-d-learn

On Friday, 13 May 2016 at 01:16:36 UTC, Andrew Edwards wrote:

command: dmd -run mod inc

output:

Undefined symbols for architecture x86_64:
  "_D3inc5printFZv", referenced from:
  __Dmain in mod.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to 
see invocation)

--- errorlevel 1

None of the variations of imports work when compiled with the 
-run switch but all work perfectly well without it.


According to the DMD compiler manual, the -run switch only 
accepts a single source file:

-run srcfile args...

After the first source file, any further arguments passed to DMD 
will be interpreted as arguments to be passed to the program 
being run.


Have you tried using DUB? It has lots of convenient features, 
including a `run` command that supports multiple source files:

http://code.dlang.org/docs/commandline#run


Re: Request assistance converting C's #ifndef to D

2016-05-13 Thread Andrew Edwards via Digitalmars-d-learn

On 5/13/16 7:51 AM, Andrew Edwards wrote:

The following preprocessor directives are frequently encountered in C
code, providing a default constant value where the user of the code has
not specified one:

 #ifndef MIN
 #define MIN 99
 #endif

 #ifndef MAX
 #define MAX 999
 #endif

I'm at a loss at how to properly convert it to D. I've tried the following:

 enum MIN = 0;
 static if(MIN <= 0)
 {
 MIN = 99;
 }

it works as long as the static if is enclosed in a static this(),
otherwise the compiler complains:

 mo.d(493): Error: no identifier for declarator MIN
 mo.d(493): Error: declaration expected, not '='

This, however, does not feel like the right way to do things, but I cannot
find any documentation that provides an alternative. Is there a better
way to do this?

Thanks,
Andrew


Additionally, what's the best way to handle nested #ifdef's? Those that 
appear inside structs, functions and the like... I know that global 
#ifdef's are turned into version blocks, but version blocks cannot be used 
inside classes, structs, functions, etc.
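
As for the default-constant pattern itself, one way to approximate 
it in D is to let the build pick a version and fall back otherwise 
(a sketch; the version identifier is invented):

// dmd -version=CustomLimits app.d  to select the overrides
version (CustomLimits)
{
    enum MIN = 1;
    enum MAX = 100;
}
else
{
    enum MIN = 99;    // fallbacks, mirroring the C #ifndef defaults
    enum MAX = 999;
}

Unlike C's -D, -version can't carry a value, so the override values 
still live in the source; for truly user-supplied values a template 
parameter or a string-import configuration file is the usual 
workaround.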

