Re: Review of Andrei's std.benchmark

2012-09-20 Thread Andrei Alexandrescu

On 9/20/12 11:03 PM, Nick Sabalausky wrote:

On Tue, 18 Sep 2012 18:02:10 -0400
Andrei Alexandrescu  wrote:


On 9/18/12 5:07 PM, "Øivind" wrote:

* For all tests, the best run is selected, but would it not be
reasonable in some cases to get the average value? Maybe excluding
the runs that are more than a couple std. deviations away from the
mean value..


After extensive tests with a variety of aggregate functions, I can
say firmly that taking the minimum time is by far the best when it
comes to assessing the speed of a function.



*Ahem*: http://zedshaw.com/essays/programmer_stats.html


I'm not sure I figure how this applies to the discussion at hand.


Your claim that the minimum time is sufficient is...ummm...extremely
unorthodox, to say the least.


What would be the orthodoxy? If orthodoxy is what google finds, it's 
good we're not orthodox.



As such, you're going to need a far more
convincing argument than "It worked well for me."


Sure. I have just detailed the choices made by std.benchmark in a couple 
of posts.


At Facebook we measure using the minimum, and it's working for us. We've 
tried other approaches (such as taking the mode of the distribution). 
Turns out the minimum is better every time. Take a look at the early 
return in estimateTime():


https://github.com/facebook/folly/blob/master/folly/Benchmark.cpp#L136


I assume I don't need to preach that "Extraordinary claims require
extraordinary evidence". But, condensing benchmarks and statistics down
to "take the minimum" and saying that's sufficient is one heck of an
extraordinary claim. If you feel that you can provide sufficiently
extraordinary justification, then please do.


My claim is unremarkable. All I'm saying is the minimum running time of 
an algorithm on a given input is a stable and indicative proxy for the 
behavior of the algorithm in general. So it's a good target for 
optimization.
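
To make this concrete, here is a minimal sketch of the idea in plain D (using 
std.datetime.stopwatch rather than the std.benchmark API; the delegate 
parameter and the trial count are illustrative assumptions):

import std.datetime.stopwatch : AutoStart, StopWatch;
import core.time : Duration;

// Run the function many times and keep the smallest observed time;
// that smallest time is the optimization target discussed above.
Duration minimumTime(void delegate() fun, size_t trials = 1000)
{
    auto best = Duration.max;
    foreach (_; 0 .. trials)
    {
        auto sw = StopWatch(AutoStart.yes);
        fun();
        sw.stop();
        if (sw.peek < best)
            best = sw.peek;
    }
    return best;
}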


There might be some confusion that std.benchmark does profiling. That's 
not part of its charter.



Otherwise, I think we'll need richer results. At the very least there
should be an easy way to get at the raw results programmatically
so we can run whatever stats/plots/visualizations/output-formats we
want. I didn't see anything like that browsing through the docs, but
it's possible I may have missed it.


Currently std.benchmark does not expose raw results for the sake of 
simplicity. It's easy to expose such, but I'd need a bit more convincing 
about their utility.



That brings up another question too: I like the idea of a
one-stop-benchmarking-shop, much like we have for unittests, but maybe
reporting shouldn't be so tightly integrated and left more open for
integration with a proper statistics lib and more generalized
output abilities? But of course, that doesn't preclude having a nice
built-in, but optional, default report. (Again though, maybe I'm
overlooking something already in the module?)


That's pretty much what's happening. There's an API for collecting 
timings, and then there's an API for printing those with a default format.



One other nitpick: My initial impression is that the
"benchmark_relative_file read" stuff seems a bit kludgey (and
confusing to visually parse). Is there maybe a better way to handle
that? For example, inspired by getopt:

printBenchmarks!(
 "file write", { std.file.write("/tmp/deleteme", "hello, world!"); },
 BenchmarkOption.relative,
 "file read", { std.file.read("/tmp/deleteme"); },
 "array creation", { new char[32]; })
 ();


The issue here is automating the benchmark of a module, which would 
require some naming convention anyway.



Andrei


Re: Review of Andrei's std.benchmark

2012-09-20 Thread Andrei Alexandrescu

On 9/20/12 3:01 PM, foobar wrote:

On Thursday, 20 September 2012 at 12:35:15 UTC, Andrei Alexandrescu wrote:

Let's use the minimum. It is understood it's not what you'll see in
production, but it is an excellent proxy for indicative and
reproducible performance numbers.


Andrei


From the responses on the thread, there clearly isn't a "best way".


I don't quite agree. This is a domain in which intuition is having a 
hard time, and at least some of the responses come from an intuitive 
standpoint, as opposed to hard data.


For example, there's this opinion that taking the min, max, and average 
is the "fair" thing to do and the most informative. However, all noise 
in measuring timing is additive. Unless you talk about performance of 
entire large systems with networking, I/O, and the like, algorithms 
running in memory are inevitably spending time doing work, to which 
various sources of noise (system interrupts, clock quantization, 
benchmarking framework) just _add_ some time. Clearly these components 
do affect the visible duration of the algorithm, but if you want to 
improve it you need to remove the noise.



There are different use-cases with different tradeoffs so why not allow
the user to choose the policy best suited for their use-case?
I'd suggest providing a few reasonable common choices to choose from,
as well as a way to supply a user-defined calculation (function
pointer/delegate?)


Reasonable choices are great, but in this case it's a bit difficult to 
figure what's reasonable.



Andrei


Re: Review of Andrei's std.benchmark

2012-09-20 Thread Andrei Alexandrescu

On 9/20/12 2:37 PM, Jacob Carlborg wrote:

On 2012-09-20 14:36, Andrei Alexandrescu wrote:


Let's use the minimum. It is understood it's not what you'll see in
production, but it is an excellent proxy for indicative and reproducible
performance numbers.


Why not min, max and average?


For a very simple reason: unless the algorithm under benchmark is very 
long-running, max is completely useless, and it ruins average as well.


For virtually all benchmarks I've run, the distribution of timings is a 
half-Gaussian very concentrated around the minimum. Say you have a 
minimum of e.g. 73 us. Then there would be a lot of results close to 
that; the mode of the distribution would be very close, e.g. 75 us, and 
the more measurements you take, the closer the mode is to the minimum. 
Then you have a few timings up to e.g. 90 us. And finally you will 
inevitably have a few outliers at some milliseconds. Those are orders of 
magnitude larger than anything of interest and are caused by system 
interrupts that happened to fall in the middle of the measurement.


Taking those into consideration and computing the average with those 
outliers simply brings useless noise into the measurement process.
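
As a small self-contained illustration (the numbers are made up, but shaped 
like the distribution described above: most samples near the minimum, a few 
mid-range values, and one interrupt-sized outlier):

import std.algorithm.iteration : sum;
import std.algorithm.searching : minElement;
import std.stdio : writefln;

void main()
{
    double[] timings = [73, 74, 75, 75, 76, 78, 90, 3200]; // microseconds

    auto minimum = timings.minElement;           // 73 us, stable from run to run
    auto average = timings.sum / timings.length; // ~468 us, dominated by the outlier

    writefln("min = %s us, average = %s us", minimum, average);
}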



Andrei




Re: Review of Andrei's std.benchmark

2012-09-20 Thread Andrei Alexandrescu

On 9/20/12 10:05 AM, Manu wrote:

On 20 September 2012 15:36, Andrei Alexandrescu
<seewebsiteforem...@erdani.org> wrote:
Let's use the minimum. It is understood it's not what you'll see in
production, but it is an excellent proxy for indicative and
reproducible performance numbers.


If you do more than a single iteration, the minimum will virtually
always be influenced by ideal cache pre-population, which is
unrealistic.


To measure performance against a cold cache, you could always clear the 
cache using one of the available methods, see 
http://stackoverflow.com/questions/1756825/cpu-cache-flush. Probably 
std.benchmark could include a routine that does that. But performance on a 
cold cache would actually be the most unrealistic and uninformative, as loading 
the memory into cache will dominate the work that the algorithm is 
doing, so essentially the benchmark would evaluate the memory bandwidth 
against the working set of the algorithm. That may be occasionally 
useful, but I'd argue that most often the interest in benchmarking is to 
measure repeated application of a function, not occasional use of it.
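
For completeness, a hedged sketch of one way to approximate a cold-cache run 
along the lines of the linked thread: stream through a buffer larger than the 
last-level cache before each timed call (the 64 MB size is an assumption, and 
this only evicts data caches):

void evictDataCaches()
{
    static ubyte[] scratch;
    if (scratch is null)
        scratch = new ubyte[64 * 1024 * 1024];
    foreach (i, ref b; scratch)
        b = cast(ubyte) i; // touch every cache line so previously cached data is evicted
}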



Memory locality is often the biggest contributing
performance hazard in many algorithms, and usually the most
unpredictable. I want to know about that in my measurements.
Reproducibility is not as important to me as accuracy. And I'd rather be
conservative(/pessimistic) with the error.


What guideline would you apply to estimate 'real-world' time spent when
always working with hyper-optimistic measurements?


The purpose of std.benchmark is not to estimate real-world time. (That 
is the purpose of profiling.) Instead, benchmarking measures and 
provides a good proxy of that time for purposes of optimizing the 
algorithm. If work is done on improving the minimum time given by the 
benchmark framework, it is reasonable to expect that performance in-situ 
will also improve.



Andrei


GDC Explorer - an online disassembler for D

2012-09-20 Thread Andrei Alexandrescu
I've met Matt Godbolt, the author of the GCC Explorer at 
http://gcc.godbolt.org - a very handy online disassembler for GCC.


We got to talk a bit about D and he hacked together support for D by 
using gdc. Take a look at http://d.godbolt.org, I think it's pretty darn 
cool! I'm talking to him about integrating his work with our servers.



Andrei


Re: Review of Andrei's std.benchmark

2012-09-20 Thread Nick Sabalausky
On Tue, 18 Sep 2012 18:02:10 -0400
Andrei Alexandrescu  wrote:

> On 9/18/12 5:07 PM, "Øivind" wrote:
> > * For all tests, the best run is selected, but would it not be
> > reasonable in some cases to get the average value? Maybe excluding
> > the runs that are more than a couple std. deviations away from the
> > mean value..
> 
> After extensive tests with a variety of aggregate functions, I can
> say firmly that taking the minimum time is by far the best when it
> comes to assessing the speed of a function.
> 

*Ahem*: http://zedshaw.com/essays/programmer_stats.html

Your claim that the minimum time is sufficient is...ummm...extremely
unorthodox, to say the least. As such, you're going to need a far more
convincing argument than "It worked well for me."

I assume I don't need to preach that "Extraordinary claims require
extraordinary evidence". But, condensing benchmarks and statistics down
to "take the minimum" and saying that's sufficient is one heck of an
extraordinary claim. If you feel that you can provide sufficiently
extraordinary justification, then please do.

Otherwise, I think we'll need richer results. At the very least there
should be an easy way to get at the raw results programmatically
so we can run whatever stats/plots/visualizations/output-formats we
want. I didn't see anything like that browsing through the docs, but
it's possible I may have missed it.

That brings up another question too: I like the idea of a
one-stop-benchmarking-shop, much like we have for unittests, but maybe
reporting shouldn't be so tightly integrated and left more open for
integration with a proper statistics lib and more generalized
output abilities? But of course, that doesn't preclude having a nice
built-in, but optional, default report. (Again though, maybe I'm
overlooking something already in the module?)

One other nitpick: My initial impression is that the
"benchmark_relative_file read" stuff seems a bit kludgey (and
confusing to visually parse). Is there maybe a better way to handle
that? For example, inspired by getopt:

printBenchmarks!(
"file write", { std.file.write("/tmp/deleteme", "hello, world!"); },
BenchmarkOption.relative,
"file read", { std.file.read("/tmp/deleteme"); },
"array creation", { new char[32]; })
();



Re: Review of Andrei's std.benchmark

2012-09-20 Thread Andrei Alexandrescu

On 9/20/12 1:37 PM, Jacob Carlborg wrote:

On 2012-09-20 14:36, Andrei Alexandrescu wrote:


Let's use the minimum. It is understood it's not what you'll see in
production, but it is an excellent proxy for indicative and reproducible
performance numbers.


Why not min, max and average?


Because max and average are misleading and uninformative, as I explained.

Andrei



Re: CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-20 Thread Timon Gehr

On 09/21/2012 12:29 AM, Jonathan M Davis wrote:

...
In order for your foo function to be called, it must be fully compiled first
(including its entire body, since CTFE needs the full definition of the
function, not just its signature). The body cannot be fully compiled until the
template that it's using is instantiated. But that template can't be compiled
until foo has been compiled, because you're passing a call to foo to it as a
template argument. So, you have a circular dependency.

Normal recursion avoids this, because it only depends on the function's
signature, but what you're doing requires that the function be _run_ as part
of the process of defining it. That's an unbreakable circular dependency and
will never work. ...


It is not exactly unbreakable. This can work. Just delay spitting out
an error until the point where the execution actually depends upon its own
result.


Re: CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-20 Thread Timon Gehr

On 09/20/2012 11:22 PM, Jens Mueller wrote:

Hi,

I do not understand the following error message given the code:

string foo(string f)
{
    if (f == "somestring")
    {
        return "got somestring";
    }
    return bar!(foo("somestring"));
}

template bar(string s)
{
    enum bar = s;
}

With dmd v2.060 I get:
test.d(7): called from here: foo("somestring")
test.d(7): called from here: foo("somestring")
test.d(7): called from here: foo("somestring")
test.d(7): Error: expression foo("somestring") is not a valid template value argument
test.d(12): called from here: foo("somestring")
test.d(12): called from here: foo("somestring")
test.d(7): Error: template instance test.bar!(foo("somestring")) error instantiating

In line 7 I call the template bar. But I call it with the string that is
returned by the CTFE of foo("somestring"), which should return "got
somestring"; instead it seems that an expression is passed. How do I
force the evaluation of foo("somestring")?
I haven't found a bug on this.

Jens



You can file a diagnostics bug.

The issue is that CTFE can only interpret functions that are fully
analyzed and therefore the analysis of foo depends circularly on
itself. The compiler should spit out an error that indicates the
issue.

You could post an enhancement request to allow interpretation of
incompletely-analyzed functions, if you think it is of any use.

http://d.puremagic.com/issues


Re: CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-20 Thread Jonathan M Davis
On Friday, September 21, 2012 00:29:48 Jonathan M Davis wrote:
> As far as a function's behavior goes,
> it's identical regardless of whether it's run at compile time or runtime
> (save that __ctfe is true at compile time but not runtime).

Actually, that's not quite true (though it's very close). There are a couple 
of quirks such as the precision of floating point arithmetic differing and the 
exact value of NaN potentially being different (since there are multiple NaN 
values), but it's almost entirely true.

- Jonathan M Davis


Re: CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-20 Thread Jonathan M Davis
On Friday, September 21, 2012 00:11:51 Jens Mueller wrote:
> I thought foo is interpreted at compile time.
> There seems to be a subtle difference I'm not getting.
> Because you can do the factorial using CTFE even though you have
> recursion. I.e. there you have a call to the function itself. I.e. it
> can be compiled because you just insert a call to the function. But for
> a template you cannot issue something like call for instantiation.
> Have to think more about it. But your answer helps a lot. Pushes me in
> the right direction.

Okay. Straight up recursion works. So, with this code

int func(int value)
{
    if(value < 10)
        return func(value + 1);
    return value;
}

enum var = func(5);

var would be 10. The problem is that you're trying to pass the result of a 
recursive call as a template argument. As far as a function's behavior goes, 
it's identical regardless of whether it's run at compile time or runtime (save 
that __ctfe is true at compile time but not runtime). To quote the docs:

--
Any functions that execute at compile time must also be executable at 
run time. The compile time evaluation of a function does the equivalent 
of running the function at run time. This means that the semantics of a 
function cannot depend on compile time values of the function. For 
example:

int foo(char[] s) {
    return mixin(s);
}

const int x = foo("1");

is illegal, because the runtime code for foo() cannot be generated. A 
function template would be the appropriate method to implement this 
sort of thing.
--

You're doing something very similar to passing a function argument to a mixin 
statement, but in this case, it's passing the result of calling a function 
which doesn't exist yet (since it hasn't been fully compiled) to a template.

In order for your foo function to be called, it must be fully compiled first 
(including its entire body, since CTFE needs the full definition of the 
function, not just its signature). The body cannot be fully compiled until the 
template that it's using is instantiated. But that template can't be compiled 
until foo has been compiled, because you're passing a call to foo to it as a 
template argument. So, you have a circular dependency.

Normal recursion avoids this, because it only depends on the function's 
signature, but what you're doing requires that the function be _run_ as part 
of the process of defining it. That's an unbreakable circular dependency and 
will never work. You need to redesign your code so that you don't require a 
function to call itself while it's being defined. Being called at compile time 
is fine, but being called while it's being compiled is not.
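
For illustration, a minimal sketch of one restructuring that avoids the cycle: 
keep the recursion purely at the value level and instantiate bar outside foo's 
body, where foo is already fully compiled (the static assert is only there to 
show the evaluation happens at compile time):

string foo(string f)
{
    if (f == "somestring")
        return "got somestring";
    return foo("somestring"); // plain CTFE recursion, no template instantiation inside foo
}

template bar(string s)
{
    enum bar = s;
}

// foo is fully compiled by the time this instantiation is analyzed,
// so there is no circular dependency.
static assert(bar!(foo("anything else")) == "got somestring");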

- Jonathan M Davis


Re: CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-20 Thread Jens Mueller
Jonathan M Davis wrote:
> On Thursday, September 20, 2012 23:22:36 Jens Mueller wrote:
> > Hi,
> > 
> > I do not understand the following error message given the code:
> > 
> > string foo(string f)
> > {
> > if (f == "somestring")
> > {
> > return "got somestring";
> > }
> > return bar!(foo("somestring"));
> > }
> > 
> > template bar(string s)
> > {
> > enum bar = s;
> > }
> > 
> > I'll with dmd v2.060 get:
> > test.d(7): called from here: foo("somestring")
> > test.d(7): called from here: foo("somestring")
> > test.d(7): called from here: foo("somestring")
> > test.d(7): Error: expression foo("somestring") is not a valid template value
> > argument test.d(12): called from here: foo("somestring")
> > test.d(12): called from here: foo("somestring")
> > test.d(7): Error: template instance test.bar!(foo("somestring")) error
> > instantiating
> > 
> > In line 7 I call the template bar. But I call with the string that is
> > returned by the CTFE of foo("somestring") which should return "got
> > somestring" but instead it seems that an expression is passed. How do I
> > force the evaluation foo("somestring")?
> > I haven't found a bug on this.
> 
> Template arguments must be known at compile time. And even if you use foo at 
> compile time, it has to be compiled before you use it, so you can't call it 
> inside itself and pass that as a template argument. foo must be fully 
> compiled 
> before it can be called, and as it stands, it can't be fully compiled until 
> it's called. So... Yeah. Not going to work.

I thought foo was interpreted at compile time.
There seems to be a subtle difference I'm not getting.
You can do the factorial using CTFE even though you have recursion,
i.e. there you have a call to the function itself. It can be compiled
because you just insert a call to the function. But for a template you
cannot just emit a call; it has to be instantiated.
Have to think more about it. But your answer helps a lot. It pushes me in
the right direction.

Jens


Re: CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-20 Thread Jens Mueller
Simen Kjaeraas wrote:
> On Thu, 20 Sep 2012 23:22:36 +0200, Jens Mueller
>  wrote:
> 
> >string foo(string f)
> >{
> >if (f == "somestring")
> >{
> >return "got somestring";
> >}
> >return bar!(foo("somestring"));
> >}
> >
> >template bar(string s)
> >{
> >enum bar = s;
> >}
> 
> >In line 7 I call the template bar. But I call with the string that is
> >returned by the CTFE of foo("somestring") which should return "got
> >somestring"
> 
> When's it gonna get around to doing that? In order to figure out the
> return value of foo("somestring"), it will have to figure out the
> return value of foo("somestring"), and to do that...

There is no endless recursion. Note that foo("somestring") returns "got
somestring".
Am I wrong?

Jens


Re: CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-20 Thread Peter Alexander
I'm guessing the problem is that it's trying to run CTFE on a 
function whose full AST isn't known yet (because the body needs the 
CTFE-computed template argument, which needs the function, etc.)


This could work in theory, but I'm guessing the implementation is 
tricky.


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-20 Thread Jens Mueller
Jonathan M Davis wrote:
> On Thursday, September 20, 2012 22:55:23 Jens Mueller wrote:
> > You say that JUnit silently runs all unittests before the first
> > specified one, don't you?
> 
> Yes. At least, that was its behavior the last time that I used it (which was 
> admittedly a few years ago).
> 
> > If that is done silently that's indeed strange.
> 
> It could have been a quirk of their implementation, but I expect that it's to 
> avoid issues where a unit test relies on previous unit tests in the same 
> file. 
> If your unit testing functions (or unittest blocks in the case of D) have 
> _any_ dependencies on external state, then skipping any of them affects the 
> ones that you don't skip, possibly changing the result of the unit test (be 
> it 
> to success or failure).
> 
> Running more unittest blocks after a failure is similarly flawed, but at 
> least 
> in that case, you know that had a failure earlier in the module, which should 
> then tell you that you may not be able to trust further tests (but if you 
> still run them, it's at least then potentially possible to fix further 
> failures 
> at the same time - particularly if your tests don't rely on external state). 
> So, while not necessarily a great idea, it's not as bad to run subsequent 
> unittest blocks after a failure (especially if programmers are doing what 
> they're supposed to and making their unit tests independent).
> 
> However, what's truly insane IMHO is continuing to run a unittest block after 
> it's already had a failure in it. Unless you have exceedingly simplistic unit 
> tests, the failures after the first one mean pretty much _nothing_ and simply 
> clutter the results.

I sometimes have unittests like

assert(testProperty1());
assert(testProperty2());
assert(testProperty3());

And in these cases it would be useful if I got all of the assertion
failures. But you are very right that this should be used with great
care and knowing what you are doing. You may even lose sight of the
actual problem because of so many subsequent failures.

> > When has this been merged? It must have been after v2.060 was released.
> > Because I noticed some number at the end of the unittest function names.
> > But it was not the line number.
> 
> A couple of weeks ago IIRC. I'm pretty sure that it was after 2.060 was 
> released.

I just checked.
It was merged on Wed Sep 5 19:46:50 2012 -0700 (commit d3669f79813),
and v2.060 was released on the 2nd of August.
Meaning I could try calling these functions myself now that I know their
names.

Jens


Re: CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-20 Thread Jonathan M Davis
On Thursday, September 20, 2012 23:22:36 Jens Mueller wrote:
> Hi,
> 
> I do not understand the following error message given the code:
> 
> string foo(string f)
> {
> if (f == "somestring")
> {
> return "got somestring";
> }
> return bar!(foo("somestring"));
> }
> 
> template bar(string s)
> {
> enum bar = s;
> }
> 
> I'll with dmd v2.060 get:
> test.d(7): called from here: foo("somestring")
> test.d(7): called from here: foo("somestring")
> test.d(7): called from here: foo("somestring")
> test.d(7): Error: expression foo("somestring") is not a valid template value
> argument test.d(12): called from here: foo("somestring")
> test.d(12): called from here: foo("somestring")
> test.d(7): Error: template instance test.bar!(foo("somestring")) error
> instantiating
> 
> In line 7 I call the template bar. But I call with the string that is
> returned by the CTFE of foo("somestring") which should return "got
> somestring" but instead it seems that an expression is passed. How do I
> force the evaluation foo("somestring")?
> I haven't found a bug on this.

Template arguments must be known at compile time. And even if you use foo at 
compile time, it has to be compiled before you use it, so you can't call it 
inside itself and pass that as a template argument. foo must be fully compiled 
before it can be called, and as it stands, it can't be fully compiled until 
it's called. So... Yeah. Not going to work.

- Jonathan M Davis


Re: [OT] Was: totally satisfied :D

2012-09-20 Thread Nick Sabalausky
On Tue, 18 Sep 2012 00:29:11 -0700
Walter Bright  wrote:
> 
> I tend to snicker at companies that insist they only hire the top 1%.
> It seems that about 90% of the engineers out there must be in that
top 1%.
> 

I bet that's marketing-speak for "Our applicant-to-hire ratio is 100:1,
and naturally we pick the one we like best instead of the one we like
least."

(Either that or it's just a claim pulled right out of their ass.)



Re: Infer function template parameters

2012-09-20 Thread Jonathan M Davis
On Thursday, September 20, 2012 21:57:47 Jonas Drewsen wrote:
> In foreach statements the type can be inferred:
> 
> foreach (MyFooBar fooBar; fooBars) writeln(fooBar);
> same as:
> foreach (foobar; fooBars) writeln(fooBar);
> 
> This is nice and tidy.
> Wouldn't it make sense to allow the same for function templates
> as well:
> 
> auto min(L,R)(L a, R b)
> {
> return a < b;
> }
> 
> same as:
> 
> auto min(a,b)
> {
> return a < b;
> }
> 
> What am I missing (except some code that needs changing because
> only the param type and not the name has been specified in it)?

You don't want everything templated. Templated functions are fundamentally 
different. They don't exist until they're instantiated, and they're only 
instantiated because you call them. Sometimes, you want functions to always 
exist regardless of whether any of your code is calling them (particularly 
when dealing with libraries).

Another result of all of this is that templated functions can't be virtual, so 
your proposal would be a _huge_ problem for classes. Not to mention, errors 
with templated functions tend to be much nastier than with non-templated 
functions, even if it's not as bad as C++. Also, your proposal then means that 
we'd end up with templated functions without template constraints as a pretty 
normal thing, which would mean that such functions would frequently get called 
with types that don't work with them. To fix that, you'd have to add template 
constraints to such functions, which would be even more verbose than just 
giving the types like we do now.
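
As a concrete sketch of that last point (the constraint is written against the 
types, since that is all the information an inferred signature would have):

auto minConstrained(L, R)(L a, R b)
    if (is(typeof(L.init < R.init) : bool))
{
    return a < b ? a : b;
}

unittest
{
    assert(minConstrained(1, 2.0) == 1);
    // Without the constraint the error would appear deep inside the body;
    // with it, the call simply does not match.
    static assert(!__traits(compiles, minConstrained(1, "abc")));
}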

You really need to be able to control when something is templated or not. And 
your proposal is basically just a terser template syntax. Is it really all 
that more verbose to do

auto min(L, R)(L a, R b) {...}

rather than

auto min(a, b) {...}

And even if we added your syntax, we'd still need the current syntax, because 
you need to able to indicate which types go with which parameters even if it's 
just to say that two parameters have the same type.
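
A small sketch of the "same type" case, which the inferred form could not 
express:

// Both parameters must share one type T; min2(1, 2.5) is rejected at the
// call site instead of silently becoming a two-type template.
T min2(T)(T a, T b)
{
    return a < b ? a : b;
}

unittest
{
    assert(min2(3, 7) == 3);
    static assert(!__traits(compiles, min2(1, 2.5)));
}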

Also, what happens if you put types on some parameters but not others? Are 
those parameters given templated types? If so, a simple typo could silently 
turn your function into a templated function without you realizing it.

Then there's function overloading. If you wanted to overload a function in 
your proposal, you'd have to either still give the types or use template 
constraints, meaning that it can't be used with overloaded functions.

Another thing to consider is that in languages like Haskell where all 
parameter types are inferred, it's often considered good practice to give the 
types anyway (assuming that the language lets you - Haskell does), because the 
functions are then not only easier to understand, but the error messages are 
more sane.

So, I really don't think that this is a good idea. It's just a terser, less 
flexible, and more error-prone syntax for templates.

- Jonathan M Davis


Re: CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-20 Thread Simen Kjaeraas
On Thu, 20 Sep 2012 23:22:36 +0200, Jens Mueller   
wrote:



string foo(string f)
{
    if (f == "somestring")
    {
        return "got somestring";
    }
    return bar!(foo("somestring"));
}

template bar(string s)
{
    enum bar = s;
}



In line 7 I call the template bar. But I call it with the string that is
returned by the CTFE of foo("somestring") which should return "got
somestring"


When's it gonna get around to doing that? In order to figure out the
return value of foo("somestring"), it will have to figure out the
return value of foo("somestring"), and to do that...



I haven't found a bug on this.


That's because there is no bug.

--
Simen


CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-20 Thread Jens Mueller
Hi,

I do not understand the following error message given the code:

string foo(string f)
{
    if (f == "somestring")
    {
        return "got somestring";
    }
    return bar!(foo("somestring"));
}

template bar(string s)
{
    enum bar = s;
}

With dmd v2.060 I get:
test.d(7): called from here: foo("somestring")
test.d(7): called from here: foo("somestring")
test.d(7): called from here: foo("somestring")
test.d(7): Error: expression foo("somestring") is not a valid template value argument
test.d(12): called from here: foo("somestring")
test.d(12): called from here: foo("somestring")
test.d(7): Error: template instance test.bar!(foo("somestring")) error instantiating

In line 7 I call the template bar. But I call it with the string that is
returned by the CTFE of foo("somestring"), which should return "got
somestring"; instead it seems that an expression is passed. How do I
force the evaluation of foo("somestring")?
I haven't found a bug on this.

Jens


Re: [OT] Was: totally satisfied :D

2012-09-20 Thread Nick Sabalausky
On Thu, 20 Sep 2012 08:46:00 -0400
"Steven Schveighoffer"  wrote:

> On Wed, 19 Sep 2012 17:05:35 -0400, Nick Sabalausky  
>  wrote:
> 
> > On Wed, 19 Sep 2012 10:11:50 -0400
> > "Steven Schveighoffer"  wrote:
> >
> >> I cannot argue that Apple's audio volume isn't too simplistic for
> >> its own good.  AIUI, they have two "volumes", one for the ringer,
> >> and one for playing audio, games, videos, etc.
> >>
> >
> > There's also a separate one for alarms/alerts:
> > http://www.ipodnn.com/articles/12/01/13/user.unaware.that.alarm.going.off.was.his/
> 
> This makes sense.  Why would you ever want your alarm clock to
> "alarm silently"

I don't carry around my alarm clock everywhere I go.

Aside from that, if it happens to be set wrong, I damn sure don't want
it going off in a library, in a meeting, at the front row of a show,
etc.

> How would you wake up?

By using a real alarm clock?

Besides, we can trivially both have our own ways thanks to the simple
invention of "options". Unfortunately, Apple apparently seems to think
somebody's got that patented or something.

> This is another case of
> someone using the wrong tool for the job

Apparently so ;)

> 
> I don't know any examples of sounds that disobey the silent switch

There is no silent switch. The switch only affects *some* sounds, and
I'm not interested in memorizing which ones just so I can try to avoid
the others.

The only "silent switch" is the one I use: Just leave the fucking thing
in the car.

> except for the "find my iPhone" alert,

That's about the only one that actually does make any sense at all.

> > It's just unbelievably convoluted, over-engineered, and as far from
> > "simple" as could possibly be imagined. Basically, you have "volume
> > up" and "volume down", but there's so much damn modality (something
> > Apple *loves*, but it almost universally bad for UI design) that
> > they work pretty much randomly.
> 
> I think you exaggerate.  Just a bit.
> 

Not really (and note I said "pretty much randomly" not "truly
randomly").

Try listing out all the different volume rules (that you're *aware* of -
who knows what other hidden quirks there might be), all together, and I
think you may be surprised just how much complexity there is.

Then compare that to, for example, a walkman or other portable music
player (iTouch doesn't count, it's a PDA) which is 100% predictable and
trivially simple right from day one. You never even have to think about
it, the volume **just works**, period. The fact that the ijunk has
various other uses besides music is immaterial: It could have been
simple and easy and worked well, and they instead chose to make it
complex.

Not only that, but it would have been trivial to just offer an *option*
to turn that "smart" junk off. But then allowing a user to configure
their own property to their own liking just wouldn't be very "Apple",
now would it?

> >> BTW, a cool feature I didn't know for a long time is if you double
> >> tap the home button, your audio controls appear on the lock screen
> >> (play/pause, next previous song, and audio volume).  But I think
> >> you have to unlock to access ringer volume.
> >>
> >
> > That's good to know (I didn't know).
> >
> > Unfortunately, it still only eliminates one, maybe two, swipes from
> > an already-complex procedure, that on any sensible device would
> > have been one step: Reach down into the pocket to adjust the volume.
> 
> Well, for music/video, the volume buttons *do* work in locked mode.
> 

More complexity and modality! Great.

> >
> > How often has anyone ever had a volume POT go bad? I don't think
> > I've *ever* even had it happen. It's a solid, well-established
> > technology.
> 
> I have had several sound systems where the volume knob started  
> misbehaving, due to corrosion, dust, whatever.  You can hear it
> mostly when you turn the knob, and it has a scratchy sound coming
> from the speakers.
> 

Was that before or after the "three year old" mark?

> >
> > I don't use a mac, and I never will again. I spent about a year or
> > two with OSX last decade and I'll never go back for *any* reason.
> > Liked it at first, but the more I used it the more I hated it.
> 
> It's a required thing for iOS development :)

Uhh, like I said, it *isn't*. I've *already* built an iOS package on my
Win machine (again, using Marmalade, although I'd guess Corona and
Unity are likely the same story), which a co-worker has *already*
successfully run on his jailbroken iTouches and iPhone.

And the *only* reason they needed to be jailbroken is because we
haven't yet paid Apple's ransom for a signing certificate. Once we have
that, I can sign the .ipa right here on Win with Marmalade's deployment
tool.

The *only* thing unfortunately missing without a mac is submission to
the Big Brother store.

> I have recently
> experienced the exact opposite.  I love my mac, and I would never go
> back to Windows.

Not trying to "convert" you, just FWIW:

You might like Win7. It's very Mac-like.

Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-20 Thread Jonathan M Davis
On Thursday, September 20, 2012 22:55:23 Jens Mueller wrote:
> You say that JUnit silently runs all unittests before the first
> specified one, don't you?

Yes. At least, that was its behavior the last time that I used it (which was 
admittedly a few years ago).

> If that is done silently that's indeed strange.

It could have been a quirk of their implementation, but I expect that it's to 
avoid issues where a unit test relies on previous unit tests in the same file. 
If your unit testing functions (or unittest blocks in the case of D) have 
_any_ dependencies on external state, then skipping any of them affects the 
ones that you don't skip, possibly changing the result of the unit test (be it 
to success or failure).

Running more unittest blocks after a failure is similarly flawed, but at least 
in that case, you know that you had a failure earlier in the module, which should 
then tell you that you may not be able to trust further tests (but if you 
still run them, it's at least then potentially possible to fix further failures 
at the same time - particularly if your tests don't rely on external state). 
So, while not necessarily a great idea, it's not as bad to run subsequent 
unittest blocks after a failure (especially if programmers are doing what 
they're supposed to and making their unit tests independent).

However, what's truly insane IMHO is continuing to run a unittest block after 
it's already had a failure in it. Unless you have exceedingly simplistic unit 
tests, the failures after the first one mean pretty much _nothing_ and simply 
clutter the results.

> When has this been merged? It must have been after v2.060 was released.
> Because I noticed some number at the end of the unittest function names.
> But it was not the line number.

A couple of weeks ago IIRC. I'm pretty sure that it was after 2.060 was 
released.

- Jonathan M Davis


Re: Infer function template parameters

2012-09-20 Thread Timon Gehr

On 09/20/2012 10:52 PM, Peter Alexander wrote:

On Thursday, 20 September 2012 at 19:56:48 UTC, Jonas Drewsen wrote:

Wouldn't it make sense to allow the same for function templates as
well: 

What am I missing (except some code that needs changing because only
the param type and not the name has been specified in it)?


I can't see any implementation issues with it, but I think templates
should be an explicit choice when writing functions.



Leaving out the parameter type is an explicit choice.


Like it or not, templates still cause a lot of code bloat, complicate
linking, cannot be virtual, increase compilation resources, and generate
difficult to understand messages. They are a powerful tool, but need to
be used wisely.



The proposal does not make wise usage harder. It only makes usage more
concise in some cases.


Re: LDC blacklisted in Ubuntu

2012-09-20 Thread David Nadlinger
On Thursday, 20 September 2012 at 20:07:56 UTC, Jonas Drewsen 
wrote:
I've done some debs before and might be able to find some time 
to do it depending on how complex the package is.


I haven't tried LDC before though. Can you provide some info on 
how to get started with the LDC building/packaging?


This would be great!

The build process really shouldn't be more complicated than 
fetching sources and submodules from 
https://github.com/ldc-developers/ldc, running CMake and then 
"make install". See the README for a short description and a link 
to a longer one.


Packaging shouldn't really be any more difficult than that 
either, as we are using a pretty standard build system and have 
no exotic dependencies. Here are the Fedora and Arch package 
sources, maybe they are helpful:


https://admin.fedoraproject.org/pkgdb/acls/name/ldc
https://projects.archlinux.org/svntogit/community.git/tree/trunk/PKGBUILD?h=packages/ldc

David


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-20 Thread Tobias Pankrath
On Thursday, 20 September 2012 at 16:52:40 UTC, Johannes Pfau 
wrote:

snip


It should be possible to generate test cases programmatically [at 
compile time].


For instance if I have a program that reads files in format A and 
produces
B (e.g. a compiler) it should be possible to have a folder with 
both inputs and results and generate a test case for every 
possible input file.


(instead of one big testcase for every input file).
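
A hedged sketch of how that could be approximated today, assuming the list of 
input files is maintained by hand (compile-time directory enumeration is not 
available), the data directory is passed with -J, and compileA stands in for 
the program under test. Note that this still yields one unittest block 
covering all files, which is exactly the limitation being pointed out:

import std.meta : AliasSeq;

string compileA(string input) { return input; } // placeholder for the real translator

alias testInputs = AliasSeq!("case1.a", "case2.a"); // hypothetical file names

unittest
{
    foreach (name; testInputs)                    // unrolled at compile time
    {
        enum input    = import(name);             // string import of the test input
        enum expected = import(name ~ ".expected");
        assert(compileA(input) == expected, "mismatch for " ~ name);
    }
}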








Re: Infer function template parameters

2012-09-20 Thread Timon Gehr

On 09/20/2012 09:57 PM, Jonas Drewsen wrote:

...
What am I missing (except some code that needs changing because only
the param type and not the name has been specified in it)?



Nothing, that is about it. (C backwards-compatibility could maybe be
added) Of course, we could make upper case identifiers indicate
parameters without name and lower case identifiers indicate parameters
with templated types, keeping the breakages at a minimum. :o)

Note that other language changes would have to be made, eg:

void main(){
    int delegate(int) dg1 = x=>x; // currently ok, should stay ok
    auto foo(T)(T x){ return x; }
    int delegate(int) dg2 = &foo; // currently error, would become ok
}

(x=>x would become a template delegate literal following your proposal)


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-20 Thread Jens Mueller
Jonathan M Davis wrote:
> On Thursday, September 20, 2012 18:53:38 Johannes Pfau wrote:
> > Proposal:
> [snip]
> 
> In general, I'm all for instrumenting druntime such that unit testing tools 
> could run unit tests individually and present their output in a customized 
> manner, just so long as it doesn't really change how unit tests work now as 
> far as compiling with -unittest and running your program goes. The only 
> change 
> that might be desirable would be making it so that after a unittest block 
> fails, subsequent unittest blocks within its module are still run.
> 
> I _would_ point out that running any unittest blocks in a function without 
> running every other unittest block before them is error prone (running 
> further 
> unittet blocks after a failure is risky enough). Last time I used JUnit, even 
> though it claimed to run unitests individually and tell you the result, it 
> didn't really. It might have not run all of the unit tests after the one you 
> asked for, but it still ran all of the ones before it. Not doing so would be 
> p 
> problem in any situation where one unit test affects state that further unit 
> tests use (much as that's arguably bad practice).

You say that JUnit silently runs all unittests before the first
specified one, don't you? If that is done silently that's indeed
strange.
When to abort the execution of a unittest or all unittests of a module
is indeed a delicate question. But even though there is a strong default
to abort in case of any failure, I see merit in allowing the user to
change this behavior on demand.

> Regardless, I confess that I don't care too much about the details of how 
> this 
> sort of thing is done so long as it doesn't really change how they work from 
> the standpoint of compiling with -unittest and running your executable. The 
> _one_ feature that I really wish we had was the ability to name unit tests, 
> since then you'd get a decent name in stack traces, but the fact that the 
> pull 
> request which makes it so that unittest block functions are named after their 
> line number has finally been merged in makes that less of an issue.

When has this been merged? It must have been after v2.060 was released.
Because I noticed some number at the end of the unittest function names.
But it was not the line number.

Jens


Re: Infer function template parameters

2012-09-20 Thread Peter Alexander
On Thursday, 20 September 2012 at 19:56:48 UTC, Jonas Drewsen 
wrote:
Wouldn't it make sense to allow the same for function templates 
as well: 


What am I missing (except some code that needs changing because 
only the param type and not the name has been specified in it)?


I can't see any implementation issues with it, but I think 
templates should be an explicit choice when writing functions.


Like it or not, templates still cause a lot of code bloat, 
complicate linking, cannot be virtual, increase compilation 
resources, and generate difficult to understand messages. They 
are a powerful tool, but need to be used wisely.





Re: LDC blacklisted in Ubuntu

2012-09-20 Thread David Nadlinger
On Thursday, 20 September 2012 at 18:03:18 UTC, David Nadlinger 
wrote:
Unfortunately, nobody on the core dev team uses Ubuntu for 
their daily work, or has other experiences with Debian packages.


I didn't mean "packages" of course, but "packaging". Knowing how 
to use dpkg or build the occasional .deb is one thing, but 
knowing the Debian conventions well enough to get a new package 
accepted is an entirely different one.


David


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-20 Thread Jens Mueller
Johannes Pfau wrote:
> Current situation:
> The compiler combines all unittests of a module into one huge function.
> If a unittest in a module fails, the rest won't be executed. The
> runtime (which is responsible for calling that per module unittest
> method) must always run all unittests of a module.
> 
> Goal:
> The runtime / test runner can decide for every test if it wants to
> continue testing or abort. It should also be possible to run single
> tests and skip some tests. As a secondary goal the runtime should
> receive the filename and line number of the unittest declaration.
> 
> Proposal:
> Introduce a new 'MInewunitTest' ModuleInfo flag in object.d and in the
> compiler. If MInewunitTest is present, the moduleinfo does not contain
> a unittester function. Instead it contains an array (slice) of UnitTest
> structs. So the new module property looks like this:
> 
> @property UnitTest[] unitTest() nothrow pure;
> 
> 
> the UnitTest struct looks like this:
> 
> struct UnitTest
> {
>string name; //Not used yet
>string fileName;
>uint line;
>void function() testFunc;
> }
> 
> 
> The compiler generates a static array of all UnitTest objects for every
> module and sets the UnitTest[] slice in the moduleinfo to point to this
> static array. As the compiler already contains individual functions for
> every unittest, this isn't too difficult.
> 
> 
> Proof of Concept:
> I haven't done any dmd hacking before so this might be terrible code,
> but it is working as expected and can be used as a guide on how to
> implement this:
> https://github.com/jpf91/druntime/compare/newUnittest
> https://github.com/jpf91/dmd/compare/newUnittest
> 
> In this POC the MInewunitTest flag is not present yet, the new method
> is always used. Also the implementation in druntime is only minimally
> changed. The compiler changes allow an advanced testrunner to do a lot
> more:
> 
> * Be a GUI tool / use colored output / ...
> * Allow to run single, specific tests, skip tests, ...
> * Execute tests in a different process, communicate with IPC. This way
>   we can deal with segmentation faults in unit tests.

Very recently I have polished a tool I wrote called dtest.
http://jkm.github.com/dtest/dtest.html
And the single thing I want to support but failed to implement is
calling individual unittests. I looked into it. I thought I could find a
way to inspect the assembly with some C library. But I couldn't make it
work. Currently each module has a __modtest which calls the unittests.

I haven't looked into segmentation faults, but I think you can already
handle them. You just need to provide your own segmentation
fault handler. I should add this to dtest. dtest also lets you continue
executing the tests if an assertion fails, and it can turn failures into
break points. When you use GNU ld you can even continue and break on any
thrown Throwable.

In summary I think everything can be done already but not on an
individual unittest level. But I also think that this is important and
this restriction alone is enough to merge your pull request after a
review.
But the changes should be backward compatible. I think there is no need
to make the runtime more complex. Just let it execute the single
function __modtest as it was but add the array of unittests. I'd be
happy to extend dtest to use this array because I found no other
solution.
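
For illustration, a sketch of what consuming the proposed array could look 
like; the UnitTest struct is copied from the proposal quoted above, and 
nothing like this exists in today's druntime:

import std.stdio : writefln;

struct UnitTest
{
    string name;
    string fileName;
    uint line;
    void function() testFunc;
}

bool runTests(UnitTest[] tests)
{
    bool allPassed = true;
    foreach (test; tests)
    {
        try
        {
            test.testFunc();
            writefln("%s:%s SUCCESS", test.fileName, test.line);
        }
        catch (Throwable t)
        {
            allPassed = false; // keep going: one failure no longer aborts the whole module
            writefln("%s:%s FAILURE: %s", test.fileName, test.line, t.msg);
        }
    }
    return allPassed;
}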

> Sample output:
> Testing generated/linux/debug/32/unittest/std/array
> std/array.d:86  SUCCESS
> std/array.d:145 SUCCESS
> std/array.d:183 SUCCESS
> std/array.d:200 SUCCESS
> std/array.d:231 SUCCESS
> std/array.d:252 SUCCESS
> std/array.d:317 SUCCESS

See
https://buildhive.cloudbees.com/job/jkm/job/dtest/16/console
for dtest's output.
$ ./dtest --output=xml
Testing 1 modules: ["dtest_unittest"]
== Run 1 of 1 ==
PASS dtest_unittest

All modules passed: ["dtest_unittest"]

This also generates a JUnit/GTest-compatible XML report.

Executing ./failing gives more interesting output:
$ ./failing --abort=asserts
Testing 3 modules: ["exception", "fail", "pass"]
== Run 1 of 1 ==
FAIL exception
object.Exception@tests/exception.d(3): first exception
object.Exception@tests/exception.d(4): second exception
FAIL fail
core.exception.AssertError@fail(5): unittest failure
PASS pass

Failed modules (2 of 3): ["exception", "fail"]

I also found some inconsistency in the output when asserts have no
message. It'll be nice if that could be fixed too.
http://d.puremagic.com/issues/show_bug.cgi?id=8652

> The perfect solution:
> Would allow user defined attributes on tests, so you could name them,
> assign categories, etc. But till we have those user defined attributes,
> this seems to be a good solution.

This is orthogonal to your proposal. You just want every unittest to be
exposed as a function. How to define attributes for functions is a
different story.

Jens


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-20 Thread Jonathan M Davis
On Thursday, September 20, 2012 18:53:38 Johannes Pfau wrote:
> Proposal:
[snip]

In general, I'm all for instrumenting druntime such that unit testing tools 
could run unit tests individually and present their output in a customized 
manner, just so long as it doesn't really change how unit tests work now as 
far as compiling with -unittest and running your program goes. The only change 
that might be desirable would be making it so that after a unittest block 
fails, subsequent unittest blocks within its module are still run.

I _would_ point out that running any unittest blocks in a module without 
running every other unittest block before them is error prone (running further 
unittest blocks after a failure is risky enough). Last time I used JUnit, even 
though it claimed to run unit tests individually and tell you the result, it 
didn't really. It might not have run all of the unit tests after the one you 
asked for, but it still ran all of the ones before it. Not doing so would be a 
problem in any situation where one unit test affects state that further unit 
tests use (much as that's arguably bad practice).

Regardless, I confess that I don't care too much about the details of how this 
sort of thing is done so long as it doesn't really change how they work from 
the standpoint of compiling with -unittest and running your executable. The 
_one_ feature that I really wish we had was the ability to name unit tests, 
since then you'd get a decent name in stack traces, but the fact that the pull 
request which makes it so that unittest block functions are named after their 
line number has finally been merged in makes that less of an issue.

- Jonathan M Davis


Re: LDC blacklisted in Ubuntu

2012-09-20 Thread Jonas Drewsen
On Thursday, 20 September 2012 at 18:03:18 UTC, David Nadlinger 
wrote:
On Thursday, 20 September 2012 at 17:26:25 UTC, Joseph Rushton 
Wakeling wrote:
Some rather urgent news: LDC has just been blacklisted in 
Ubuntu.


It would be great if somebody from the D community experienced 
in packaging could jump in to help us on this front.


I've done some debs before and might be able to find some time to 
do it depending on how complex the package is.


I haven't tried LDC before though. Can you provide some info on 
how to get started with the LDC building/packaging?


/Jonas



Re: Infer function template parameters

2012-09-20 Thread Jonas Drewsen
On Thursday, 20 September 2012 at 19:56:48 UTC, Jonas Drewsen 
wrote:

In foreach statements the type can be inferred:



Clicked the send button too early by mistake, but I guess you get 
the idea.


/Jonas




Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-20 Thread Johannes Pfau
On Thu, 20 Sep 2012 20:51:47 +0200,
Jacob Carlborg wrote:

> On 2012-09-20 19:37, Johannes Pfau wrote:
> 
> > That's just an example output. We could leave the druntime
> > test runner as is and don't change the output at all. We could only
> > print the failure messages. Or we could collect all failures and
> > print them at the end. All that can easily be changed in druntime
> > (and I'd argue we should enhance the druntime interface, so
> > everyone could implement a custom test runner), but we need the
> > compiler changes to allow this.
> 
> It's already possible, just set a unit test runner using 
> Runtime.moduleUnitTester.
> 

Oh right, I thought that interface was more restrictive. So the only
changes necessary in druntime are to adapt to the new compiler
interface.

The new dmd code is still necessary, as it makes it possible to access
all unittests of a module individually. The current code only
provides one function for all unittests in a module.
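
For reference, a minimal sketch of the existing hook, which only offers 
module-level granularity (the reporting format here is just an assumption):

import core.runtime : Runtime;
import std.stdio : writeln;

shared static this()
{
    Runtime.moduleUnitTester = function bool()
    {
        bool allPassed = true;
        foreach (m; ModuleInfo)        // every module known to the runtime
        {
            if (auto fp = m.unitTest)  // the single combined unittest function
            {
                try { fp(); writeln("PASS ", m.name); }
                catch (Throwable t) { writeln("FAIL ", m.name, ": ", t.msg); allPassed = false; }
            }
        }
        return allPassed;
    };
}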


Re: Review of Andrei's std.benchmark

2012-09-20 Thread foobar
On Thursday, 20 September 2012 at 12:35:15 UTC, Andrei 
Alexandrescu wrote:

On 9/20/12 2:42 AM, Manu wrote:

On 19 September 2012 12:38, Peter Alexander wrote:

    The fastest execution time is rarely useful to me, I'm almost always much
    more interested in the slowest execution time.
    In realtime software, the slowest time is often the only important factor,
    everything must be designed to tolerate this possibility.
    I can also imagine other situations where multiple workloads are competing
    for time, the average time may be more useful in that case.

    The problem with slowest is that you end up with the occasional OS
    hiccup or GC collection which throws the entire benchmark off. I see
    your point, but unless you can prevent the OS from interrupting, the
    time would be meaningless.

So then we need to start getting tricky, and choose the slowest one that
is not beyond an order of magnitude or so outside the average?


The "best way" according to some of the people who've advised 
my implementation of the framework at Facebook is to take the 
mode of the measurements distribution, i.e. the time at the 
maximum density.


I implemented that (and it's not easy). It yielded numbers 
close to the minimum, but less stable and needing more 
iterations to become stable (when they do get indeed close to 
the minimum).
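
For the curious, a rough sketch of one straightforward way to estimate that 
mode by binning (bin width and units are assumptions; it also hints at why 
many samples are needed before the estimate stabilizes):

double estimateMode(double[] timingsUs, double binWidthUs = 1.0)
{
    import std.algorithm.searching : minElement;

    size_t[size_t] histogram;
    immutable base = timingsUs.minElement;
    foreach (t; timingsUs)
        ++histogram[cast(size_t)((t - base) / binWidthUs)];

    size_t bestBin, bestCount;
    foreach (bin, count; histogram)
        if (count > bestCount) { bestCount = count; bestBin = bin; }

    return base + bestBin * binWidthUs; // lower edge of the densest bin
}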


Let's use the minimum. It is understood it's not what you'll 
see in production, but it is an excellent proxy for indicative 
and reproducible performance numbers.



Andrei


From the responses on the thread, there clearly isn't a "best way".
There are different use-cases with different tradeoffs so why not 
allow the user to choose the policy best suited for their 
use-case?
I'd suggest providing a few reasonable common choices to choose 
from, as well as a way to supply a user-defined calculation 
(function pointer/delegate?)


Re: Why do not have `0o` prefix for octal numbers?

2012-09-20 Thread monarch_dodra
On Thursday, 20 September 2012 at 18:12:50 UTC, Steven 
Schveighoffer wrote:
On Wed, 19 Sep 2012 12:15:19 -0400, monarch_dodra 
 wrote:



On Wednesday, 19 September 2012 at 16:02:41 UTC, Hauleth wrote:
Some time ago I asked on SO why most languages have a `0` 
prefix for octal numbers. My opinion is the same as the D 
designers': it causes a lot of bugs. But why are octal numbers 
available only by using `std.conv.octal`?


AFAIK: It is experimental. "The $(D octal) facility is 
intended as an experimental facility to replace _octal 
literals starting with $(D '0'), which many find confusing."


That comment is very old.  It is no longer experimental.

If you want an explanation, see here:

http://www.drdobbs.com/tools/user-defined-literals-in-the-d-programmi/229401068

-Steve


Very interesting read. TY.


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-20 Thread Jacob Carlborg

On 2012-09-20 19:37, Johannes Pfau wrote:


That's just an example output. We could leave the druntime
test runner as is and don't change the output at all. We could only
print the failure messages. Or we could collect all failures and print
them at the end. All that can easily be changed in druntime (and
I'd argue we should enhance the druntime interface, so everyone could
implement a custom test runner), but we need the compiler changes to
allow this.


It's already possible: just set a unit test runner using 
Runtime.moduleUnitTester.
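
For example, a minimal sketch of such a runner (using druntime's 
core.runtime hook; the reporting details and how you treat the return 
value are up to you):

import core.runtime;

shared static this()
{
    Runtime.moduleUnitTester = function bool()
    {
        foreach (m; ModuleInfo)
        {
            if (m is null)
                continue;
            if (auto test = m.unitTest)  // the module's combined unittest function
                test();                  // wrap in try/catch to customize reporting
        }
        return true;  // report success so execution continues as usual
    };
}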


--
/Jacob Carlborg


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-20 Thread Dmitry Olshansky

On 20-Sep-12 22:18, bearophile wrote:

Johannes Pfau:


The perfect solution:
Would allow user defined attributes on tests, so you could name them,
assign categories, etc. But till we have those user defined attributes,
this seems to be a good solution.


We have @disable, maybe it's usable for unittests too :-)


We have version(none)
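
For instance (a trivial illustration):

version (none) unittest
{
    // compiled out entirely until the version is switched on
    assert(false, "never compiled in");
}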




--
Dmitry Olshansky


Re: Review of Andrei's std.benchmark

2012-09-20 Thread Jacob Carlborg

On 2012-09-20 14:36, Andrei Alexandrescu wrote:


Let's use the minimum. It is understood it's not what you'll see in
production, but it is an excellent proxy for indicative and reproducible
performance numbers.


Why not min, max and average?

--
/Jacob Carlborg


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-20 Thread bearophile

Johannes Pfau:


The perfect solution:
Would allow user defined attributes on tests, so you could name 
them,
assign categories, etc. But till we have those user defined 
attributes,

this seems to be a good solution.


We have @disable, maybe it's usable for unittests too :-)

Bye,
bearophile


Re: Why do not have `0o` prefix for octal numbers?

2012-09-20 Thread Steven Schveighoffer
On Wed, 19 Sep 2012 12:15:19 -0400, monarch_dodra   
wrote:



On Wednesday, 19 September 2012 at 16:02:41 UTC, Hauleth wrote:
Some time ago I asked on SO why most languages have a `0` prefix for  
octal numbers. My opinion is the same as the D designers': it causes a  
lot of bugs. But why are octal numbers available only by using  
`std.conv.octal`?


AFAIK: It is experimental. "The $(D octal) facility is intended as an  
experimental facility to replace _octal literals starting with $(D '0'),  
which many find confusing."


That comment is very old.  It is no longer experimental.

If you want an explanation, see here:

http://www.drdobbs.com/tools/user-defined-literals-in-the-d-programmi/229401068

-Steve


Re: LDC blacklisted in Ubuntu

2012-09-20 Thread David Nadlinger
On Thursday, 20 September 2012 at 17:26:25 UTC, Joseph Rushton 
Wakeling wrote:
Some rather urgent news: LDC has just been blacklisted in 
Ubuntu.


It is not really news, as the LDC version in the Debian repo has 
not been updated for ages. But yes, it would definitely be 
important to have an LDC package in as many distribution repos as 
possible.


This seems to be entirely down to no one keeping the Debian 
universe up to date with the latest LDC work. :-(


Could someone on the LDC team get in touch with Ubuntu and see 
what can be done about this?


As far as I can see, we would at the very least need somebody to 
maintain the Debian/Ubuntu packages for this. Unfortunately, 
nobody on the core dev team uses Ubuntu for their daily work or 
has much experience with Debian packaging.


It would be great if somebody from the D community experienced in 
packaging could jump in to help us on this front. We'd be happy 
to help with any questions, and I don't think the packaging 
process should be particularly difficult (LDC builds fine on 
Ubuntu, and Arch and Fedora are already shipping recent 
versions). The thing is just that creating good packages for a 
system you are not intimately familiar with is quite hard, and we 
are already chronically lacking manpower anyway.


David


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-20 Thread Johannes Pfau
Am Thu, 20 Sep 2012 19:27:00 +0200
schrieb "Jesse Phillips" :


> 
> I didn't read everything in your post, where does the FAILURE 
> show up. If it is intermixed with the SUCCESS, then I could see 
> that as a problem.
> 
> While I can't say I've hated/liked the lack of output for 
> unittest success, I believe my feeling would be the same with 
> this.

That's just an example output. We could leave the druntime
test runner as is and not change the output at all. We could only
print the failure messages. Or we could collect all failures and print
them at the end. All that can easily be changed in druntime (and
I'd argue we should enhance the druntime interface, so everyone could
implement a custom test runner), but we need the compiler changes to
allow this.

In the end, you have an array of UnitTests (ordered as they appear in
the source file). A UnitTest has a filename, a line number and a function
member (the actual unittest function). What you do with this is
completely up to you.


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-20 Thread Jesse Phillips
On Thursday, 20 September 2012 at 16:52:40 UTC, Johannes Pfau 
wrote:



Sample output:
Testing generated/linux/debug/32/unittest/std/array
std/array.d:86  SUCCESS
std/array.d:145 SUCCESS
std/array.d:183 SUCCESS
std/array.d:200 SUCCESS
std/array.d:231 SUCCESS
std/array.d:252 SUCCESS
std/array.d:317 SUCCESS

The perfect solution:
Would allow user defined attributes on tests, so you could name 
them,
assign categories, etc. But till we have those user defined 
attributes,

this seems to be a good solution.


I didn't read everything in your post; where does the FAILURE 
show up? If it is intermixed with the SUCCESS, then I could see 
that as a problem.


While I can't say I've hated/liked the lack of output for 
unittest success, I believe my feeling would be the same with 
this.


LDC blacklisted in Ubuntu

2012-09-20 Thread Joseph Rushton Wakeling

Some rather urgent news: LDC has just been blacklisted in Ubuntu.
https://bugs.launchpad.net/ubuntu/+source/ldc/+bug/941549

This seems to be entirely down to no one keeping the Debian 
universe up to date with the latest LDC work. :-(


Could someone on the LDC team get in touch with Ubuntu and see 
what can be done about this?


Extending unittests [proposal] [Proof Of Concept]

2012-09-20 Thread Johannes Pfau
Current situation:
The compiler combines all unittests of a module into one huge function.
If a unittest in a module fails, the rest won't be executed. The
runtime (which is responsible for calling that per module unittest
method) must always run all unittests of a module.

Goal:
The runtime / test runner can decide for every test if it wants to
continue testing or abort. It should also be possible to run single
tests and skip some tests. As a secondary goal the runtime should
receive the filename and line number of the unittest declaration.

Proposal:
Introduce a new 'MInewunitTest' ModuleInfo flag in object.d and in the
compiler. If MInewunitTest is present, the moduleinfo does not contain
a unittester function. Instead it contains an array (slice) of UnitTest
structs. So the new module property looks like this:

@property UnitTest[] unitTest() nothrow pure;


the UnitTest struct looks like this:

struct UnitTest
{
   string name; //Not used yet
   string fileName;
   uint line;
   void function() testFunc;
}


The compiler generates a static array of all UnitTest objects for every
module and sets the UnitTest[] slice in the moduleinfo to point to this
static array. As the compiler already contains individual functions for
every unittest, this isn't too difficult.


Proof of Concept:
I haven't done any dmd hacking before so this might be terrible code,
but it is working as expected and can be used as a guide on how to
implement this:
https://github.com/jpf91/druntime/compare/newUnittest
https://github.com/jpf91/dmd/compare/newUnittest

In this POC the MInewunitTest flag is not present yet; the new method
is always used. Also, the implementation in druntime is only minimally
changed. The compiler changes allow an advanced test runner to do a lot
more:

* Be a GUI tool / use colored output / ...
* Allow to run single, specific tests, skip tests, ...
* Execute tests in a different process, communicate with IPC. This way
  we can deal with segmentation faults in unit tests.

Sample output:
Testing generated/linux/debug/32/unittest/std/array
std/array.d:86  SUCCESS
std/array.d:145 SUCCESS
std/array.d:183 SUCCESS
std/array.d:200 SUCCESS
std/array.d:231 SUCCESS
std/array.d:252 SUCCESS
std/array.d:317 SUCCESS

The perfect solution:
Would allow user defined attributes on tests, so you could name them,
assign categories, etc. But till we have those user defined attributes,
this seems to be a good solution.
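
For illustration, a sketch of what a custom runner could do with the 
proposed per-module UnitTest[] (hypothetical consumer code, assuming 
the proposal above is in place):

import std.stdio : writefln;

void runAllTests()
{
    foreach (m; ModuleInfo)
    {
        if (m is null)
            continue;
        foreach (test; m.unitTest)  // the proposed UnitTest[] property
        {
            try
            {
                test.testFunc();
                writefln("%s:%s SUCCESS", test.fileName, test.line);
            }
            catch (Throwable t)
            {
                writefln("%s:%s FAILURE: %s", test.fileName, test.line, t.msg);
                // a failure no longer aborts the module's remaining tests
            }
        }
    }
}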



Re: no-arg constructor for structs (again)

2012-09-20 Thread Timon Gehr

On 09/20/2012 10:11 AM, Felix Hufnagel wrote:

...
but what's even more confusing: you are not allowed to declare a
no-arg constructor, yet you are allowed to declare one where all
parameters have default values. But then, how do you call it
without args? auto k = S(); doesn't work?




struct S{
    this(int=0){}
}
void main(){
    S s;
    s.__ctor();
}


Re: classes structs

2012-09-20 Thread Timon Gehr

On 09/20/2012 03:43 AM, David Currie wrote:

On Tuesday, 18 September 2012 at 18:42:33 UTC, Timon Gehr wrote:

On 09/18/2012 07:07 AM, David Currie wrote:

[ALL CAPS]


It does not matter who is the loudest guy in the room. If you have a
point to make, just make it. (Stating the conclusion is not making a
point. Skipping forward and predicting polite refusal does not help.)

Most of the statements in the OP are inaccurate.

The best way to get in touch with the language is by reading the online
documentation and by experimenting with the compiler (prepare for some
bugs and unspecified corner cases). Reading the newsgroup helps too.

Usually it is best to double-check any claims about the language
expressed online, using the reference implementation.


Apologies for SHOUTING. I am unfamiliar with forum syntax and etiquette
I merely wished *stressing* some words.



I see. I wouldn't stress more than one or two words per post anyway.
It is fatiguing for the reader and tends to get annoying, without
strengthening the point.


What is OP and perhaps why are most statements inaccurate?


This is the original post:

On 09/15/2012 12:19 AM, David Currie wrote:

At the risk of appearing ignorant, I don't know everything about D.
However in D I have noticed the following.

It is a policy decision in D that a class is ALWAYS on the heap and passed
by REFERENCE.


This is the default, but do as you wish.


(I know there is a keyword to put a class object on the stack
but this is not facile and needing a workaround is poor language design).


Built-in scoped classes are going away.


This is effectively FORCED Java.


No. Forced means there is no other way.


A D struct is on the stack


Not necessarily.


and is NOT a class


Yes.


and has NO inheritance.



Composition of value types and alias this achieve both inheritance and 
subtyping. There is no built-in method overriding facility for value 
types, so just use function pointers.
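
A minimal sketch of that approach (illustrative only):

struct Base
{
    int x;
    int twice() { return 2 * x; }
}

struct Derived
{
    Base base;
    alias base this;                 // subtyping: a Derived is usable as a Base

    int function(ref Derived) hook;  // poor man's "override" slot
}

int useBase(Base b) { return b.twice(); }

void main()
{
    Derived d;
    d.x = 21;                        // forwarded to d.base.x via alias this
    assert(useBase(d) == 42);        // implicit Derived -> Base conversion
}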


I have issues with this philosophy.

It seems FUNDAMENTAL to me that a programmer needs both stack and heap
objects


This is not fundamental. Stack objects are just 'nice to have'. If
there is an execution stack at all, of course.


and should KNOW when to use each and should ALWAYS have a choice.



Well, he does. The preferred usage is prescribed at the declaration
site. The programmer who defines the type should know better, but his
decision can always be overridden if this appears to be fundamental.


ALL struct VARIABLES when declared are initialised to their .init value.


Not all of them, no.

S s = void;


Just in case a programmer "forgets" to initialize them.


No, unless a programmer "forgets" to not initialize them.


This is like using a sledgehammer instead of a scalpel.


It is like defaulting to the sledgehammer when the scalpel is usually 
not appropriate.



Could you answer me WHY??
ALL classes when declared are instantiated on the heap
and their constructor called.


No.

class C{} // no run time behaviour
C c; // initialized to null


Again I ask WHY??

Why can't the programmer have the freedom to build his own objects
when he wants to with the compiler advising of errors ?



Because scoped classes conflict with the infinite lifetime model. One
programmer's freedom restricts another programmer. It is always a
matter of trade-offs. classes implement the Java model of OO.
structs have few limitations, and if absolutely needed can be used
together with unsafe constructs to [br|tw]eak the OO model. Depending
on who you ask, this may actually be undesirable.

The only way to get correct code is by proving it correct. Restricting
the constructs the programmer uses can make a proof easier, so this can
be a very good thing. Not that this would matter a lot for D at this 
point, of course, but reasoning about code is important even in
languages that make this notoriously difficult.


Of course I have more to say about this but I need answers to these
questions to proceed.






How does one get to the newsgroups? I only got here because Walter gave
me a link. I would gratefully welcome links.



Subscribe to news.digitalmars.com or use the web interface: 
http://forum.dlang.org/





Re: Weird Link Error

2012-09-20 Thread freeman

I started seeing these same errors after installing 2.6.  Perhaps
it is a linker problem tied to some conf file (in debian or dmd)?
The crude solution that works for me is to delete / re-establish
soft-linked libraries.

On Thursday, 20 September 2012 at 13:08:19 UTC, Daniel wrote:
I have searched everywhere and I can't find anything so I 
decided to come here. I have a simple Hello World program in 
the file Test.d, it compiles just fine but when I try this...


[daniel@arch D]$ dmd Test.o
/usr/lib/libphobos2.a(dmain2_459_1a5.o): In function 
`_D2rt6dmain24mainUiPPaZi7runMainMFZv':
(.text._D2rt6dmain24mainUiPPaZi7runMainMFZv+0x10): undefined 
reference to `_Dmain'
/usr/lib/libphobos2.a(deh2_43b_525.o): In function 
`_D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable':
(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable+0x4): 
undefined reference to `_deh_beg'
/usr/lib/libphobos2.a(deh2_43b_525.o): In function 
`_D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable':
(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable+0xc): 
undefined reference to `_deh_beg'
/usr/lib/libphobos2.a(deh2_43b_525.o): In function 
`_D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable':
(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable+0x13): 
undefined reference to `_deh_end'
/usr/lib/libphobos2.a(deh2_43b_525.o): In function 
`_D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable':
(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable+0x36): 
undefined reference to `_deh_end'
/usr/lib/libphobos2.a(thread_18f_1b8.o): In function 
`_D4core6thread6Thread6__ctorMFZC4core6thread6Thread':
(.text._D4core6thread6Thread6__ctorMFZC4core6thread6Thread+0x1d): 
undefined reference to `_tlsend'
/usr/lib/libphobos2.a(thread_18f_1b8.o): In function 
`_D4core6thread6Thread6__ctorMFZC4core6thread6Thread':
(.text._D4core6thread6Thread6__ctorMFZC4core6thread6Thread+0x24): 
undefined reference to `_tlsstart'
/usr/lib/libphobos2.a(thread_19f_6e4.o): In function 
`thread_attachThis':
(.text.thread_attachThis+0xb7): undefined reference to 
`_tlsstart'
/usr/lib/libphobos2.a(thread_19f_6e4.o): In function 
`thread_attachThis':

(.text.thread_attachThis+0xbc): undefined reference to `_tlsend'
/usr/lib/libphobos2.a(thread_17d_1b8.o): In function 
`_D4core6thread6Thread6__ctorMFPFZvkZC4core6thread6Thread':
(.text._D4core6thread6Thread6__ctorMFPFZvkZC4core6thread6Thread+0x1d): 
undefined reference to `_tlsend'
/usr/lib/libphobos2.a(thread_17d_1b8.o): In function 
`_D4core6thread6Thread6__ctorMFPFZvkZC4core6thread6Thread':
(.text._D4core6thread6Thread6__ctorMFPFZvkZC4core6thread6Thread+0x27): 
undefined reference to `_tlsstart'
/usr/lib/libphobos2.a(thread_17e_1b8.o): In function 
`_D4core6thread6Thread6__ctorMFDFZvkZC4core6thread6Thread':
(.text._D4core6thread6Thread6__ctorMFDFZvkZC4core6thread6Thread+0x1d): 
undefined reference to `_tlsend'
/usr/lib/libphobos2.a(thread_17e_1b8.o): In function 
`_D4core6thread6Thread6__ctorMFDFZvkZC4core6thread6Thread':
(.text._D4core6thread6Thread6__ctorMFDFZvkZC4core6thread6Thread+0x27): 
undefined reference to `_tlsstart'
/usr/lib/libphobos2.a(thread_17a_713.o): In function 
`thread_entryPoint':

(.text.thread_entryPoint+0x64): undefined reference to `_tlsend'
/usr/lib/libphobos2.a(thread_17a_713.o): In function 
`thread_entryPoint':
(.text.thread_entryPoint+0x6a): undefined reference to 
`_tlsstart'

collect2: error: ld returned 1 exit status
--- errorlevel 1

I would try to solve the problem myself, except that I have no 
clue wtf this means.





Re: Review of Andrei's std.benchmark

2012-09-20 Thread Manu
On 20 September 2012 15:36, Andrei Alexandrescu <seewebsiteforem...@erdani.org> wrote:

> On 9/20/12 2:42 AM, Manu wrote:
>
>> On 19 September 2012 12:38, Peter Alexander wrote:
>>
>> The fastest execution time is rarely useful to me, I'm almost
>> always much
>> more interested in the slowest execution time.
>> In realtime software, the slowest time is often the only
>> important factor,
>> everything must be designed to tolerate this possibility.
>> I can also imagine other situations where multiple workloads are
>> competing
>> for time, the average time may be more useful in that case.
>>
>>
>> The problem with slowest is that you end up with the occasional OS
>> hiccup or GC collection which throws the entire benchmark off. I see
>> your point, but unless you can prevent the OS from interrupting, the
>> time would be meaningless.
>>
>>
>> So then we need to start getting tricky, and choose the slowest one that
>> is not beyond an order of magnitude or so outside the average?
>>
>
> The "best way" according to some of the people who've advised my
> implementation of the framework at Facebook is to take the mode of the
> measurements distribution, i.e. the time at the maximum density.
>
> I implemented that (and it's not easy). It yielded numbers close to the
> minimum, but less stable and needing more iterations to become stable (when
> they do get indeed close to the minimum).
>
> Let's use the minimum. It is understood it's not what you'll see in
> production, but it is an excellent proxy for indicative and reproducible
> performance numbers.


If you do more than a single iteration, the minimum will virtually always
be influenced by ideal cache pre-population, which is unrealistic. Memory
locality is often the biggest contributing performance hazard in many
algorithms, and usually the most unpredictable. I want to know about that
in my measurements.
Reproducibility is not as important to me as accuracy. And I'd rather be
conservative (pessimistic) with the error.

What guideline would you apply to estimate 'real-world' time spent when
always working with hyper-optimistic measurements?


Re: Weird Link Error

2012-09-20 Thread Daniel
On Thursday, 20 September 2012 at 13:38:03 UTC, Adam D. Ruppe 
wrote:

On Thursday, 20 September 2012 at 13:19:08 UTC, Daniel wrote:
Oh wow I had the main function inside a class, I can't believe 
the answer was so simple. I feel like an idiot now.


The linker errors are really hard to read if you haven't seen 
them before (and sometimes even then)...


Yeah I'm new to D if you couldn't tell. I figured it must have 
been something I did wrong when I installed dmd, but I guess it 
was just a dumb mistake.


Re: Weird Link Error

2012-09-20 Thread Adam D. Ruppe

On Thursday, 20 September 2012 at 13:19:08 UTC, Daniel wrote:
Oh wow I had the main function inside a class, I can't believe 
the answer was so simple. I feel like an idiot now.


The linker errors are really hard to read if you haven't seen 
them before (and sometimes even then)...


Re: Weird Link Error

2012-09-20 Thread Jens Mueller
Daniel wrote:
> I have searched everywhere and I can't find anything so I decided to
> come here. I have a simple Hello World program in the file Test.d,
> it compiles just fine but when I try this...

Can you attach Test.d?
It looks like you didn't define a main function.

> [daniel@arch D]$ dmd Test.o
> /usr/lib/libphobos2.a(dmain2_459_1a5.o): In function
> `_D2rt6dmain24mainUiPPaZi7runMainMFZv':
> (.text._D2rt6dmain24mainUiPPaZi7runMainMFZv+0x10): undefined
> reference to `_Dmain'
> /usr/lib/libphobos2.a(deh2_43b_525.o): In function
> `_D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable':
> (.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable+0x4):
> undefined reference to `_deh_beg'
> /usr/lib/libphobos2.a(deh2_43b_525.o): In function
> `_D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable':
> (.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable+0xc):
> undefined reference to `_deh_beg'
> /usr/lib/libphobos2.a(deh2_43b_525.o): In function
> `_D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable':
> (.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable+0x13):
> undefined reference to `_deh_end'
> /usr/lib/libphobos2.a(deh2_43b_525.o): In function
> `_D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable':
> (.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable+0x36):
> undefined reference to `_deh_end'
> /usr/lib/libphobos2.a(thread_18f_1b8.o): In function
> `_D4core6thread6Thread6__ctorMFZC4core6thread6Thread':
> (.text._D4core6thread6Thread6__ctorMFZC4core6thread6Thread+0x1d):
> undefined reference to `_tlsend'
> /usr/lib/libphobos2.a(thread_18f_1b8.o): In function
> `_D4core6thread6Thread6__ctorMFZC4core6thread6Thread':
> (.text._D4core6thread6Thread6__ctorMFZC4core6thread6Thread+0x24):
> undefined reference to `_tlsstart'
> /usr/lib/libphobos2.a(thread_19f_6e4.o): In function
> `thread_attachThis':
> (.text.thread_attachThis+0xb7): undefined reference to `_tlsstart'
> /usr/lib/libphobos2.a(thread_19f_6e4.o): In function
> `thread_attachThis':
> (.text.thread_attachThis+0xbc): undefined reference to `_tlsend'
> /usr/lib/libphobos2.a(thread_17d_1b8.o): In function
> `_D4core6thread6Thread6__ctorMFPFZvkZC4core6thread6Thread':
> (.text._D4core6thread6Thread6__ctorMFPFZvkZC4core6thread6Thread+0x1d):
> undefined reference to `_tlsend'
> /usr/lib/libphobos2.a(thread_17d_1b8.o): In function
> `_D4core6thread6Thread6__ctorMFPFZvkZC4core6thread6Thread':
> (.text._D4core6thread6Thread6__ctorMFPFZvkZC4core6thread6Thread+0x27):
> undefined reference to `_tlsstart'
> /usr/lib/libphobos2.a(thread_17e_1b8.o): In function
> `_D4core6thread6Thread6__ctorMFDFZvkZC4core6thread6Thread':
> (.text._D4core6thread6Thread6__ctorMFDFZvkZC4core6thread6Thread+0x1d):
> undefined reference to `_tlsend'
> /usr/lib/libphobos2.a(thread_17e_1b8.o): In function
> `_D4core6thread6Thread6__ctorMFDFZvkZC4core6thread6Thread':
> (.text._D4core6thread6Thread6__ctorMFDFZvkZC4core6thread6Thread+0x27):
> undefined reference to `_tlsstart'
> /usr/lib/libphobos2.a(thread_17a_713.o): In function
> `thread_entryPoint':
> (.text.thread_entryPoint+0x64): undefined reference to `_tlsend'
> /usr/lib/libphobos2.a(thread_17a_713.o): In function
> `thread_entryPoint':
> (.text.thread_entryPoint+0x6a): undefined reference to `_tlsstart'
> collect2: error: ld returned 1 exit status
> --- errorlevel 1
> 
> I would try to solve the problem myself, except that I have no clue
> wtf this means.

These are linker errors. Just read the first two lines:

> /usr/lib/libphobos2.a(dmain2_459_1a5.o): In function
> `_D2rt6dmain24mainUiPPaZi7runMainMFZv':
> (.text._D2rt6dmain24mainUiPPaZi7runMainMFZv+0x10): undefined
> reference to `_Dmain'

The linker looks for a D main function since it is referenced in the
function with mangled name _D2rt6dmain24mainUiPPaZi7runMainMFZv. Because
it cannot find it (i.e. the reference is undefined), you get this
error and the subsequent ones.

Jens


Re: Weird Link Error

2012-09-20 Thread Daniel
On Thursday, 20 September 2012 at 13:12:41 UTC, Adam D. Ruppe 
wrote:

On Thursday, 20 September 2012 at 13:08:19 UTC, Daniel wrote:

undefined reference to `_Dmain'


Does your program have a main() function?


Oh wow I had the main function inside a class, I can't believe 
the answer was so simple. I feel like an idiot now.


Re: Weird Link Error

2012-09-20 Thread Adam D. Ruppe

On Thursday, 20 September 2012 at 13:08:19 UTC, Daniel wrote:

 undefined reference to `_Dmain'


Does your program have a main() function?
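
For reference, the linker expects a module-level main, e.g.:

import std.stdio;

void main()  // at module scope, not a method inside a class
{
    writeln("Hello, world!");
}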



Weird Link Error

2012-09-20 Thread Daniel
I have searched everywhere and I can't find anything so I decided 
to come here. I have a simple Hello World program in the file 
Test.d, it compiles just fine but when I try this...


[daniel@arch D]$ dmd Test.o
/usr/lib/libphobos2.a(dmain2_459_1a5.o): In function 
`_D2rt6dmain24mainUiPPaZi7runMainMFZv':
(.text._D2rt6dmain24mainUiPPaZi7runMainMFZv+0x10): undefined 
reference to `_Dmain'
/usr/lib/libphobos2.a(deh2_43b_525.o): In function 
`_D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable':
(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable+0x4): 
undefined reference to `_deh_beg'
/usr/lib/libphobos2.a(deh2_43b_525.o): In function 
`_D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable':
(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable+0xc): 
undefined reference to `_deh_beg'
/usr/lib/libphobos2.a(deh2_43b_525.o): In function 
`_D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable':
(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable+0x13): 
undefined reference to `_deh_end'
/usr/lib/libphobos2.a(deh2_43b_525.o): In function 
`_D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable':
(.text._D2rt4deh213__eh_finddataFPvZPS2rt4deh29FuncTable+0x36): 
undefined reference to `_deh_end'
/usr/lib/libphobos2.a(thread_18f_1b8.o): In function 
`_D4core6thread6Thread6__ctorMFZC4core6thread6Thread':
(.text._D4core6thread6Thread6__ctorMFZC4core6thread6Thread+0x1d): 
undefined reference to `_tlsend'
/usr/lib/libphobos2.a(thread_18f_1b8.o): In function 
`_D4core6thread6Thread6__ctorMFZC4core6thread6Thread':
(.text._D4core6thread6Thread6__ctorMFZC4core6thread6Thread+0x24): 
undefined reference to `_tlsstart'
/usr/lib/libphobos2.a(thread_19f_6e4.o): In function 
`thread_attachThis':

(.text.thread_attachThis+0xb7): undefined reference to `_tlsstart'
/usr/lib/libphobos2.a(thread_19f_6e4.o): In function 
`thread_attachThis':

(.text.thread_attachThis+0xbc): undefined reference to `_tlsend'
/usr/lib/libphobos2.a(thread_17d_1b8.o): In function 
`_D4core6thread6Thread6__ctorMFPFZvkZC4core6thread6Thread':
(.text._D4core6thread6Thread6__ctorMFPFZvkZC4core6thread6Thread+0x1d): 
undefined reference to `_tlsend'
/usr/lib/libphobos2.a(thread_17d_1b8.o): In function 
`_D4core6thread6Thread6__ctorMFPFZvkZC4core6thread6Thread':
(.text._D4core6thread6Thread6__ctorMFPFZvkZC4core6thread6Thread+0x27): 
undefined reference to `_tlsstart'
/usr/lib/libphobos2.a(thread_17e_1b8.o): In function 
`_D4core6thread6Thread6__ctorMFDFZvkZC4core6thread6Thread':
(.text._D4core6thread6Thread6__ctorMFDFZvkZC4core6thread6Thread+0x1d): 
undefined reference to `_tlsend'
/usr/lib/libphobos2.a(thread_17e_1b8.o): In function 
`_D4core6thread6Thread6__ctorMFDFZvkZC4core6thread6Thread':
(.text._D4core6thread6Thread6__ctorMFDFZvkZC4core6thread6Thread+0x27): 
undefined reference to `_tlsstart'
/usr/lib/libphobos2.a(thread_17a_713.o): In function 
`thread_entryPoint':

(.text.thread_entryPoint+0x64): undefined reference to `_tlsend'
/usr/lib/libphobos2.a(thread_17a_713.o): In function 
`thread_entryPoint':

(.text.thread_entryPoint+0x6a): undefined reference to `_tlsstart'
collect2: error: ld returned 1 exit status
--- errorlevel 1

I would try to solve the problem myself, except that I have no 
clue wtf this means.


Re: [OT] Was: totally satisfied :D

2012-09-20 Thread Steven Schveighoffer
On Wed, 19 Sep 2012 17:05:35 -0400, Nick Sabalausky  
 wrote:



On Wed, 19 Sep 2012 10:11:50 -0400
"Steven Schveighoffer"  wrote:


I cannot argue that Apple's audio volume isn't too simplistic for its
own good.  AIUI, they have two "volumes", one for the ringer, and one
for playing audio, games, videos, etc.



There's also a separate one for alarms/alerts:
http://www.ipodnn.com/articles/12/01/13/user.unaware.that.alarm.going.off.was.his/


This makes sense.  Why would you ever want your alarm clock to "alarm  
silently"?  How would you wake up?  This is another case of someone using  
the wrong tool for the job (for reminders, use the new reminder feature,  
or use an appointment with an alert; those obey the silent switch).


And the volume is set by the ringer, it's not a separate volume.  It's  
just that it doesn't obey the silent switch.  If it did I'd be pissed,  
because I frequently turn my phone to silent at night, but expect the  
alarm to wake me up.



Apple actually thought that was a good idea.


Because it is.


Plus, my understanding is that one of Apple's explicit design principles
is that if a user-prompted action is something that's "expected" to
make a sound (by whatever *Apple* decides is "expected", naturally),
then to hell with the user's volume setting, it should make a sound
anyway.


I don't know of any sounds that disobey the silent switch except for  
the "find my iPhone" alert and the alarm clock, both of which it would  
be quite foolish to silence.


Really, when you take the silent switch into account, the sound system  
works adequately for most people.



It's just unbelievably convoluted, over-engineered, and as far from
"simple" as could possibly be imagined. Basically, you have "volume up"
and "volume down", but there's so much damn modality (something Apple
*loves*, but it's almost universally bad for UI design) that they
work pretty much randomly.


I think you exaggerate.  Just a bit.


I think if they simply made the volume buttons control the ringer
while locked and not playing music, it would solve the problem.



I very much disagree. Then when you take it out to use it, everything
will *still* be surprisingly too loud (or quiet). Just not when a call
comes in...


The ringer volume affects almost all the incidental sounds, the click for  
keyboard typing, the lock/unlock sounds, alert sounds, alarm volume, etc.   
The audio volume affects basically music, video, and game sounds.



BTW, a cool feature I didn't know for a long time is if you double
tap the home button, your audio controls appear on the lock screen
(play/pause, next previous song, and audio volume).  But I think you
have to unlock to access ringer volume.



That's good to know (I didn't know).

Unfortunately, it still only eliminates one, maybe two, swipes from an
already-complex procedure, that on any sensible device would have been
one step: Reach down into the pocket to adjust the volume.


Well, for music/video, the volume buttons *do* work in locked mode.





It's more moving parts to break.  I wouldn't like it.  Just my
opinion.



How often has anyone ever had a volume POT go bad? I don't think I've
*ever* even had it happen. It's a solid, well-established technology.


I have had several sound systems where the volume knob started  
misbehaving, due to corrosion, dust, whatever.  You can hear it mostly  
when you turn the knob, and it has a scratchy sound coming from the  
speakers.



If you want to develop for only jailbroken phones, you basically
alienate most users of iPhone.  It's not a viable business model
IMO.  Yes, it sucks to have to jump through apple's hoops, but having
access to millions of users is very much worth it.



No, no, no, I'd jailbreak it for *testing*. Like I said, I'd
begrudgingly still pay Apple's ransom for publishing, because what
other realistic option is there?


I wouldn't do that if it were me.  You might find yourself adding features  
that aren't allowed or available on non-jailbroken phones, and then, when  
you go to publish, find out your whole design is not feasible.



Oh, when you develop apps, it's quite easy to install on the phone:
you just click "run" from Xcode and select your device; you don't
ever have to start iTunes (though iTunes will auto-start every time
you plug in the phone, you can disable this in iTunes; more annoying
is that iPhoto *always* starts, and I can't figure out how to stop
that).  From then on, the app is installed.  The issue is setting up
all the certificates via Xcode and their web portal to get that to
work (you should only have to do this once).  I think the process has
streamlined a bit: you used to have to create an app ID for each app
and select which devices were authorized to install it.  Now I think
you get a wildcard app ID, but you still have to register each
device.



I don't use a mac, and I never will again. I spent about a year or two
with OSX last decade and I'll never go back for *any* reason. 

Re: Review of Andrei's std.benchmark

2012-09-20 Thread Andrei Alexandrescu

On 9/20/12 2:42 AM, Manu wrote:

On 19 September 2012 12:38, Peter Alexander <peter.alexander...@gmail.com> wrote:

The fastest execution time is rarely useful to me, I'm almost
always much
more interested in the slowest execution time.
In realtime software, the slowest time is often the only
important factor,
everything must be designed to tolerate this possibility.
I can also imagine other situations where multiple workloads are
competing
for time, the average time may be more useful in that case.


The problem with slowest is that you end up with the occasional OS
hiccup or GC collection which throws the entire benchmark off. I see
your point, but unless you can prevent the OS from interrupting, the
time would be meaningless.


So then we need to start getting tricky, and choose the slowest one that
is not beyond an order of magnitude or so outside the average?


The "best way" according to some of the people who've advised my 
implementation of the framework at Facebook is to take the mode of the 
measurements distribution, i.e. the time at the maximum density.


I implemented that (and it's not easy). It yielded numbers close to the 
minimum, but less stable and needing more iterations to become stable 
(when they do get indeed close to the minimum).


Let's use the minimum. It is understood it's not what you'll see in 
production, but it is an excellent proxy for indicative and reproducible 
performance numbers.



Andrei


From APL

2012-09-20 Thread bearophile
The paper touches only a small subset of what's needed to write 
modern programs; it's mostly about array operations and related 
matters. It compares some parts of the old Fortran 88 with parts 
of the dead APL language. The author seems fond of APL, and 
several things written in the paper are unfair:


"Fortran 88 Arrays - Paper Clips and Rubber Bands" (2001), by 
Robert Bernecky:

http://www.snakeisland.com/fortran8.htm

Some of the ideas of APL are widely used in modern functional 
languages. D contains array operations and Phobos contains some 
higher order operations similar to some APL verbs and adverbs.


--

Section 2.3 is about scan operations, which are like reduce or 
fold but keep all the intermediate results too:


+\ of 3 1 2 4

is 3 4 6 10

Some lazy scans are present in the Haskell Prelude too, and in 
Mathematica (the Prelude contains functions and constants that 
are loaded by default):

http://zvon.org/other/haskell/Outputprelude/scanl_f.html

I think scans are not present in Phobos. Maybe one or two are 
worth adding.
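
For illustration, an eager scan is easy to sketch in D (hypothetical 
helper, not in Phobos):

import std.functional : binaryFun;
import std.range;

// Like reduce/fold, but keeps every intermediate result.
ElementType!R[] scan(alias fun = "a + b", R)(R range)
{
    alias f = binaryFun!fun;
    ElementType!R[] result;
    ElementType!R acc;
    bool first = true;
    foreach (e; range)
    {
        acc = first ? e : f(acc, e);
        first = false;
        result ~= acc;
    }
    return result;
}

unittest
{
    assert(scan([3, 1, 2, 4]) == [3, 4, 6, 10]);          // +\ of 3 1 2 4
    assert(scan!"a * b"([1, 2, 3, 4]) == [1, 2, 6, 24]);  // running product
}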


--

Section 2.5 suggests generalizing the dot product:

If the + and * of the Fortran 77 DO loops for inner product are 
replaced by other functions, a whole family of interesting inner 
products appear.


Some examples of usage (in J language):

Associative search   *./..=y
Inverted associative x+./..~:y
Minima of residues for primesx<./..|y
Transitive closure step on Booleans  y+./..*.y
Minima of maxima x<./..>.y

Maybe a higher order function named "dot" (that takes two 
callables) is worth adding to Phobos. But I am not sure.
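
A sketch of what such a higher-order "dot" might look like 
(hypothetical, not in Phobos):

import std.algorithm;
import std.functional : binaryFun;
import std.range;

// Generalized inner product: combine paired elements with `mul`,
// then fold the results with `add`; the default is the usual dot product.
auto dot(alias add = "a + b", alias mul = "a * b", R1, R2)(R1 a, R2 b)
{
    return zip(a, b)
        .map!(t => binaryFun!mul(t[0], t[1]))
        .reduce!(binaryFun!add);
}

unittest
{
    assert(dot([1, 2, 3], [4, 5, 6]) == 32);      // 1*4 + 2*5 + 3*6
    assert(dot!(min, max)([1, 5], [4, 2]) == 4);  // "minima of maxima" on these inputs
}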


--

Section 2.6 reminds us that a transpose() is a commonly useful 
function. A transpose is handy to have in Phobos.


Bye,
bearophile


Re: Reference semantic ranges and algorithms (and std.random)

2012-09-20 Thread Jonathan M Davis
On Thursday, September 20, 2012 12:57:03 monarch_dodra wrote:
> BTW: std.container also has MakeContainter, but, AFAIK, I've
> never seen ANYONE use it :/

What std.container has is make, which is supposed to construct a type where it 
goes by default (classes on the heap with new, and structs on the stack). The 
idea is definitely useful, and I have a pull request to improve upon it:

https://github.com/D-Programming-Language/phobos/pull/756

But I don't think that I've ever seen anyone use std.container.make either. 
The new version will be more useful though (particularly since it'll work on 
more than just structs and classes).
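
For reference, usage looks roughly like this (treat the exact overloads 
as my assumption about the module rather than a spec):

import std.container;

void main()
{
    auto arr = make!(Array!int)(1, 2, 3);         // struct container, no explicit new
    auto rbt = make!(RedBlackTree!int)(3, 1, 2);  // class container, heap-allocated
    assert(arr.length == 3 && rbt.length == 3);
}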

- Jonathan M Davis


Re: Reference semantic ranges and algorithms (and std.random)

2012-09-20 Thread monarch_dodra
On Thursday, 20 September 2012 at 10:46:22 UTC, Johannes Pfau 
wrote:


That's what I did in std.digest. All Digests have a start() method,
even if it's not necessary for that specific Digest. It's probably a
good solution if you want a uniform interface for types which need
initialization and types which don't.

But there's also another nice solution which should work: introduce a
new makeRNG template function. This function checks whether the RNG
has a seed function and calls it if available. Then you can do this
for all RNGs:

auto rng  = makeRNG!MersenneTwister();
auto rng2 = makeRNG!MinstdRand();


These are good suggestions, but they are also breaking changes :/ 
I *AM* writing them down though, should we ever go with Jonathan 
M Davis's suggestion of changing the module.


BTW: std.container also has MakeContainter, but, AFAIK, I've 
never seen ANYONE use it :/


Re: Reference semantic ranges and algorithms (and std.random)

2012-09-20 Thread Johannes Pfau
Am Thu, 20 Sep 2012 12:23:12 +0200
schrieb "monarch_dodra" :
> 
> That is a good point, I'd also make the "default" heap allocated, 
> but give a way to access a stack allocated payload.
> 
> Regarding the "developer mistake", the problem is that currently:
> "Misnstdrand a;"
> Will create a valid and seeded PRNG that is ready for use, so we 
> can't break that.

Well, for by-reference types we'd need initialization, so there's
probably no way not to break it. We'd probably have to do the
initialization check in release mode for some Phobos releases,
but after some time it should be changed to a debug-only warning.

> 
> Arguably though, the argument holds for Mersenne twister, which 
> needs a function call to be (default) seeded.
> 
> HOWEVER I find that:
> auto a = MersenneTwister();  //Not seeded and invalid
> auto b = MersenneTwister(5); //Seeded and valid
> is confusing, especially since MersenneTwister provides 
> "seed()" to default-seed it.
> 
> I'd rather have:
> auto a = MersenneTwister();  //Not *yet* seeded: It will be done 
> on the fly...
> auto b = MersenneTwister(5); //Seeded and valid
> 
> But AGAIN, on the other hand, if you change back again to your 
> proposed stack allocated MersenneTwisterImpl:
> auto a = MersenneTwister();  //Not Seeded, but not assertable
> auto b = MersenneTwister(5); //Seeded and valid
> 
> It really feels like there is no perfect solution.
> 
> 
> The truth is that I would have rather ALL prngs not have ANY 
> constructors, and that they ALL required an explicit seed: This 
> would be uniform, and not have any surprises.
> 
> OR
> 
> That creating a prng would seed it with the default seed if 
> nothing is specified, meaning that a prng is ALWAYS valid.
> 
> It feels like the current behavior is a bastardly hybrid...

That's what I did in std.digest. All Digests have a start() method,
even if it's not necessary for that specific Digest. It's probably a
good solution if you want a uniform interface for types which need
initialization and types which don't.

But there's also another nice solution which should work: introduce a
new makeRNG template function. This function checks whether the RNG has a
seed function and calls it if available. Then you can do this for all RNGs:

auto rng  = makeRNG!MersenneTwister();
auto rng2 = makeRNG!MinstdRand();
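
A sketch of that helper (hypothetical, not in std.random; using the 
actual engine names Mt19937 and MinstdRand):

import std.random : Mt19937, MinstdRand;

auto makeRNG(RNG, Args...)(Args args)
{
    auto rng = RNG(args);
    // Default-seed only when no seed was given and the engine exposes seed().
    static if (Args.length == 0 && is(typeof(rng.seed())))
        rng.seed();
    return rng;
}

unittest
{
    auto rng  = makeRNG!Mt19937();     // explicitly default-seeded
    auto rng2 = makeRNG!MinstdRand();  // harmless for engines already valid by default
    auto rng3 = makeRNG!Mt19937(42u);  // a user-provided seed is left untouched
}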


Re: Reference semantic ranges and algorithms (and std.random)

2012-09-20 Thread monarch_dodra
On Thursday, 20 September 2012 at 09:58:41 UTC, Johannes Pfau 
wrote:

Am Thu, 20 Sep 2012 08:51:28 +0200
schrieb "monarch_dodra" :

> Moving to classes would definitely break code, but it should 
> be possible to
> make them reference types simply by making it so that their 
> internal state is

> in a separate object held by a pointer.

I was thinking of doing that. The problem with this (as I've 
run into and stated in another thread), is a problem of 
initialization: The simpler PRNGs are init'ed seeded, and are 
ready for use immediately. Changing to this approach would 
break the initialization, as shown in this post:


How would the internal state be allocated? malloc + refcounting 
or GC?


I'm not really happy about that as I'd like to avoid allocation
(especially GC) whenever possible. Using a PRNG only locally in a
function seems like a valid use case to me, and it currently doesn't
require allocation. As the similar problem with
std.digest/std.algorithm.copy has shown, value type ranges seem not
very well supported in Phobos.

I wonder why we don't pass all struct/value type ranges by ref? Is
there a reason this wouldn't work?

Or we could leave the PRNGs as is and provide reference wrappers. This
would allow placing the PRNG on the stack while still making it work
with all range functions. But it's also cumbersome and error prone.

Best solution is probably to have wrappers which allocate by default,
but also allow passing a reference to a stack-allocated value. Then
make the wrappers the default, so the default is safe and easy to use.


Pseudo-code:

struct RNG_Impl
{
    uint front();
    void popFront();
    bool empty();
}

struct RNG
{
    RNG_Impl* impl;

    this(ref RNG_Impl impl)
    {
        this.impl = &impl;   // wrap a caller-provided, stack-allocated payload
    }

    void initialize()
    {
        assert(!impl);       // either initialize() or the constructor, not both
        impl = new RNG_Impl();
    }
}

RNG_Impl impl;
RNG(impl).take(5); // no allocation (but must not leak references...)


Regarding the initialization check: I'd avoid the check in 
release
mode. Not initializing a struct is a developer mistake and 
should be
found in debug mode. I think it's unlikely that error handling 
code can

handle such a situation anyway. But you could check how
std.typecons.RefCounted handles this, as it also need explicit
initialization.


That is a good point, I'd also make the "default" heap allocated, 
but give a way to access a stack allocated payload.


Regarding the "developer mistake", the problem is that currently:
"Misnstdrand a;"
Will create a valid and seeded PRNG that is ready for use, so we 
can't break that.


Arguably though, the argument holds for Mersenne twister, which 
needs a function call to be (default) seeded.


HOWEVER I find that:
auto a = MersenneTwister();  //Not seeded and invalid
auto b = MersenneTwister(5); //Seeded and valid
is confusing, especially since MersenneTwister provides 
"seed()" to default-seed it.


I'd rather have:
auto a = MersenneTwister();  //Not *yet* seeded: It will be done 
on the fly...

auto b = MersenneTwister(5); //Seeded and valid

But AGAIN, on the other hand, if you change back again to your 
proposed stack allocated MersenneTwisterImpl:

auto a = MersenneTwister();  //Not Seeded, but not assertable
auto b = MersenneTwister(5); //Seeded and valid

It really feels like there is no perfect solution.


The truth is that I would rather ALL PRNGs had NO 
constructors and ALL required an explicit seed: this 
would be uniform, with no surprises.


OR

that creating a PRNG would seed it with the default seed if 
nothing is specified, meaning that a PRNG is ALWAYS valid.


It feels like the current behavior is a bastardly hybrid...


Re: no-arg constructor for structs (again)

2012-09-20 Thread Don Clugston

On 20/09/12 11:09, Jonathan M Davis wrote:

On Thursday, September 20, 2012 10:11:41 Felix Hufnagel wrote:

On Thursday, 20 September 2012 at 00:14:04 UTC, Jonathan M Davis

wrote:

On Thursday, September 20, 2012 00:12:04 Felix Hufnagel wrote:

isn't it even worse?

import std.stdio;
struct S
{
int i;
this(void* p = null){this.i = 5;}
}
void main()
{
//S l(); //gives a linker error
auto k = S();
writeln(k.i); //prints 0
}


Of course that generates a linker error. You just declared a
function without
a body.

- Jonathan M Davis


sure, but it's a bit unexpected. do we need to be able to declare
empty functions?


It can be useful at module scope, and it would complicate the grammar to make
it anything else at function scope, even if there's no practical reason to use
it that way there. C/C++ (which doesn't have nested functions) also treats
that declaration as a function declaration.


but what's even more confusing: you are not allowed to declare a
no-arg constructor, yet you are allowed to declare one where all
parameters have default values. But then, how do you call it
without args? auto k = S(); doesn't work?


It's a bug. I'm pretty sure that there's a bug report for it already, but I'd
have to go digging for it to know which one it is.

- Jonathan M Davis


Bug 3438





Re: no-arg constructor for structs (again)

2012-09-20 Thread deadalnix

On 20/09/2012 00:12, Felix Hufnagel wrote:

isn't it even worse?

import std.stdio;
struct S
{
int i;
this(void* p = null){this.i = 5;}
}
void main()
{
//S l(); //gives a linker error
auto k = S();
writeln(k.i); //prints 0
}


Last time I checked it, it was not working. No constructor was called.


Re: Reference semantic ranges and algorithms (and std.random)

2012-09-20 Thread Johannes Pfau
Am Thu, 20 Sep 2012 08:51:28 +0200
schrieb "monarch_dodra" :

> > Moving to classes would definitely break code, but it should be 
> > possible to
> > make them reference types simply by making it so that their 
> > internal state is
> > in a separate object held by a pointer.  
> 
> I was thinking of doing that. The problem with this (as I've run 
> into and stated in another thread), is a problem of 
> initialization: The simpler PRNGs are init'ed seeded, and are 
> ready for use immediately. Changing to this approach would break 
> the initialization, as shown in this post:

How would the internal state be allocated? malloc + refcounting or GC?

I'm not really happy about that as I'd like to avoid allocation
(especially GC) whenever possible. Using a PRNG only locally in a
function seems like a valid use case to me, and it currently doesn't
require allocation. As the similar problem with
std.digest/std.algorithm.copy has shown, value type ranges seem not
very well supported in Phobos.

I wonder why we don't pass all struct/value type ranges by ref? Is
there a reason this wouldn't work?

Or we could leave the PRNGs as is and provide reference wrappers. This
would allow placing the PRNG on the stack while still making it work
with all range functions. But it's also cumbersome and error prone.

Best solution is probably to have wrappers which allocate by default,
but also allow passing a reference to a stack-allocated value. Then
make the wrappers the default, so the default is safe and easy to use.

Pseudo-code:

struct RNG_Impl
{
    uint front();
    void popFront();
    bool empty();
}

struct RNG
{
    RNG_Impl* impl;

    this(ref RNG_Impl impl)
    {
        this.impl = &impl;   // wrap a caller-provided, stack-allocated payload
    }

    void initialize()
    {
        assert(!impl);       // either initialize() or the constructor, not both
        impl = new RNG_Impl();
    }
}

RNG_Impl impl;
RNG(impl).take(5); // no allocation (but must not leak references...)

Regarding the initialization check: I'd avoid the check in release
mode. Not initializing a struct is a developer mistake and should be
found in debug mode. I think it's unlikely that error handling code can
handle such a situation anyway. But you could check how
std.typecons.RefCounted handles this, as it also need explicit
initialization. 


Re: no-arg constructor for structs (again)

2012-09-20 Thread deadalnix

On 20/09/2012 08:26, monarch_dodra wrote:

On Wednesday, 19 September 2012 at 12:31:08 UTC, Maxim Fomin wrote:

On Wednesday, 19 September 2012 at 11:51:13 UTC, monarch_dodra wrote:

The biggest issue with not having a no-arg constructor can easilly be
seen if you have ever worked with a "Reference Semantic" semantic
struct: A struct that has a pointer to a payload. Basically, a class,
but without the inherited Object polymorphism.


This means that you still have a class object. What is design behind
inserting class into the structure for the sake of escaping from classes?



That's not the point at all. For starters, the "Payload" is another
struct, NOT a class wrapped in a struct.

As for why we aren't using a class to begin with? First, because a class
wraps much more than we want: polymorphism, adherence to a base "Object
Type", virtual opEquals, RTTI...

But mostly, because the object we manipulate is a struct and has always
been a struct. It uses reference semantics, but is in dire need of an
initialization to default.

On Wednesday, 19 September 2012 at 14:09:10 UTC, deadalnix wrote:

On 19/09/2012 15:24, Timon Gehr wrote:

I don't think making the use of optional parens affect semantics is an
idea worth following.


I have to agree with that.

However, argument-less constructor is something required for struct.
The problem is raised on a regular basis on this newsgroup, and some
solution already have been proposed.

As discussed earlier in the reference thread, the compiler will have
to track down initialization at some point. A struct with an
argument-less constructor which isn't initialized must be an error.
This will avoid the () semantic dichotomy while solving that problem.

Would you happen to have some links to those proposed solutions, or
reword them here for us?


My solution was to include code analysis in the compiler in order to 
ensure that a struct with an argument-less constructor is assigned 
before being used.


struct S {
this() {}
}

S s;
foo(s); // Error, s may not have been initialized

s = S();
foo(s); // OK

S.init contains the struct memory layout as it is before any constructor 
has run on it. It is not @safe to use on a struct with an argument-less constructor.


Note that the code analysis required for such a task is planned to be 
included in dmd anyway, in order to support @disable this();.


Re: no-arg constructor for structs (again)

2012-09-20 Thread monarch_dodra

On Thursday, 20 September 2012 at 09:22:39 UTC, David wrote:

The only thing I really miss is:


class Foo {}

struct Bar {
Foo foo = new Foo();
}

void main() {
Bar s = Bar();
assert(s.foo !is null);
}


That probably won't _ever_ work, because that is a default 
*instruction*, not a default *value*.


It is a default constructor in disguise :D which is a no-no, as 
it would break all of D's move semantics (which are pretty 
awesome, IMO).




Re: no-arg constructor for structs (again)

2012-09-20 Thread David

The only thing I really miss is:


class Foo {}

struct Bar {
Foo foo = new Foo();
}

void main() {
Bar s = Bar();
assert(s.foo !is null);
}


Re: Reference semantic ranges and algorithms (and std.random)

2012-09-20 Thread monarch_dodra
On Thursday, 20 September 2012 at 07:26:01 UTC, Jonathan M Davis 
wrote:

On Thursday, September 20, 2012 08:51:28 monarch_dodra wrote:

On Tuesday, 18 September 2012 at 17:59:04 UTC, Jonathan M Davis

wrote:
> On Tuesday, September 18, 2012 17:05:26 monarch_dodra wrote:
>> This is issue #1: I'd propose that all objects in 
>> std.random be
>> migrated to classes (or be made reference structs), sooner 
>> than
>> later. This might break some code, so I do not know how 
>> this is
>> usually done, but I think it is necessary. I do not, 
>> however,

>> propose that they should all derive from a base class.
> 
> Moving to classes would definitely break code, but it should 
> be

> possible to
> make them reference types simply by making it so that their
> internal state is
> in a separate object held by a pointer.

I was thinking of doing that. The problem with this (as I've 
run

into and stated in another thread), is a problem of
initialization: The simpler PRNGs are init'ed seeded, and are
ready for use immediately. Changing to this approach would 
break

the initialization, as shown in this post:

http://forum.dlang.org/thread/bvuquzwfykiytdwsq...@forum.dlang.org#post-yvts
ivozyhqzscgddbrl:40forum.dlang.org

A "used to be valid" PRNG has now become an un-initialized 
PRNG".
This is extremely insidious, as the code still compiles, but 
will

crash.


There's always the check that the internals have been 
initialized on every
call and initialize it if it hasn't been solution. It's not 
pretty, but it
won't break code. It's actually a use case that makes me wish 
that we had
something like the invariant which ran before every public 
function call
except that it was always there (even in -release) and let you 
do anything you

want.

In any case, while it's a bit ugly, I believe that simply 
adding checks for
initialization in every function call is the cleanest solution 
from the
standpoint of backwards compatibility, and the ugliness is all 
self-contained.
As far as performance goes, it's only an issue if you're 
iterating over it in
a tight loop, but the actual random number generation is so 
much more
expensive than a check for a null pointer that it probably 
doesn't matter.



#2
Change to class, but leave behind some "opCall"s for each old
constructor, plus an extra one for default:



Is this second solution something you think I should look into?


Since

A a;

will just blow up in your face if you switch it to a class, 
it's not a non-
breaking change even as a migration path, so I don't see that 
as really being
viable. Even if you've found a way to minimize the immediate 
code breakage,
you didn't eliminate it. If you're going to break code 
immediately, you might
as well just break it all at once and get people to fix their 
stuff rather than
mostly fix it but not quite, especially when you're asking them 
to change their

code later anyway as part of a migration path.

Regardless, when this came up previously, I believe that the 
conclusion was
that if we were going to switch to classes, we needed to do 
something like
create std.random2 and schedule std.random for deprecation 
rather than
changing the current structs to classes (either that or rename 
_every_ type in
there and schedule them for deprecation individually, but then 
you have to
come up for new names for everything, and it's more of a pain 
to migrate,
since all the names changed rather than just the import). So, I 
believe that
the idea of switching to classes was pretty much rejected 
previously unless

entirely new types were used so that no code would be broken.

I think that we have two options at this point:

1. Switch the internals so that they're in a separate struct 
pointed to by the
outer struct and check for initialization on every function 
call to avoid the

problem where init was used.

2. Create a new module to replace std.random and make them 
final classes in

there, scheduling the old module for deprecation.

Honestly, I'd just go with #1 at this point, because it avoids 
breaking code,
and there's increasing resistance to breaking code. Even 
Andrei, who was
fairly willing to break code for improvements before, is almost 
paranoid about
it now, and Walter was _always_ against it. So, if we have a 
viable solution
that avoids breaking code (especially if any ugliness that 
comes with it is
internal to the implementation), we should probably go with 
that.


- Jonathan M Davis


Thank you for your insight. Good points. I'll try to do your "#1": it is 
simple and non-breaking. *IF* we ever do decide to break, I'd 
rather it be done after a very thorough discussion of 
requirements, specifications, migration path, etc...
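
For what it's worth, a minimal sketch of that option #1 shape 
(hypothetical names, not the actual std.random code):

// Reference semantics via a heap payload, lazily allocated so that a
// default-initialized ("init was used") value still works.
struct Rng
{
    private static struct Payload { uint state = 42; }
    private Payload* payload;

    private void ensureInitialized()
    {
        if (payload is null)
            payload = new Payload;  // stands in for default seeding
    }

    @property uint front() { ensureInitialized(); return payload.state; }
    void popFront() { ensureInitialized(); payload.state = payload.state * 1103515245u + 12345u; }
    enum bool empty = false;
}

unittest
{
    Rng a;                       // no explicit construction needed
    auto x = a.front;            // payload allocated on first use
    auto b = a;                  // copies now share the payload
    a.popFront();
    assert(b.front == a.front);  // reference semantics
}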


Re: no-arg constructor for structs (again)

2012-09-20 Thread Jonathan M Davis
On Thursday, September 20, 2012 10:11:41 Felix Hufnagel wrote:
> On Thursday, 20 September 2012 at 00:14:04 UTC, Jonathan M Davis
> 
> wrote:
> > On Thursday, September 20, 2012 00:12:04 Felix Hufnagel wrote:
> >> isn't it even worse?
> >> 
> >> import std.stdio;
> >> struct S
> >> {
> >> int i;
> >> this(void* p = null){this.i = 5;}
> >> }
> >> void main()
> >> {
> >> //S l(); //gives a linker error
> >> auto k = S();
> >> writeln(k.i); //prints 0
> >> }
> > 
> > Of course that generates a linker error. You just declared a
> > function without
> > a body.
> > 
> > - Jonathan M Davis
> 
> sure, but it's a bit unexpected. do we need to be able to declare
> empty functions?

It can be useful at module scope, and it would complicate the grammar to make 
it anything else at function scope, even if there's no practical reason to use 
it that way there. C/C++ (which doesn't have nested functions) also treats 
that declaration as a function declaration.

> but what's even more confusing: you are not allowed to declare a
> no-arg constructor, yet you are allowed to declare one where all
> parameters have default values. But then, how do you call it
> without args? auto k = S(); doesn't work?

It's a bug. I'm pretty sure that there's a bug report for it already, but I'd 
have to go digging for it to know which one it is.
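
In the meantime, a rough sketch of one common workaround (a static opCall that
stands in for the missing no-arg constructor; the field and the value 5 are
just illustrative):

import std.stdio;

struct S
{
    int i;

    // S() is rewritten to call this, so "default construction" can run
    // user code even though no-arg constructors are disallowed.
    static S opCall()
    {
        S s;
        s.i = 5;
        return s;
    }
}

void main()
{
    auto k = S();
    writeln(k.i); // prints 5
}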

- Jonathan M Davis


Re: no-arg constructor for structs (again)

2012-09-20 Thread Felix Hufnagel

On Thursday, 20 September 2012 at 00:14:04 UTC, Jonathan M Davis wrote:

On Thursday, September 20, 2012 00:12:04 Felix Hufnagel wrote:

isn't it even worse?

import std.stdio;
struct S
{
    int i;
    this(void* p = null) { this.i = 5; }
}
void main()
{
    //S l(); // gives a linker error
    auto k = S();
    writeln(k.i); // prints 0
}


Of course that generates a linker error. You just declared a function without
a body.

- Jonathan M Davis


Sure, but it's a bit unexpected. Do we need to be able to declare
empty functions?

But what's even more confusing: you are not allowed to declare a
no-arg constructor, but you are allowed to declare one where all
parameters have default values. But then, how do you call it
without args? auto k = S(); doesn't work.




Re: Review of Andrei's std.benchmark

2012-09-20 Thread Manu
On 19 September 2012 12:38, Peter Alexander wrote:

>> The fastest execution time is rarely useful to me; I'm almost always much
>> more interested in the slowest execution time.
>> In realtime software, the slowest time is often the only important factor,
>> everything must be designed to tolerate this possibility.
>> I can also imagine other situations where multiple workloads are competing
>> for time, the average time may be more useful in that case.
>>
>
> The problem with slowest is that you end up with the occasional OS hiccup
> or GC collection which throws the entire benchmark off. I see your point,
> but unless you can prevent the OS from interrupting, the time would be
> meaningless.
>

So then we need to start getting tricky, and choose the slowest one that is
not beyond an order of magnitude or so outside the average?
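
A rough sketch of that kind of filter (the 10x cutoff is arbitrary, just to
make the "order of magnitude" idea concrete):

double slowestReasonable(const(double)[] timings, double cutoff = 10.0)
{
    assert(timings.length > 0);

    // Mean over all runs, hiccups included.
    double total = 0;
    foreach (t; timings)
        total += t;
    immutable mean = total / timings.length;

    // Slowest run that isn't more than `cutoff` times the mean, so that
    // one-off OS or GC hiccups get dropped.
    double worst = 0;
    foreach (t; timings)
        if (t <= mean * cutoff && t > worst)
            worst = t;

    return worst;
}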


Re: Reference semantic ranges and algorithms (and std.random)

2012-09-20 Thread Jonathan M Davis
On Thursday, September 20, 2012 08:51:28 monarch_dodra wrote:
> On Tuesday, 18 September 2012 at 17:59:04 UTC, Jonathan M Davis wrote:
> > On Tuesday, September 18, 2012 17:05:26 monarch_dodra wrote:
> >> This is issue #1: I'd propose that all objects in std.random be
> >> migrated to classes (or be made reference structs), sooner than
> >> later. This might break some code, so I do not know how this is
> >> usually done, but I think it is necessary. I do not, however,
> >> propose that they should all derive from a base class.
> > 
> > Moving to classes would definitely break code, but it should be possible
> > to make them reference types simply by making it so that their internal
> > state is in a separate object held by a pointer.
> 
> I was thinking of doing that. The problem with this (as I've run
> into and stated in another thread) is a problem of
> initialization: The simpler PRNGs are init'ed seeded, and are
> ready for use immediately. Changing to this approach would break
> the initialization, as shown in this post:
> 
> http://forum.dlang.org/thread/bvuquzwfykiytdwsq...@forum.dlang.org#post-yvts
> ivozyhqzscgddbrl:40forum.dlang.org
> 
> A "used to be valid" PRNG has now become an un-initialized PRNG.
> This is extremely insidious, as the code still compiles, but will
> crash.

There's always the solution of checking on every call that the internals have
been initialized and initializing them if they haven't been. It's not pretty,
but it won't break code. It's actually a use case that makes me wish that we
had something like an invariant which ran before every public function call,
except that it was always there (even in -release) and let you do anything you
want.

In any case, while it's a bit ugly, I believe that simply adding checks for 
initialization in every function call is the cleanest solution from the 
standpoint of backwards compatibility, and the ugliness is all self-contained. 
As far as performance goes, it's only an issue if you're iterating over it in 
a tight loop, but the actual random number generation is so much more 
expensive than a check for a null pointer that it probably doesn't matter.
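
A minimal sketch of that shape, using a hypothetical generator (the payload
layout and seeding below are illustrative, not std.random's actual internals):

struct Gen
{
    private struct Payload
    {
        uint state;
    }

    private Payload* _payload;

    // Called at the top of every public member function, so that a
    // default-initialized Gen (i.e. Gen.init) still just works.
    private void ensureInitialized()
    {
        if (_payload is null)
        {
            _payload = new Payload;
            _payload.state = 42; // stand-in for a default seed
        }
    }

    enum bool empty = false;

    @property uint front()
    {
        ensureInitialized();
        return _payload.state;
    }

    void popFront()
    {
        ensureInitialized();
        // stand-in for the real state transition
        _payload.state = _payload.state * 1664525u + 1013904223u;
    }
}

Since the state sits behind a pointer, copies of Gen share it (the reference
semantics being discussed), and the null check is what keeps Gen.init from
blowing up.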

> #2
> Change to class, but leave behind some "opCall"s for each old
> constructor, plus an extra one for default:

> Is this second solution something you think I should look into?

Since

A a;

will just blow up in your face if you switch it to a class, it's not a non-
breaking change even as a migration path, so I don't see that as really being 
viable. Even if you've found a way to minimize the immediate code breakage, 
you didn't eliminate it. If you're going to break code immediately, you might 
as well just break it all at once and get people to fix their stuff rather than 
mostly fix it but not quite, especially when you're asking them to change their 
code later anyway as part of a migration path.

Regardless, when this came up previously, I believe that the conclusion was 
that if we were going to switch to classes, we needed to do something like 
create std.random2 and schedule std.random for deprecation rather than 
changing the current structs to classes (either that or rename _every_ type in 
there and schedule them for deprecation individually, but then you have to 
come up with new names for everything, and it's more of a pain to migrate, 
since all the names changed rather than just the import). So, I believe that 
the idea of switching to classes was pretty much rejected previously unless 
entirely new types were used so that no code would be broken.

I think that we have two options at this point:

1. Switch the internals so that they're in a separate struct pointed to by the 
outer struct and check for initialization on every function call to avoid the 
problem where init was used.

2. Create a new module to replace std.random and make them final classes in 
there, scheduling the old module for deprecation.

Honestly, I'd just go with #1 at this point, because it avoids breaking code, 
and there's increasing resistance to breaking code. Even Andrei, who was 
fairly willing to break code for improvements before, is almost paranoid about 
it now, and Walter was _always_ against it. So, if we have a viable solution 
that avoids breaking code (especially if any ugliness that comes with it is 
internal to the implementation), we should probably go with that.

- Jonathan M Davis