Re: [OT] Was: totally satisfied :D

2012-09-21 Thread Paulo Pinto
On Friday, 21 September 2012 at 21:37:23 UTC, Nick Sabalausky wrote:
> On Fri, 21 Sep 2012 22:13:22 +0200
> "Paulo Pinto"  wrote:
>
>> On Friday, 21 September 2012 at 19:09:48 UTC, H. S. Teoh wrote:
>>>
>>> The saddest thing is that people are paying big bucks for this kind of
>>> "enterprise" code. It's one of those things that make me never want to
>>> pay for *any* kind of software... why waste the money when you can
>>> download the OSS version for free? Yeah a lot of OSS code is crap, but
>>> it's not like it's any worse than the crap you pay for.
>>>
>>
>> Welcome to my world. As a Fortune 500 outsourcing consulting company
>> employee, I see this type of code every day.
>>
>
> I find it depressing to see just how *easy* it is to have
> dailywtf-worthy material. They anonymized my name as Nate here:
>
> http://thedailywtf.com/Articles/We_Have_Met_the_Enemy.aspx
>
> Note also that the "' ...code here" and "' ...more code here" sections
> were typically HUGE.
>
> And that was only scratching the surface of the lunacy that was going
> on there - both in and out of the codebase.
>
> I've been sticking to contract stuff now, largely because I really
> just can't take that sort of insanity anymore (not that I ever could).
> If I ever needed to go back to 9-5 code, or cubicles, or open-floorplan
> warrooms, I'd *really* be in trouble.

One of the reasons that keeps me at the company is the job market around
my area. Many of the other companies I could work for are of the same
type, or I would be forced to move to another region for something better.

--
Paulo




Re: Review of Andrei's std.benchmark

2012-09-21 Thread Nick Sabalausky
Stepping back for a moment, I think we're facing two key issues here: 

The first key issue is that the docs for std.benchmark don't adequately
explain Andrei's intended charter/scope for it, its methodology, or the
rationale for its methodology. So people see "benchmark" and they think
"oh, ok, for timing stuff", but it appears to be intended for very
specific use cases. I think this entire discussion serves as
evidence that, at the very least, it needs to communicate that
scope/methodology/rationale better than it currently does. If all of
us are having trouble "getting it", then others certainly will too.

Aside from that, there's the second key issue: whether the
current intended scope is sufficient. Should it be more general in
scope and not so specialized? Personally, I would tend to think so, and
that seems to be the popular notion. But I don't know for sure.
If it should be more generalized, then does it need to be so for the
first iteration, or can it be done later, after being added to Phobos?
That, I have no idea.



Re: Review of Andrei's std.benchmark

2012-09-21 Thread Nick Sabalausky
On Fri, 21 Sep 2012 17:00:29 -0400
Andrei Alexandrescu  wrote:

> On 9/21/12 11:12 AM, Manu wrote:
> > On 21 September 2012 07:45, Andrei Alexandrescu
> >
> > Currently std.benchmark does not expose raw results for the
> > sake of simplicity. It's easy to expose such, but I'd need a bit
> > ***more convincing about their utility***.
> >

(Emphasis added for proper context.)

> >
> > Custom visualisation, realtime charting/plotting, user supplied
> > reduce function?
> 
> Hrm, that sounds like an entire new project.
> 

That doesn't diminish their utility.

Keep in mind, nobody's suggesting putting all of that into
std.benchmark (certainly not initially anyway), but the idea is to at
least have the door open for them.



Re: [OT] Was: totally satisfied :D

2012-09-21 Thread H. S. Teoh
On Fri, Sep 21, 2012 at 05:38:06PM -0400, Nick Sabalausky wrote:
> On Fri, 21 Sep 2012 22:13:22 +0200
> "Paulo Pinto"  wrote:
> 
> > On Friday, 21 September 2012 at 19:09:48 UTC, H. S. Teoh wrote:
> > >
> > > The saddest thing is that people are paying big bucks for this
> > > kind of "enterprise" code. It's one of those things that make me
> > > never want to pay for *any* kind of software... why waste the
> > > money when you can download the OSS version for free? Yeah a lot
> > > of OSS code is crap, but it's not like it's any worse than the
> > > crap you pay for.
> > >
> > 
> > Welcome to my world. As a Fortune 500 outsourcing consulting company
> > employee, I see this type of code every day.
> > 
> 
> I find it depressing to see just how *easy* it is to have
> dailywtf-worthy material. They anonymized my name as Nate here:
> 
> http://thedailywtf.com/Articles/We_Have_Met_the_Enemy.aspx

LOL... I should submit the IPv6 prefix-checking code that does the
conversion to string.

The sad part is that so many of the commenters have no idea that
adjacent C literals are concatenated at compile-time. It's a very nice
way to put long strings in code and have it nicely indented, something
that is sorely lacking in most languages. But regardless, why are they
posting if they clearly don't know C that well?!
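
As an aside, the D analogue is explicit concatenation with ~, which the
compiler folds at compile time for literals - a minimal sketch (the string
contents are just illustrative):

// Long string laid out across several nicely indented lines; the ~ on
// literals is folded at compile time, so no runtime concatenation happens.
enum usageText =
    "usage: tool [options] FILE\n" ~
    "  -v        verbose output\n" ~
    "  -o FILE   write the result to FILE\n";

static assert(usageText.length > 0); // evaluated entirely at compile time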


> Note also that the "' ...code here" and "' ...more code here" sections
> were typically HUGE.

Speaking of 1000-line functions... yeah I routinely work with those
monsters. They tend to also have a ridiculously long list of parameters,
which over the history of the function have been added one by one as
people felt the need for Yet Another Variation on the function's
capabilities.  Most of those parameters are either meaningless or
ignored most of the time (necessitating ridiculously long lists of
null/dummy values every single time the function is called), save for
one or two exceptional cases when most of the *other* parameters aren't
needed. Calling the function with unforeseen combinations of parameters
usually triggers a bug caused by unexpected interactions between
parameters that were assumed to be independent.


> And that was only scratching the surface of the lunacy that was going
> on there - both in and out of the codebase.

I have seen code whose function names are along the lines of "do_it()"
and "do_everything()". As well as "do_main()" and
"${program_name}_main()" in addition to "main()".


> I've been sticking to contract stuff now, largely because I really
> just can't take that sort of insanity anymore (not that I ever could).
> If I ever needed to go back to 9-5 code, or cubicles, or
> open-floorplan warrooms, I'd *really* be in trouble.

I really should start doing contract work. Being stuck with the same
project and dealing with the same stupid code that never gets fixed is
just very taxing on the nerves.


T

-- 
Blunt statements really don't have a point.


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Peter Alexander
On Friday, 21 September 2012 at 19:54:12 UTC, Andrei Alexandrescu wrote:
> On 9/19/12 4:06 AM, Peter Alexander wrote:
>> I don't see why `benchmark` takes (almost) all of its parameters as
>> template parameters. It looks quite odd, seems unnecessary, and (if I'm
>> not mistaken) makes certain use cases quite difficult.
>
> That is intentional - indirect calls would add undue overhead to the
> measurements.

I accept that it adds undue overhead. I just think that the function
would be more usable with non-template parameters (as per my example).
I also think the overhead would be negligible.


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Andrei Alexandrescu

On 9/21/12 5:36 PM, David Piepgrass wrote:
>>> Some random comments about std.benchmark based on its documentation:
>>>
>>> - It is very strange that the documentation of printBenchmarks
>>> uses neither of the words "average" or "minimum", and doesn't say
>>> how many trials are done
>>
>> Because all of those are irrelevant and confusing.
>
> Huh? It's not nearly as confusing as reading the documentation and
> not having the faintest idea what it will do. The way the benchmarker
> works is somehow 'irrelevant'? The documentation doesn't even
> indicate that the functions are to be run more than once!!

I misunderstood. I agree that it's a good thing to specify how
benchmarking proceeds.

>> I don't think that's a good idea.
>
> I have never seen you make such vague arguments, Andrei.

I had expanded my point elsewhere. Your suggestion was:

> - It is very strange that the documentation of printBenchmarks uses
> neither of the words "average" or "minimum", and doesn't say how many
> trials are done. I suppose the obvious interpretation is that it
> only does one trial, but then we wouldn't be having this discussion
> about averages and minimums right? Øivind says tests are run 1000
> times... but it needs to be configurable per-test (my idea: support a
> _x1000 suffix in function names, or _for1000ms to run the test for at
> least 1000 milliseconds; and allow a multiplier when running a
> group of benchmarks, e.g. a multiplier argument of 0.5 means to only
> run half as many trials as usual.) Also, it is not clear from the
> documentation what the single parameter to each benchmark is (define
> "iterations count".)

I don't think it's a good idea because the "for 1000 ms" doesn't say
anything except how good the clock resolution was on the system. I'm as
strongly convinced we shouldn't print useless information as I am we
should print useful information.



Andrei


Re: [OT] Was: totally satisfied :D

2012-09-21 Thread Nick Sabalausky
On Fri, 21 Sep 2012 22:13:22 +0200
"Paulo Pinto"  wrote:

> On Friday, 21 September 2012 at 19:09:48 UTC, H. S. Teoh wrote:
> >
> > The saddest thing is that people are paying big bucks for this 
> > kind of
> > "enterprise" code. It's one of those things that make me never 
> > want to
> > pay for *any* kind of software... why waste the money when you 
> > can
> > download the OSS version for free? Yeah a lot of OSS code is 
> > crap, but
> > it's not like it's any worse than the crap you pay for.
> >
> 
> Welcome to my world. As a Fortune 500 outsourcing consulting 
> company
> employee, I see this type of code every day.
> 

I find it depressing to see just how *easy* it is to have
dailywtf-worthy material. They anonymized my name as Nate here:

http://thedailywtf.com/Articles/We_Have_Met_the_Enemy.aspx

Note also that the "' ...code here" and "' ...more code here" sections
were typically HUGE.

And that was only scratching the surface of the lunacy that was going
on there - both in and out of the codebase.

I've been sticking to contract stuff now, largely because I really
just can't take that sort of insanity anymore (not that I ever could).
If I ever needed to go back to 9-5 code, or cubicles, or open-floorplan
warrooms, I'd *really* be in trouble.



Re: GDC Explorer - an online disassembler for D

2012-09-21 Thread nazriel
On Friday, 21 September 2012 at 07:40:11 UTC, Andrej Mitrovic wrote:
> On 9/21/12, Andrei Alexandrescu  wrote:
>
> snip
>
> Integrating this with dpaste would be aweee..sooome!

http://dpaste.dzfl.pl/news/change-log---v0.82

That is in the plans for all compilers, but at the moment we are
struggling with the problem of exceeding our monthly bandwidth.
Disassembly output takes a lot of space.

Anyway, I like the design of this website! Very similar to dpaste,
which rox ;>

It looks very nice; we could probably adapt some ideas to dpaste, like

__Compiler options__


Re: Reference semantic ranges and algorithms (and std.random)

2012-09-21 Thread monarch_dodra
On Friday, 21 September 2012 at 19:47:16 UTC, Jonathan M Davis wrote:
> On Friday, September 21, 2012 15:20:49 monarch_dodra wrote:
>> #3
>> The only thing I'm having an issue with is "save". IMO, it is
>> exceptionally dangerous to have a PRNG be a ForwardRange: It
>> should only be saved if you have a damn good reason to do so. You
>> can still "dup" if you want (manually) (if you think that is
>> smart), but I don't think it should answer true to
>> "isForwardRange".
>
> It is _very_ crippling to a range to not be a forward range. There's
> lots of stuff that requires it. And I really don't see what the
> problem is. _Maybe_ it's okay for them to be input ranges and not
> forward ranges, but in general, I think that we need to be _very_
> careful about doing that. It can be really, really annoying when a
> range is not a forward range.

I know it is crippling, but that is kind of the entire point: If
somebody wants to read the same numbers through twice, he'd better
have a damn good reason. For example, *even* with reference ranges:

auto rng = SomeReferenceForwardRange();
auto a = new int[](10);
auto b = new int[](10);
a.fill(rng);
b.fill(rng);

This will fill a and b with the *same* numbers (!), even though
rng is a reference range (!) Arguably, the bug is in fill (and
has since been fixed), but the point is that by the time you
notice it, who knows how long you've debugged? And who knows
which *other* algorithms will break your PRNG?

Had the rng been only an InputRange, the compilation would have
raised an error. And the workaround is also easy. Safety first,
right?

It might not be worth deprecating *now*, but I do think it is a
latent danger.

PS: In all of random, there is not one single use case for having
a ForwardPrng. Just saying.

>> You just don't know what an algorithm will do under the hood if
>> it finds out the range is saveable. In particular, save can be
>> non-trivial and expensive...
>
> It shouldn't be. It's a copy, and copies are not supposed to be
> expensive. We've discussed this before with regards to postblit
> constructors. Certainly, beyond the extra cost of having to new
> things up in some cases, they should be relatively cheap.

The copy of the reference is designed to be cheap (and is), yes
(we've discussed this).

However, when you save, you have to duplicate the payload, and
even considering that there is no postblit cost, you have
strictly no control over its size:

LCG: 1 Int
XORShift: 5 Ints
MersenneTwister: 397 Ints
LaggedFib: (607 to 44497) ulongs or doubles

Not the end of the world, but not trivially cheap either.

>> QUESTION:
>> If I (were to) deprecate "save", how would that work with the
>> range traits type? If a range has "save", but it is deprecated,
>> does it answer true to isForwardRange?
>
> You'd have to test it. It might depend on whether -d is used, but it
> could easily be that it'll be true as long as save exists.
>
> - Jonathan M Davis

I just tested it btw:
without -d: isForwardRange: false
with -d: isForwardRange: true

It is kind of the logical behavior actually.
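
For reference, a minimal way to reproduce that check (the range type below
is made up, and which answer you get depends on the compiler version and
the -d/-de flags, as discussed above):

import std.range : isForwardRange, isInputRange;

struct DeprecatedSaveRange
{
    @property int front() { return 0; }
    @property bool empty() { return false; }
    void popFront() {}
    deprecated @property DeprecatedSaveRange save() { return this; }
}

static assert(isInputRange!DeprecatedSaveRange);

// Prints true or false at compile time, depending on whether the
// deprecated save is still considered usable under the current flags.
pragma(msg, isForwardRange!DeprecatedSaveRange);

void main() {}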


Re: Review of Andrei's std.benchmark

2012-09-21 Thread David Piepgrass
>> Some random comments about std.benchmark based on its documentation:
>>
>> - It is very strange that the documentation of printBenchmarks
>> uses neither of the words "average" or "minimum", and doesn't
>> say how many trials are done
>
> Because all of those are irrelevant and confusing.

Huh? It's not nearly as confusing as reading the documentation
and not having the faintest idea what it will do. The way the
benchmarker works is somehow 'irrelevant'? The documentation
doesn't even indicate that the functions are to be run more than
once!!

> I don't think that's a good idea.

I have never seen you make such vague arguments, Andrei.


Re: [OT] Was: totally satisfied :D

2012-09-21 Thread Nick Sabalausky
On Fri, 21 Sep 2012 08:24:07 -0400
"Steven Schveighoffer"  wrote:
> 
> That works too, but doesn't warrant rants about how you haven't
> learned how to use the fucking thing :)
> 

It's *volume* controls, there doesn't need to be *anything* to learn.

> >
> > Try listing out all the different volume rules (that you're *aware*
> > of - who knows what other hidden quirks there might be), all
> > together, and I think you may be surprised just how much complexity
> > there is.
> 
> 1. ringer volume affects all sounds except for music/video/games
> 2. Silent switch will set ringer volume to 0 for all sounds except for
> find-my-iphone and alarm clock
> 3. If playing a game/video/music, the volume buttons affect that
> volume, otherwise, they affect ringer volume.
> 
> Wow, you are right, three whole rules.

And each one has exceptions; the rules as a whole aren't particularly
intuitive.

And then there's the question of what rules you forgot. I can think of
one right now:

4. If you're in the camera app then the volume button takes a picture
instead of adjusting volume.


> That's way more than 1.  I stand corrected :)
> 

Now compare that to a normal device:

1. The volume control adjusts the volume.

Gee, how horrible to have one trivially intuitive rule and no
exceptions.

Bottom line, they took something trivial, complicated it, and people
hail them as genius visionaries.

> > Then compare that to, for example, a walkman or other portable music
> > player (iTouch doesn't count, it's a PDA) which is 100% predictable
> > and trivially simple right from day one. You never even have to
> > think about it, the volume **just works**, period. The fact that
> > the ijunk has various other uses besides music is immaterial: It
> > could have been simple and easy and worked well, and they instead
> > chose to make it complex.
> >
> > Not only that, but it would have been trivial to just offer an
> > *option* to turn that "smart" junk off. But then allowing a user to
> > configure their own property to their own liking just wouldn't be
> > very "Apple", now would it?
> 
> I detect a possible prejudice against Apple here :)
> 

Heh :) But yea, I *do* take a lot of issue with Apple, partly
because as a business they make MS look like the EFF, but also largely
because I've dealt with their products, and I really *do* find them to
be awful overall.

> >
> > Not trying to "convert" you, just FWIW:
> >
> > You might like Win7. It's very Mac-like out-of-the-box which is
> > exactly why I hate it ;)
> 
> No, it's nowhere near the same level.  I have Win 7, had it from the
> day of its release, and while it's WAY better than XP,

Heh, yea I had a feeling. Like I said, Win7 is very Mac-like as far as
windows goes. I find it interesting that while I absolutely can't stand
Win7 (at least without *heavy* non-standard configuring and some
hacks), Mac people OTOH tend to see it as a big improvement over XP.
It's Microsoft OSX.


> For instance, when I want to turn my Mac off, I press the power
> button, shut down, and when it comes back up, all the applications I
> was running return in exactly the same state they were in.  This is
> not hibernation, it's a complete shutdown.  Every app has built in
> it, the ability to restore its state.  This is because it's one of
> the things Mac users expect.
> 
> You can't do that with Windows or even Linux.  Ubuntu has tried to
> make their UI more mac like, but because the applications are not
> built to handle the features, it doesn't quite work right.
> 

Well, we can make any OS look good by picking one nice feature.

And personally, I actually like that shutdown serves as a "close all".
There's a number of programs that do have settings for roughly "when
starting, resume wherever I left off last time". I always end up
turning that off because it just means I usually have to close whatever
it auto-opened anyway. When I close/exit/etc something, it's generally
because I'm done with that task. So auto-resumes just get in
my way. An OS is the same thing: If it auto-resumed everything, then I
would just have to go close most of it myself. That makes more work for
me in its quest to be "helpful".

> >> The *screen* wasn't broken, it's just the plastic starts
> >> deteriorating. Jobs famously had an early iPhone prototype with a
> >> plastic screen and pulled it out at a designer meeting and yelled
> >> at them saying "this fucking thing is in with my keys, it's
> >> getting all scratched up!  we need something better."  That's when
> >> they started thinking about using the glass screens.
> >>
> >
> > Yea, he never did grow up, did he? Still throwing tantrums all the
> > way up to, what was he, like 60?
> >
> > And he never did learn about such things as "covers", did he?
> 
> Interesting that's what you see as the defining point of that
> story :)

It's a story that always did strike me as odd: Here we have a grown
man (one who was *well known* to be unstable, asinine, drug-soaked and
frankly, border

Re: Review of Andrei's std.benchmark

2012-09-21 Thread Jens Mueller
Andrei Alexandrescu wrote:
> On 9/21/12 5:39 AM, Jacob Carlborg wrote:
> >On 2012-09-21 06:23, Andrei Alexandrescu wrote:
> >
> >>For a very simple reason: unless the algorithm under benchmark is very
> >>long-running, max is completely useless, and it ruins average as well.
> >
> >I may have completely misunderstood this but aren't we talking about
> >what to include in the output of the benchmark? In that case, if you
> >don't like max and average just don't look at it.
> 
> I disagree. I won't include something in my design just so people
> don't look at it most of the time. Min and average are most of the
> time an awful thing to include, and will throw off people with
> bizarre results.
> 
> If it's there, it's worth looking at. Note how all columns are
> directly comparable (I might add, unlike other approaches to
> benchmarking).
> 
> >>For virtually all benchmarks I've run, the distribution of timings is a
> >>half-Gaussian very concentrated around the minimum. Say you have a
> >>minimum of e.g. 73 us. Then there would be a lot of results close to
> >>that; the mode of the distribution would be very close, e.g. 75 us, and
> >>the more measurements you take, the closer the mode is to the minimum.
> >>Then you have a few timings up to e.g. 90 us. And finally you will
> >>inevitably have a few outliers at some milliseconds. Those are orders of
> >>magnitude larger than anything of interest and are caused by system
> >>interrupts that happened to fall in the middle of the measurement.
> >>
> >>Taking those into consideration and computing the average with those
> >>outliers simply brings useless noise into the measurement process.
> >
> >After your reply to one of Manu's posts, I think I misunderstood the
> >std.benchmark module. I was thinking more of profiling. But are these
> >quite similar tasks, couldn't std.benchmark work for both?
> 
> This is an interesting idea. It would delay release quite a bit
> because I'd need to design and implement things like performance
> counters and such.

You mean like extending StopWatch and allowing the user to provide the
measuring code, i.e. counting the number of instructions? That would be
very useful. Is it possible to make sure that these changes can be
introduced later without breaking the API?

Jens


Re: GDC Explorer - an online disassembler for D

2012-09-21 Thread Jens Mueller
Ellery Newcomer wrote:
> On 09/21/2012 03:04 AM, Jens Mueller wrote:
> >But it's nice to have source code and assembly side by side.
> >
> >Jens
> >
> 
> And very nice to have demangled names in assembly.

You can pipe your assembly code to ddemangle if some other tool is
missing demangling. I did this, for example, when I looked at the output
from a statistical profiler.

Jens


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Jens Mueller
David Piepgrass wrote:
> >However, what's truly insane IMHO is continuing to run a unittest
> >block after
> >it's already had a failure in it. Unless you have exceedingly
> >simplistic unit
> >tests, the failures after the first one mean pretty much _nothing_
> >and simply
> >clutter the results.
> 
> I disagree. Not only are my unit tests independent (so of course the
> test runner should keep running tests after one fails) but often I
> do want to keep running after a failure.
> 
> I like the BOOST unit test library's approach, which has two types
> of "assert": BOOST_CHECK and BOOST_REQUIRE. After a BOOST_CHECK
> fails, the test keeps running, but BOOST_REQUIRE throws an exception
> to stop the test. When testing a series of inputs in a loop, it is
> useful (for debugging) to see the complete set of which ones succeed
> and which ones fail. For this feature (continuation) to be really
> useful though, it needs to be able to output context information on
> failure (e.g. "during iteration 13 of input group B").

This leads us to the distinction between exceptions and errors. It is
safe to catch exceptions but less so for errors. At least, it is far more
dangerous and less advisable to continue execution, but it should not be
prohibited, I think.

Jens


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Andrei Alexandrescu

On 9/21/12 2:49 PM, David Piepgrass wrote:

After extensive tests with a variety of aggregate functions, I can say
firmly that taking the minimum time is by far the best when it comes
to assessing the speed of a function.


Like others, I must also disagree in principle. The minimum sounds like a
useful metric for functions that (1) do the same amount of work in every
test and (2) are microbenchmarks, i.e. they measure a small and simple
task.


That is correct.


If the benchmark being measured either (1) varies the amount of
work each time (e.g. according to some approximation of real-world
input, which obviously may vary)* or (2) measures a large system, then
the average and standard deviation and even a histogram may be useful
(or perhaps some indicator whether the runtimes are consistent with a
normal distribution or not). If the running-time is long then the max
might be useful (because things like task-switching overhead probably do
not contribute that much to the total).

* I anticipate that you might respond "so, only test a single input per
benchmark", but if I've got 1000 inputs that I want to try, I really
don't want to write 1000 functions nor do I want 1000 lines of output
from the benchmark. An average, standard deviation, min and max may be
all I need, and if I need more detail, then I might break it up into 10
groups of 100 inputs. In any case, the minimum runtime is not the
desired output when the input varies.


I understand. What we currently do at Facebook is support benchmark 
functions with two parameters (see 
https://github.com/facebook/folly/blob/master/folly/docs/Benchmark.md). 
One is the number of iterations, the second is "problem size", akin to 
what you're discussing.


I chose to not support that in this version of std.benchmark because it 
can be tackled later easily, but I probably need to add it now, sigh.



It's a little surprising to hear "The purpose of std.benchmark is not to
estimate real-world time. (That is the purpose of profiling)"...
Firstly, of COURSE I would want to estimate real-world time with some of
my benchmarks. For some benchmarks I just want to know which of two or
three approaches is faster, or to get a coarse ball-park sense of
performance, but for others I really want to know the wall-clock time
used for realistic inputs.


I would contend that a benchmark without a baseline is very often 
misguided. I've seen tons and tons and TONS of nonsensical benchmarks 
lacking a baseline. "I created one million smart pointers, it took me 
only one millisecond!" Well how long did it take you to create one 
million dumb pointers?


Choosing good baselines and committing to good comparisons instead of 
un-based absolutes is what makes the difference between a professional 
and a well-intended dilettante.



Secondly, what D profiler actually helps you answer the question "where
does the time go in the real-world?"? The D -profile switch creates an
instrumented executable, which in my experience (admittedly not
experience with DMD) severely distorts running times. I usually prefer
sampling-based profiling, where the executable is left unchanged and a
sampling program interrupts the program at random and grabs the call
stack, to avoid the distortion effect of instrumentation. Of course,
instrumentation is useful to find out what functions are called the most
and whether call frequencies are in line with expectations, but I
wouldn't trust the time measurements that much.

As far as I know, D doesn't offer a sampling profiler, so one might
indeed use a benchmarking library as a (poor) substitute. So I'd want to
be able to set up some benchmarks that operate on realistic data, with
perhaps different data in different runs in order to learn about how the
speed varies with different inputs (if it varies a lot then I might
create more benchmarks to investigate which inputs are processed
quickly, and which slowly.)


I understand there's a good case to be made for profiling. If this turns 
out to be an acceptance condition for std.benchmark (which I think it 
shouldn't), I'll define one.



Some random comments about std.benchmark based on its documentation:

- It is very strange that the documentation of printBenchmarks uses
neither of the words "average" or "minimum", and doesn't say how many
trials are done


Because all of those are irrelevant and confusing. We had an older 
framework at Facebook that reported those numbers, and they were utterly 
and completely meaningless. Besides, the trials column contained numbers 
that were not even comparable. Everybody was happy when I replaced them 
with today's simple and elegant numbers.



I suppose the obvious interpretation is that it only
does one trial, but then we wouldn't be having this discussion about
averages and minimums right? Øivind says tests are run 1000 times... but
it needs to be configurable per-test (my idea: support a _x1000 suffix
in function names, or _for1000ms to run the test for at least 1000
m

Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Jens Mueller
Jacob Carlborg wrote:
> On 2012-09-21 20:01, Johannes Pfau wrote:
> 
> >I didn't think of setAssertHandler. My changes are perfectly compatible
> >with it.
> >IIRC setAssertHandler has the small downside that it's used for all
> >asserts, not only those used in unit tests? I'm not sure if that's a
> >drawback or actually useful.
> 
> That's no problem, there's a predefined version, "unittest", when
> you pass the -unittest flag to the compiler:
> 
> version (unittest)
>     setAssertHandler(myUnitTestSpecificAssertHandler);

But if you have an assert in some algorithm to ensure some invariant or
in a contract, it will be handled by myUnitTestSpecificAssertHandler.
But I think that is not a drawback. Don't you want to know whenever an
assert is violated?
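
For reference, a minimal sketch of such a unittest-specific handler using
druntime's core.exception.setAssertHandler, as in the quoted snippet (the
message format is just illustrative):

version (unittest)
{
    import core.exception : setAssertHandler;
    import core.stdc.stdio : fprintf, stderr;

    shared static this()
    {
        // Called for every failed assert - in unittests, contracts and
        // ordinary code alike - instead of throwing an AssertError.
        setAssertHandler(function void(string file, size_t line, string msg) nothrow
        {
            fprintf(stderr, "assert failed at %.*s(%u): %.*s\n",
                    cast(int) file.length, file.ptr, cast(uint) line,
                    cast(int) msg.length, msg.ptr);
        });
    }
}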

Jens


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Jens Mueller
Tobias Pankrath wrote:
> > I'm actually kinda surprised the feedback on this is rather
> > negative. I thought running unit tests individually and printing
> > line/file/name was requested quite often?
>
> I want to have this. My workflow is: Run all tests (run all). If
> some fail, see if there might be a common reason (so don't stop).
> Then run the unit tests that will most likely tell you what's wrong
> in a debugger (run one test individually).

Though dtest is in an early state, you can do:

$ ./dtest --abort=no

This runs all unittests and reports each failure, i.e. it continues in
case of a failure instead of aborting.

Then run:

$ ./dtest --abort=no --break=both

to turn all failures into breakpoints.

What is true is that you cannot pick an individual unittest here. But
you can continue in the debugger, though this may have its problems.
Running them individually may have problems too if the unittests are
not written to be executed independently.

Jens


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Andrei Alexandrescu

On 9/21/12 11:14 AM, Manu wrote:
> On 21 September 2012 07:23, Andrei Alexandrescu
> <seewebsiteforem...@erdani.org> wrote:
>
>     For a very simple reason: unless the algorithm under benchmark is
>     very long-running, max is completely useless, and it ruins average
>     as well.
>
> This is only true for systems with a comprehensive pre-emptive OS
> running on the same core. Most embedded systems will only be affected
> by cache misses and bus contention; in that situation, max is perfectly
> acceptable.


I think embedded systems that run e.g. Linux will be affected by task 
switching.


Andrei


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Dmitry Olshansky

On 21-Sep-12 23:59, Andrei Alexandrescu wrote:
> On 9/19/12 4:11 PM, "Øivind" wrote:
>> New question for you :)
>>
>> To register benchmarks, the 'scheduleForBenchmarking' mixin inserts a
>> shared static initializer into the module. If I have a module A and a
>> module B, that both depend on each other, then this will probably not
>> work..? The runtime will detect the init cycle and fail with the
>> following error:
>>
>> "Cycle detected between modules with ctors/dtors"
>>
>> Or am I wrong now?
>
> I think you have discovered a major issue. Ideas on how to attack this?

Not ideal but...

Make scheduleForBenchmarking mix in something other than code - say, a
global templated struct with a certain name.

Then it should be possible to do:

benchmarkModules!(module1, module2, ...);

That would search for this specific anchor at the top scope of the
modules and collect all the info. I'm not sure we can pass module names
as alias parameters, but I think our meta-programming tricksters have
certainly done something along these lines.
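
A rough sketch of that anchor idea (all names here are made up, and
whether modules can be passed as alias parameters like this is exactly
the open question above):

// The mixin inserts a recognizable marker instead of a shared static
// ctor; a library entry point then scans only the modules it is
// explicitly given, so no module-constructor cycle can arise.
enum scheduleForBenchmarking = q{ enum _benchmarkAnchor = true; };

void benchmarkModules(Modules...)()
{
    foreach (M; Modules)                          // compile-time foreach
    {
        foreach (name; __traits(allMembers, M))   // top-level declarations of M
        {
            static if (name == "_benchmarkAnchor")
            {
                pragma(msg, "benchmarks registered in " ~ M.stringof);
                // ...collect and schedule the benchmark_* functions of M here...
            }
        }
    }
}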


--
Dmitry Olshansky


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Andrei Alexandrescu

On 9/21/12 11:12 AM, Manu wrote:
> On 21 September 2012 07:45, Andrei Alexandrescu
> <seewebsiteforem...@erdani.org> wrote:
>
>>> As such, you're going to need a far more
>>> convincing argument than "It worked well for me."
>>
>> Sure. I have just detailed the choices made by std.benchmark in a
>> couple of posts.
>>
>> At Facebook we measure using the minimum, and it's working for us.
>
> Facebook isn't exactly 'realtime' software. Obviously, faster is always
> better, but it's not in a situation where if you slip a sync point by
> 1ms in an off case, it's all over. You can lose 1ms here, and make it up
> at a later time, and the result is the same. But again, this feeds back
> to your distinction between benchmarking and profiling.

You'd be surprised at how much we care about e.g. the 90th percentile
time to interaction.

>>> Otherwise, I think we'll need richer results. At the very least
>>> there should be an easy way to get at the raw results
>>> programmatically so we can run whatever
>>> stats/plots/visualizations/output-formats we want. I didn't see
>>> anything like that browsing through the docs, but it's possible I
>>> may have missed it.
>>
>> Currently std.benchmark does not expose raw results for the sake of
>> simplicity. It's easy to expose such, but I'd need a bit more
>> convincing about their utility.
>
> Custom visualisation, realtime charting/plotting, user supplied reduce
> function?

Hrm, that sounds like an entire new project.

Andrei


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Andrei Alexandrescu

On 9/21/12 5:39 AM, Jacob Carlborg wrote:

On 2012-09-21 06:23, Andrei Alexandrescu wrote:


For a very simple reason: unless the algorithm under benchmark is very
long-running, max is completely useless, and it ruins average as well.


I may have completely misunderstood this but aren't we talking about
what to include in the output of the benchmark? In that case, if you
don't like max and average just don't look at it.


I disagree. I won't include something in my design just so people don't 
look at it most of the time. Min and average are most of the time an 
awful thing to include, and will throw off people with bizarre results.


If it's there, it's worth looking at. Note how all columns are directly 
comparable (I might add, unlike other approaches to benchmarking).



For virtually all benchmarks I've run, the distribution of timings is a
half-Gaussian very concentrated around the minimum. Say you have a
minimum of e.g. 73 us. Then there would be a lot of results close to
that; the mode of the distribution would be very close, e.g. 75 us, and
the more measurements you take, the closer the mode is to the minimum.
Then you have a few timings up to e.g. 90 us. And finally you will
inevitably have a few outliers at some milliseconds. Those are orders of
magnitude larger than anything of interest and are caused by system
interrupts that happened to fall in the middle of the measurement.

Taking those into consideration and computing the average with those
outliers simply brings useless noise into the measurement process.


After your reply to one of Manu's posts, I think I misunderstood the
std.benchmark module. I was thinking more of profiling. But are these
quite similar tasks, couldn't std.benchmark work for both?


This is an interesting idea. It would delay release quite a bit because 
I'd need to design and implement things like performance counters and such.



Andrei


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Jens Mueller
Jacob Carlborg wrote:
> On 2012-09-21 16:37, Jens Mueller wrote:
> 
> >If there are use cases I agree. I do not know one.
> >The question whether there are *tools* that report in case of success is
> >easier to verify. Do you know any tool that does reporting in case of
> >success? I think gtest does not do it. I'm not sure about JUnit.
> >But of course if a unittest has additional information and that is
> >already implemented or easy to implement fine with me. My point is more
> >that for the common cases you do not need this. Maybe in most. Maybe in
> >all.
> 
> Test::Unit, the default testing framework for Ruby on Rails prints a
> dot for each successful test.

That is fine. But you don't need the name of the unittest then.

Jens


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Jens Mueller
Johannes Pfau wrote:
> Am Fri, 21 Sep 2012 16:37:37 +0200
> schrieb Jens Mueller :
> 
> > Jacob Carlborg wrote:
> > > On 2012-09-21 14:19, Jens Mueller wrote:
> > > 
> > > >Why do you need filename and line information of a unittest. If a
> > > >unittest fails you'll get the relevant information. Why do you
> 
> With the recent name mangling change it's possible to get the unittest
> line if a test fails, but only if you have working backtraces. That
> might not be true for other compilers / non x86 architectures.
> 
> To get the filename you have to demangle the unittest function name
> (IIRC core.demangle can't even demangle that name right now) and this
> only gives you the module name (which you could also get using
> moduleinfo though)

I'm saying I do not care which unittest succeeded. All I need is that
all unittests I ran were successfully executed.

> It's also useful for disabled tests, so you can actually look them up.

That may be useful. So you say "these tests were disabled" instead of
just "2 tests were disabled".

> > > >want the information when a unittest succeeded? I only care about
> > > >failed unittests. A count of the number of executed unittests and
> > > >total number is enough, I think.
> 
> The posted example shows everything that can be done, even if it might
> not make sense. However printing successful tests also has a use case:
> 
> 1: It shows the progress of unit testing. (Not so important)
> 2: If code crashes and doesn't produce a backtrace, you still know which
> test crashed as the file name and line number are printed before
> running the test. (might sound improbable. But try porting gdc to a new
> architecture. I don't want to waste time anymore commenting out
> unit tests to find the failing one in a file with dozens of tests and
> an ARM machine that takes ages to run the tests)

Why don't you get a report when the program crashes?

> Another use case is printing all unittests in a library. Or a gui app
> displaying all unittests, allowing to only run single unittests, etc.

Listing on a unittest level and selecting may be useful.

> > > But others might care about other things. I doesn't hurt if the
> > > information is available. There might be use cases when one would
> > > want to display all tests regardless of if they failed or not.
> > 
> > If there are use cases I agree. I do not know one.
> > The question whether there are *tools* that report in case of success
> > is easier to verify. Do you know any tool that does reporting in case
> > of success? I think gtest does not do it. I'm not sure about JUnit.
> 
> I don't know those tools, but I guess they have some sort of progress
> indicator?

They have them at test case level. I'm not sure whether there is a
strict relation between unittest and test case for D. The module level
may be enough.

> But I remember some .NET unit test GUIs that showed a green button for
> successful tests. But it's been years since I've done anything in .NET.
> 
> > But of course if a unittest has additional information and that is
> > already implemented or easy to implement fine with me. My point is
> > more that for the common cases you do not need this. Maybe in most.
> > Maybe in all.
> 
> You usually don't have to print successful tests (although sometimes I
> wasn't sure if tests actually run), but as you can't know at compile
> time which tests fail you either have this information for all tests or
> for none.

But you could just count the number and report it. If it says
"testing std.algorithm with 134 of 134 unittests"
you know all have been executed. What is true is that it won't tell you
which unittests were disabled. But that is easy to find out.

> The main reason _I_ want this is for gdc: We currently don't run the
> unit tests on gdc at all. I know they won't pass on ARM. But the unit
> tests error out on the first failing test. Often that error is a
> difficult to fix backend bug, and lots of simpler library bugs are
> hidden because the other tests aren't executed.

But this is a different problem. You want to keep executing on failure.
You don't need a unittest name for this. Maybe you say skipping a
failing unittest is better and disabling them in the source using
@disable is tedious.

> I'm actually kinda surprised the feedback on this is rather negative. I
> thought running unit tests individually and printing line/file/name was
> requested quite often?

Running unittests individually is very useful. But I'm not so sure about
the latter. I think driving how the unittests are executed is important,
not so much reporting and listing single unittests. But I won't object
if you add this feature and believe it will be used. I'm just saying I
have less use for it. And if the change is simple, it should be unlikely
to introduce any bugs.
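
For what it's worth, a minimal sketch of a runner that keeps executing
after failures, using druntime's Runtime.moduleUnitTester hook (the
reporting format is just illustrative):

import core.runtime : Runtime;
import std.stdio : writefln;

shared static this()
{
    Runtime.moduleUnitTester = function bool()
    {
        size_t failed = 0;
        foreach (m; ModuleInfo)                 // iterate all linked-in modules
        {
            if (auto test = m.unitTest)         // per-module aggregate unittest
            {
                try
                    test();
                catch (Throwable t)
                {
                    ++failed;
                    writefln("FAILED in %s: %s", m.name, t.msg);
                }
            }
        }
        return failed == 0;                     // false means "tests failed"
    };
}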

Jens


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Jonathan M Davis
On Friday, September 21, 2012 15:59:31 Andrei Alexandrescu wrote:
> On 9/19/12 4:11 PM, "Øivind" wrote:
> > New question for you :)
> > 
> > To register benchmarks, the 'scheduleForBenchmarking' mixin inserts a
> > shared static initializer into the module. If I have a module A and a
> > module B, that both depend on each other, then this will probably not
> > work..? The runtime will detect the init cycle and fail with the
> > following error:
> > 
> > "Cycle detected between modules with ctors/dtors"
> > 
> > Or am I wrong now?
> 
> I think you have discovered a major issue. Ideas on how to attack this?

Some of us have been asking for ages for the ability to mark a static 
constructor as not depending on anything so that the runtime _doesn't_ think 
that there's a circular dependency, but Walter has been against the idea when 
it's been brought up. That would _really_ help here.

Without redesigning std.benchmark so that it doesn't use static constructors, 
I don't know how you can fix that. Normally, if you really need a static 
constructor, you go through the pain of creating a separate module which does 
the initialization for you (like std.stdio does). But that won't work in this 
case, because you're mixing it in. So, unless you can redesign it so that 
std.benchmark doesn't require static constructors, it may have to be a 
limitation of std.benchmark that it can't be used where it would create a 
circular dependency.

Unfortunately, the circular dependency issue makes static constructors almost 
useless outside of isolated cases, even though they rarely actually have 
circular dependencies. It's one of the few places in D that I'd say that 
there's a major design flaw.

- Jonathan M Davis


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Dmitry Olshansky

I'd throw in a request to address the following.

Suppose we have a function F and a set of inputs S that are supposedly 
different scenarios we optimize for.
What is interesting is to benchmark all of F(S[i]) as |S| separate 
functions greatly saving on boilerplate (and helping readability).


One way would be to allow passing in an input range of ArgumentTuples to F.

Say as prefix:

void benchmark_f(int a, double b, string s){ ... }

enum benchmark_data_f = [ tuple(1, 2.0, "hi"), tuple(2, 3.0, "bye") ];

Then in the results it'd look as:
f(1, 2.0, "hi")   
f(2, 3.0, "bye")  

Using any input range is interestingly flexible e.g. :

enum benchmark_data_x = cartesianProduct(iota(1, 3), iota(1, 3));
//we should probably have it in std.range somewhere

void benchmark_x(int a, int b){ ... }

That being said, I don't really get the benefit of passing the iteration 
count to the function being benched. Is it to allow it to do an 
initialization step once, then call resumeBenchmark() and run some inner 
loop n times?


--
Dmitry Olshansky


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Dmitry Olshansky

On 21-Sep-12 22:49, David Piepgrass wrote:

After extensive tests with a variety of aggregate functions, I can say
firmly that taking the minimum time is by far the best when it comes
to assessing the speed of a function.




As far as I know, D doesn't offer a sampling profiler, so one might
indeed use a benchmarking library as a (poor) substitute. So I'd want to
be able to set up some benchmarks that operate on realistic data, with
perhaps different data in different runs in order to learn about how the
speed varies with different inputs (if it varies a lot then I might
create more benchmarks to investigate which inputs are processed
quickly, and which slowly.)


Really good profilers are the ones provided by the CPU vendors. See AMD's 
CodeAnalyst or Intel's VTune. They can even count the number of branch 
predictions, cache misses, etc.
It is certainly outside the charter of this module, or for that matter of 
any standard library code.




Some random comments about std.benchmark based on its documentation:

- It is very strange that the documentation of printBenchmarks uses
neither of the words "average" or "minimum", and doesn't say how many
trials are done I suppose the obvious interpretation is that it only
does one trial, but then we wouldn't be having this discussion about
averages and minimums right?


See the algorithm in action here:
https://github.com/D-Programming-Language/phobos/pull/794/files#L2R381

In other words, a function is run 10^n times, with n picked so that the 
total time is big enough to be a trustworthy measurement. The run time is 
then time/10^n.
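
In code, the scheme is roughly this - a simplified sketch using today's
std.datetime.stopwatch, not the actual std.benchmark implementation (see
the pull request above for that):

import std.datetime.stopwatch : StopWatch, AutoStart;
import core.time : Duration, msecs;

// Grow the iteration count by powers of 10 until the total elapsed time
// is long enough to trust relative to the clock resolution, then report
// the per-call time as elapsed / 10^n.
Duration timePerCall(void delegate() fun, Duration minTotal = 10.msecs)
{
    for (ulong n = 1; ; n *= 10)
    {
        auto sw = StopWatch(AutoStart.yes);
        foreach (i; 0 .. n)
            fun();
        sw.stop();
        if (sw.peek >= minTotal)
            return sw.peek / n;
    }
}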


Øivind says tests are run 1000 times...

The above is done 1000 times, picking the minimum as the best. Obviously 
it'd be good for this to be configurable.


 but

it needs to be configurable per-test (my idea: support a _x1000 suffix
in function names, or _for1000ms to run the test for at least 1000
milliseconds; and allow a multiplier when when running a group of
benchmarks, e.g. a multiplier argument of 0.5 means to only run half as
many trials as usual.) Also, it is not clear from the documentation what
the single parameter to each benchmark is (define "iterations count".)




- The "benchmark_relative_" feature looks quite useful. I'm also happy
to see benchmarkSuspend() and benchmarkResume(), though
benchmarkSuspend() seems redundant in most cases: I'd like to just call
one function, say, benchmarkStart() to indicate "setup complete, please
start measuring time now."

- I'm glad that StopWatch can auto-start; but the documentation should
be clearer: does reset() stop the timer or just reset the time to zero?
does stop() followed by start() start from zero or does it keep the time
on the clock? I also think there should be a method that returns the
value of peek() and restarts the timer at the same time (perhaps stop()
and reset() should just return peek()?)


It's the same as the usual stopwatch (as in the real hardware thingy). Thus:
- reset just resets numbers to zeros
- stop just stops counting
- start just starts counting
- peek imitates taking a look at numbers on a device ;)
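
In other words, usage looks roughly like this (a sketch using the current
module path, std.datetime.stopwatch; the comments restate the semantics
described above):

import std.datetime.stopwatch : StopWatch, AutoStart;

void main()
{
    auto sw = StopWatch(AutoStart.yes); // starts counting immediately
    // ... setup we do not want to measure ...
    sw.reset();                          // zero the count, keep running
    // ... measured work ...
    sw.stop();                           // freeze the count
    auto t = sw.peek();                  // read the accumulated time
    sw.start();                          // resume; time keeps accumulating
}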



- After reading the documentation of comparingBenchmark and measureTime,
I have almost no idea what they do.


I think that comparingBenchmark was present in std.datetime and is 
carried over as is.


--
Dmitry Olshansky


Re: [OT] Was: totally satisfied :D

2012-09-21 Thread Paulo Pinto

On Friday, 21 September 2012 at 19:09:48 UTC, H. S. Teoh wrote:
> On Fri, Sep 21, 2012 at 03:54:21PM +0200, Paulo Pinto wrote:
> [...]
>> In big corporations you spend more time taking care of existing
>> projects in big teams, than developing stuff from scratch.
>>
>> In these types of environments you learn to appreciate the verbosity
>> of certain programming languages, and keep away from cute hacks.
>
> I have to say, this is very true. When I first got my current job, I was
> appalled at the verbosity of the C code that I had to work with. C
> code!! Not Java or any of that stuff. My manager told me to try to
> conform to the (very verbose) style of the code. So I thought, well,
> they're paying me to do this, so I'll shut up and cope.
>
> After a few years, I started to like the verbosity (which is saying a
> lot from a person like me -- I used to code with 2-space indents),
> because it makes it so darned easy to read, to search, and to spot
> stupid bugs. Identifier names are predictable, so you could just guess
> the correct name and you'd be right most of the time. Makes it easy to
> search for identifier usage in the ~2 million line codebase, because the
> predictable pattern excludes (almost) all false positives.
>
> However:
>
>> Especially when you take into consideration the quality of work that
>> many programming drones are capable of.
> [...]
>
> Yeah, even the verbosity / consistent style of the code didn't prevent
> people from doing stupid things with the code. Utterly stupid things.
> My favorite example is a particular case of checking for IPv6 subnets by
> converting the subnet and IP address to strings and then using string
> prefix comparison. Another example is a bunch of static functions with
> identical names and identical contents, copy-n-pasted across like 30
> modules (or worse, some copies are imperfect buggy versions). It makes
> you wonder if the guy who wrote it even understands what code
> factorization means. Or "bug fixes" that consist of a whole bunch of
> useless redundant code to "fix" a problem, that adds all sorts of
> spurious buggy corner cases to the code and *doesn't actually address
> the cause of the bug at all*. It boggles the mind how something like
> that made it through code review.
>
> The saddest thing is that people are paying big bucks for this kind of
> "enterprise" code. It's one of those things that make me never want to
> pay for *any* kind of software... why waste the money when you can
> download the OSS version for free? Yeah a lot of OSS code is crap, but
> it's not like it's any worse than the crap you pay for.
>
> Sigh.
>
>
> T

Welcome to my world. As a Fortune 500 outsourcing consulting company
employee, I see this type of code every day.

--
Paulo



Re: Review of Andrei's std.benchmark

2012-09-21 Thread Andrei Alexandrescu

On 9/20/12 3:42 AM, Manu wrote:
> On 19 September 2012 12:38, Peter Alexander
> <peter.alexander...@gmail.com> wrote:
>
>>> The fastest execution time is rarely useful to me, I'm almost
>>> always much more interested in the slowest execution time.
>>> In realtime software, the slowest time is often the only
>>> important factor, everything must be designed to tolerate this
>>> possibility. I can also imagine other situations where multiple
>>> workloads are competing for time, the average time may be more
>>> useful in that case.
>>
>> The problem with slowest is that you end up with the occasional OS
>> hiccup or GC collection which throws the entire benchmark off. I see
>> your point, but unless you can prevent the OS from interrupting, the
>> time would be meaningless.
>
> So then we need to start getting tricky, and choose the slowest one that
> is not beyond an order of magnitude or so outside the average?

That's exactly where it all starts getting unprincipled. Just use the 
minimum.


Just. Use. The. Minimum.

Andrei


Re: LDC blacklisted in Ubuntu

2012-09-21 Thread Joseph Rushton Wakeling

On 20/09/12 19:04, David Nadlinger wrote:

On Thursday, 20 September 2012 at 17:26:25 UTC, Joseph Rushton Wakeling wrote:

Some rather urgent news: LDC has just been blacklisted in Ubuntu.


It is not really news, as the LDC version in the Debian repo has not been
updated for ages.


It's not news that the package is out of date, but it _is_ news that the package 
has been blacklisted, and very unwelcome news, because it could make it much 
more difficult to get an updated package included.



But yes, it would definitely be important to have an LDC
package in as many distribution repos as possible.


I'd add here that you're talking about by far the most widely used distro.


As far as I see, we would at the very least need somebody to maintain the
Debian/Ubuntu packages for this. Unfortunately, nobody on the core dev team uses
Ubuntu for their daily work, or has other experiences with Debian packages.

It would be great if somebody from the D community experienced in packaging
could jump in to help us on this front. We'd be happy to help with any
questions, and I don't think the packaging process should be particularly
difficult (LDC builds fine on Ubuntu, and Arch and Fedora are already shipping
recent versions). The thing is just that creating good packages for a system you
are not intimately familiar with is quite hard, and we are already chronically
lacking manpower anyway.


Isn't it worth someone from the LDC team discussing this with the Ubuntu 
people concerned (e.g. the person who decided to blacklist the package) and 
trying to get their feedback and advice on packaging?  My experience is 
that the Ubuntu team are fairly friendly and helpful.


AFAICS the reason this situation has arisen is because you've got a bug on 
Launchpad that never got communicated as far as the LDC devs.  Opening that 
channel of communication could help prevent something like this happening again.


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Andrei Alexandrescu

On 9/19/12 4:11 PM, "Øivind" wrote:

New question for you :)

To register benchmarks, the 'scheduleForBenchmarking' mixin inserts a
shared static initializer into the module. If I have a module A and a
module B, that both depend on each other, then this will probably not
work..? The runtime will detect the init cycle and fail with the
following error:

"Cycle detected between modules with ctors/dtors"

Or am I wrong now?


I think you have discovered a major issue. Ideas on how to attack this?

Andrei


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Andrei Alexandrescu

On 9/19/12 3:59 PM, Graham Fawcett wrote:

For comparison's sake, the Criterion benchmarking package for Haskell is
worth a look:

http://www.serpentine.com/blog/2009/09/29/criterion-a-new-benchmarking-library-for-haskell/


Criterion accounts for clock-call costs, displays various central
tendencies, reports outliers (and their significance --- whether the
variance is significantly affected by the outliers), etc., etc. It's a
very well conceived benchmarking system, and might well be worth
stealing from.


Will look into it, thanks.

Andrei


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Andrei Alexandrescu

On 9/19/12 3:54 PM, Jacob Carlborg wrote:

On 2012-09-19 11:38, Peter Alexander wrote:


The problem with slowest is that you end up with the occasional OS
hiccup or GC collection which throws the entire benchmark off. I see
your point, but unless you can prevent the OS from interrupting, the
time would be meaningless.


That's why the average is good to have as well.


The occasional hiccup is often orders of magnitude slower than the rest, 
which means it will ruin the average. You may have meant "median", which 
has more merit, but then I'd say why bother - just use the minimum.


Andrei




Re: Review of Andrei's std.benchmark

2012-09-21 Thread Andrei Alexandrescu

On 9/19/12 4:12 AM, Thiez wrote:

On Tuesday, 18 September 2012 at 22:01:30 UTC, Andrei Alexandrescu wrote:

After extensive tests with a variety of aggregate functions, I can say
firmly that taking the minimum time is by far the best when it comes
to assessing the speed of a function.


What if one tries to benchmark a nondeterministic function? In such a
case one might well be interested in the best run, worst run, and the
average.


I agree. Currently std.benchmark is not geared for measuring 
non-deterministic functions.


Andrei


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Andrei Alexandrescu

On 9/19/12 4:06 AM, Peter Alexander wrote:

I don't see why `benchmark` takes (almost) all of its parameters as
template parameters. It looks quite odd, seems unnecessary, and (if I'm
not mistaken) makes certain use cases quite difficult.


That is intentional - indirect calls would add undue overhead to the 
measurements.


Andrei


Re: Reference semantic ranges and algorithms (and std.random)

2012-09-21 Thread Jonathan M Davis
On Friday, September 21, 2012 15:20:49 monarch_dodra wrote:
> #3
> The only thing I'm having an issue with is "save". IMO, it is
> exceptionally dangerous to have a PRNG be a ForwardRange: It
> should only be saved if you have a damn good reason to do so. You
> can still "dup" if you want (manually) (if you think that is
> smart), but I don't think it should answer true to
> "isForwardRange".

It is _very_ crippling to a range to not be a forward range. There's lots of 
stuff that requires it. And I really don't see what the problem is. _Maybe_ 
it's okay for them to be input ranges and not forward ranges, but in general, 
I think that we need to be _very_ careful about doing that. It can be really, 
really annoying when a range is not a forward range.

> You just don't know what an algorithm will do under the hood if
> it finds out the range is saveable. In particular, save can be
> non-trivial and expensive...

It shouldn't be. It's a copy, and copies are not supposed to be expensive. 
We've discussed this before with regards to postblit constructors. Certainly, 
beyond the extra cost of having to new things up in some cases, they should be 
relatively cheap.

> QUESTION:
> If I (were to) deprecate "save", how would that work with the
> range traits type? If a range has "save", but it is deprecated,
> does it answer true to isForwardRange?

You'd have to  test it. It might depend on whether -d is used, but it could 
easily be that it'll be true as long as save exists.

- Jonathan M Davis
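
For reference, a minimal sketch of what the forward-range trait actually keys
off: the presence of a save property returning a copy of the iteration state.
The Counter range below is my own illustration, not anything from std.random:

import std.range;

struct Counter
{
    int n;
    enum bool empty = false;                   // an infinite input range
    @property int front() { return n; }
    void popFront() { ++n; }
    @property Counter save() { return this; }  // a cheap copy of the state
}

static assert(isInputRange!Counter);
static assert(isForwardRange!Counter);         // true only because save exists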


Re: Infer function template parameters

2012-09-21 Thread Jonas Drewsen
On Friday, 21 September 2012 at 15:04:14 UTC, Steven 
Schveighoffer wrote:
On Thu, 20 Sep 2012 15:57:47 -0400, Jonas Drewsen 
 wrote:



In foreach statements the type can be inferred:

foreach (MyFooBar fooBar; fooBars) writeln(fooBar);
same as:
foreach (fooBar; fooBars) writeln(fooBar);

This is nice and tidy.
Wouldn't it make sense to allow the same for function 
templates as well:


auto min(L,R)(L a, R b)
{
return a < b;
}

same as:

auto min(a,b)
{
return a < b;
}

What am I missing (except some code that needs changing because 
only param type and not name has been specified)?


Although I like it, I wonder if it works in D's context free 
grammar.  Timon probably would know best...


I came up with this code, which compiles today:

import std.stdio;
alias int x;

void foo(x) {}


This would not be valid syntax under my proposal, since x is not a 
parameter name (as it should be) but a type name.



void foo2(string x) {writeln(x);}

void main()
{
foo(1);
foo2("hello");
}

Under your proposal, if we shorten foo2 to foo2(x), what 
happens?  Does it become just like foo?  Or does it turn into a 
template?  Or is it an error?


As mentioned in the proposal (albeit not very clearly), it requires 
non-templated function definitions to include both type and param 
names. If only one name is provided in a definition, it is always a 
param name. Unfortunately this is a breaking change for some code, 
and that does speak against the proposal.


Note that just because some syntax isn't valid doesn't mean it 
should be utilized for a valid use.  That can result in code 
compiling and meaning something completely different than you 
expect.


I agree.




Re: Review of Andrei's std.benchmark

2012-09-21 Thread Jonathan M Davis
On Friday, September 21, 2012 17:58:05 Manu wrote:
> Okay, I can buy this distinction in terminology.
> What I'm typically more interested in is profiling. I do occasionally need
> to do some benchmarking by your definition, so I'll find this useful, but
> should there then be another module to provide a 'profiling' API? Also
> worked into this API?

dmd has the -profile flag.

- Jonathan M Davis


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Manu
On 21 September 2012 07:23, Andrei Alexandrescu <
seewebsiteforem...@erdani.org> wrote:

> For a very simple reason: unless the algorithm under benchmark is very
> long-running, max is completely useless, and it ruins average as well.
>

This is only true for systems with a comprehensive pre-emptive OS running
on the same core. Most embedded systems will only be affected by cache
misses and bus contention; in that situation, max is perfectly acceptable.


Re: Infer function template parameters

2012-09-21 Thread Jonas Drewsen
On Friday, 21 September 2012 at 11:40:54 UTC, Jonathan M Davis 
wrote:

On Friday, September 21, 2012 13:14:56 Jonas Drewsen wrote:
Maybe I wasn't clear in my suggestion. The new syntax is simply a
way to define a templated function - not a non-templated one, i.e.:


auto foo(a,b) {}
is exactly the same as
auto foo(A,B)(A a, B b) {}


So all it does is save you a few characters? I don't think that that's even
vaguely worth it. It complicates the language and doesn't add any
functionality whatsoever.


Correct. The same with foreach, where you also just save some 
characters, but it is darn nice anyway.


And when you consider that it then makes it _harder_ to quickly see that a
function is templated, and it potentially makes it easier to accidentally
templatize a function, I think that it's a net loss even without considering
the fact that it complicates the language further. And _with_ considering it,
I think that it's definitely more trouble than it's worth.


Fair enough.

-Jonas




Re: [OT] Was: totally satisfied :D

2012-09-21 Thread H. S. Teoh
On Fri, Sep 21, 2012 at 03:54:21PM +0200, Paulo Pinto wrote:
[...]
> In big corporations you spend more time taking care of existing
> projects in big teams, than developing stuff from scratch.
> 
> In these type of environments you learn to appreciate the verbosity
> of certain programming languages, and keep away from cute hacks.

I have to say, this is very true. When I first got my current job, I was
appalled at the verbosity of the C code that I had to work with. C
code!! Not Java or any of that stuff. My manager told me to try to
conform to the (very verbose) style of the code. So I thought, well
they're paying me to do this, so I'll shut up and cope.

After a few years, I started to like the verbosity (which is saying a
lot from a person like me -- I used to code with 2-space indents),
because it makes it so darned easy to read, to search, and to spot
stupid bugs. Identifier names are predictable, so you could just guess
the correct name and you'd be right most of the time. Makes it easy to
search for identifier usage in the ~2 million line codebase, because the
predictable pattern excludes (almost) all false positives.

However:


> Specially when you take into consideration the quality of work that
> many programming drones are capable of.
[...]

Yeah, even the verbosity / consistent style of the code didn't prevent
people from doing stupid things with the code. Utterly stupid things.
My favorite example is a particular case of checking for IPv6 subnets by
converting the subnet and IP address to strings and then using string
prefix comparison. Another example is a bunch of static functions with
identical names and identical contents, copy-n-pasted across like 30
modules (or worse, some copies are imperfect buggy versions).  It makes
you wonder if the guy who wrote it even understands what code
factorization means. Or "bug fixes" that consists of a whole bunch of
useless redundant code to "fix" a problem, that adds all sorts of
spurious buggy corner cases to the code and *doesn't actually address
the cause of the bug at all*. It boggles the mind how something like
that made it through code review.

The saddest thing is that people are paying big bucks for this kind of
"enterprise" code. It's one of those things that make me never want to
pay for *any* kind of software... why waste the money when you can
download the OSS version for free? Yeah a lot of OSS code is crap, but
it's not like it's any worse than the crap you pay for.

Sigh.


T

-- 
Маленькие детки - маленькие бедки.


Re: Review of Andrei's std.benchmark

2012-09-21 Thread jerro

As far as I know, D doesn't offer a sampling profiler,


It is possible to use a sampling profiler on D executables 
though. I usually use perf on Linux and AMD CodeAnalyst on 
Windows.


Re: Review of Andrei's std.benchmark

2012-09-21 Thread David Piepgrass
After extensive tests with a variety of aggregate functions, I 
can say firmly that taking the minimum time is by far the best 
when it comes to assessing the speed of a function.


Like others, I must also disagree in principle. The minimum sounds 
like a useful metric for functions that (1) do the same amount of 
work in every test and (2) are microbenchmarks, i.e. they measure 
a small and simple task. If the benchmark being measured either 
(1) varies the amount of work each time (e.g. according to some 
approximation of real-world input, which obviously may vary)* or 
(2) measures a large system, then the average and standard 
deviation and even a histogram may be useful (or perhaps some 
indicator whether the runtimes are consistent with a normal 
distribution or not). If the running-time is long then the max 
might be useful (because things like task-switching overhead 
probably do not contribute that much to the total).


* I anticipate that you might respond "so, only test a single 
input per benchmark", but if I've got 1000 inputs that I want to 
try, I really don't want to write 1000 functions nor do I want 
1000 lines of output from the benchmark. An average, standard 
deviation, min and max may be all I need, and if I need more 
detail, then I might break it up into 10 groups of 100 inputs. In 
any case, the minimum runtime is not the desired output when the 
input varies.
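
For illustration, a rough sketch of the kind of summary being asked for; the
helper names are my own, and collecting the raw timing samples is assumed to
happen elsewhere (which is exactly why access to the raw results matters):

import std.algorithm : map, reduce, min, max;
import std.math : sqrt;

struct Summary { double minimum, maximum, mean, stddev; }

Summary summarize(double[] samples)
{
    auto ext  = reduce!(min, max)(samples);            // running minimum and maximum
    auto mean = reduce!"a + b"(0.0, samples) / samples.length;
    auto var  = reduce!"a + b"(0.0, samples.map!(x => (x - mean) * (x - mean)))
                / samples.length;
    return Summary(ext[0], ext[1], mean, sqrt(var));   // population std deviation
}

std.benchmark would only need to hand back the samples array for any of this
to be possible on the user's side.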


It's a little surprising to hear "The purpose of std.benchmark is 
not to estimate real-world time. (That is the purpose of 
profiling)"... Firstly, of COURSE I would want to estimate 
real-world time with some of my benchmarks. For some benchmarks I 
just want to know which of two or three approaches is faster, or 
to get a coarse ball-park sense of performance, but for others I 
really want to know the wall-clock time used for realistic inputs.


Secondly, what D profiler actually helps you answer the question 
"where does the time go in the real-world?"? The D -profile 
switch creates an instrumented executable, which in my experience 
(admittedly not experience with DMD) severely distorts running 
times. I usually prefer sampling-based profiling, where the 
executable is left unchanged and a sampling program interrupts 
the program at random and grabs the call stack, to avoid the 
distortion effect of instrumentation. Of course, instrumentation 
is useful to find out what functions are called the most and 
whether call frequencies are in line with expectations, but I 
wouldn't trust the time measurements that much.


As far as I know, D doesn't offer a sampling profiler, so one 
might indeed use a benchmarking library as a (poor) substitute. 
So I'd want to be able to set up some benchmarks that operate on 
realistic data, with perhaps different data in different runs in 
order to learn about how the speed varies with different inputs 
(if it varies a lot then I might create more benchmarks to 
investigate which inputs are processed quickly, and which slowly.)


Some random comments about std.benchmark based on its 
documentation:


- It is very strange that the documentation of printBenchmarks 
uses neither of the words "average" or "minimum", and doesn't say 
how many trials are done. I suppose the obvious interpretation 
is that it only does one trial, but then we wouldn't be having 
this discussion about averages and minimums, right? Øivind says 
tests are run 1000 times... but it needs to be configurable 
per-test (my idea: support a _x1000 suffix in function names, or 
_for1000ms to run the test for at least 1000 milliseconds; and 
allow a multiplier when running a group of benchmarks, e.g. 
a multiplier argument of 0.5 means to only run half as many 
trials as usual.) Also, it is not clear from the documentation 
what the single parameter to each benchmark is (define 
"iterations count".)


- The "benchmark_relative_" feature looks quite useful. I'm also 
happy to see benchmarkSuspend() and benchmarkResume(), though 
benchmarkSuspend() seems redundant in most cases: I'd like to 
just call one function, say, benchmarkStart() to indicate "setup 
complete, please start measuring time now."


- I'm glad that StopWatch can auto-start; but the documentation 
should be clearer: does reset() stop the timer or just reset the 
time to zero? does stop() followed by start() start from zero or 
does it keep the time on the clock? I also think there should be 
a method that returns the value of peek() and restarts the timer 
at the same time (perhaps stop() and reset() should just return 
peek()?)


- After reading the documentation of comparingBenchmark and 
measureTime, I have almost no idea what they do.




Re: Review of Andrei's std.benchmark

2012-09-21 Thread Jacob Carlborg

On 2012-09-21 19:45, Johannes Pfau wrote:


A perfect use case for user defined attributes ;-)

@benchmark void foo(){}
@benchmark("File read test") void foo(){}


Yes, we need user defined attributes and AST macros ASAP :)

--
/Jacob Carlborg


Re: Infer function template parameters

2012-09-21 Thread jerro
Although I like it, I wonder if it works in D's context free 
grammar.  Timon probably would know best...


I came up with this code, which compiles today:

import std.stdio;
alias int x;

void foo(x) {}

void foo2(string x) {writeln(x);}

void main()
{
foo(1);
foo2("hello");
}

Under your proposal, if we shorten foo2 to foo2(x), what 
happens?  Does it become just like foo?  Or does it turn into a 
template?  Or is it an error?


I don't see any way the proposed syntax could work either. We 
could have this, though:


auto min(auto a, auto b)
{
return a < b;
}

But I don't think this feature is worth changing the language 
anyway.


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Johannes Pfau
Am Fri, 21 Sep 2012 00:45:44 -0400
schrieb Andrei Alexandrescu :
> 
> The issue here is automating the benchmark of a module, which would 
> require some naming convention anyway.

A perfect use case for user defined attributes ;-)

@benchmark void foo(){}
@benchmark("File read test") void foo(){}


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Jacob Carlborg

On 2012-09-21 20:01, Johannes Pfau wrote:


I didn't think of setAssertHandler. My changes are perfectly compatible
with it.
IIRC setAssertHandler has the small downside that it's used for all
asserts, not only those used in unit tests? I'm not sure if that's a
drawback or actually useful.


That's no problem, there's a predefined version, "unittest", when you 
pass the -unittest flag to the compiler:


version (unittest)
setAssertHandler(myUnitTestSpecificAssertHandler);

--
/Jacob Carlborg


Re: GDC Explorer - an online disassembler for D

2012-09-21 Thread Ellery Newcomer

On 09/21/2012 03:04 AM, Jens Mueller wrote:

But it's nice to have source code and assembly side by side.

Jens



And very nice to have demangled names in assembly.


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Jacob Carlborg

On 2012-09-21 18:21, Andrei Alexandrescu wrote:


That's a good angle. Profiling is currently done by the -profile switch,
and there are a couple of library functions associated with it. To my
surprise, that documentation page has not been ported to the dlang.org
style: http://digitalmars.com/ctg/trace.html

I haven't yet thought whether std.benchmark should add more
profiling-related primitives. I'd opine for releasing it without such
for the time being.


If you have an API that is fairly open and provides more of the raw 
results, then one can build a more profiling-like solution on top of 
that. This can later be used to create a specific profiling module if we 
choose to do so.


--
/Jacob Carlborg


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Johannes Pfau
Am Fri, 21 Sep 2012 19:15:13 +0200
schrieb Jacob Carlborg :

> On 2012-09-21 17:32, Johannes Pfau wrote:
> 
> > Well, I think we should just leave the basic unittest runner in
> > druntime unchanged. There are unittests in phobos which depend on
> > that behavior.
> 
> Yeah, this was more a philosophical discussion.
> 
> > Other projects can use a custom test runner like Jens Mueller's
> > dtest.
> >
> > Ignoring assert in a test is not supported by this proposal. It
> > would need much more work and it's probably not a good idea anyway.
> 
> There's core.runtime.setAssertHandler, I hope your changes are 
> compatible with that.

Oh, I totally forgot about that. So the compiler is already calling
into druntime on assert statements, so forget what I just said,
setAssertHandler should work just fine.

> 
> > But when porting gdc it's quite annoying if a unit test fails
> > because of a compiler (codegen) error and you can't see the result
> > of the remaining unit tests. If unit tests are not independent,
> > this could cause some false positives, or crash in the worst case.
> > But as long as this is not the default in druntime I see no reason
> > why we should explicitly prevent it.
> > Again, the default unit test runner in druntime hasn't changed _at
> > all_. This just provides additional possibilities for test runners.
> 
> With core.runtime.exception.setAssertHandler and 
> core.runtime.Runtime.moduleUnitTester I think that's only thing I
> need to run the unit tests the way I want it.
> 

I didn't think of setAssertHandler. My changes are perfectly compatible
with it.
IIRC setAssertHandler has the small downside that it's used for all
asserts, not only those used in unit tests? I'm not sure if that's a
drawback or actually useful.


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Jacob Carlborg

On 2012-09-21 17:32, Johannes Pfau wrote:


Well, I think we should just leave the basic unittest runner in
druntime unchanged. There are unittests in phobos which depend on that
behavior.


Yeah, this was more a philosophical discussion.


Other projects can use a custom test runner like Jens Mueller's dtest.

Ignoring assert in a test is not supported by this proposal. It would
need much more work and it's probably not a good idea anyway.


There's core.runtime.setAssertHandler, I hope your changes are 
compatible with that.



But when porting gdc it's quite annoying if a unit test fails because
of a compiler (codegen) error and you can't see the result of the
remaining unit tests. If unit tests are not independent, this could
cause some false positives, or crash in the worst case. But as long as
this is not the default in druntime I see no reason why we should
explicitly prevent it.
Again, the default unit test runner in druntime hasn't changed _at all_.
This just provides additional possibilities for test runners.


With core.runtime.exception.setAssertHandler and 
core.runtime.Runtime.moduleUnitTester I think that's the only thing I need 
to run the unit tests the way I want.


--
/Jacob Carlborg


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Jacob Carlborg

On 2012-09-21 16:37, Jens Mueller wrote:


If there are use cases I agree. I do not know one.
The question whether there are *tools* that report in case of success is
easier to verify. Do you know any tool that does reporting in case of
success? I think gtest does not do it. I'm not sure about JUnit.
But of course if a unittest has additional information and that is
already implemented or easy to implement fine with me. My point is more
that for the common cases you do not need this. Maybe in most. Maybe in
all.


Test::Unit, the default testing framework for Ruby on Rails prints a dot 
for each successful test.


--
/Jacob Carlborg


GC.malloc problems with the DMD

2012-09-21 Thread Raphael Basso

Hello

I'm porting my connection library for the Firebird database to the D 
language, and I'm having problems with "Access Violation" using 
DMD (I did some testing and this problem does not occur with GDC). 
This code is based on a simple C implementation, and it works OK 
in VC++ and GCC:


[code]
import std.stdio;
import core.memory;
import std.conv;

alias uint ISC_STATUS;

const ISC_STATUS_LENGTH = 20;
alias ISC_STATUS ISC_STATUS_ARRAY[ISC_STATUS_LENGTH];

alias char ISC_SCHAR;
alias ubyte ISC_UCHAR;

alias int ISC_LONG;
alias uint ISC_ULONG;

alias short ISC_SHORT;
alias ushort ISC_USHORT;

struct ISC_QUAD
{
ISC_LONG gds_quad_high;
ISC_ULONG gds_quad_low;
}

struct XSQLVAR
{
ISC_SHORT sqltype;
ISC_SHORT sqlscale;
ISC_SHORT sqlsubtype;
ISC_SHORT sqllen;
ISC_SCHAR * sqldata;
ISC_SHORT * sqlind;
ISC_SHORT sqlname_length;
ISC_SCHAR sqlname[32];
ISC_SHORT relname_length;
ISC_SCHAR relname[32];
ISC_SHORT ownname_length;
ISC_SCHAR ownname[32];
ISC_SHORT aliasname_length;
ISC_SCHAR aliasname[32];
}

const SQLDA_VERSION1 = 1;

struct XSQLDA
{
ISC_SHORT ver;
ISC_SCHAR sqldaid[8];
ISC_LONG  sqldabc;
ISC_SHORT sqln;
ISC_SHORT sqld;
XSQLVAR sqlvar[1];
}

const isc_dpb_version1 = 1;
const isc_dpb_user_name = 28;
const isc_dpb_password = 29;
const isc_dpb_address = 1;

const DSQL_close = 1;
const DSQL_drop = 2;
const DSQL_unprepare = 4;

alias void * FB_API_HANDLE;
alias FB_API_HANDLE isc_db_handle;
alias FB_API_HANDLE isc_blob_handle;
alias FB_API_HANDLE isc_stmt_handle;
alias FB_API_HANDLE isc_tr_handle;

const SQL_TEXT = 452;
const SQL_VARYING = 448;
const SQL_SHORT = 500;
const SQL_LONG = 496;
const SQL_FLOAT = 482;
const SQL_DOUBLE = 480;
const SQL_D_FLOAT = 530;
const SQL_TIMESTAMP = 510;
const SQL_BLOB = 520;
const SQL_ARRAY = 540;
const SQL_QUAD = 550;
const SQL_TYPE_TIME = 560;
const SQL_TYPE_DATE = 570;
const SQL_INT64 = 580;
const SQL_NULL = 32766;

const SQL_DATE = SQL_TIMESTAMP;

const SQL_DIALECT_V5 = 1;
const SQL_DIALECT_V6_TRANSITION = 2;
const SQL_DIALECT_V6 = 3;
const SQL_DIALECT_CURRENT = SQL_DIALECT_V6;

alias int ISC_DATE;
alias uint ISC_TIME;

struct ISC_TIMESTAMP
{
ISC_DATE timestamp_date;
ISC_TIME timestamp_time;
}

struct tm
{
int tm_sec;
int tm_min;
int tm_hour;
int tm_mday;
int tm_mon;
int tm_year;
int tm_wday;
int tm_yday;
int tm_isdst;
}

const isc_segstr_eof = 335544367L;

version(DigitalMars)
{
extern(C)
{
		ISC_STATUS isc_attach_database(ISC_STATUS *, short, const 
ISC_SCHAR *, isc_db_handle *, short, const ISC_SCHAR *);

ISC_STATUS isc_print_status(const ISC_STATUS *);
ISC_STATUS isc_detach_database(ISC_STATUS *, isc_db_handle *);

		ISC_STATUS isc_start_transaction(ISC_STATUS *, isc_tr_handle *, 
short, ...);
		ISC_STATUS isc_commit_transaction(ISC_STATUS *, isc_tr_handle 
*);
		ISC_STATUS isc_rollback_transaction(ISC_STATUS *, isc_tr_handle 
*);


		ISC_STATUS isc_dsql_allocate_statement(ISC_STATUS *, 
isc_db_handle *, isc_stmt_handle *);
		ISC_STATUS isc_dsql_alloc_statement2(ISC_STATUS *, 
isc_db_handle *, isc_stmt_handle *);


		ISC_STATUS isc_dsql_describe(ISC_STATUS *, isc_stmt_handle *, 
ushort, XSQLDA *);
		ISC_STATUS isc_dsql_describe_bind(ISC_STATUS *, isc_stmt_handle 
*, ushort, XSQLDA *);
		ISC_STATUS isc_dsql_execute(ISC_STATUS *, isc_tr_handle *, 
isc_stmt_handle *, ushort, const XSQLDA*);
		ISC_STATUS isc_dsql_execute2(ISC_STATUS *, isc_tr_handle *, 
isc_stmt_handle *, ushort, const XSQLDA *, const XSQLDA *);
		ISC_STATUS isc_dsql_execute_immediate(ISC_STATUS *, 
isc_db_handle *, isc_tr_handle *, ushort, const ISC_SCHAR *, 
ushort, const XSQLDA *);
		ISC_STATUS isc_dsql_fetch(ISC_STATUS *, isc_stmt_handle *, 
ushort, const XSQLDA *);

ISC_STATUS isc_dsql_finish(isc_db_handle *);
		ISC_STATUS isc_dsql_free_statement(ISC_STATUS *, 
isc_stmt_handle *, ushort);
		ISC_STATUS isc_dsql_prepare(ISC_STATUS *, isc_tr_handle *, 
isc_stmt_handle *, ushort, const ISC_SCHAR *, ushort, XSQLDA *);


ISC_LONG fb_interpret(ISC_SCHAR *, uint, const ISC_STATUS **);
		ISC_STATUS isc_open_blob(ISC_STATUS *, isc_db_handle *, 
isc_tr_handle *, isc_blob_handle *, ISC_QUAD *);
		ISC_STATUS isc_open_blob2(ISC_STATUS *, isc_db_handle *, 
isc_tr_handle *, isc_blob_handle *, ISC_QUAD *, ISC_USHORT, const 
ISC_UCHAR *);
		ISC_STATUS isc_get_segment(ISC_STATUS *, isc_blob_handle *, 
ushort *, ushort, ISC_SCHAR *);

ISC_STATUS isc_close_blob(ISC_STATUS *, isc_blob_handle *);

void isc_decode_sql_date(const ISC_DATE *, void *);
void isc_decode_sql_time(const ISC_TIME *, void *);
void isc_decode_timestamp(const ISC_TIMESTAMP *, void *);
}
}
else
{
extern(Windows)
{
		ISC_STATUS is

Re: Review of Andrei's std.benchmark

2012-09-21 Thread Andrei Alexandrescu

On 9/21/12 10:58 AM, Manu wrote:

What I'm typically more interested in is profiling. I do occasionally
need to do some benchmarking by your definition, so I'll find this
useful, but should there then be another module to provide a 'profiling'
API? Also worked into this API?


That's a good angle. Profiling is currently done by the -profile switch, 
and there are a couple of library functions associated with it. To my 
surprise, that documentation page has not been ported to the dlang.org 
style: http://digitalmars.com/ctg/trace.html


I haven't yet thought whether std.benchmark should add more 
profiling-related primitives. I'd opine for releasing it without such 
for the time being.



Thanks,

Andrei


Re: Infer function template parameters

2012-09-21 Thread Steven Schveighoffer
On Thu, 20 Sep 2012 15:57:47 -0400, Jonas Drewsen   
wrote:



In foreach statements the type can be inferred:

foreach (MyFooBar fooBar; fooBars) writeln(fooBar);
same as:
foreach (fooBar; fooBars) writeln(fooBar);

This is nice and tidy.
Wouldn't it make sense to allow the same for function templates as well:

auto min(L,R)(L a, R b)
{
 return a < b;
}

same as:

auto min(a,b)
{
 return a < b;
}

What am I missing (except some code that needs changing because only  
param type and not name has been specified)?


Although I like it, I wonder if it works in D's context free grammar.   
Timon probably would know best...


I came up with this code, which compiles today:

import std.stdio;
alias int x;

void foo(x) {}

void foo2(string x) {writeln(x);}

void main()
{
foo(1);
foo2("hello");
}

Under your proposal, if we shorten foo2 to foo2(x), what happens?  Does it  
become just like foo?  Or does it turn into a template?  Or is it an error?


Note that just because some syntax isn't valid doesn't mean it should be  
utilized for a valid use.  That can result in code compiling and meaning  
something completely different than you expect.


-Steve


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Tobias Pankrath


I'm actually kinda surprised the feedback on this is rather 
negative. I thought running unit tests individually and printing 
line/file/name was requested quite often?


I want to have this. My workflow is: run all tests (run all). If 
some fail, see if there might be a common reason (so don't stop). 
Then run the unit tests that will most likely tell you what's 
wrong in a debugger (run one test individually).




Re: Infer function template parameters

2012-09-21 Thread Peter Alexander

On Thursday, 20 September 2012 at 21:04:15 UTC, Timon Gehr wrote:

On 09/20/2012 10:52 PM, Peter Alexander wrote:
Like it or not, templates still cause a lot of code bloat, 
complicate
linking, cannot be virtual, increase compilation resources, 
and generate
difficult to understand messages. They are a powerful tool, 
but need to

be used wisely.


The proposal does not make wise usage harder. It only makes 
usage more

concise in some cases.


Conciseness encourages use, both wise and unwise.

I don't think that templates should have more concise syntax than 
non-templates. Having shorter syntax suggests that it should be 
the default choice, and it's a bad default choice for most 
functions.





Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Johannes Pfau
Am Fri, 21 Sep 2012 11:25:10 +0200
schrieb Jacob Carlborg :

> On 2012-09-20 23:14, Jonathan M Davis wrote:
> 
> > Running more unittest blocks after a failure is similarly flawed,
> > but at least in that case, you know that had a failure earlier in
> > the module, which should then tell you that you may not be able to
> > trust further tests (but if you still run them, it's at least then
> > potentially possible to fix further failures at the same time -
> > particularly if your tests don't rely on external state). So, while
> > not necessarily a great idea, it's not as bad to run subsequent
> > unittest blocks after a failure (especially if programmers are
> > doing what they're supposed to and making their unit tests
> > independent).
> 
> I don't agree. I think that unittest blocks designed so that 
> they depend on other unittest blocks are equally flawed. There's a 
> reason that most testing frameworks have "setup" and "teardown" 
> functions that are called before and after each test. With these 
> functions you can restore the environment to a known state and have
> the tests keep running.
> 
> On the other hand, if there's a failure in a test, continue running
> that test would be quite bad.
> 

Well, I think we should just leave the basic unittest runner in
druntime unchanged. There are unittests in phobos which depend on that
behavior.

Other projects can use a custom test runner like Jens Mueller's dtest.

Ignoring assert in a test is not supported by this proposal. It would
need much more work and it's probably not a good idea anyway.


But when porting gdc it's quite annoying if a unit test fails because
of a compiler (codegen) error and you can't see the result of the
remaining unit tests. If unit tests are not independent, this could
cause some false positives, or crash in the worst case. But as long as
this is not the default in druntime I see no reason why we should
explicitly prevent it.
Again, the default unit test runner in druntime hasn't changed _at all_.
This just provides additional possibilities for test runners.
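
As an aside, the setup/teardown pattern mentioned in the quote above is easy
enough to emulate today with an ordinary helper; the names below are purely
illustrative, not a proposal for druntime:

void setup()    { /* put the environment into a known state */ }
void teardown() { /* undo whatever setup() changed */ }

void withFixture(void delegate() test)
{
    setup();
    scope (exit) teardown();   // runs even if the test body throws
    test();
}

unittest
{
    withFixture({
        assert(1 + 1 == 2);    // test body runs between setup() and teardown()
    });
}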


Re: Reference semantic ranges and algorithms (and std.random)

2012-09-21 Thread monarch_dodra

On Friday, 21 September 2012 at 13:19:55 UTC, monarch_dodra wrote:

QUESTION:
If I (were to) deprecate "save", how would that work with the 
range traits type? If a range has "save", but it is deprecated, 
does it answer true to isForwardRange?


Never mind, I found out the answer: using something deprecated is 
a compile error, so traits correctly report the change.


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Johannes Pfau
Am Fri, 21 Sep 2012 16:37:37 +0200
schrieb Jens Mueller :

> Jacob Carlborg wrote:
> > On 2012-09-21 14:19, Jens Mueller wrote:
> > 
> > >Why do you need filename and line information of a unittest. If a
> > >unittest fails you'll get the relevant information. Why do you

With the recent name mangling change it's possible to get the unittest
line if a test fails, but only if you have working backtraces. That
might not be true for other compilers / non x86 architectures.

To get the filename you have to demangle the unittest function name
(IIRC core.demangle can't even demangle that name right now) and this
only gives you the module name (which you could also get using
moduleinfo though)

It's also useful for disabled tests, so you can actually look them up.


> > >want the information when a unittest succeeded? I only care about
> > >failed unittests. A count of the number of executed unittests and
> > >total number is enough, I think.

The posted example shows everything that can be done, even if it might
not make sense. However printing successful tests also has a use case:

1: It shows the progress of unit testing. (Not so important)
2: If code crashes and doesn't produce a backtrace, you still know which
test crashed, as the file name and line number are printed before
running the test. (This might sound improbable, but try porting gdc to a new
architecture. I don't want to waste time anymore commenting out
unit tests to find the failing one in a file with dozens of tests and
an ARM machine that takes ages to run the tests)

Another use case is printing all unittests in a library. Or a gui app
displaying all unittests, allowing to only run single unittests, etc.

Of course names are better than filename+line. But names need a change
in the language, and filename+line are a useful identifier as long as we
don't have names.

> > 
> > But others might care about other things. I doesn't hurt if the
> > information is available. There might be use cases when one would
> > want to display all tests regardless of if they failed or not.
> 
> If there are use cases I agree. I do not know one.
> The question whether there are *tools* that report in case of success
> is easier to verify. Do you know any tool that does reporting in case
> success? I think gtest does not do it. I'm not sure about JUnit.

I don't know those tools, but I guess they have some sort of progress
indicator?

But I remember some .NET unit test GUIs that showed a green button for
successful tests. But it's been years since I've done anything in .NET.

> But of course if a unittest has additional information and that is
> already implemented or easy to implement fine with me. My point is
> more that for the common cases you do not need this. Maybe in most.
> Maybe in all.

You usually don't have to print successful tests (although sometimes I
wasn't sure if tests actually run), but as you can't know at compile
time which tests fail you either have this information for all tests or
for none.

The main reason _I_ want this is for gdc: We currently don't run the
unit tests on gdc at all. I know they won't pass on ARM. But the unit
tests error out on the first failing test. Often that error is a
difficult to fix backend bug, and lots of simpler library bugs are
hidden because the other tests aren't executed.

I'm actually kinda surprised the feedback on this is rather negative. I
thought running unit tests individually and printing line/file/name was
requested quite often?


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Manu
On 21 September 2012 07:45, Andrei Alexandrescu <
seewebsiteforem...@erdani.org> wrote:

> As such, you're going to need a far more
>> convincing argument than "It worked well for me."
>>
>
> Sure. I have just detailed the choices made by std.benchmark in a couple
> of posts.
>
> At Facebook we measure using the minimum, and it's working for us.


Facebook isn't exactly 'realtime' software. Obviously, faster is always
better, but it's not in a situation where if you slip a sync point by 1ms
in an off case, it's all over. You can lose 1ms here, and make it up at a
later time, and the result is the same. But again, this feeds back to your
distinction between benchmarking and profiling.

 Otherwise, I think we'll need richer results. At the very least there
>> should be an easy way to get at the raw results programmatically
>> so we can run whatever stats/plots/visualizations/**output-formats we
>> want. I didn't see anything like that browsing through the docs, but
>> it's possible I may have missed it.
>>
>
> Currently std.benchmark does not expose raw results for the sake of
> simplicity. It's easy to expose such, but I'd need a bit more convincing
> about their utility.


Custom visualisation, realtime charting/plotting, user supplied reduce
function?


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Manu
On 21 September 2012 07:30, Andrei Alexandrescu <
seewebsiteforem...@erdani.org> wrote:

> I don't quite agree. This is a domain in which intuition is having a hard
> time, and at least some of the responses come from an intuitive standpoint,
> as opposed from hard data.
>
> For example, there's this opinion that taking the min, max, and average is
> the "fair" thing to do and the most informative.


I don't think this is a 'fair' claim; the situation is that different
people are looking for different statistical information, and you can
distinguish it with whatever terminology you prefer. You are only
addressing a single use case; 'benchmarking', by your definition. I'm more
frequently interested in profiling than 'benchmark'ing, and I think both
are useful to have.

The thing is, the distinction between 'benchmarking' and 'profiling' is
effectively implemented via nothing more than the sampling algorithm; min
vs avg, so is it sensible to expose the distinction in the API in this way?


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread David Piepgrass
However, what's truly insane IMHO is continuing to run a 
unittest block after
it's already had a failure in it. Unless you have exceedingly 
simplistic unit
tests, the failures after the first one mean pretty much 
_nothing_ and simply

clutter the results.


I disagree. Not only are my unit tests independent (so of course 
the test runner should keep running tests after one fails) but 
often I do want to keep running after a failure.


I like the BOOST unit test library's approach, which has two 
types of "assert": BOOST_CHECK and BOOST_REQUIRE. After a 
BOOST_CHECK fails, the test keeps running, but BOOST_REQUIRE 
throws an exception to stop the test. When testing a series of 
inputs in a loop, it is useful (for debugging) to see the 
complete set of which ones succeed and which ones fail. For this 
feature (continuation) to be really useful though, it needs to be 
able to output context information on failure (e.g. "during 
iteration 13 of input group B").


Re: GDC Explorer - an online disassembler for D

2012-09-21 Thread Iain Buclaw
On 21 September 2012 14:49, bearophile  wrote:
>> It seems even this program produces a too much long asm listing for the
>> site:
>>
>> import std.stdio;
>> void main() {
>> writeln("%f", 1.5);
>> }
>
>
> Compiled with:
>
> -O0 -march=native
>
> Bye,
> bearophile


Curse those templates. ;-)

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Manu
On 21 September 2012 07:17, Andrei Alexandrescu <
seewebsiteforem...@erdani.org> wrote:

> On 9/20/12 10:05 AM, Manu wrote:
>
>> Memory locality is often the biggest contributing
>>
> performance hazard in many algorithms, and usually the most
>> unpredictable. I want to know about that in my measurements.
>> Reproducibility is not important to me as accuracy. And I'd rather be
>> conservative(/pessimistic) with the error.
>>
> >
>
>> What guideline would you apply to estimate 'real-world' time spent when
>> always working with hyper-optimistic measurements?
>>
>
> The purpose of std.benchmark is not to estimate real-world time. (That is
> the purpose of profiling.) Instead, benchmarking measures and provides a
> good proxy of that time for purposes of optimizing the algorithm. If work
> is done on improving the minimum time given by the benchmark framework, it
> is reasonable to expect that performance in-situ will also improve.


Okay, I can buy this distinction in terminology.
What I'm typically more interested in is profiling. I do occasionally need
to do some benchmarking by your definition, so I'll find this useful, but
should there then be another module to provide a 'profiling' API? Also
worked into this API?


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Jens Mueller
Jacob Carlborg wrote:
> On 2012-09-21 14:19, Jens Mueller wrote:
> 
> >Why do you need filename and line information of a unittest. If a
> >unittest fails you'll get the relevant information. Why do you want the
> >information when a unittest succeeded? I only care about failed
> >unittests. A count of the number of executed unittests and total number
> >is enough, I think.
> 
> But others might care about other things. It doesn't hurt if the
> information is available. There might be use cases when one would
> want to display all tests regardless of whether they failed or not.

If there are use cases I agree. I do not know one.
The question whether there are *tools* that report in case of success is
easier to verify. Do you know any tool that does reporting in case of
success? I think gtest does not do it. I'm not sure about JUnit.
But of course if a unittest has additional information and that is
already implemented or easy to implement fine with me. My point is more
that for the common cases you do not need this. Maybe in most. Maybe in
all.

Jens


Re: [OT] Was: totally satisfied :D

2012-09-21 Thread Paulo Pinto
On Thursday, 20 September 2012 at 21:15:24 UTC, Nick Sabalausky 
wrote:

On Thu, 20 Sep 2012 08:46:00 -0400
"Steven Schveighoffer"  wrote:

On Wed, 19 Sep 2012 17:05:35 -0400, Nick Sabalausky  
 wrote:


> On Wed, 19 Sep 2012 10:11:50 -0400
> "Steven Schveighoffer"  wrote:
>
>> I cannot argue that Apple's audio volume isn't too 
>> simplistic for
>> its own good.  AIUI, they have two "volumes", one for the 
>> ringer,

>> and one for playing audio, games, videos, etc.
>>
>
> There's also a separate one for alarms/alerts:
> 
http://www.ipodnn.com/articles/12/01/13/user.unaware.that.alarm.going.off.was.his/

This makes sense.  Why would you ever want your alarm clock to
"alarm silently"


I don't carry around my alarm clock everywhere I go.

Aside from that, if it happens to be set wrong, I damn sure 
don't want
it going off in a library, in a meeting, at the front row of a 
show,

etc.


How would you wake up?


By using a real alarm clock?

Besides, we can trivially both have our own ways thanks to the 
simple
invention of "options". Unfortunately, Apple apparently seems 
to think

somebody's got that patented or something.


This is another case of
someone using the wrong tool for the job


Apparently so ;)



I don't know any examples of sounds that disobey the silent 
switch


There is no silent switch. The switch only affects *some* 
sounds, and
I'm not interested in memorizing which ones just so I can try 
to avoid

the others.

The only "silent switch" is the one I use: Just leave the 
fucking thing

in the car.


except for the "find my iPhone" alert,


That's about the only one that actually does make any sense at 
all.


> It's just unbelievably convoluted, over-engineered, and as 
> far from
> "simple" as could possibly be imagined. Basically, you have 
> "volume
> up" and "volume down", but there's so much damn modality 
> (something
> Apple *loves*, but it almost universally bad for UI design) 
> that

> they work pretty much randomly.

I think you exaggerate.  Just a bit.



Not really (and note I said "pretty much randomly" not "truly
randomly").

Try listing out all the different volume rules (that you're 
*aware* of -
who knows what other hidden quirks there might be), all 
together, and I

think you may be surprised just how much complexity there is.

Then compare that to, for example, a walkman or other portable 
music
player (iTouch doesn't count, it's a PDA) which is 100% 
predictable and
trivially simple right from day one. You never even have to 
think about
it, the volume **just works**, period. The fact that the ijunk 
has
various other uses besides music is immaterial: It could have 
been
simple and easy and worked well, and they instead chose to make 
it

complex.

Not only that, but it would have been trivial to just offer an 
*option*
to turn that "smart" junk off. But then allowing a user to 
configure
their own property to their own liking just wouldn't be very 
"Apple",

now would it?

>> BTW, a cool feature I didn't know for a long time is if you 
>> double
>> tap the home button, your audio controls appear on the lock 
>> screen
>> (play/pause, next previous song, and audio volume).  But I 
>> think

>> you have to unlock to access ringer volume.
>>
>
> That's good to know (I didn't know).
>
> Unfortunately, it still only eliminates one, maybe two, 
> swipes from
> an already-complex procedure, that on any sensible device 
> would
> have been one step: Reach down into the pocket to adjust the 
> volume.


Well, for music/video, the volume buttons *do* work in locked 
mode.




More complexity and modality! Great.


>
> How often has anyone ever had a volume POT go bad? I don't 
> think
> I've *ever* even had it happen. It's a solid, 
> well-established

> technology.

I have had several sound systems where the volume knob started
 misbehaving, due to corrosion, dust, whatever.  You can hear 
it
mostly when you turn the knob, and it has a scratchy sound 
coming

from the speakers.



Was that before or after the "three year old" mark?


>
> I don't use a mac, and I never will again. I spent about a 
> year or
> two with OSX last decade and I'll never go back for *any* 
> reason.
> Liked it at first, but the more I used it the more I hated 
> it.


It's a required thing for iOS development :)


Uhh, like I said, it *isn't*. I've *already* built an iOS 
package on my
Win machine (again, using Marmalade, although I'd guess Corona 
and
Unity are likely the same story), which a co-worker has 
*already*

successfully run on his jailbroken iTouches and iPhone.

And the *only* reason they needed to be jailbroken is because we
haven't yet paid Apple's ransom for a signing certificate. Once 
we have
that, I can sign the .ipa right here on Win with Marmalade's 
deployment

tool.

The *only* thing unfortunately missing without a mac is 
submission to

the Big Brother store.


I have recently
experienced the exact opposite.  I love my mac, and I would 
never go

back to Windows.


Not tr

Re: GDC Explorer - an online disassembler for D

2012-09-21 Thread bearophile
It seems even this program produces too long an asm listing 
for the site:


import std.stdio;
void main() {
writeln("%f", 1.5);
}


Compiled with:

-O0 -march=native

Bye,
bearophile


Re: CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-21 Thread Timon Gehr

On 09/21/2012 12:23 PM, Jens Mueller wrote:

Timon Gehr wrote:
...

The issue is that CTFE can only interpret functions that are fully
analyzed and therefore the analysis of foo depends circularly on
itself. The compiler should spit out an error that indicates the
issue.


That is true. I will file such a diagnostics bug with low priority.


You could post an enhancement request to allow interpretation of
incompletely-analyzed functions, if you think it is of any use.


I think it is.
What do you think?



I think if it has an obvious interpretation (and in this case, it even
has an obvious analysis strategy) it should compile.
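
For readers without the original code, a boiled-down reconstruction of the
circularity being discussed (my own example, so the exact diagnostic may
differ from the one in the subject line):

template Value(int x)
{
    enum Value = x;
}

int foo(int n)
{
    // Instantiating Value!(foo(0)) needs foo interpreted at compile time,
    // but foo is still being analyzed at this point -- a circular dependency.
    return Value!(foo(0)) + n;
}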


Re: GDC Explorer - an online disassembler for D

2012-09-21 Thread bearophile

I'd like a way to filter the output to the
disassembly of just one (or a few) functions, because otherwise 
the output risks being too large.


It seems even this program produces too long an asm listing 
for the site:


import std.stdio;
void main() {
writeln("%f", 1.5);
}


Bye,
bearophile


Re: CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-21 Thread Timon Gehr

On 09/21/2012 10:29 AM, deadalnix wrote:

Le 21/09/2012 01:13, Timon Gehr a écrit :

You could post an enhancement request to allow interpretation of
incompletely-analyzed functions, if you think it is of any use.



I predict tricky implementation.


This depends on the existing code base. It is not inherently tricky.


Re: GDC Explorer - an online disassembler for D

2012-09-21 Thread bearophile

Andrei Alexandrescu:

I've met Matt Goldbolt, the author of the GCC Explorer at 
http://gcc.godbolt.org - a very handy online disassembler for 
GCC.


We got to talk a bit about D and he hacked together support for 
D by using gdc. Take a look at http://d.godbolt.org, I think 
it's pretty darn cool! I'm talking to him about integrating his 
work with our servers.


It's a nice idea. I'd like a way to filter the output to the
disassembly of just one (or a few) functions, because otherwise the
output risks being too large.

On my second try I received this, and I don't know why:





Bye,
bearophile


Re: Reference semantic ranges and algorithms (and std.random)

2012-09-21 Thread monarch_dodra
On Thursday, 20 September 2012 at 11:10:43 UTC, Jonathan M Davis 
wrote:

[SNIP]

- Jonathan M Davis


#1
Hey, I've been working on this (locally): I've made all the PRNGs 
reference ranges. It actually works perfectly. I took the "ensure 
initialized" route, as you suggested. I was able to take an 
approach that moved little code, so there is a clean distinction 
between the old "Payload", which was not touched (much), and the 
wrappers. The advantage is that the diff is very clean.


I was able to do this without adding or removing any 
functionality. Great!


Regarding the cost of "is initialized", I think I found a very 
good work around. I'll show it in a pull.


#2
Johannes Pfau: When asking for a generator, by default you get a 
reference prng. This is the _safe_ approach.


If somebody *really* wants to, they can always declare a 
PRNGPayload on the stack, but I advise against that as:

*Passing it by value could generate duplicate sequences
*Passing it by value (for certain PRNGs) can be prohibitively 
expensive.


But the option is there.

#3
The only thing I'm having an issue with is "save". IMO, it is 
exceptionally dangerous to have a PRNG be a ForwardRange: It 
should only be saved if you have a damn good reason to do so. You 
can still "dup" if you want (manually) (if you think that is 
smart), but I don't think it should answer true to 
"isForwardRange".


You just don't know what an algorithm will do under the hood if 
it finds out the range is saveable. In particular, save can be 
non-trivial and expensive...


I think if somebody really wants to be able to access some random 
numbers more than once, they are just as well off doing:

Type[] randomNumbers = myPRNG.take(50).array();

The advantage here is that at no point do you ever risk having 
duplicate numbers. (Take advances the reference range generator.)


QUESTION:
If I (were to) deprecate "save", how would that work with the 
range traits type? If a range has "save", but it is deprecated, 
does it answer true to isForwardRange?
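
For anyone wondering what the hazard actually looks like, a small example
using the existing value-semantics generators in std.random (nothing here
depends on the patched code):

import std.random;

void main()
{
    auto gen  = Mt19937(42);   // a value-type PRNG, seeded
    auto copy = gen;           // an implicit "save" -- this is the hazard

    auto a = uniform(0, 100, gen);
    auto b = uniform(0, 100, copy);
    assert(a == b);            // the copy replays exactly the same sequence
}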




Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Jacob Carlborg

On 2012-09-21 14:19, Jens Mueller wrote:


Why do you need filename and line information of a unittest. If a
unittest fails you'll get the relevant information. Why do you want the
information when a unittest succeeded? I only care about failed
unittests. A count of the number of executed unittests and total number
is enough, I think.


But others might care about other things. It doesn't hurt if the 
information is available. There might be use cases when one would want 
to display all tests regardless of whether they failed or not.


--
/Jacob Carlborg


Re: [OT] Was: totally satisfied :D

2012-09-21 Thread Steven Schveighoffer
On Thu, 20 Sep 2012 17:16:14 -0400, Nick Sabalausky  
 wrote:



On Thu, 20 Sep 2012 08:46:00 -0400
"Steven Schveighoffer"  wrote:


On Wed, 19 Sep 2012 17:05:35 -0400, Nick Sabalausky
 wrote:

> There's also a separate one for alarms/alerts:
>  
http://www.ipodnn.com/articles/12/01/13/user.unaware.that.alarm.going.off.was.his/


This makes sense.  Why would you ever want your alarm clock to
"alarm silently"


I don't carry around my alarm clock everywhere I go.


You don't have to use it as an alarm clock.  An alarm clock is for waking  
you up.  Why would you set it to wake you up in a music performance?



Aside from that, if it happens to be set wrong, I damn sure don't want
it going off in a library, in a meeting, at the front row of a show,
etc.


Can't help you there :)  It's *really* hard to set it wrong (just try it).

Besides, it doesn't sound like that person was using the right tool for  
the job.  If he's awake at that time, he's using it as a reminder, for  
which the reminders app is better suited.





How would you wake up?


By using a real alarm clock?


What if you don't have one?  You are camping, sleeping on the couch at a  
friends house, etc.



Besides, we can trivially both have our own ways thanks to the simple
invention of "options". Unfortunately, Apple apparently seems to think
somebody's got that patented or something.


Huh?  Just don't use it as an alarm clock?  Why do you need an option to  
prevent you from doing that?



I don't know any examples of sounds that disobey the silent switch


There is no silent switch. The switch only affects *some* sounds, and
I'm not interested in memorizing which ones just so I can try to avoid
the others.


s/some/nearly all

Again, I gave you the *two* incidental sounds it doesn't affect.  Sorry  
you can't be bothered to learn them.



The only "silent switch" is the one I use: Just leave the fucking thing
in the car.


That works too, but doesn't warrant rants about how you haven't learned  
how to use the fucking thing :)



> It's just unbelievably convoluted, over-engineered, and as far from
> "simple" as could possibly be imagined. Basically, you have "volume
> up" and "volume down", but there's so much damn modality (something
> Apple *loves*, but it almost universally bad for UI design) that
> they work pretty much randomly.

I think you exaggerate.  Just a bit.



Not really (and note I said "pretty much randomly" not "truly
randomly").

Try listing out all the different volume rules (that you're *aware* of -
who knows what other hidden quirks there might be), all together, and I
think you may be surprised just how much complexity there is.


1. ringer volume affects all sounds except for music/video/games
2. Silent switch will set ringer volume to 0 for all sounds except for  
find-my-iphone and alarm clock
3. If playing a game/video/music, the volume buttons affect that volume,  
otherwise, they affect ringer volume.


Wow, you are right, three whole rules.  That's way more than 1.  I stand  
corrected :)



Then compare that to, for example, a walkman or other portable music
player (iTouch doesn't count, it's a PDA) which is 100% predictable and
trivially simple right from day one. You never even have to think about
it, the volume **just works**, period. The fact that the ijunk has
various other uses besides music is immaterial: It could have been
simple and easy and worked well, and they instead chose to make it
complex.

Not only that, but it would have been trivial to just offer an *option*
to turn that "smart" junk off. But then allowing a user to configure
their own property to their own liking just wouldn't be very "Apple",
now would it?


I detect a possible prejudice against Apple here :)


Well, for music/video, the volume buttons *do* work in locked mode.



More complexity and modality! Great.


This is the one thing I agree with you on -- the volume buttons should  
just work in locked mode, following the rules of when the phone is not  
locked.  I can't envision how the volume buttons would accidentally get  
pressed.



> How often has anyone ever had a volume POT go bad? I don't think
> I've *ever* even had it happen. It's a solid, well-established
> technology.

I have had several sound systems where the volume knob started
misbehaving, due to corrosion, dust, whatever.  You can hear it
mostly when you turn the knob, and it has a scratchy sound coming
from the speakers.



Was that before or after the "three year old" mark?


Not sure.  I don't have any of these things anymore :)  POTs aren't used  
very much any more.




The *only* thing unfortunately missing without a mac is submission to
the Big Brother store.


I have recently
experienced the exact opposite.  I love my mac, and I would never go
back to Windows.


Not trying to "convert" you, just FWIW:

You might like Win7. It's very Mac-like out-of-the-box which is exactly
why I hate it ;)


No, it's nowhere near the same level.  I have Win 7, had it from t

Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Jens Mueller
Johannes Pfau wrote:
> Am Fri, 21 Sep 2012 11:11:49 +0200
> schrieb Jacob Carlborg :
> 
> > On 2012-09-20 21:11, Johannes Pfau wrote:
> > 
> > > Oh right, I thought that interface was more restrictive. So the only
> > > changes necessary in druntime are to adapt to the new compiler
> > > interface.
> > >
> > > The new dmd code is still necessary, as it allows to access
> > > all unittests of a module individually. The current code only
> > > provides one function for all unittests in a module.
> > 
> > Yes, exactly. There was some additional data, like file and module
> > name, in your compiler changes as well?
> > 
> 
> The modulename can already be obtained from the moduleinfo. My proposal
> adds fileName and line information for every unittest. It's also
> prepared for unittest names: The name information is passed to the
> runtime, but currently it's always an empty string.

Why do you need filename and line information for a unittest? If a
unittest fails you'll get the relevant information. Why do you want the
information when a unittest succeeds? I only care about failed
unittests. A count of the number of executed unittests and the total
number is enough, I think.

Jens


Re: Infer function template parameters

2012-09-21 Thread Jonathan M Davis
On Friday, September 21, 2012 13:14:56 Jonas Drewsen wrote:
> Maybe I wasn't clear in my suggestion. The new syntax is simply a
> way to define a templated function - not a non-templated one, i.e.:
> 
> auto foo(a,b) {}
> is exactly the same as
> auto foo(A,B)(A a, B b) {}

So all it does is save you a few characters? I don't think that that's even 
vaguely worth it. It complicates the language and doesn't add any 
functionality whatsoever.

And when you consider that it then makes it _harder_ to quickly see that a 
function is templated, and it potentially makes it easier to accidentally 
templatize a function, I think that it's a net loss even without considering 
the fact that it complicates the language further. And _with_ considering it, 
I think that it's definitely more trouble than it's worth.

- Jonathan M Davis


Re: Infer function template parameters

2012-09-21 Thread deadalnix
I'll add that delegate literals already allow a similar syntax, so, for 
consistency reasons, this is something that makes sense.
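
A small sketch of that similarity (my own illustration, not part of the
proposal): a function literal with untyped parameters is already legal
today, and the parameter types get filled in when the literal is
instantiated, here by sort. The proposal essentially asks for the same
thing at function-declaration level.

import std.algorithm : sort;

unittest
{
    auto arr = [3, 1, 2];
    // 'a' and 'b' carry no declared types; they are inferred when the
    // literal is instantiated -- much like the proposed auto foo(a, b).
    sort!((a, b) => a < b)(arr);
    assert(arr == [1, 2, 3]);
}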


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Johannes Pfau
On Fri, 21 Sep 2012 11:11:49 +0200,
Jacob Carlborg wrote:

> On 2012-09-20 21:11, Johannes Pfau wrote:
> 
> > Oh right, I thought that interface was more restrictive. So the only
> > changes necessary in druntime are to adapt to the new compiler
> > interface.
> >
> > The new dmd code is still necessary, as it allows to access
> > all unittests of a module individually. The current code only
> > provides one function for all unittests in a module.
> 
> Yes, exactly. There was some additional data, like file and module
> name, in your compiler changes as well?
> 

The modulename can already be obtained from the moduleinfo. My proposal
adds fileName and line information for every unittest. It's also
prepared for unittest names: The name information is passed to the
runtime, but currently it's always an empty string.

It could also allow to mark unittests as @disable, if we wanted that.

Here's the dmd pull request
https://github.com/D-Programming-Language/dmd/pull/1131

For user code, druntime changes are more interesting:
https://github.com/D-Programming-Language/druntime/issues/308

Custom test runner: http://dpaste.dzfl.pl/046ed6fb
Sample unittests: http://dpaste.dzfl.pl/517b1088
Output: http://dpaste.dzfl.pl/2780939b
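
For those who don't want to follow the paste links, here is a rough
sketch of a custom runner written against the hook druntime already
exposes today (one unitTest function per module). It is my own
illustration, not the linked code; the proposal above would additionally
hand the runner per-unittest name/file/line data.

import core.runtime : Runtime;
import std.stdio : writeln;

bool customModuleUnitTester()
{
    size_t failed = 0;
    foreach (m; ModuleInfo)          // iterate over all modules
    {
        if (m is null)
            continue;
        if (auto test = m.unitTest)  // one function per module today
        {
            try
            {
                test();
            }
            catch (Throwable t)
            {
                writeln("FAILED: ", m.name, ": ", t.msg);
                ++failed;
            }
        }
    }
    return failed == 0;  // false reports failure and skips main()
}

shared static this()
{
    Runtime.moduleUnitTester = &customModuleUnitTester;
}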


Re: Infer function template parameters

2012-09-21 Thread Jonas Drewsen
On Thursday, 20 September 2012 at 21:39:31 UTC, Jonathan M Davis 
wrote:

On Thursday, September 20, 2012 21:57:47 Jonas Drewsen wrote:

In foreach statements the type can be inferred:

foreach (MyFooBar fooBar; fooBars) writeln(fooBar);
same as:
foreach (fooBar; fooBars) writeln(fooBar);

This is nice and tidy.
Wouldn't it make sense to allow the same for function templates
as well:

auto min(L,R)(L a, R b)
{
return a < b ? a : b;
}

same as:

auto min(a,b)
{
return a < b ? a : b;
}

What am I missing (except some code that needs changing because
only param type and not name has been specified in t?


You don't want everything templated. Templated functions are 
fundamentally
different. They don't exist until they're instantiated, and 
they're only
instantiated because you call them. Sometimes, you want 
functions to always
exist regardless of whether any of your code is calling them 
(particularly

when dealing with libraries).


I agree. And in that case just use a non-templated version that 
specifies the types as always.


Another result of all of this is that templated functions can't 
be virtual, so
your proposal would be a _huge_ problem for classes. Not to 
mention, errors
with templated functions tend to be much nastier than with 
non-templated

functions even if it's not as bad as C++.


I don't see how the terser syntax for templated functions has 
anything to do with this. The things you mention are simply facts 
about templated functions and nothing special for the suggested 
syntax.



Also, your proposal then means that
we'd end up with templated functions without template constraints 
as a pretty
normal thing, which would mean that such functions would 
frequently get called
with types that don't work with them. To fix that, you'd have 
to add template
constraints to such functions, which would be even more verbose 
than just

giving the types like we do now.


By looking at the two examples I provided, both the existing 
syntax and the new one suffer from that. The new one is just 
nicer on the eyes, I think.


You really need to be able to control when something is 
templated or not. And
your proposal is basically just a terser template syntax. Is it 
really all

that more verbose to do

auto min(L, R)(L a, R b) {...}

rather than

auto min(a, b) {...}


Some people would love to be able to use D as a scripting 
language using e.g. rdmd. This is the kind of thing that would 
make it very attractive for scripting.


I am _not_ suggesting to replace the existing syntax since that 
really should be used for things like phobos where everything 
must be checked by the type system as much as possible upfront. 
But for many programs (especially in the prototyping/exploratory 
phases) the same kind of thoroughness is not within the resource 
limits.


That is probably why many use dynamically typed languages like 
python/ruby for prototyping and first editions and end up 
sticking with those languages in the end. D has already taken 
great steps in that direction and this is just a suggestion to 
make it even more attractive.


And even if we added your syntax, we'd still need the current 
syntax, because
you need to be able to indicate which types go with which 
parameters even if it's

just to say that two parameters have the same type.


As mentioned before, this suggestion is an addition, not a 
replacement.


Also, what happens if you put types on some parameters but not 
others? Are
those parameters given templated types? If so, a simple typo 
could silently
turn your function into a templated function without you 
realizing it.


Maybe I wasn't clear in my suggestion. The new syntax is simply a 
way to define a templated function - not a non-templated one, i.e.:


auto foo(a,b) {}
is exactly the same as
auto foo(A,B)(A a, B b) {}

The semantics of what should happen if one of the parameters had 
its type provided is up for discussion. But I think that it 
should be allowed and just lock that template parameter to that 
type. This would not change it from templated to non-templated in 
any case afaik.
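
(For illustration only -- this is my reading of the proposal, not part
of it: a partially typed auto foo(int a, b) would presumably lower to
what we can already write today, with only the untyped parameter
templated.)

auto foo(B)(int a, B b)
{
    return a + b;
}

unittest
{
    assert(foo(1, 2) == 3);      // B inferred as int
    assert(foo(1, 2.5) == 3.5);  // B inferred as double
}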


Then there's function overloading. If you wanted to overload a 
function in
your proposal, you'd have to either still give the types or use 
template
constraints, meaning that it can't be used with overloaded 
functions.


Yes. As with the existing template syntax.

Another thing to consider is that in languages like Haskell 
where all
parameter types are inferred, it's often considered good 
practice to give the
types anyway (assuming that the language lets you - Haskell 
does), because the
functions are then not only easier to understand, but the error 
messages are

more sane.


And that good practice is applied to stable code, or when you are sure 
about what you are doing from the start. I do not think it is 
uncommon to try out a solution and refactor and iterate until 
things get nice and tidy. This is definitely not the only way to 
work, but for some problem domains (especially the ones you are 
not well vers

Re: GDC Explorer - an online disassembler for D

2012-09-21 Thread Jens Mueller
Iain Buclaw wrote:
> On 21 September 2012 11:29, Iain Buclaw  wrote:
> > On 21 September 2012 11:17, Bernard Helyer  wrote:
> >> On Friday, 21 September 2012 at 10:04:00 UTC, Jens Mueller wrote:
> >>>
> >>> Andrei Alexandrescu wrote:
> 
>  I've met Matt Godbolt, the author of the GCC Explorer at
>  http://gcc.godbolt.org - a very handy online disassembler for GCC.
> >>>
> >>>
> >>> This is not a disassembler. It just stops compilation before the
> >>> assembler (gcc -S). A disassembler would create the assembler code given
> >>> only the machine code.
> >>
> >>
> >> You are both correct and incredibly pedantic. :P
> >
> > Half correct and incredibly pedantic. :-)
> >
> > There's two modes.  One is assembler output, the other is objdump
> > output (which is a disassembler).
> >
> 
> And if it doesn't then I must be incredibly confused at this hour in
> the morning (yawns).

How do I use the objdump mode in the web interface?

Jens


Re: GDC Explorer - an online disassembler for D

2012-09-21 Thread Iain Buclaw
On 21 September 2012 11:29, Iain Buclaw  wrote:
> On 21 September 2012 11:17, Bernard Helyer  wrote:
>> On Friday, 21 September 2012 at 10:04:00 UTC, Jens Mueller wrote:
>>>
>>> Andrei Alexandrescu wrote:

 I've met Matt Godbolt, the author of the GCC Explorer at
 http://gcc.godbolt.org - a very handy online disassembler for GCC.
>>>
>>>
>>> This is not a disassembler. It just stops compilation before the
>>> assembler (gcc -S). A disassembler would create the assembler code given
>>> only the machine code.
>>
>>
>> You are both correct and incredibly pedantic. :P
>
> Half correct and incredibly pedantic. :-)
>
> There's two modes.  One is assembler output, the other is objdump
> output (which is a disassembler).
>

And if it doesn't then I must be incredibly confused at this hour in
the morning (yawns).


-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: GDC Explorer - an online disassembler for D

2012-09-21 Thread Iain Buclaw
On 21 September 2012 11:17, Bernard Helyer  wrote:
> On Friday, 21 September 2012 at 10:04:00 UTC, Jens Mueller wrote:
>>
>> Andrei Alexandrescu wrote:
>>>
>>> I've met Matt Godbolt, the author of the GCC Explorer at
>>> http://gcc.godbolt.org - a very handy online disassembler for GCC.
>>
>>
>> This is not a disassembler. It just stops compilation before the
>> assembler (gcc -S). A disassembler would create the assembler code given
>> only the machine code.
>
>
> You are both correct and incredibly pedantic. :P

Half correct and incredibly pedantic. :-)

There's two modes.  One is assembler output, the other is objdump
output (which is a disassembler).

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-21 Thread Jens Mueller
Timon Gehr wrote:
> On 09/20/2012 11:22 PM, Jens Mueller wrote:
> >Hi,
> >
> >I do not understand the following error message given the code:
> >
> >string foo(string f)
> >{
> > if (f == "somestring")
> > {
> > return "got somestring";
> > }
> > return bar!(foo("somestring"));
> >}
> >
> >template bar(string s)
> >{
> > enum bar = s;
> >}
> >
> >With dmd v2.060 I get:
> >test.d(7):called from here: foo("somestring")
> >test.d(7):called from here: foo("somestring")
> >test.d(7):called from here: foo("somestring")
> >test.d(7): Error: expression foo("somestring") is not a valid template value 
> >argument
> >test.d(12):called from here: foo("somestring")
> >test.d(12):called from here: foo("somestring")
> >test.d(7): Error: template instance test.bar!(foo("somestring")) error 
> >instantiating
> >
> >In line 7 I call the template bar. But I call it with the string that is
> >returned by the CTFE of foo("somestring"), which should return "got
> >somestring", but instead it seems that an expression is passed. How do I
> >force the evaluation of foo("somestring")?
> >I haven't found an existing bug report on this.
> >
> >Jens
> >
> 
> You can file a diagnostics-bug.
> 
> The issue is that CTFE can only interpret functions that are fully
> analyzed and therefore the analysis of foo depends circularly on
> itself. The compiler should spit out an error that indicates the
> issue.

That is true. I will file such a diagnostics bug with low priority.

> You could post an enhancement request to allow interpretation of
> incompletely-analyzed functions, if you think it is of any use.

I think it is.
What do you think?

Jens


Re: CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-21 Thread Jonathan M Davis
On Friday, September 21, 2012 10:44:11 Jens Mueller wrote:
> Is it also illegal to do
> 
> int foo(char[] s) {
>   if (__ctfe)
> return mixin(s);
>   else
> return 0; // or assert(false)
> }
> 
> ?
> 
> Because this is executable at run time.

It's not executable at runtime. The __ctfe branch may very well be optimized 
out when compiled for runtime, but that doesn't mean that the code in it can 
be invalid. The function must be fully compiled in either case. Removing the 
__ctfe branch would merely be an optimization. It's an if after all, not a 
static if (and __ctfe can't be used in static if).
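
To make the distinction concrete, here is a minimal valid use of __ctfe
(my own example, not from the thread): both branches are ordinary code
that compiles for run time; CTFE merely takes the first one.

string greeting()
{
    if (__ctfe)
        return "computed at compile time";
    else
        return "computed at run time";
}

enum ctMessage = greeting();  // CTFE takes the __ctfe branch
// calling greeting() at run time returns the second string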

> > Normal recursion avoids this, because it only depends on the function's
> > signature, but what you're doing requires that the function be _run_ as
> > part of the process of defining it. That's an unbreakable circular
> > dependency and will never work. You need to redesign your code so that
> > you don't require a function to call itself while it's being defined.
> > Being called at compile time is fine, but being called while it's being
> > compiled is not.
> 
> But if the function wasn't compiled but interpreted at compile time it
> would be possible, wouldn't it?

D isn't an interpreted language, and I expect that trying to treat it as such 
would complicate things considerably. All CTFE does is make it possible to run 
functions at compile time in order to initialize stuff which must be known at 
compile time (which does include the possibility of doing stuff like passing 
function results as template arguments). It doesn't fundamentally change how 
things work. If anything, it's _more_ restrictive, not less. So, I don't think 
that it makes any sense at all to try and make it act in an interpretive 
fashion.

- Jonathan M Davis


Re: CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-21 Thread Jens Mueller
Jens Mueller wrote:
> Jonathan M Davis wrote:
> > On Friday, September 21, 2012 00:11:51 Jens Mueller wrote:
> > > I thought foo is interpreted at compile time.
> > > There seems to be a subtle difference I'm not getting.
> > > Because you can do the factorial using CTFE even though you have
> > > recursion. I.e. there you have a call to the function itself. I.e. it
> > > can be compiled because you just insert a call to the function. But for
> > > a template you cannot issue something like call for instantiation.
> > > Have to think more about it. But your answer helps a lot. Pushes me in
> > > the right direction.
> > 
> > Okay. Straight up recursion works. So, with this code
> > 
> > int func(int value)
> > {
> >  if(value < 10)
> >  return func(value + 1);
> >  return value;
> > }
> > 
> > enum var = func(5);
> > 
> > var would be 10. The problem is that you're trying to pass the result of a 
> > recursive call as a template argument. As far as a function's behavior 
> > goes, 
> > it's identical regardless of whether it's run at compile time or runtime 
> > (save 
> > that __ctfe is true at compile time but not runtime). To quote the docs:
> > 
> > --
> > Any functions that execute at compile time must also be executable at
> > run time. The compile time evaluation of a function does the equivalent
> > of running the function at run time. This means that the semantics of a
> > function cannot depend on compile time values of the function. For
> > example:
> > 
> > int foo(char[] s) {
> >  return mixin(s);
> > }
> > 
> > const int x = foo("1");
> > 
> > is illegal, because the runtime code for foo() cannot be generated. A
> > function template would be the appropriate method to implement this
> > sort of thing.
> > --
> 
> Is it also illegal to do
> 
> int foo(char[] s) {
>   if (__ctfe)
> return mixin(s);
>   else
> return 0; // or assert(false)
> }
> 
> ?
> 
> Because this is executable at run time.

Just read the docs again. __ctfe is used to exclude code from running at
runtime. It seems it really is a variable that is false at runtime. I
thought of it more as a value known at compile time, but then you would
write static if anyway.

Somehow I find these restrictions unnecessary. I believe they can be
solved. Then you could combine CTFE with all other compile-time
mechanisms. But this needs more time to think about. Currently I will
try to work around this. Let's see ...

Jens


Re: GDC Explorer - an online disassembler for D

2012-09-21 Thread Bernard Helyer

On Friday, 21 September 2012 at 10:04:00 UTC, Jens Mueller wrote:

Andrei Alexandrescu wrote:

I've met Matt Godbolt, the author of the GCC Explorer at
http://gcc.godbolt.org - a very handy online disassembler for 
GCC.


This is not a disassembler. It just stops compilation before the
assembler (gcc -S). A disassembler would create the assembler code given
only the machine code.


You are both correct and incredibly pedantic. :P


Re: GDC Explorer - an online disassembler for D

2012-09-21 Thread Jens Mueller
Andrei Alexandrescu wrote:
> I've met Matt Godbolt, the author of the GCC Explorer at
> http://gcc.godbolt.org - a very handy online disassembler for GCC.

This is not a disassembler. It just stops compilation before the
assembler (gcc -S). A disassembler would create the assembler code given
only the machine code.
But it's nice to have source code and assembly side by side.

Jens


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Tove
On Friday, 21 September 2012 at 04:44:58 UTC, Andrei Alexandrescu 
wrote:

Andrei Alexandrescu  wrote:
My claim is unremarkable. All I'm saying is the minimum running 
time of an algorithm on a given input is a stable and 
indicative proxy for the behavior of the algorithm in general. 
So it's a good target for optimization.


I reached the same conclusion and use the same method at work.

Considering min will converge towards a stable value quite 
quickly... would it not be a reasonable default to auto detect 
when the min is stable with some degree of statistical 
certainty...?
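
Something along these lines, perhaps (a hypothetical helper of my own,
not std.benchmark's API): keep re-running the function and stop once the
observed minimum has not improved for a fixed number of consecutive runs.

import std.datetime : AutoStart, StopWatch;

long stableMinUsecs(void delegate() fun, size_t patience = 50)
{
    long best = long.max;
    size_t sinceImprovement = 0;
    while (sinceImprovement < patience)
    {
        auto sw = StopWatch(AutoStart.yes);
        fun();
        sw.stop();
        immutable t = sw.peek().usecs;
        if (t < best)
        {
            best = t;            // new minimum: reset the counter
            sinceImprovement = 0;
        }
        else
            ++sinceImprovement;  // no improvement this run
    }
    return best;
}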




Re: GDC Explorer - an online disassembler for D

2012-09-21 Thread Jacob Carlborg

On 2012-09-21 05:47, Andrei Alexandrescu wrote:

I've met Matt Godbolt, the author of the GCC Explorer at
http://gcc.godbolt.org - a very handy online disassembler for GCC.

We got to talk a bit about D and he hacked together support for D by
using gdc. Take a look at http://d.godbolt.org, I think it's pretty darn
cool! I'm talking to him about integrating his work with our servers.


That's pretty cool.

--
/Jacob Carlborg


Re: Review of Andrei's std.benchmark

2012-09-21 Thread Jacob Carlborg

On 2012-09-21 06:23, Andrei Alexandrescu wrote:


For a very simple reason: unless the algorithm under benchmark is very
long-running, max is completely useless, and it ruins average as well.


I may have completely misunderstood this, but aren't we talking about 
what to include in the output of the benchmark? In that case, if you 
don't like max and average, just don't look at them.



For virtually all benchmarks I've run, the distribution of timings is a
half-Gaussian very concentrated around the minimum. Say you have a
minimum of e.g. 73 us. Then there would be a lot of results close to
that; the mode of the distribution would be very close, e.g. 75 us, and
the more measurements you take, the closer the mode is to the minimum.
Then you have a few timings up to e.g. 90 us. And finally you will
inevitably have a few outliers at some milliseconds. Those are orders of
magnitude larger than anything of interest and are caused by system
interrupts that happened to fall in the middle of the measurement.

Taking those into consideration and computing the average with those
outliers simply brings useless noise into the measurement process.


After your reply to one of Manu's posts, I think I misunderstood the 
std.benchmark module. I was thinking more of profiling. But aren't these 
quite similar tasks? Couldn't std.benchmark work for both?


--
/Jacob Carlborg


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Jacob Carlborg

On 2012-09-20 23:14, Jonathan M Davis wrote:


Running more unittest blocks after a failure is similarly flawed, but at least
in that case, you know that you had a failure earlier in the module, which should
then tell you that you may not be able to trust further tests (but if you
still run them, it's at least then potentially possible to fix further failures
at the same time - particularly if your tests don't rely on external state).
So, while not necessarily a great idea, it's not as bad to run subsequent
unittest blocks after a failure (especially if programmers are doing what
they're supposed to and making their unit tests independent).


I don't agree. I think unittest blocks designed so that they depend on 
other unittest blocks are equally flawed. There's a reason most testing 
frameworks have "setup" and "teardown" functions that are called before 
and after each test. With these functions you can restore the 
environment to a known state and have the tests keep running.
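
A hand-rolled sketch of that pattern in plain unittest blocks (the names
setup/teardown/sharedState are mine, not any framework's): each block
restores a known state first, so a failure in one block cannot poison
the next.

int[] sharedState;

void setup()    { sharedState = [1, 2, 3]; }
void teardown() { sharedState = null; }

unittest
{
    setup();
    scope(exit) teardown();
    sharedState ~= 4;
    assert(sharedState == [1, 2, 3, 4]);
}

unittest
{
    setup();
    scope(exit) teardown();
    // unaffected by whatever the previous block did
    assert(sharedState == [1, 2, 3]);
}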


On the other hand, if there's a failure in a test, continuing to run that 
test would be quite bad.


--
/Jacob Carlborg


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Jacob Carlborg

On 2012-09-20 21:11, Johannes Pfau wrote:


Oh right, I thought that interface was more restrictive. So the only
changes necessary in druntime are to adapt to the new compiler
interface.

The new dmd code is still necessary, as it allows to access
all unittests of a module individually. The current code only
provides one function for all unittests in a module.


Yes, exactly. There was some additional data, like file and module 
name, in your compiler changes as well?


--
/Jacob Carlborg


Re: From APL

2012-09-21 Thread Simen Kjaeraas
On Thu, 20 Sep 2012 13:55:48 +0200, bearophile   
wrote:


Section 2.3 is about Scan operations, that are like reduce or fold, but  
keep all the intermediate results too:


+\ of 3 1 2 4

is 3 4 6 10

Some lazy scans are present in the Haskell Prelude too (and in  
Mathematica) (the Prelude contains functions and constants that are  
loaded by default):

http://zvon.org/other/haskell/Outputprelude/scanl_f.html

I think scans are not present in Phobos. Maybe one or two are worth  
adding.


See attached. Mayhaps I should post this as a pull request?
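
For reference, a minimal eager variant of the idea (my own sketch; I
have not seen the attached scan.d, which is presumably lazy): like
reduce, but every intermediate result is kept.

T[] scan(alias fun, T)(T[] input)
{
    if (input.length == 0)
        return null;
    auto result = new T[input.length];
    result[0] = input[0];
    foreach (i; 1 .. input.length)
        result[i] = fun(result[i - 1], input[i]);
    return result;
}

unittest
{
    // +\ of 3 1 2 4  ==>  3 4 6 10
    assert(scan!((a, b) => a + b)([3, 1, 2, 4]) == [3, 4, 6, 10]);
}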



--

Section 2.5 suggests to generalize the dot product:

If the + and * of the Fortran 77 DO loops for inner product are  
replaced by other functions, a whole family of interesting inner  
products appear,<


Some examples of usage (in J language):

Associative search   *./..=y
Inverted associative x+./..~:y
Minima of residues for primesx<./..|y
Transitive closure step on Booleans  y+./..*.y
Minima of maxima x<./..>.y

Maybe a higher order function named "dot" (that takes two callables) is  
worth adding to Phobos. But I am not sure.


So that's basically mapReduce, right? We've got that, just not as succinct.


--
Simen

scan.d


Re: Extending unittests [proposal] [Proof Of Concept]

2012-09-21 Thread Johannes Pfau
On Thu, 20 Sep 2012 22:41:23 +0400,
Dmitry Olshansky wrote:

> On 20-Sep-12 22:18, bearophile wrote:
> > Johannes Pfau:
> >
> >> The perfect solution:
> >> Would allow user defined attributes on tests, so you could name
> >> them, assign categories, etc. But till we have those user defined
> >> attributes, this seems to be a good solution.
> >
> > We have @disable, maybe it's usable for unittests too :-)
> >
> We have version(none)
> 

Actually @disable is better. version(none) completely ignores the test.
But @disable could set a disabled bool in the UnitTest struct (and set
the function pointer to null). This way you can easily get all disabled
unittests:

./unittest --show-disabled
=== core.thread 
src/core/thread.d:1761
...

This can be implemented with 3 lines of additional code. The real
question is if it's ok to reuse @disable for this.
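
Purely as a sketch of what that per-test descriptor might look like (my
guess, not the actual pull request): a disabled flag next to the
metadata the proposal already hands to the runtime, which a runner could
filter on for --show-disabled.

import std.stdio : writefln;

struct UnitTest
{
    string name;               // still empty today, reserved for named tests
    string fileName;
    uint   line;
    bool   disabled;
    void function() testFunc;  // null when the test is disabled
}

void showDisabled(const(UnitTest)[] tests)
{
    foreach (t; tests)
        if (t.disabled)
            writefln("%s:%s", t.fileName, t.line);
}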


Re: CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-21 Thread Jens Mueller
Jonathan M Davis wrote:
> On Friday, September 21, 2012 00:11:51 Jens Mueller wrote:
> > I thought foo is interpreted at compile time.
> > There seems to be a subtle difference I'm not getting.
> > Because you can do the factorial using CTFE even though you have
> > recursion. I.e. there you have a call to the function itself. I.e. it
> > can be compiled because you just insert a call to the function. But for
> > a template you cannot issue something like call for instantiation.
> > Have to think more about it. But your answer helps a lot. Pushes me in
> > the right direction.
> 
> Okay. Straight up recursion works. So, with this code
> 
> int func(int value)
> {
>  if(value < 10)
>  return func(value + 1);
>  return value;
> }
> 
> enum var = func(5);
> 
> var would be 10. The problem is that you're trying to pass the result of a 
> recursive call as a template argument. As far as a function's behavior goes, 
> it's identical regardless of whether it's run at compile time or runtime 
> (save 
> that __ctfe is true at compile time but not runtime). To quote the docs:
> 
> --
> Any functions that execute at compile time must also be executable at
> run time. The compile time evaluation of a function does the equivalent
> of running the function at run time. This means that the semantics of a
> function cannot depend on compile time values of the function. For
> example:
> 
> int foo(char[] s) {
>  return mixin(s);
> }
> 
> const int x = foo("1");
> 
> is illegal, because the runtime code for foo() cannot be generated. A
> function template would be the appropriate method to implement this
> sort of thing.
> --

Is it also illegal to do

int foo(char[] s) {
  if (__ctfe)
return mixin(s);
  else
return 0; // or assert(false)
}

?

Because this is executable at run time.

> You're doing something very similar to passing a function argument to a mixin 
> statement, but in this case, it's passing the result of calling a function 
> which doesn't exist yet (since it hasn't been fully compiled) to a template.
> 
> In order for your foo function to be called, it must be fully compiled first 
> (including its entire body, since CTFE needs the full definition of the 
> function, not just its signature). The body cannot be fully compiled until 
> the 
> template that it's using is instantiated. But that template can't be compiled 
> until foo has been compiled, because you're passing a call to foo to it as a 
> template argument. So, you have a circular dependency.

I see. That is clear to me now. Thanks.

> Normal recursion avoids this, because it only depends on the function's 
> signature, but what you're doing requires that the function be _run_ as part 
> of the process of defining it. That's an unbreakable circular dependency and 
> will never work. You need to redesign your code so that you don't require a 
> function to call itself while it's being defined. Being called at compile 
> time 
> is fine, but being called while it's being compiled is not.

But if the function wasn't compiled but interpreted at compile time it
would be possible, wouldn't it?

Jens


Re: CTFE calling a template: Error: expression ... is not a valid template value argument

2012-09-21 Thread deadalnix

On 21/09/2012 01:13, Timon Gehr wrote:

You could post an enhancement request to allow interpretation of
incompletely-analyzed functions, if you think it is of any use.



I predict a tricky implementation.


Re: GDC Explorer - an online disassembler for D

2012-09-21 Thread Andrej Mitrovic
On 9/21/12, Andrei Alexandrescu  wrote:
> snip

Integrating this with dpaste would be aweee..sooome!


Re: GDC Explorer - an online disassembler for D

2012-09-21 Thread Iain Buclaw
On 21 September 2012 04:47, Andrei Alexandrescu
 wrote:
> I've met Matt Godbolt, the author of the GCC Explorer at
> http://gcc.godbolt.org - a very handy online disassembler for GCC.
>
> We got to talk a bit about D and he hacked together support for D by using
> gdc. Take a look at http://d.godbolt.org, I think it's pretty darn cool! I'm
> talking to him about integrating his work with our servers.
>
>
> Andrei

That's awesome. :-)


-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';