Re: mago-mi: GDB/MI compatible frontend for Mago debugger

2016-05-19 Thread Vadim Lopatin via Digitalmars-d-announce

On Friday, 20 May 2016 at 03:15:46 UTC, E.S. Quinn wrote:
Unfortunately in this particular case, CDT's debugging is 
pretty fancy and is going to need most if not all of the MI.


I also don't know which MI commands need to be supported to 
have it work with DDT. The thing is I didn't write the GDB 
debugger integration for DDT, I just reused the one from CDT. 
So I'm not that familiar with those internals.


BTW, the MI integration is fairly language agnostic, so in 
theory your debugger could be used by CDT to debug C/C++ 
programs too, no? At least those generated by DMC. Maybe 
Visual Studio ones too?


I'm experimenting with the build of mago-mi that comes with the 
current ~master for dlangide, and it seems to throw an 
unrecognized parameter error when given the --log-level=TRACE 
parameter.


And it seems that the version string it returns upsets Eclipse, 
as it throws the following error:


 Could not determine GDB version using command: 
D:\WinHome\\AppData\Roaming\dub\packages\dlangide-master\bin\mago-mi.exe --version

 Unexpected output format:

 mago-mi debugger v0.1


Though, from my experience using it on Linux, Eclipse CDT's 
debugger seems pretty full-featured; it will likely require 
large swaths of MI functionality to be fully useful.


I need to show the version like lldb-mi does - write a GNU GDB 
version string first, then say that it's not really GDB.
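
For illustration only - the exact wording below is an assumption, not 
something lldb-mi or mago-mi actually prints - a --version banner along 
these lines would give Eclipse a GDB version number to parse while still 
admitting what the tool really is:

---
GNU gdb (mago-mi) 7.7.1
mago-mi debugger v0.1 - GDB/MI compatible frontend for Mago, not a real GDB
---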


Let me check the problem with --log-level.



Re: mago-mi: GDB/MI compatible frontend for Mago debugger

2016-05-19 Thread E.S. Quinn via Digitalmars-d-announce
Unfortunately in this particular case, CDT's debugging is pretty 
fancy and is going to need most if not all of the MI.


On Thursday, 19 May 2016 at 13:29:14 UTC, Bruno Medeiros wrote:

On 19/05/2016 08:41, Vadim Lopatin wrote:
On Wednesday, 18 May 2016 at 18:02:12 UTC, Bruno Medeiros 
wrote:
While DDT technically works OK with GDB (the GDB from mingw-w64, that 
is), you are right, there isn't a compiler on Windows that supplies 
debug info in the way GDB understands. See 
https://wiki.dlang.org/Debuggers.

DMD produces debug info in COFF or OMF format, which GDB doesn't know 
anything about (nor ever will). LDC should in theory work with DWARF 
info, but this is broken somehow. Not because of LLVM though, since 
for example Rust on Windows works. As for GDC, it doesn't even supply 
binaries for Windows (that target Windows) - it is not a supported 
platform.

BTW, Eclipse/DDT should in theory work with mago-mi as well, at least 
if the protocol is implemented correctly. Have you tried it? I dunno 
how complete your MI implementation is.


So it looks like mago-mi might be helpful for DDT on Windows.
mago-mi supports a subset of GDB/MI commands, enough for DlangIDE, but 
it can be easily extended.

Currently supported commands can be shown using the help command (use 
--interpreter=mi2 when running mago-mi, otherwise it will print non-MI 
commands). Commands are also listed in the readme file 
https://github.com/buggins/mago/blob/master/MagoMI/mago-mi/README.md

I didn't try DDT with mago-mi, so I'm not sure which commands must be 
supported by the debugger to get it working with DDT.

To get the list of commands DDT tries to use, you can either add 
--log-file=magomi.log --log-level=TRACE to the mago-mi command line or 
use a debug build of mago-mi.
It will write all STDIN data to the log file and report errors for 
unsupported commands.



I also don't know which MI commands need to be supported to 
have it work with DDT. The thing is I didn't write the GDB 
debugger integration for DDT, I just reused the one from CDT. 
So I'm not that familiar with those internals.


BTW, the MI integration is fairly language agnostic, so in 
theory your debugger could be used by CDT to debug C/C++ 
programs too, no? At least those generated by DMC. Maybe Visual 
Studio ones too?


I'm experimenting with the build of mago-mi that comes with the 
current ~master for dlangide, and it seems to throw an 
unrecognized parameter error when given the --log-level=TRACE 
parameter.


And it seems that the version string it returns upsets Eclipse, 
as it throws the following error:


 Could not determine GDB version using command: 
D:\WinHome\\AppData\Roaming\dub\packages\dlangide-master\bin\mago-mi.exe --version

 Unexpected output format:

 mago-mi debugger v0.1


Though, from my experience using it on Linux, Eclipse CDT's 
debugger seems pretty full-featured; it will likely require large 
swaths of MI functionality to be fully useful.


Re: My ACCU 2016 keynote video available online

2016-05-19 Thread Jens Müller via Digitalmars-d-announce
On Thursday, 19 May 2016 at 22:04:56 UTC, Andrei Alexandrescu 
wrote:

On 05/19/2016 05:36 PM, Jens Müller wrote:
I removed the code to optimize for large gaps, because it is only 
confusing. I may generate some benchmark data with larger gaps later 
to see whether it is worthwhile for such data.


For skipping large gaps quickly, check galloping search (google 
for it, we also have it in phobos). -- Andrei


Sure. I've already seen this. It's nice. But you have to include 
it in the sparse dot product (or list intersection) algorithm 
somehow. Then you require random access and galloping is only 
beneficial if the gaps are large. As a library writer this is a 
difficult position because this easily turns into over-engineering. 
Optimally one just exposes the primitives and the user plugs them 
together, ideally without having too many knobs per algorithm.


Jens


Re: My ACCU 2016 keynote video available online

2016-05-19 Thread Jens Müller via Digitalmars-d-announce
On Thursday, 19 May 2016 at 22:02:53 UTC, Andrei Alexandrescu 
wrote:

On 05/19/2016 05:36 PM, Jens Müller wrote:

I'm not seeing it. Let me explain.
Consider the input a = [1] and b = [2, 3] (I only write the 
indices).
The smallest back index is 1, i.e., a.back is the chosen 
sentinel.


Nonono, you stamp the largest index over the smaller index. So 
you overwrite a = [3] and you leave b = [2, 3] as it is.


Now you know that you're multiplying two correct sparse vectors 
in which _definitely_ the last elements have equal indexes. So 
the inner loop is:


if (a[i].idx < b[j].idx) {
  i++; // no check needed
} else if (a[i].idx > b[j].idx) {
  j++; // no check needed
} else {
  // check needed
  r += a[i].val * b[j].val;
  if (i == imax || j == jmax) break;
  ++i;
  ++j;
}

At the end you need a fixup to make sure you account for the 
last index that you overwrote (which of course you need to save 
first).


Makes sense?


What if you stomped over an index in a that has an equal index 
in b (it could be anywhere in b)? After the loop finishes you 
restore the index in a. But how do you account for the value at 
the stomped-over index if needed?

For instance test it on
a = [2]
b = [2,3]
Note the 2 in b could be anywhere.

I think you can check for
if (a[i].idx == sentinelIdx) break;
instead of
if (i == imax || j == jmax) break;

Jens


Re: D's Auto Decoding and You

2016-05-19 Thread John Carter via Digitalmars-d-announce

On Tuesday, 17 May 2016 at 14:06:37 UTC, Jack Stouffer wrote:

http://jackstouffer.com/blog/d_auto_decoding_and_you.html


There are lots of places where invalid Unicode is either 
commonplace or legal, e.g. Linux file names, and therefore 
auto decoding cannot be used. It turns out in the wild that 
pure Unicode is not universal - there's lots of dirty Unicode 
that should remain unmolested because it's user data, and auto 
decoding does not play well with that mentality.


As a slightly tangential aside.

https://lwn.net/Articles/686392/

There exists a proposal for a Linux kernel module to render the 
creation of such names impossible.


I for one will install it on all my systems as soon as I can.

However, until then, my day job requires me to find, scan, 
analyze, and work with whatever crud the herd of cats I work 
with throws into the repo.


And no, sadly I can't just rewrite everything, because they (or 
some tool they use) don't understand UTF-8.


Re: My ACCU 2016 keynote video available online

2016-05-19 Thread Andrei Alexandrescu via Digitalmars-d-announce

On 05/19/2016 05:36 PM, Jens Müller wrote:

I removed the code to optimize for large gaps, because it is only
confusing. I may generate some benchmark data with larger gaps later to
see whether it is worthwhile for such data.


For skipping large gaps quickly, check galloping search (google for it, 
we also have it in phobos). -- Andrei
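
For readers who haven't seen it, a minimal sketch of the idea (purely 
illustrative; in Phobos the same policy is available on 
std.range.SortedRange via SearchPolicy.gallop):

---
// Galloping (exponential) search on a sorted index array: double the
// step until the probe overshoots `needle`, then binary-search the
// bracketed window. Cheap when the target is near the start of the
// slice, which is the common case when repeatedly searching the
// remaining tail.
size_t gallopLowerBound(T)(const(T)[] haystack, T needle)
{
    size_t lo = 0, step = 1;
    // Gallop: grow the step while we are still below needle.
    while (lo + step < haystack.length && haystack[lo + step] < needle)
    {
        lo += step;
        step *= 2;
    }
    auto hi = lo + step < haystack.length ? lo + step : haystack.length;
    // Finish with an ordinary binary search inside [lo, hi).
    while (lo < hi)
    {
        const mid = lo + (hi - lo) / 2;
        if (haystack[mid] < needle) lo = mid + 1;
        else hi = mid;
    }
    return lo; // first position with haystack[pos] >= needle
}

unittest
{
    assert(gallopLowerBound([1, 3, 5, 7, 9], 6) == 3);
    assert(gallopLowerBound([1, 3, 5, 7, 9], 10) == 5);
}
---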


Re: My ACCU 2016 keynote video available online

2016-05-19 Thread Andrei Alexandrescu via Digitalmars-d-announce

On 05/19/2016 05:36 PM, Jens Müller wrote:

I'm not seeing it. Let me explain.
Consider the input a = [1] and b = [2, 3] (I only write the indices).
The smallest back index is 1, i.e., a.back is the chosen sentinel.


Nonono, you stamp the largest index over the smaller index. So you 
overwrite a = [3] and you leave b = [2, 3] as it is.


Now you know that you're multiplying two correct sparse vectors in which 
_definitely_ the last elements have equal indexes. So the inner loop is:


if (a[i].idx < b[j].idx) {
  i++; // no check needed
} else if (a[i].idx > b[j].idx) {
  j++; // no check needed
} else {
  // check needed
  r += a[i].val * b[j].val;
  if (i == imax || j == jmax) break;
  ++i;
  ++j;
}

At the end you need a fixup to make sure you account for the last index 
that you overwrote (which of course you need to save first).
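
For concreteness, a rough sketch of how the whole routine could look 
(not from the talk or this thread). The binary search for the saved 
index at the end is an assumption about what the fixup has to do - it 
is what handles inputs like a = [2], b = [2, 3], where the overwritten 
index still has a partner in the other vector:

---
import std.algorithm : swap;
import std.range : assumeSorted;

struct Pair { size_t idx; double val; }

// Sentinel-based sparse dot product. Assumes both arrays are sorted by
// strictly increasing idx, and that the input is mutable: the larger
// back index is stamped over the smaller one and restored at the end.
double dotSentinel(Pair[] a, Pair[] b)
{
    if (a.length == 0 || b.length == 0) return 0;
    if (a[$ - 1].idx > b[$ - 1].idx) swap(a, b); // now a.back.idx <= b.back.idx
    const savedIdx = a[$ - 1].idx;
    const stamped  = savedIdx != b[$ - 1].idx;
    a[$ - 1].idx = b[$ - 1].idx;                 // the sentinel

    double sum = 0;
    size_t i = 0, j = 0;
    const imax = a.length - 1, jmax = b.length - 1;
    for (;;)
    {
        if (a[i].idx < b[j].idx) ++i;            // no bounds check needed
        else if (a[i].idx > b[j].idx) ++j;       // no bounds check needed
        else
        {
            sum += a[i].val * b[j].val;          // only branch that can end
            if (i == imax || j == jmax) break;
            ++i; ++j;
        }
    }

    a[$ - 1].idx = savedIdx;                     // restore the input
    if (stamped)
    {
        sum -= a[$ - 1].val * b[$ - 1].val;      // drop the artificial match
        // The saved index may pair with an element the loop skipped over,
        // so look it up explicitly.
        auto hit = b.assumeSorted!((x, y) => x.idx < y.idx)
                    .equalRange(Pair(savedIdx, 0));
        if (!hit.empty) sum += a[$ - 1].val * hit.front.val;
    }
    return sum;
}

unittest
{
    auto a = [Pair(2, 1.0)];
    auto b = [Pair(2, 3.0), Pair(3, 4.0)];
    assert(dotSentinel(a, b) == 3.0);            // the a = [2], b = [2, 3] case
}
---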


Makes sense?


Andrei


Re: My ACCU 2016 keynote video available online

2016-05-19 Thread Jens Müller via Digitalmars-d-announce
On Thursday, 19 May 2016 at 12:04:31 UTC, Andrei Alexandrescu 
wrote:

On 5/19/16 4:12 AM, Jens Müller wrote:

---
if (a.length == 0 || b.length == 0)
    return 0;
const amax = a.length - 1, bmax = b.length - 1;
size_t i, j = 0;
double sum = 0;
for (;;)
{
    if (a[i].index < b[j].index) {
        if (i++ == amax) break;
    }
    else if (a[i].index > b[j].index) {
        bumpJ: if (j++ == bmax) break;
    }
    else
    {
        assert(a[i].index == b[j].index);
        sum += a[i].value * b[j].value;
        if (i++ == amax) break;
        goto bumpJ;
    }
}
return sum;
---

Then if you add the sentinel you only need the bounds tests in 
the third case.


I'm not seeing it. Let me explain.
Consider the input a = [1] and b = [2, 3] (I only write the 
indices). The smallest back index is 1, i.e., a.back is the 
chosen sentinel. Now I assume that we set b.back to a.back 
restoring it after the loop. Now in the case a[i].index < 
b[j].index I have to check whether a[i].index == a.back.index to 
break because otherwise i is incremented (since a[i].index = 1 
and b[j].index = 2, for i = 0 and j = 0 respectively). In the 
last case I only check a[i].index == a.back.index, since this 
implies b[j].index == a.back.index.
So in sum I have two bounds tests. But I think this is not what 
you are thinking of.

This does not look right.
Here are the plots for the implementations.
https://www.scribd.com/doc/313204510/Running-Time
https://www.scribd.com/doc/313204526/Speedup

dot1 is my baseline, which is indeed worse than your baseline 
(dot2). But only on gdc. I choose dot2 as the baseline for 
computing the speedup. dot3 is the sentinel version.


I removed the code to optimize for large gaps, because it is only 
confusing. I may generate some benchmark data with larger gaps 
later to see whether it is worthwhile for such data.

It looks much more regular now (ldc is still strange).

Jens


Re: D's Auto Decoding and You

2016-05-19 Thread Taylor Hillegeist via Digitalmars-d-announce

On Tuesday, 17 May 2016 at 14:06:37 UTC, Jack Stouffer wrote:

http://jackstouffer.com/blog/d_auto_decoding_and_you.html

Based on the recent thread in General, I wrote this blog post 
that's designed to be part beginner tutorial, part objective 
record of the debate over it, and finally my opinions on the 
matter.


When I first learned about auto-decoding, I was kinda miffed 
that there wasn't anything in the docs or on the website that 
went into more detail. So I wrote this in order to introduce 
people who are getting into D to the concept, its benefits, 
and downsides. When people in Learn are confused about why 
typeof(s.front) == dchar, this can just be linked to them.


If you think there should be any more information included in 
the article, please let me know so I can add it.


I ran into an auto decoding problem earlier. Honestly I was 
upset, and I think I was rightly upset. The programming language 
moved my cheese without telling me. I tend to believe in the 
route of least surprise. If, as a newbie, I am doing something 
stupid and find out I was wrong, that is one thing. But if I 
continue to do something wrong and find out that the programming 
language thinks I am stupid, that's another thing.


If people want auto-decoding behavior, shouldn't they just use or 
convert to dchar?
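
For anyone new to the issue, the behavior under discussion boils down 
to this (a small sketch, not taken from the article):

---
import std.range;   // brings in the auto-decoding front for strings

void main()
{
    string s = "caf\u00e9";                       // "\u00e9" is 2 UTF-8 code units
    static assert(is(typeof(s[0]) == immutable(char))); // indexing: code unit
    static assert(is(typeof(s.front) == dchar));        // range API: decoded code point
    assert(s.length == 5);                        // code units, not characters
    assert(s.front == 'c');
}
---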


Re: D's Auto Decoding and You

2016-05-19 Thread H. S. Teoh via Digitalmars-d-announce
On Thu, May 19, 2016 at 09:21:40AM -0400, Steven Schveighoffer via 
Digitalmars-d-announce wrote:
> On 5/17/16 8:36 PM, H. S. Teoh via Digitalmars-d-announce wrote:
> > On Tue, May 17, 2016 at 08:19:48PM +, Vladimir Panteleev via 
> > Digitalmars-d-announce wrote:
> > > On Tuesday, 17 May 2016 at 17:26:59 UTC, Steven Schveighoffer wrote:
> > > > However, it's perfectly legal for a front function not to be
> > > > tagged @property.
> > > 
> > > BTW, where is this coming from? Is it simply an emergent property
> > > of the existing implementations of isInputRange and ElementType,
> > > or is it actually by design?
> > 
> > This is very bad. The range API does not mandate that .front must be
> > a function. I often write ranges where .front is an actual struct
> > variable that gets updated by .popFront.  Now you're saying that my
> > range won't work with some code, because they call .front() (which
> > is a compile error when .front is a variable, not a function)?
> 
> My goodness no!
> 
> People, please, my point is simply that is(typeof(someRange.front) ==
> ElementType!(typeof(someRange))) DOESN'T ALWAYS WORK.

OK, so the point is, use ElementType!(typeof(range)) instead of
typeof(range.front)? That works for me. Sorry for the noise. :-P


[...]
> > In the old days (i.e., 1-2 years ago), isForwardRange!R will return
> > false if .save is not marked @property. I thought isInputRange!R did
> > the same for .front, or am I imagining things?  Did somebody change
> > this recently?
> 
> You are imagining that someInputRange.front ever required that. In
> fact, it would have had to go out of its way to do so (because
> isInputRange puts no requirements on the *type* of front, except that
> it returns a non-void value).
> 
> But you are right that save did require @property at one time. Not (In
> my opinion) because it meant to, but because it happened to check the
> type of r.save against a type (namely, that .save returns its own
> type).

Ah, so that's where it came from. Now I remember that there were bugs
caused by .save returning something other than the original range type,
which broke certain algorithms. That's probably where the whole .save
requiring @property thing came from.


> At the same time, I fixed all the isXXXRange traits so @property is
> not required anywhere. In particular, isRandomAccessRange required
> r.front to be @property, even when isInputRange didn't (again, IMO
> unintentionally). Here is the PR:
> https://github.com/dlang/phobos/pull/3276
[...]

Thanks for the info!


T

-- 
"Maybe" is a strange word.  When mom or dad says it it means "yes", but when my 
big brothers say it it means "no"! -- PJ jr.


Re: mago-mi: GDB/MI compatible frontend for Mago debugger

2016-05-19 Thread Bruno Medeiros via Digitalmars-d-announce

On 19/05/2016 08:41, Vadim Lopatin wrote:

On Wednesday, 18 May 2016 at 18:02:12 UTC, Bruno Medeiros wrote:

While DDT technically works OK with GDB (the GDB from mingw-w64, that
is), you are right, there isn't a compiler on Windows that supplies
debug info in the way GDB understands. See
https://wiki.dlang.org/Debuggers.

DMD produces debug info in COFF or OMF format, which GDB doesn't know
anything about (nor ever will). LDC should in theory work with DWARF
info, but this is broken somehow. Not because of LLVM though, since
for example Rust on Windows works. As for GDC, it doesn't even supply
binaries for Windows (that target Windows) -  it is not a supported
platform.

BTW, Eclipse/DDT should in theory work with mago-mi as well, at least
if the protocol is implemented correctly. Have you tried it? I dunno
how complete your MI implementation is.


So it looks like mago-mi might be helpful for DDT on Windows.
mago-mi supports a subset of GDB/MI commands, enough for DlangIDE, but it
can be easily extended.

Currently supported commands can be shown using the help command (use
--interpreter=mi2 when running mago-mi, otherwise it will print non-MI
commands). Commands are also listed in the readme file
https://github.com/buggins/mago/blob/master/MagoMI/mago-mi/README.md

I didn't try DDT with mago-mi, so I'm not sure which commands must
be supported by the debugger to get it working with DDT.

To get the list of commands DDT tries to use, you can either add
--log-file=magomi.log --log-level=TRACE to the mago-mi command line or
use a debug build of mago-mi.
It will write all STDIN data to the log file and report errors for
unsupported commands.



I also don't know which MI commands need to be supported to have it work 
with DDT. The thing is I didn't write the GDB debugger integration for 
DDT, I just reused the one from CDT. So I'm not that familiar with those 
internals.


BTW, the MI integration is fairly language agnostic, so in theory your 
debugger could be used by CDT to debug C/C++ programs too, no? At least 
those generated by DMC. Maybe Visual Studio ones too?


--
Bruno Medeiros
https://twitter.com/brunodomedeiros


Re: D's Auto Decoding and You

2016-05-19 Thread Steven Schveighoffer via Digitalmars-d-announce

On 5/17/16 8:36 PM, H. S. Teoh via Digitalmars-d-announce wrote:

On Tue, May 17, 2016 at 08:19:48PM +, Vladimir Panteleev via 
Digitalmars-d-announce wrote:

On Tuesday, 17 May 2016 at 17:26:59 UTC, Steven Schveighoffer wrote:

However, it's perfectly legal for a front function not to be tagged
@property.


BTW, where is this coming from? Is it simply an emergent property of
the existing implementations of isInputRange and ElementType, or is it
actually by design?


This is very bad. The range API does not mandate that .front must be a
function. I often write ranges where .front is an actual struct variable
that gets updated by .popFront.  Now you're saying that my range won't
work with some code, because they call .front() (which is a compile
error when .front is a variable, not a function)?


My goodness no!

People, please, my point is simply that is(typeof(someRange.front) == 
ElementType!(typeof(someRange))) DOESN'T ALWAYS WORK.


Here is the (long standing) definition of isInputRange:

template isInputRange(R)
{
    enum bool isInputRange = is(typeof(
    (inout int = 0)
    {
        R r = R.init;     // can define a range object
        if (r.empty) {}   // can test for empty
        r.popFront();     // can invoke popFront()
        auto h = r.front; // can get the front of the range
    }));
}

Note there is no check that is(typeof(r.front)) is any particular thing.

So this is a valid range:

struct AllZeros
{
    int front() { return 0; }
    enum empty = false;
    void popFront() {}
}

Yet, is(typeof(AllZeros.init.front) == int) will be false. This is the 
line of code from the article that I suggested to add the parens to. 
Because in that particular case, string.front is a function, not a 
field. The code in question is NOT GENERIC, it's just showing that 
string.front is not the same as string[0]. It's very specific to string.
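
For contrast, here is a small sketch (not from the original post) of the 
opposite case H. S. Teoh mentions - a range whose front is a plain 
variable - which isInputRange accepts just as readily, and on which 
calling .front() is a compile error:

---
import std.range : isInputRange;

struct Counter
{
    int front;                   // front is a field, updated by popFront
    enum empty = false;          // an infinite range
    void popFront() { ++front; }
}

static assert(isInputRange!Counter);

void main()
{
    auto c = Counter(0);
    auto h = c.front;            // fine: front used without parens
    // auto x = c.front();       // error: c.front is an int, not callable
}
---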




In the old days (i.e., 1-2 years ago), isForwardRange!R will return
false if .save is not marked @property. I thought isInputRange!R did the
same for .front, or am I imagining things?  Did somebody change this
recently?


You are imagining that someInputRange.front ever required that. In fact, 
it would have had to go out of its way to do so (because isInputRange 
puts no requirements on the *type* of front, except that it returns a 
non-void value).


But you are right that save did require @property at one time. Not (in 
my opinion) because it meant to, but because it happened to check the 
type of r.save against a type (namely, that .save returns its own type).


At the same time, I fixed all the isXXXRange traits so @property is not 
required anywhere. In particular, isRandomAccessRange required r.front 
to be @property, even when isInputRange didn't (again, IMO 
unintentionally). Here is the PR: https://github.com/dlang/phobos/pull/3276


-Steve


Re: D's Auto Decoding and You

2016-05-19 Thread jmh530 via Digitalmars-d-announce

On Thursday, 19 May 2016 at 12:10:36 UTC, Jonathan M Davis wrote:


[snip]


Very informative, as always.

I had not realized the implication of front being called without 
parens, such as front being something that isn't an @property 
function (especially a variable). Are there any ranges in Phobos (that 
you can think of) that do this?


Re: My ACCU 2016 keynote video available online

2016-05-19 Thread Jens Müller via Digitalmars-d-announce
On Thursday, 19 May 2016 at 12:04:31 UTC, Andrei Alexandrescu 
wrote:

On 5/19/16 4:12 AM, Jens Müller wrote:



What test data did you use?


An instance for benchmarking is generated as follows. Given nnz, 
which is the total number of non-zero entries in the input vectors a 
and b combined.


auto lengthA = uniform!"[]"(0, nnz, gen);
auto lengthB = nnz - lengthA;

auto a = iota(0, nnz).randomSample(lengthA, gen).map!(i => 
Pair(i, 10)).array();
auto b = iota(0, nnz).randomSample(lengthB, gen).map!(i => 
Pair(i, 10)).array();


So I take a random sample of (0, ..., nnz) for each input.
Any better idea? I've seen that people generate vectors with 
larger gaps.


10%-20% win on dot product is significant because for many 
algorithms dot product is a kernel and dominates everything 
else. For those any win goes straight to the bottom line.


Sure. Still I wasn't sure whether I got the idea from your talk. 
So maybe there is/was more.


The baseline (dot1 in the graphs) is the straightforward 
version


---
size_t i, j = 0;
double sum = 0;
while (i < a.length && j < b.length)
{
    if (a[i].index < b[j].index) i++;
    else if (a[i].index > b[j].index) j++;
    else
    {
        assert(a[i].index == b[j].index);
        sum += a[i].value * b[j].value;
        i++;
        j++;
    }
}
return sum;
---


That does redundant checking. There's a better baseline:

---
if (a.length == 0 || b.length == 0)
    return 0;
const amax = a.length - 1, bmax = b.length - 1;
size_t i, j = 0;
double sum = 0;
for (;;)
{
    if (a[i].index < b[j].index) {
        if (i++ == amax) break;
    }
    else if (a[i].index > b[j].index) {
        bumpJ: if (j++ == bmax) break;
    }
    else
    {
        assert(a[i].index == b[j].index);
        sum += a[i].value * b[j].value;
        if (i++ == amax) break;
        goto bumpJ;
    }
}
return sum;
---


I'll check that.

Then if you add the sentinel you only need the bounds tests in 
the third case.


I'll post the sentinel code later. Probably there is something to 
improve there as well.



BTW the effects vary greatly for different compilers.
For example with dmd the optimized version is slowest. The baseline is
best. Weird. With gdc the optimized is best and gdc's code is always
faster than dmd's code. With ldc it's really strange. Slower than dmd.
I assume I'm doing something wrong here.

Used compiler flags
dmd v2.071.0
-wi -dw -g -O -inline -release -noboundscheck
gdc (crosstool-NG 203be35 - 20160205-2.066.1-e95a735b97) 5.2.0
-Wall  -g -O3 -fomit-frame-pointer -finline-functions -frelease
-fno-bounds-check -ffast-math
ldc (0.15.2-beta2) based on DMD v2.066.1 and LLVM 3.6.1
-wi -dw -g -O3 -enable-inlining -release -boundscheck=off

Am I missing some flags?


These look reasonable.


But ldc looks so bad.
Any comments from ldc users or developers? Because I see this in 
many other measurements as well. I would love to have another 
compiler producing code as efficient as gdc's.

For example, what's the equivalent of gdc's -ffast-math in ldc?


I uploaded my plots.
- running time 
https://www.scribd.com/doc/312951947/Running-Time

- speed up https://www.scribd.com/doc/312951964/Speedup


What is dot2? Could you please redo the experiments with the 
modified code as well?


dot2 is an optimization for jumping over gaps more quickly, 
replacing the first two if statements with while statements.
But my benchmark tests have no large gaps, and interestingly it 
still makes things worse.
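
In case it helps readers, a rough sketch of what that while-based 
variant might look like - dot2 itself isn't shown in the thread, so 
treat this as an assumption about its shape rather than the measured 
code:

---
struct Pair { size_t index; double value; }

// dot2-style variant: the two "skip" branches become while loops, so a
// run of non-matching indices is consumed without re-entering the outer
// loop. Assumes both arrays are sorted by increasing index.
double dotSkip(const(Pair)[] a, const(Pair)[] b)
{
    double sum = 0;
    size_t i, j;
    while (i < a.length && j < b.length)
    {
        while (i < a.length && a[i].index < b[j].index) ++i;
        if (i == a.length) break;
        while (j < b.length && b[j].index < a[i].index) ++j;
        if (j == b.length) break;
        if (a[i].index == b[j].index)
        {
            sum += a[i].value * b[j].value;
            ++i;
            ++j;
        }
    }
    return sum;
}
---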


Jens


Re: Battle-plan for CTFE

2016-05-19 Thread Daniel Murphy via Digitalmars-d-announce

On 19/05/2016 3:50 AM, Stefan Koch wrote:

I am currently designing an IR to feed into the CTFE Evaluator.
I am aware that this could potentially make it harder to get things
merged since DMD already has the glue-layer.



It's always more difficult to justify merging more complexity.  But if 
you can present a working and superior solution, the specific 
implementation is less important.  It is still important that it matches 
the existing style of the compiler, especially with respect to adding 
dependencies.


Also be aware that even with agreement on the eventual goal, it is still 
very slow to get big changes approved. E.g. ddmd took more than twice as 
long as it should have.  This is why I suggested looking for incremental 
improvements, so you can overlap getting earlier things merged and 
developing later parts.  I would be on the lookout for things that can 
be justified on their own (untangling AssignExp :) ) and try to push 
those through first.


Re: D's Auto Decoding and You

2016-05-19 Thread Jonathan M Davis via Digitalmars-d-announce
On Thursday, May 19, 2016 09:05:53 Kagamin via Digitalmars-d-announce wrote:
> On Wednesday, 18 May 2016 at 20:10:09 UTC, Jonathan M Davis wrote:
> > So, while we do have enforcement of how ranges _can_ be used,
> > we don't have enforcement of how they _are_ used, and I don't
> > expect that we'll ever get that.
>
> It would help if there was documented standard testing procedure
> (and used for all algorithms).

We really need solid tools for testing an algorithm with a variety of ranges
to verify that it's doing the right thing as well as tools to test that a
range behaves how ranges are supposed to behave. The closest that we have to
that is that std.range has some internal helpers for testing ranges, but
they're not that great, and I don't think any of it is public. And we
don't have anything for testing that a range acts correctly - just that it
follows the right API syntactically with the minimal semantic checking that
can be done with typeof. So, there's work to be done. I'd started some of it
a while back, but I never got very far, and no one else has done anything
like it AFAIK.
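
For flavor, a sketch of the kind of helper being described (purely 
illustrative; the name and code below are not from Phobos):

---
import std.range.primitives : isInputRange;

// A wrapper that exposes an array as nothing more than an input range:
// no length, no slicing, no save. Handy for checking that an algorithm
// really only relies on the input-range API.
struct InputOnly(T)
{
    private T[] data;
    @property bool empty() const { return data.length == 0; }
    @property ref T front() { return data[0]; }
    void popFront() { data = data[1 .. $]; }
}

auto inputOnly(T)(T[] data) { return InputOnly!T(data); }

static assert(isInputRange!(InputOnly!int));

unittest
{
    import std.algorithm : equal, map;
    assert(equal(inputOnly([1, 2, 3]).map!(x => x * 2), [2, 4, 6]));
}
---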

- Jonathan M Davis



Re: My ACCU 2016 keynote video available online

2016-05-19 Thread Andrei Alexandrescu via Digitalmars-d-announce

On 5/18/16 7:42 AM, Manu via Digitalmars-d-announce wrote:

On 16 May 2016 at 23:46, Andrei Alexandrescu via
Digitalmars-d-announce  wrote:

Uses D for examples, showcases Design by Introspection, and rediscovers a
fast partition routine. It was quite well received.
https://www.youtube.com/watch?v=AxnotgLql0k

Andrei


This isn't the one where you said you were going to "destroy concepts", is it?
At dconf, you mentioned a talk for release on some date I can't
remember, and I got the impression you were going to show how C++'s
concepts proposal was broken, and ideally, propose how we can nail it
in D?


That's the one - sorry to disappoint :o). -- Andrei



Re: D's Auto Decoding and You

2016-05-19 Thread Jonathan M Davis via Digitalmars-d-announce
On Wednesday, May 18, 2016 22:23:45 jmh530 via Digitalmars-d-announce wrote:
> On Wednesday, 18 May 2016 at 20:10:09 UTC, Jonathan M Davis wrote:
> > At this point, if anyone ever calls front with parens, they're
> > doing it wrong.
>
> Is this true of all @property functions? Should this be noted in
> the spec? Should it be an error? If it shouldn't be an error, is
> it really such a bad thing?

It makes _no_ sense to use parens on a typical @property function. The whole
point of properties is that they act like variables. If you're marking a
function with @property, you're clearly indicating that it's intended to be
treated as if it were a variable and not as a function. So, in principle, if
you're using parens on a property function, the parens should be used on the
return value and _not_ the function. That being said, we've never ended up
with property enforcement of any kind being added to the language.  So,
while the compiler _should_ require that an @property function be called
without parens, it doesn't. And without that requirement, @property really
doesn't do much. If we want properties to work where the type is a callable
like a delegate (e.g. you have a range of delegates, so front returns a
delegate), then it's going to need to change so that using parens on an
@property function actually uses the paren on the return value, and without
that @property is nothing more than documentation. So, right now, @property
is pretty much just documentation about how the person who wrote the code
expects you to use it. It doesn't really do anything. It does have some
effect with regard to typeof, but overall, it does nothing.

Now, that being said, when I was talking about calling front with parens, I
wasn't really talking about property functions - though front is frequently
a property function. Rather, my point was that isInputRange requires that
this code compile:

R r = R.init; // can define a range object
if (r.empty) {}   // can test for empty
r.popFront(); // can invoke popFront()
auto h = r.front; // can get the front of the range

If that code compiles with a given type, then that type can be used as an
input range. That is the API defined for input ranges. It does _not_
call front with parens nor does it call empty with parens. Rather, it
explicitly uses them _without_ parens. So, they could be property functions,
or variables, or normal functions that just don't get called with parens, or
anything else that can be used without parens and compile with that code.
So, if you write a range-based algorithm that uses parens on empty or front,
then you're writing an algorithm that does not follow the range API and
which will not work with many ranges.  The range API does _not_ guarantee
that either front or empty can be used with parens. It guarantees that they
can be used _without_ them. So, if your code ever uses parens on front or
empty, then it's using the range API incorrectly and risks not compiling
with many ranges.
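
To make that concrete, a small sketch (not from the original post) of a
range-based algorithm that sticks to the API as defined - front and empty
used without parens - so it compiles whether they are fields, enums,
@property functions, or ordinary functions:

---
import std.range.primitives;

// Counts how many elements of an input range compare equal to `value`.
// Note: empty, front and popFront are used exactly as in the
// isInputRange definition - no parens on empty or front.
size_t countEqual(R, T)(R r, T value)
    if (isInputRange!R)
{
    size_t n = 0;
    for (; !r.empty; r.popFront())
        if (r.front == value)
            ++n;
    return n;
}

unittest
{
    assert(countEqual([1, 2, 1, 3], 1) == 2);
}
---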

- Jonathan M Davis



Re: My ACCU 2016 keynote video available online

2016-05-19 Thread Andrei Alexandrescu via Digitalmars-d-announce

On 5/19/16 4:12 AM, Jens Müller wrote:

The code applying the sentinel optimization assumes mutability of
the input. That needs to be checked for.


Indeed. As I mentioned after discussing find, I didn't worry about those 
checks assuming they were obvious.



That's fine for partition
because that is assumed to be in-place. But for other algorithms it's
not so obvious. It's sad that the optimization works only for
non-const input.


Several optimizations only apply to mutable data. Others apply to 
immutable data. It's the way things go.



I didn't get the idea behind sentinels for sparse dot product. I picked the
smallest of the last elements (so you need bidirectional ranges) and fix
up as
needed. For gdc I get a speedup (baseline over new implementation) of
1.2 in
best case and >1.0 in worst case. On average it's about 1.1 I would say. I
expected more. How would you approach sentinels with the sparse dot
product. Can
you elaborate the idea from the video? I didn't get it.


That's the idea - to only need to bounds check on one of the three 
possibilities.


What test data did you use?

10%-20% win on dot product is significant because for many algorithms 
dot product is a kernel and dominates everything else. For those any win 
goes straight to the bottom line.



The baseline (dot1 in the graphs) is the straightforward version

---
size_t i, j = 0;
double sum = 0;
while (i < a.length && j < b.length)
{
    if (a[i].index < b[j].index) i++;
    else if (a[i].index > b[j].index) j++;
    else
    {
        assert(a[i].index == b[j].index);
        sum += a[i].value * b[j].value;
        i++;
        j++;
    }
}
return sum;
---


That does redundant checking. There's a better baseline:

---
if (a.length == 0 || b.length == 0)
    return 0;
const amax = a.length - 1, bmax = b.length - 1;
size_t i, j = 0;
double sum = 0;
for (;;)
{
    if (a[i].index < b[j].index) {
        if (i++ == amax) break;
    }
    else if (a[i].index > b[j].index) {
        bumpJ: if (j++ == bmax) break;
    }
    else
    {
        assert(a[i].index == b[j].index);
        sum += a[i].value * b[j].value;
        if (i++ == amax) break;
        goto bumpJ;
    }
}
return sum;
---

Then if you add the sentinel you only need the bounds tests in the third 
case.



BTW the effects vary greatly for different compilers.
For example with dmd the optimized version is slowest. The baseline is
best. Weird. With gdc the optimized is best and gdc's code is always
faster than dmd's code. With ldc it's really strange. Slower than dmd. I
assume I'm doing something wrong here.

Used compiler flags
dmd v2.071.0
-wi -dw -g -O -inline -release -noboundscheck
gdc (crosstool-NG 203be35 - 20160205-2.066.1-e95a735b97) 5.2.0
-Wall  -g -O3 -fomit-frame-pointer -finline-functions -frelease
-fno-bounds-check -ffast-math
ldc (0.15.2-beta2) based on DMD v2.066.1 and LLVM 3.6.1
-wi -dw -g -O3 -enable-inlining -release -boundscheck=off

Am I missing some flags?


These look reasonable.


I uploaded my plots.
- running time https://www.scribd.com/doc/312951947/Running-Time
- speed up https://www.scribd.com/doc/312951964/Speedup


What is dot2? Could you please redo the experiments with the modified 
code as well?



Thanks!

Andrei



Re: My ACCU 2016 keynote video available online

2016-05-19 Thread Andrei Alexandrescu via Digitalmars-d-announce

On 5/16/16 9:46 AM, Andrei Alexandrescu wrote:

Uses D for examples, showcases Design by Introspection, and rediscovers
a fast partition routine. It was quite well received.
https://www.youtube.com/watch?v=AxnotgLql0k


This talk took a big gambit and it seems to have worked well. Per 
https://www.youtube.com/channel/UCJhay24LTpO1s4bIZxuIqKw/videos?sort=p=0=grid, 
"There's Treasure Everywhere" is the most watched talk of the ACCU 2016 
conference with 5276 views, by a large margin (next 1874, median 339).


Andrei


Re: D's Auto Decoding and You

2016-05-19 Thread Kagamin via Digitalmars-d-announce

On Wednesday, 18 May 2016 at 20:10:09 UTC, Jonathan M Davis wrote:
So, while we do have enforcement of how ranges _can_ be used, 
we don't have enforcement of how they _are_ used, and I don't 
expect that we'll ever get that.


It would help if there was documented standard testing procedure 
(and used for all algorithms).


Re: My ACCU 2016 keynote video available online

2016-05-19 Thread Jens Müller via Digitalmars-d-announce

On Monday, 16 May 2016 at 13:46:11 UTC, Andrei Alexandrescu wrote:
Uses D for examples, showcases Design by Introspection, and 
rediscovers a fast partition routine. It was quite well 
received. https://www.youtube.com/watch?v=AxnotgLql0k


Andrei


Nice presentation.

The code applying the sentinel optimization assumes mutability of 
the input.
That needs to be checked for. That's fine for partition because 
that is assumed
to be in-place. But for other algorithms it's not so obvious. 
It's sad that the
optimization works only for non-const input. It is in conflict 
with the advice
to make input const if the function doesn't change it. This makes 
the
optimization less likely to be applicable. One might though relax 
the const
requirement to mean "the input is identical at return of the 
function to its
beginning". But that's a different story, I'll guess. Coming up 
with another
implementation might also work, using chain or so. But typically 
the sentinel

optimization assumes mutability.

I didn't get the idea behind sentinels for sparse dot product. I 
picked the
smallest of the last elements (so you need bidirectional ranges) 
and fix up as
needed. For gdc I get a speedup (baseline over new 
implementation) of 1.2 in
best case and >1.0 in worst case. On average it's about 1.1 I 
would say. I
expected more. How would you approach sentinels with the sparse dot 
product? Can you elaborate the idea from the video? I didn't get it.

The baseline (dot1 in the graphs) is the straightforward version

---
size_t i, j = 0;
double sum = 0;
while (i < a.length && j < b.length)
{
    if (a[i].index < b[j].index) i++;
    else if (a[i].index > b[j].index) j++;
    else
    {
        assert(a[i].index == b[j].index);
        sum += a[i].value * b[j].value;
        i++;
        j++;
    }
}
return sum;
---

BTW the effects vary greatly for different compilers.
For example with dmd the optimized version is slowest. The baseline is 
best. Weird. With gdc the optimized is best and gdc's code is always 
faster than dmd's code. With ldc it's really strange. Slower than dmd. 
I assume I'm doing something wrong here.

Used compiler flags
dmd v2.071.0
-wi -dw -g -O -inline -release -noboundscheck
gdc (crosstool-NG 203be35 - 20160205-2.066.1-e95a735b97) 5.2.0
-Wall  -g -O3 -fomit-frame-pointer -finline-functions -frelease 
-fno-bounds-check -ffast-math

ldc (0.15.2-beta2) based on DMD v2.066.1 and LLVM 3.6.1
-wi -dw -g -O3 -enable-inlining -release -boundscheck=off

Am I missing some flags?

I uploaded my plots.
- running time https://www.scribd.com/doc/312951947/Running-Time
- speed up https://www.scribd.com/doc/312951964/Speedup

*Disclaimer*
I hope most of this makes sense but take it with a grain of salt.

Jens

PS
It seems the mailing list interface does not work. I cannot send 
replies via mail anymore. I wrote to Brad Roberts but no answer yet.


Re: mago-mi: GDB/MI compatible frontend for Mago debugger

2016-05-19 Thread Vadim Lopatin via Digitalmars-d-announce

On Wednesday, 18 May 2016 at 18:02:12 UTC, Bruno Medeiros wrote:
While DDT technically works OK with GDB (the GDB from mingw-w64 
that is), you are right, there isn't a compiler on Windows that 
supplies debug info in the way GDB understands. See 
https://wiki.dlang.org/Debuggers.


DMD produces debug info in COFF or OMF format, which GDB doesn't 
know anything about (nor ever will). LDC should in theory work 
with DWARF info, but this is broken somehow. Not because of 
LLVM though, since for example Rust on Windows works. As for 
GDC, it doesn't even supply binaries for Windows (that target 
Windows) -  it is not a supported platform.


BTW, Eclipse/DDT should in theory work with mago-mi as well, at 
least if the protocol is implemented correctly. Have you tried 
it? I dunno how complete your MI implementation is.


So it looks like mago-mi might be helpful for DDT on Windows.
mago-mi supports a subset of GDB/MI commands, enough for DlangIDE, 
but it can be easily extended.


Currently supported commands can be shown using the help command (use 
--interpreter=mi2 when running mago-mi, otherwise it will print 
non-MI commands). Commands are also listed in the readme file 
https://github.com/buggins/mago/blob/master/MagoMI/mago-mi/README.md


I didn't try DDT with mago-mi, so I'm not sure which commands 
must be supported by the debugger to get it working with DDT.


To get the list of commands DDT tries to use, you can either add 
--log-file=magomi.log --log-level=TRACE to the mago-mi command line 
or use a debug build of mago-mi.
It will write all STDIN data to the log file and report errors for 
unsupported commands.