Re: Will D ever get optional named parameters?

2014-10-13 Thread Cliff via Digitalmars-d

On Monday, 13 October 2014 at 19:18:39 UTC, Walter Bright wrote:

On 10/13/2014 7:23 AM, Ary Borenszweig wrote:

On 10/13/14, 5:47 AM, Walter Bright wrote:

On 10/13/2014 1:29 AM, 岩倉 澪 wrote:
Are there good reasons not to add something like this to the 
language, or is it simply a matter of doing the work? Has it 
been discussed much?


Named parameters interact badly with overloading.


Could you give an example?


Nothing requires function overloads to use the same names in 
the same order for parameters. color can be the name for 
parameter 1 in one overload and for parameter 3 in another and 
not be there at all for a third.


Parameters need not be named in D:

   int foo(long);
   int foo(ulong x);

Named parameters are often desired so that default arguments 
need not be in order at the end:


   int foo(int x = 5, int y);
   int foo(int y, int z);

To deal with all this, a number of arbitrary rules will have to 
be created. Overloading is already fairly complex, with the 
implemented notions of partial ordering. Even if this could all 
be settled, is it worth it? Can anyone write a document 
explaining this to people? Do people really want pages and 
pages of specification for this?


The only thing I like named parameters for is to avoid the 
following


foo(5 /* count */, true /* enableSpecialFunctionality */)

I like the documentation, but comments in the middle do feel 
cumbersome.  Tooling could add them automatically, of course.  
The C# syntax is slightly better:


foo(count: 5, enableSpecialFunctionality: true)

I don't care for or need the ability to reorder parameters, nor 
do I want additional rules to remember vis-a-vis overloading and 
optional parameters.  And I don't want a trivial name change in 
parameters to break my code - functions already have complete 
signatures, enforcing names just adds one more thing which could 
break people for no real benefit.
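
For what it's worth, one idiom that already addresses the 
readability problem without any language change is 
std.typecons.Flag.  A minimal sketch (my own example, not from 
this thread):

    import std.typecons : Flag, Yes, No;

    void foo(int count, Flag!"enableSpecialFunctionality" enable)
    {
        // ...
    }

    void main()
    {
        foo(5, Yes.enableSpecialFunctionality);
    }

It only covers boolean parameters, but the call site becomes 
self-describing without any tooling support.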


Sometimes I think features are proposed for the language which 
more rightly belong in tooling.


Re: how to call class' template constructor

2014-10-12 Thread Cliff via Digitalmars-d-learn
On Sunday, 12 October 2014 at 19:46:41 UTC, ketmar via 
Digitalmars-d-learn wrote:

Hello.

please, how to call template constructor of a class? it's 
completely escaped my mind. i.e. i have this class:

  class A {
    this(alias ent) (string name) {
      ...
    }
  }

and i want to do:

  void foo () { ... }
  auto a = new A!foo("xFn");

yet compiler tells me that

template instance A!foo A is not a template declaration, it is 
a class


yes, i know that i can rewrite constructor to something like 
this:


  this(T) (string name, T fn) if (isCallable!T) {
...
  }

and then use autodeduction, but i want the first form! ;-)


How about a static factory method?  Or do you know there is a 
syntax for invoking a templatized constructor and just can't 
remember it?
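
For reference, a minimal sketch of the static-factory 
workaround, assuming the alias is only needed at construction 
time (the name "create" is mine):

    class A {
        string name;

        private this(string name) { this.name = name; }

        static A create(alias ent)(string name) {
            // ent is usable here at compile time, e.g. via
            // __traits(identifier, ent)
            return new A(name);
        }
    }

    void foo() { }

    void main() {
        auto a = A.create!foo("xFn");
    }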


Re: Program logic bugs vs input/environmental errors

2014-10-05 Thread Cliff via Digitalmars-d
On Sunday, 5 October 2014 at 05:46:56 UTC, ketmar via 
Digitalmars-d wrote:

On Sun, 05 Oct 2014 03:47:31 +
Cliff via Digitalmars-d digitalmars-d@puremagic.com wrote:

This is a great feature where we lack a really solid IDE 
experience (which would have intellisense and auto-completion 
that could be accurate and prevent such errors from occurring 
in the first place.)  Otherwise it would probably be redundant.
i'm not using IDEs for more than a decade (heh, i'm using 
mcedit to
write code). yet this feature drives me mad: it trashes my 
terminal
with useless garbage output. it was *never* a help: there was no 
moment when i looked at the suggested identifier and thought 
"aha, THAT is the bug!" but virtually each time i see a 
suggestion i'm thinking "oh, well, i know. c'mon, why don't you 
just shut up?!"

it's like colorizing the output, yet colorizing can be turned 
off, and suggestions can't.


That the bug which triggers the error gets made at all is an 
indication that the developer workflow you use is fundamentally 
flawed.  This is something which should be caught much earlier - 
at the point the typo was made - not after you have committed 
the change to disk and presented it to the compiler, by which 
time your train of thought may be somewhere else entirely.


I'd much rather energy be directed at the prevention of mistakes, 
not the suppression of help in fixing them - if I had to choose.  
But I wouldn't object to having a switch to turn off the help if 
it bothers you that much.  Seems like a very small thing to add.


Re: Program logic bugs vs input/environmental errors

2014-10-04 Thread Cliff via Digitalmars-d

On Sunday, 5 October 2014 at 03:34:31 UTC, Walter Bright wrote:

On 10/4/2014 2:45 PM, Andrei Alexandrescu wrote:

On 10/3/14, 9:26 PM, ketmar via Digitalmars-d wrote:
yes. DMD attempts to 'guess' what identifier i mistyped 
drives me
crazy. just shut up and stop after unknown identifier, you 
robot,

don't try to show me your artificial idiocity!


awesome feature -- Andrei


I agree, I like it very much.


This is a great feature where we lack a really solid IDE 
experience (which would have intellisense and auto-completion 
that could be accurate and prevent such errors from occurring in 
the first place.)  Otherwise it would probably be redundant.


Re: D Parsing (again)/ D grammar

2014-10-02 Thread Cliff via Digitalmars-d
On Thursday, 2 October 2014 at 15:47:04 UTC, Vladimir Kazanov 
wrote:
On Thursday, 2 October 2014 at 15:01:13 UTC, Ola Fosheim 
Grøstad wrote:


Cool, GLL is the way to go IMO, but I am also looking at 
Earley-parsers. What is the advantage of GLL over Earley if 
you use a parser generator? I think they both are O(n^3) or 
something like that?


They are somewhat similar in terms of asymptotic complexity on 
complicated examples. The constant factor is better, though. But 
there's a nice property of all generalized parsers: for the LL 
(for GLL) and LR (for GLR) parts of grammars they go almost as 
fast as LL/LR parsers do. On ambiguities they slow down, of 
course.


There are four properties I really like:

1. GLL should be faster than Earley's (even the modern 
incarnations of it), but this is something I have yet to test.


2. It is fully general.

3. The automatically generated code repeats the original 
grammar structure - the same way recursive descent parsers do.


4. The core parser is still that simple LL/RD parser I can 
practically debug.


This comes at a price, as usual... I would not call it obvious 
:-) But nobody can say that modern Earley's flavours are 
trivial.


From the discussion I found out that the D parser is a 
hand-made RD parser with a few tricks(c).


I think D is close to LL(2) for the most part. But I suppose a 
GLL parser could allow keywords to be used as symbol names in 
most cases? That would be nice.


This is possible, I guess, the same way people do it in GLR 
parsers.


What has steered you down the path of writing your own parser 
generator as opposed to using an existing one such as ANTLR?  
Were there properties you wanted that it didn't have, or 
performance, or...?


Re: D Parsing (again)/ D grammar

2014-10-02 Thread Cliff via Digitalmars-d
On Thursday, 2 October 2014 at 17:43:45 UTC, Vladimir Kazanov 
wrote:

On Thursday, 2 October 2014 at 17:17:53 UTC, Cliff wrote:



What has steered you down the path of writing your own parser 
generator as opposed to using an existing one such as ANTLR?  
Were there properties you wanted that it didn't have, or 
performance, or...?


Like I said in the introducing post, this is a personal 
experiment of sorts. I am aware of most alternatives, such as 
ANTLR's ALL(*) and many, MANY others. :) And I would never 
write something myself as a part of my full-time job.


But right now I am writing an article on generalized parsers, 
toying with implementations I could lay my hands on, 
implementing others. GLL is a rather exotic LL flavor which 
looks attractive in theory. I want to see it in practice.


Very cool - post the GitHub or equivalent when you get the chance 
(assuming you are sharing).  This is an area of interest for me 
as well.


Re: RFC: moving forward with @nogc Phobos

2014-10-01 Thread Cliff via Digitalmars-d

On Wednesday, 1 October 2014 at 18:37:50 UTC, Sean Kelly wrote:

On Wednesday, 1 October 2014 at 17:53:43 UTC, H. S. Teoh via
Digitalmars-d wrote:


But Sean's idea only takes strings into account. Strings 
aren't the only
allocated resource Phobos needs to deal with. So extrapolating 
from that
idea, each memory management struct (or whatever other 
aggregate we end
up using), say call it MMP, will have to define MMP.string, 
MMP.jsonNode
(since parseJSON() need to allocate not only strings but JSON 
nodes),

MMP.redBlackTreeNode, MMP.listNode, MMP.userDefinedNode, ...

Nope, still don't see how this could work. Please clarify, 
kthx.


Assuming you're willing to take the memoryModel type as a
template argument, I imagine we could do something where the 
user

can specialize the memoryModel for their own types, a bit like
how information is derived for iterators in C++.  The problem is
that this still means passing the memoryModel in as a template
argument.  What I'd really want is for it to be a global, except
that templated virtuals is logically impossible.  I guess
something could maybe be sorted out via a factory design, but
that's not terribly D-like.  I'm at a loss for how to make this
memoryModel thing work the way I'd actually want it to if I were
to use it.


If you were to forget D restrictions for a moment, and consider 
an idealized language, how would you express this?  Maybe 
providing that will trigger some ideas from people beyond what we 
have seen so far by removing implied restrictions.
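
For concreteness, the template-argument version being described 
might look roughly like this - the names GCModel, makeArray and 
duplicate are mine, not a Phobos proposal:

    struct GCModel
    {
        static T[] makeArray(T)(size_t n) { return new T[n]; }
    }

    T[] duplicate(T, MM = GCModel)(T[] src)
    {
        auto dst = MM.makeArray!T(src.length);
        dst[] = src[];
        return dst;
    }

    void main()
    {
        auto copy = duplicate([1, 2, 3]);   // uses GCModel by default
        assert(copy == [1, 2, 3]);
    }

The pain point described above is exactly that MM has to be 
threaded through every signature rather than living somewhere 
global.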


Re: So what exactly is coming with extended C++ support?

2014-09-30 Thread Cliff via Digitalmars-d

On Tuesday, 30 September 2014 at 21:19:44 UTC, Ethan wrote:


Hello. AAA developer (Remedy) here using D. Custom tech, with a 
custom binding solution written originally by Manu and 
continued by myself.


A GC itself is not a bad thing. The implementation, however, is.

With a codebase like ours (mostly C++, some D), there's a few 
things we need. Deterministic garbage collection is a big one - 
when our C++ object is being destroyed, we need the D object to 
be destroyed at the same time in most cases. This can be 
handled by calling GC.collect() often, but that's where the 
next thing comes in - the time the GC needs. If the time isn't 
being scheduled at object destruction, then it all gets lumped 
together in the GC collect. It automatically moves the time 
cost to a place where we may not want it.


Not a GC specialist here, so maybe the obvious thought arises - 
why not turn off automatic collection until points in the code 
where you can afford the cost of it, then call GC.collect 
explicitly - essentially eliminating the opportunity for the GC 
to run at random times and forcing it to run at deterministic 
times?  Is memory usage so constrained that skipping collections 
in between those deterministic points could lead to OOM?  Does 
such a strategy have other nasty side effects which make it 
impractical?
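
Concretely, I mean something like the following, using the real 
core.memory API (a sketch of the strategy, not a recommendation):

    import core.memory : GC;

    void frame() { /* per-frame work that may allocate */ }

    void main()
    {
        // GC.disable suppresses automatic collections; the runtime may
        // still collect if an allocation would otherwise fail.
        GC.disable();
        scope(exit) GC.enable();

        foreach (i; 0 .. 1_000)
        {
            frame();
            if (i % 100 == 0)
            {
                GC.collect();    // collect only where we can afford it
                GC.minimize();   // optionally return freed pages to the OS
            }
        }
    }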


Re: Program logic bugs vs input/environmental errors

2014-09-28 Thread Cliff via Digitalmars-d
On Sunday, 28 September 2014 at 20:58:20 UTC, H. S. Teoh via 
Digitalmars-d wrote:
I do not condone adding file/line to exception *messages*. 
Catch blocks
can print / translate those messages, which can be made 
user-friendly,
but if the program failed to catch an exception, you're already 
screwed

anyway so why not provide more info rather than less?

Unless, of course, you're suggesting that we put this around 
every

main() function:

void main() {
    try {
        ...
    } catch(Exception e) {
        assert(0, "Unhandled exception: I screwed up");
    }
}



In our production C# code, we had a few practices which might be 
applicable here:


1. main() definitely had a top-level try/catch handler to produce 
useful output messages.  Because throwing an uncaught exception 
out to the user *is* a bug, we naturally want to not just toss 
out a stack trace but information on what to do with it should a 
user encounter it.  Even better if there is additional runtime 
information which can be provided for a bug report.


2. We also registered a top-level unhandled exception handler on 
the AppDomain (the .NET near-equivalent of a process, except that 
multiple AppDomains may exist within a single OS process), which 
allows the catching of exceptions which would otherwise escape 
background threads.  Depending on the nature of the application, 
these could be logged to some repository to which the user could 
be directed.  It's hard to strictly automate this because exactly 
what you can do with an exception which escapes a thread will be 
application dependent.  In our case, these exceptions were 
considered bugs, were considered to be unrecoverable and resulted 
in a program abort with a user message indicating where to find 
the relevant log outputs and how to contact us.


3. For some cases, throwing an exception would also trigger an 
application dump suitable for post-mortem debugging from the 
point the exception was about to be thrown.  This functionality 
is, of course, OS-specific, but helped us on more than a few 
occasions by eliminating the need to try to pre-determine which 
information was important and which was not so the exception 
could be usefully populated.


I'm not a fan of eliminating the stack from exceptions.  While 
exceptions should not be used to catch logic errors, an uncaught 
exception is itself a logic error (that is, one has omitted some 
required conditions in their code) and thus the context of the 
error needs to be made available somehow.
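
For point 1, the D analog of that top-level handler would be 
something like the sketch below (the message text, exit codes 
and the run() entry point are illustrative):

    import std.stdio : stderr;

    void run(string[] args) { /* hypothetical application entry point */ }

    int main(string[] args)
    {
        try
        {
            run(args);
            return 0;
        }
        catch (Exception e)
        {
            stderr.writeln("Unhandled exception - please file a bug ",
                           "report and include the following:\n", e);
            return 1;
        }
    }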


Localizing a D application - best practices?

2014-09-28 Thread Cliff via Digitalmars-d-learn
Coming from the C# world, all of the localization we did was 
based on defining string resource files (XML-formatted source 
files which the build process translated into C# classes with 
named-string accessors) that would get included in the final 
application.  For log messages, exception messages (because 
unhandled exceptions could make it to the user in the case of a 
bug) and the format strings used by the above, we would create a 
string-table entry, and this file would eventually get localized 
by the appropriate team.
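
(Hand-rolled, the kind of lookup I mean would be something along 
these lines - purely illustrative, not an established D 
convention as far as I know:)

    import std.stdio : writeln;

    string[string] table;   // key => translated text for the active locale

    string tr(string key)
    {
        if (auto p = key in table)
            return *p;
        return key;          // fall back to the untranslated key
    }

    void main()
    {
        table = ["greeting": "Hallo", "farewell": "Auf Wiedersehen"];
        writeln(tr("greeting"));       // Hallo
        writeln(tr("not translated")); // falls back to the key
    }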


Is there a recommended pattern for applications in D that wish to 
do localization?


Thanks.


Re: What are the worst parts of D?

2014-09-26 Thread Cliff via Digitalmars-d

On Friday, 26 September 2014 at 07:56:57 UTC, Marco Leise wrote:

Am Wed, 24 Sep 2014 23:56:24 +

You do know that your email is in plain text in the news
message header? :p


Actually I did not, as I am not presently using a newsreader to
access the forums, just the web page.  I keep forgetting to
install a proper reader :)  Thanks!


Re: Read-only property without @property

2014-09-26 Thread Cliff via Digitalmars-d
On Friday, 26 September 2014 at 19:47:15 UTC, Steven 
Schveighoffer wrote:
I wanted to bring this over from D.learn, because I've never 
seen this before, and it's an interesting solution to creating 
a property without much boilerplate.


So here it is:

class Foo
{
    union
    {
        private int _a;     // accessible only in this module
        public const int a; // accessible from anywhere, but read only
    }
}

And it works now, probably has for a while.

Thoughts? This can easily be boilerplated in something like 
roprop!(int, "a")


I am really not sure what union does to compiler optimization 
or runtime concerns, if it has any significant drawbacks. From 
what I can tell, it's a valid solution.


Credit to Mark Schütz for the idea.

-Steve


This is a clever syntax, but I can't say I particularly care for 
it since it aliases two names for the same location which differ 
only in their visibility, and this feels... wrong to me somehow.


In C# this is a sufficiently common practice that the property 
syntax allows for it directly:


class Foo
{
int A { get; private set; }
}

The compiler automatically creates a hidden backing field (an 
implementation detail, of course), both internal and external 
consumers use the same name, and there is no redundancy.  If I 
were to compare the D way and the C# way, I would prefer the C# 
way for this trivial-property case.  What I would NOT want is 
C#'s special handling of properties to go along with it - a D 
analog would preserve A's access methods and handling as if it 
were a field, if that was the user's wish.


That's my $0.02.


Re: Object.factory from shared libraries

2014-09-26 Thread Cliff via Digitalmars-d-learn

On Friday, 26 September 2014 at 15:45:11 UTC, Jacob Carlborg
wrote:

On 2014-09-26 16:24, krzaq wrote:

That would be satisfactory to me, except for the linux-only 
part.


In that case, I think I'll simply try to call filename() as 
the factory

function in each library - that should work everywhere, right?


Dynamic libraries only work properly on Linux. This has nothing 
to do with Object.factory.


What is the nature of D's so/dll support?  Or is there a page
describing it?


Re: What are the worst parts of D?

2014-09-25 Thread Cliff via Digitalmars-d

On Thursday, 25 September 2014 at 17:42:09 UTC, Jacob Carlborg
wrote:

On 2014-09-25 16:23, H. S. Teoh via Digitalmars-d wrote:


That's the hallmark of make-based projects.


This was Ninja actually. But how would the build system know 
I've updated the compiler?


The compiler is an input to the build rule.  Consider the rule:

build:
        $(CC) my.c -o my.o

What are the dependencies for this rule?

my.c, obviously.  Anything the compiler accesses during the
compilation of my.c.  And *the compiler itself*, referenced here
as $(CC).  From a dependency management standpoint, executables
are not special, except insofar as running them leads to the
discovery of more dependencies than may be statically specified.
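
One way a build tool might honor that is to fold the compiler 
binary into its staleness check alongside the source; a hedged 
sketch in D (needsRebuild is a hypothetical helper, not an 
existing tool):

    import std.digest.sha : sha256Of;
    import std.file : read;

    // my.o must be rebuilt if either my.c or the compiler binary differs
    // from what we recorded after the last successful build.
    bool needsRebuild(string source, string compiler,
                      const ubyte[32] srcRecorded, const ubyte[32] ccRecorded)
    {
        const srcNow = sha256Of(cast(const(ubyte)[]) read(source));
        const ccNow  = sha256Of(cast(const(ubyte)[]) read(compiler));
        return srcNow != srcRecorded || ccNow != ccRecorded;
    }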


Re: What are the worst parts of D?

2014-09-25 Thread Cliff via Digitalmars-d

On Thursday, 25 September 2014 at 18:51:13 UTC, H. S. Teoh via
Digitalmars-d wrote:

You don't know if
recompiling after checking out a previous release of your code 
will
actually give you the same binaries that you shipped 2 months 
ago.


To be clear, even if nothing changed, re-running the build may
produce different output.  This is actually a really hard problem
- some build tools actually use entropy when producing their
outputs, and as a result running the exact same tool with the
same parameters in the same [apparent] environment will produce a
subtly different output.  This may be intended (address
randomization) or semi-unintentional (generating a unique GUID
inside a PDB so the debugger can validate the symbols match the
binaries.)  Virtually no build system in use can guarantee the
above in all cases, so you end up making trade-offs - and if you
don't really understand those tradeoffs, you won't trust your
build system.

What else may mess up the perfection of repeatability of your
builds?  Environment variables, the registry (on Windows), any
source of entropy (the PRNG, the system clock/counters, any
network access), etc.

Build engineers themselves don't trust the build tooling because
for as long as we have had the tooling, no one has invested
enough into knowing what is trustworthy or how to make it that
way.  It's like coding in a language that isn't typesafe but
which gets the job done.  Until you've spent some time in the
typesafe environment, maybe you can't realize the benefit.
You'll say "well, now I have to type a bunch more crap, and in
most cases it wouldn't have helped me anyway" right up until you
are sitting there at 3AM the night before shipping the product
trying to track down why your Javascript program - I mean build
process - isn't doing what you thought it did.  Just because you
CAN build a massive software system in Javascript doesn't mean
the language is per se good - it may just mean you are
sufficiently motivated to suffer through the pain.  I'd rather
make the whole experience *enjoyable* (hello TypeScript?)

Different people will make different tradeoffs, and I am not here
to tell Andrei or Walter that they *need* a new build system for
D to get their work done - they don't right now.  I'm more
interested in figuring out how to provide a platform to realize
the benefits for build like we have for our modern languages, and
then leveraging that in new ways (like better sharing between the
compiler, debugger, IDEs, test and packaging.)



Re: What are the worst parts of D?

2014-09-25 Thread Cliff via Digitalmars-d

On Thursday, 25 September 2014 at 23:04:55 UTC, eles wrote:
On Thursday, 25 September 2014 at 22:56:56 UTC, Sean Kelly 
wrote:
On Thursday, 25 September 2014 at 22:49:06 UTC, Andrei 
Alexandrescu wrote:

On 9/25/14, 2:10 PM, eles wrote:


Why not, for God's sake, stripFront and stripBack?


Because they are called stripLeft and stripRight. -- Andrei


Psh, they should be called stripHead and stripFoot.  Or 
alternately, unHat and unShoe.


stripLady and stripGentleman?...


Ah, now this thread is going to interesting places! :P


Re: What are the worst parts of D?

2014-09-24 Thread Cliff via Digitalmars-d
On Wednesday, 24 September 2014 at 05:44:15 UTC, ketmar via 
Digitalmars-d wrote:

On Tue, 23 Sep 2014 21:59:53 -0700
Brad Roberts via Digitalmars-d digitalmars-d@puremagic.com 
wrote:



I understand quite thoroughly why c++ support is a big win

i believe it's not.

so-called enterprise will not choose D for many reasons, and 
c++

interop is on the bottom of the list.

seasoned c++ developer will not migrate to D for many reasons 
(or he
already did that, but then he is not c++ developer anymore), 
and c++

interop is not on the top of the list, not even near the top.

all that gory efforts aimed to c++ interop will bring three 
and a
half more users. there will be NO massive migration due to 
better c++
interop. yet this feature is on the top of the list now. i'm 
sad.


seems that i (we?) have no choice except to wait until people 
will get
enough of c++ games and will became focused on D again. porting 
and
merging CDGC is much better target which help people already 
using D,

but... but imaginary future adopters seems to be the highest
priority. too bad that they will never arrive.


Why does anyone have to *wait* for anything?  I'm not seeing the 
blocking issues regarding attempts to fix the language.  People 
are making PRs, people are discussing and testing ideas, and 
there appear to be enough people to tackle several problems at 
once (typedefs, C++ interop, GC/RC issues, weirdness with ref and 
auto, import symbol shadowing, etc.)  Maybe things aren't moving 
as swiftly as we would like in the areas which are most impactful 
*to us* but that is the nature of free software.  Has it ever 
been any other way than that the things which get the most 
attention are the things which the individual contributors are 
the most passionate about (whether their passion is justified or 
not?)


Analysis of programming languages on Rosetta

2014-09-24 Thread Cliff via Digitalmars-d

The study doesn't analyze D, but the relationships between
languages may be interesting and in some cases surprising.

http://se.inf.ethz.ch/people/nanz/research/rosettacode.html

NOTE: The link contains only a summary, there is a pointer to the
full paper there however.


Re: What are the worst parts of D?

2014-09-24 Thread Cliff via Digitalmars-d

On Wednesday, 24 September 2014 at 19:26:46 UTC, Jacob Carlborg
wrote:

On 2014-09-24 12:16, Walter Bright wrote:

I've never heard of a non-trivial project that didn't have 
constant
breakage of its build system. All kinds of reasons - add a 
file, forget
to add it to the manifest. Change the file contents, neglect 
to update
dependencies. Add new dependencies on some script, script 
fails to run

on one configuration. And on and on.


Again, if changing the file contents breaks the build system 
you're doing it very, very wrong.


People do it very, very wrong all the time - that's the problem
:)  Build systems are felt by most developers to be a tax they
have to pay to do what they want to do, which is write code and
solve non-build-related problems.  Unfortunately, build
engineering is effectively a specialty of its own when you step
outside the most trivial of systems.  It's really no surprise how
few people can get it right - most people can't even agree on
what a build system is supposed to do...


Re: What are the worst parts of D?

2014-09-24 Thread Cliff via Digitalmars-d

On Wednesday, 24 September 2014 at 20:12:40 UTC, H. S. Teoh via
Digitalmars-d wrote:
On Wed, Sep 24, 2014 at 07:36:05PM +, Cliff via 
Digitalmars-d wrote:

On Wednesday, 24 September 2014 at 19:26:46 UTC, Jacob Carlborg
wrote:
On 2014-09-24 12:16, Walter Bright wrote:

I've never heard of a non-trivial project that didn't have 
constant
breakage of its build system. All kinds of reasons - add a 
file,
forget to add it to the manifest. Change the file contents, 
neglect
to update dependencies. Add new dependencies on some script, 
script

fails to run on one configuration. And on and on.

Again, if changing the file contents breaks the build system 
you're

doing it very, very wrong.

People do it very, very wrong all the time - that's the 
problem :)
Build systems are felt by most developers to be a tax they 
have to pay

to do what they want to do, which is write code and solve
non-build-related problems.


That's unfortunate indeed. I wish I could inspire them as to 
how cool a
properly-done build system can be. Automatic parallel building, 
for
example. Fully-reproducible, incremental builds (never ever do 
`make

clean` again). Automatic build + packaging in a single command.
Incrementally *updating* packaging in a single command. 
Automatic
dependency discovery. And lots more. A lot of this technology 
actually
already exists. The problem is that still too many people think 
"make" whenever they hear "build system".  Make is but a poor, 
antiquated
caricature of what modern build systems can do. Worse is that 
most

people are resistant to replacing make because of inertia. (Not
realizing that by not throwing out make, they're subjecting 
themselves

to a lifetime of unending, unnecessary suffering.)


Unfortunately, build engineering is effectively a specialty of 
its own
when you step outside the most trivial of systems.  It's 
really no
surprise how few people can get it right - most people can't 
even

agree on what a build system is supposed to do...


It's that bad, huh?

At its most fundamental level, a build system is really nothing 
but a
dependency management system. You have a directed, acyclic 
graph of
objects that are built from other objects, and a command which 
takes
said other objects as input, and produces the target object(s) 
as
output. The build system takes as input this dependency graph, 
and runs
the associated commands in topological order to produce the 
product(s).
A modern build system can parallelize independent steps 
automatically.
None of this is specific to compiling programs, in fact, it 
works for

any process that takes a set of inputs and incrementally derives
intermediate products until the final set of products are 
produced.


Although the input is the (entire) dependency graph, it's not 
desirable
to specify this graph explicitly (it's far too big in 
non-trivial
projects); so most build systems offer ways of automatically 
deducing
dependencies. Usually this is done by scanning the inputs, and 
modern
build systems would offer ways for the user to define new 
scanning
methods for new input types.  One particularly clever system, 
Tup
(http://gittup.org/tup/), uses OS call proxying to discover the 
*exact*

set of inputs and outputs for a given command, including hidden
dependencies (like reading a compiler configuration file that 
may change

compiler behaviour) that most people don't even know about.

It's also not desirable to have to derive all products from its 
original

inputs all the time; what hasn't changed shouldn't need to be
re-processed (we want incremental builds).  So modern build 
systems
implement some way of detecting when a node in the dependency 
graph has

changed, thereby requiring all derived products downstream to be
rebuilt. The most unreliable method is to scan for file change
timestamps (make). A reliable (but slow) method is to compare 
file hash
checksums.  Tup uses OS filesystem change notifications to 
detect
changes, thereby cutting out the scanning overhead, which can 
be quite
large in complex projects (but it may be unreliable if the 
monitoring

daemon isn't running / after rebooting).

These are all just icing on the cake; the fundamental core of a 
build

system is basically dependency graph management.


T


Yes, Google in fact implemented most of this for their internal
build systems, I am led to believe.  I have myself written such a
system before.  In fact, the first project I have been working on
in D is exactly this, using OS call interception for
validating/discovering dependencies, building execution graphs,
etc.

I haven't seen Tup before, thanks for pointing it out.
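
For the curious, the dependency-graph core described above fits 
in a few lines of D; a toy sketch with no parallelism, caching 
or change detection (all names are mine):

    import std.stdio : writeln;

    struct Task
    {
        string name;
        string[] deps;      // names of the tasks this one consumes
        string command;     // what we would run to produce it
    }

    void build(Task[string] tasks, string goal)
    {
        bool[string] done;
        void visit(string n)
        {
            if (n in done) return;
            foreach (d; tasks[n].deps)
                visit(d);                        // prerequisites first
            writeln("running: ", tasks[n].command);
            done[n] = true;
        }
        visit(goal);
    }

    void main()
    {
        Task[string] t;
        t["my.c"] = Task("my.c", null, "(source file, nothing to do)");
        t["my.o"] = Task("my.o", ["my.c"], "cc -c my.c -o my.o");
        t["app"]  = Task("app",  ["my.o"], "cc my.o -o app");
        build(t, "app");
    }

Everything a real tool adds - incrementality, hashing, sandboxed 
command execution, parallel scheduling - hangs off that same 
graph walk.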


Re: What are the worst parts of D?

2014-09-24 Thread Cliff via Digitalmars-d
On Wednesday, 24 September 2014 at 22:49:08 UTC, H. S. Teoh via 
Digitalmars-d wrote:
On Wed, Sep 24, 2014 at 10:18:29PM +, Atila Neves via 
Digitalmars-d wrote:

[...]
If I were to write a build system today that had to spell out 
all of
its commands, I'd go with tup or Ninja. That CMake has support 
for
Ninja is the icing on the cake for me. I wrote a Ninja build 
system

generator the other day, that thing is awesome.

[...]

P.S. I've thought of writing a build system in D, for which the
configuration language would be D. I still might. Right now, 
dub is

serving my needs.


I've been thinking of that too! I have in mind a hybrid between 
tup and
SCons, integrating the best ideas of both and discarding the 
bad parts.


For example, SCons is notoriously bad at scalability: the need 
to scan
huge directory structures of large projects when all you want 
is to
rebuild a tiny subdirectory, is disappointing. This part should 
be

replaced by Tup-style OS file change notifications.

However, Tup requires arcane shell commands to get anything 
done --
that's good if you're a Bash guru, but most people are not. For 
this, I
find that SCon's architecture of fully-customizable plugins may 
work

best: ship the system with prebaked rules for common tasks like
compiling C/C++/D/Java/etc programs, packaging into tarballs / 
zips,
etc., and expose a consistent API for users to make their own 
rules

where applicable.

If the scripting language is D, that opens up a whole new realm 
of

possibilities like using introspection to auto-derive build
dependencies, which would be so cool it'd freeze the sun.

Now throw in things like built-in parallelization ala SCons 
(I'm not
sure if tup does that too, I suspect it does), 
100%-reproducible builds,

auto-packaging, etc., and we might have a contender for Andrei's
winner build system.



P.S.S autotools is the worse GNU project I know of


+100! It's a system of hacks built upon patches to broken 
systems built
upon other hacks, a veritable metropolis of cards that will 
entirely
collapse at the slightest missing toothpick in your shell 
environment /
directory structure / stray object files or makefiles leftover 
from
previous builds, thanks to 'make'. It's pretty marvelous for 
what it
does -- autoconfigure complex system-dependent parameters for 
every
existing flavor of Unix that you've never heard of -- when it 
works,
that is. When it doesn't, you're in for days -- no, weeks -- 
no, months,
of hair-pulling frustration trying to figure out where in the 
metropolis
of cards the missing toothpick went. The error messages help -- 
in the
same way stray hair or disturbed sand helps in a crime 
investigation --

if you know how to interpret them. Which ordinary people don't.


T


If you have a passion and interest in this space and would like 
to collaborate, I would be thrilled.  We can also split this 
discussion off of this thread since it is not D specific.


Re: What are the worst parts of D?

2014-09-24 Thread Cliff via Digitalmars-d
On Wednesday, 24 September 2014 at 23:20:00 UTC, H. S. Teoh via 
Digitalmars-d wrote:
On Wed, Sep 24, 2014 at 11:02:51PM +, Cliff via 
Digitalmars-d wrote:

On Wednesday, 24 September 2014 at 22:49:08 UTC, H. S. Teoh via
Digitalmars-d wrote:
On Wed, Sep 24, 2014 at 10:18:29PM +, Atila Neves via 
Digitalmars-d

wrote:
[...]
If I were to write a build system today that had to spell 
out all of
its commands, I'd go with tup or Ninja. That CMake has 
support for
Ninja is the icing on the cake for me. I wrote a Ninja build 
system

generator the other day, that thing is awesome.
[...]
P.S. I've thought of writing a build system in D, for which 
the
configuration language would be D. I still might. Right now, 
dub is

serving my needs.

I've been thinking of that too! I have in mind a hybrid 
between tup
and SCons, integrating the best ideas of both and discarding 
the bad

parts.

[...]
If you have a passion and interest in this space and would 
like to
collaborate, I would be thrilled.  We can also split this 
discussion

off of this thread since it is not D specific.


I'm interested. What about Atila?


T


Yes, whoever has a passionate interest in this space and (of 
course) an interest in D.  Probably the best thing to do is take 
this to another forum - I don't want to further pollute this 
thread.  Please g-mail to: cliff s hudson.  (I'm assuming you are 
a human and can figure out the appropriate dotted address from 
the preceding :) )


Re: What are the worst parts of D?

2014-09-24 Thread Cliff via Digitalmars-d
Actually you can't do this for D properly without enlisting the 
help of the compiler. Scoped import is a very interesting 
conditional dependency (it is realized only if the template is 
instantiated).


Also, lazy opening of imports is almost guaranteed to have a 
huge good impact on build times.


Your reply confirms my worst fear: you're looking at yet 
another general build system, of which there are plenty of 
carcasses rotting in the drought left and right of highway 101.




This is one of my biggest frustrations with existing build 
systems - which really are nothing more than glorified versions 
of make with some extra syntax and - for the really advanced 
ones - ways to help you correctly specify your makefiles by 
flagging errors or missing dependencies.


The build system that will be successful for D will cooperate 
with the compiler, which will give it fine-grained dependency 
information. Haskell does the same with good results.



Andrei


The compiler has a ton of precise information useful for build 
tools, IDEs and other kinds of analysis tools (to this day, it 
still bugs the crap out of me that Visual Studio has effectively 
*two* compilers, one for intellisense and one for the 
command-line and they do not share the same build environment or 
share the work they do!)  Build is more than just producing a 
binary - it incorporates validation through testing, packaging 
for distribution, deployment and even versioning.  I'd like to 
unlock the data in our tools and find ways to leverage it to 
improve automation and the whole developer workflow.  Those ideas 
and principles go beyond D and the compiler of course, but we do 
have a nice opportunity here because we can work closely with the 
compiler authors, rather than having to rely *entirely* on 
OS-level process introspection through e.g. detours (which is 
still valuable from a pure dependency discovery process of 
course.)


If we came out of this project with tup-for-D I'd consider that 
an abject failure.


Re: can't understand why code do not working

2014-09-22 Thread Cliff via Digitalmars-d-learn

On Monday, 22 September 2014 at 20:12:28 UTC, Suliman wrote:

void worker()
{
    int value = 0;
    while (value <= 10)
    {
        value = receiveOnly!int();
        writeln(value);
        int result = value * 3;
        ownerTid.send(result);
    }
}

give me:

Running .\app1.exe
2
6
3
9
4
12
5
15
6
18
std.concurrency.OwnerTerminated@std\concurrency.d(234): Owner 
terminated


0x00405777 in pure @safe void 
std.concurrency.receiveOnly!(int).receiveOnly().__
lambda3(std.concurrency.OwnerTerminated) at 
C:\DMD\dmd2\windows\bin\..\..\src\ph

obos\std\concurrency.d(730)
0x0040B88D in @safe void std.concurrency.Message.map!(pure 
@safe void function(s
td.concurrency.OwnerTerminated)*).map(pure @safe void 
function(std.concurrency.O
wnerTerminated)*) at 
C:\DMD\dmd2\windows\bin\..\..\src\phobos\std\concurrency.d(

158)
0x0040B0B6 in 
D3std11concurrency10MessageBox151__T3getTDFNbNfiZvTPFNaNfC3std11co
ncurrency14LinkTerminatedZvTP8047E12172B30CAF110369CD57C78A37 
at C:\DMD\dmd2\win

dows\bin\..\..\src\phobos\std\concurrency.d(1159)


Is stdout threadsafe?


Re: can't understand why code do not working

2014-09-22 Thread Cliff via Digitalmars-d-learn

On Monday, 22 September 2014 at 21:28:25 UTC, Cliff wrote:
On Monday, 22 September 2014 at 21:24:58 UTC, monarch_dodra 
wrote:
On Monday, 22 September 2014 at 21:19:37 UTC, Steven 
Schveighoffer wrote:

On 9/22/14 4:37 PM, Cliff wrote:



Is stdout threadsafe?


Yes, stdout is thread safe, it's based on C's stdout which is 
thread safe.


-Steve


Technically, though thread safe, concurrent writes will 
create garbled text. D goes one step further in the sense that 
it prevents concurrent writes entirely.


The reason I ask is that the first iteration of his loop does 
not show up in stdout as presented.  I would expect 1 and 3 to 
be the first two lines of the output.


For that matter I also don't expect the 6 and 18 - nevermind, 
there must be something else going on with that loop.  He must 
have changed the loop limits.


Re: can't understand why code do not working

2014-09-22 Thread Cliff via Digitalmars-d-learn

On Monday, 22 September 2014 at 21:24:58 UTC, monarch_dodra wrote:
On Monday, 22 September 2014 at 21:19:37 UTC, Steven 
Schveighoffer wrote:

On 9/22/14 4:37 PM, Cliff wrote:



Is stdout threadsafe?


Yes, stdout is thread safe, it's based on C's stdout which is 
thread safe.


-Steve


Technically, though thread safe, concurrent writes will 
create garbled text. D goes one step further in the sense that 
it prevents concurrent writes entirely.


The reason I ask is that the first iteration of his loop does not 
show up in stdout as presented.  I would expect 1 and 3 to be the 
first two lines of the output.


Re: RFC: reference counted Throwable

2014-09-21 Thread Cliff via Digitalmars-d

On Sunday, 21 September 2014 at 04:59:12 UTC, Paulo Pinto wrote:

Am 21.09.2014 04:50, schrieb Andrei Alexandrescu:

On 9/20/14, 7:10 PM, bearophile wrote:

Andrei Alexandrescu:

Rust looked a lot more exciting when I didn't know much 
about it.


I didn't remember ever seeing you excited about Rust :-) In 
past you
(rightfully) didn't comment much about Rust. But do you have 
more
defined ideas about it now? Do you still think D has a chance 
against

Rust?


I don't think Rust has a chance against D. -- Andrei



The real question is which language, from all that want to 
replace C++, will eventually get a place at an OS vendors SDK.


So far, the winning ones seem to be Swift on Apple side, and 
.NET Native-C++/CLX on Microsoft side (who knows what are they 
doing with M#).


Maybe someone in the commercial UNIX (FOSS is too bound with 
C), real time or embedded OS space?


--
Paulo


Interop, interop, interop.  Walter and Andrei are right when they 
talk about the importance of C++ interop - not only do you get to 
leverage those libraries, but it reduces the barrier to entry for 
D in more environments.


Swift will never be more important than Objective C was - which 
is to say it'll be the main development language on Apple 
products and probably nothing else.  That has real value, but the 
limits on it are pretty hard and fast (which says more about 
Apple than the language itself.)


.NET suffers a similar problem in spite of the community's best 
efforts with Mono - it'll always be a distant 2nd (or 5th or 
20th) on other platforms.  And on Windows, C++ won't get 
supplanted by .NET absent a sea-change in the mindset of the 
Windows OS group - which is notoriously resistant to change (and 
they have a colossal existing code base which isn't likely to 
benefit from the kind of inflection point Apple had moving to a 
BSD and porting/rewriting scads of code.)


So C/C++ is it for universal languages, really (outside of the 
web server space, where you have a large Java deployment.)  I 
don't think D needs to be the next .NET (of any flavor) or the 
next Swift, and I don't see as it is being positioned that way 
either - the target to me is clearly C/C++.  It doesn't need to 
compete with languages that have lesser universality, though it 
should (and does) borrow the good ideas from those languages.


I don't think D needs to look at *replacing* C++ in the near or 
mid term either - it still needs to convince people it deserves a 
place at the table.  And the easiest way to do that is to get 
this C++ interop story really nailed down, and make sure D's 
warts are smaller than C++'s.  And, of course, the GC strawman 
that native programmers always claim is more important than it 
really is.  I like the threads going on currently about ARC and 
related technologies - there's a real chance to innovate here.


Re: RFC: reference counted Throwable

2014-09-21 Thread Cliff via Digitalmars-d

On Sunday, 21 September 2014 at 23:32:29 UTC, deadalnix wrote:
On Sunday, 21 September 2014 at 20:57:24 UTC, Peter Alexander 
wrote:
No improvements to the GC can fix this. @nogc needs to be 
usable, whether you are a GC fan or not.


True.

To fix this, we need to add a pile of hack that take care of 
this specific use case. The end goal is to have a pile of hack 
for every single use case, as this is what C++ has and C++ is 
successful and we want to be successful.


Introducing a construct to manage ownership would obviously 
avoid the whole pile of hack, but hey, this would introduce 
complexity to the language, when the pile of hack do not.


AMIRITE ?


The devolution of conversation in this thread to snide remarks, 
extreme sarcasm and thinly (if at all) veiled name calling is 
counterproductive to solving the problem.  Reasonable people can 
disagree without resorting to passive-aggression.


Re: Dependency management in D

2014-09-19 Thread Cliff via Digitalmars-d

On Friday, 19 September 2014 at 18:56:20 UTC, ketmar via
Digitalmars-d wrote:

On Fri, 19 Sep 2014 17:38:20 +
Scott Wilson via Digitalmars-d digitalmars-d@puremagic.com 
wrote:



That CTFE is used randomly everywhere?
CTFE *can* be used alot. this is one of D killer features (our 
regexp
engine, for example, not only very fast, but regexps can be 
compiled to

native code thru D in *compile* *time* without external tools).

all in all it heavily depends of your libraries, of course. if 
you will
do that with care ;-), it will work. but i just can't see any 
reason to
compilcate build process with .di generation. D compilers 
usually are
fast enough on decent boxes, and building can be done on 
background

anyway.


As someone with some expertise in this subject, I can say with
certainty that builds can almost never be fast enough.  If D
becomes successful - something we all desire I think - then it
will require large organizations to use it for large projects -
which means large code bases and long(er) compile times.  Build
labs seem to always be under pressure to churn out official bits
as quickly as possible for testing, deployment, analysis, etc.
More holistically, it's important that the bits produced in the
official process and the dev box process be as similar as
possible, if not entirely identical.  You can imagine such builds
feeding back into intellisense and analysis locally in the
developer's IDE, and these processes need to be fast and
(generally) lightweight.  Taken to the Nth degree, such work is
only ever done once for a change anywhere in the organization and
the results are available for subsequent steps immediately.

I don't know what all of the blockers to good incremental builds
under D are, but as D grows in influence, we can be sure people
will start to complain about build times, and they will start to
ask pointed questions about incrementality, reliability and
repeatability in builds.  Having a good handle on these issues
will allow us to at least plan and give good answers to people
who want to take it to the next level.


Re: Dependency management in D

2014-09-19 Thread Cliff via Digitalmars-d

On Friday, 19 September 2014 at 19:22:22 UTC, ketmar via
Digitalmars-d wrote:

On Fri, 19 Sep 2014 19:07:16 +
Cliff via Digitalmars-d digitalmars-d@puremagic.com wrote:

that's why dedicating people to work solely on build scripts and
infrastructure is good, yet almost nobody does that. ah, 
enterprise

BS again. fsck enterprise.

as for build times: we always can write parsed and analyzed 
ASTs to
disk (something like delphi .dcu), thus skipping the most work 
next
time. and then we can compare cached AST with new if the source 
was
changed to see what exactly was changed and report that (oh, 
there is
new function, one function removed, one turned to template, and 
so on).

this will greatly speed up builds without relying on ugly
header/implementation model.

the only good thing enterprise can do is to contribute and 
then

support such code. but i'm sure they never will, they will only
complaing about how they want faster build times. hell with 
'em.


In a sense I sympathize with your antipathy toward enterprises,
but the simple fact is they have a lot of money and command a lot
of developers.  For us, developers = mind share = more libraries
for us to use and more ideas to go around.  That's all goodness.
Leverage it for what it's worth.

I'm definitely a fan of finding ways to improve build speeds that
don't involve the creation of unnecessary (or worse, redundant)
and user maintained artifacts.  I'm also a fan of simplifying the
build process so that we don't have to have experts maintain
build scripts.


Re: code cleanup in druntime and phobos

2014-09-18 Thread Cliff via Digitalmars-d

I feel like this whole thread's diversion onto the relative
merits of GitHub is pretty pointless.  Would it be difficult to
write a small automation tool that users could run (maybe
distributed as part of the DMD package or something) that lets
them submit patches/PRs mostly automatically?  Or have a process
which scans Bugzilla and produces such things automatically?  I
know I am not totally up on the infrastructure capabilities, but
lowering the barrier to entry is almost always a good thing, and
the religious arguments can be saved for alt.github.die.die.die
or something.


Re: Interop with C++ library - what toolchain do you use?

2014-09-18 Thread Cliff via Digitalmars-d-learn

On Thursday, 18 September 2014 at 08:27:07 UTC, Szymon Gatner
wrote:

On Wednesday, 17 September 2014 at 22:28:44 UTC, Cliff wrote:

So I am trying to use a C++ library with D.  My toolchain is
currently Visual Studio 2013 with Visual D, using the DMD
compiler.  When trying to link, I obviously ran into the OMF 
vs.
COFF issue, which makes using the C++ library a bit of a trial 
to

say the least (I played around with some lib format converters
but perhaps unsurprisingly this led to somewhat unpredictable
behavior.)  I'd like to fix up my toolchain to avoid having 
this

issue.

For those of you who are on Windows and who do D and C++ 
interop,

what toolchain have you had success with?

Additionally, I have heard tell that D now allows calling C++
non-virtual class methods but I have not found documentation on
how to make this work (how do I define the C++ class layout in 
D

- I know how to do it for vtable entries but not non-virtual
methods.)  Pointers to docs or samples would be much 
appreciated.


Thanks!


I am using Visual Studio 2012 (in x64 bit mode that is).

Binary D distribution also comes with phobos64.lib that C++ 
executable has to link. With VisualD plugin you can just add D 
static library project to the solution and the link C++ exe to 
it. It is very easy to make back-and-forth function calls 
between C++/D. I recommend Adam Ruppe's excellent D Cookbook 
Integration chapter on the details on how to expose this for 
cross-lang usage.


It does not all work correctly tho. I reported my issues on 
learn subforum but didn't get much help. In short: it seems 
not everything in D run-time (Phobos) gets properly initialized 
even after successful rt_init() call. In my case simple call to 
writeln() causes a crash because stdout is not properly 
initialized on D'd side.


Happy hybridizing!


Thanks guys.  I do have that book, but I was unaware of the COFF
capabilities of the 64-bit DMD, I'll take a look at it.


Re: Where should D programs look for .so files on Linux (by default)?

2014-09-17 Thread Cliff via Digitalmars-d

On Wednesday, 17 September 2014 at 17:13:07 UTC, H. S. Teoh via
Digitalmars-d wrote:

As for how it works on Windows, I have no idea at all. It's 
probably
completely different from Posix, which is more reason to leave 
it up to
plugin framework implementors to implement, rather than 
hard-coding an

incomplete / inconsistent implementation in druntime.


T


FYI: Windows rules are here:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms682586(v=vs.85).aspx


Re: Code doesn't work - why?

2014-09-17 Thread Cliff via Digitalmars-d-learn

On Wednesday, 17 September 2014 at 21:33:01 UTC, Robin wrote:
Here is the fully working code for everyone experiencing 
similar bugs or problems with pointers and value types. =)


import std.conv : to;
import std.stdio : writeln;

struct DeterministicState {
public:
    this(string name, bool isFinal, DeterministicState*[char] transits...) {
        this.name = name;
        this.finalState = isFinal;
        this.addTransits(transits);
    }

    this(string name, bool isFinal) {
        this.name = name;
        this.finalState = isFinal;
    }

    this(bool isFinal, DeterministicState*[char] transits...) {
        this("", isFinal, transits);
    }

    this(DeterministicState*[char] transits...) {
        this("", false, transits);
    }

    void addTransits(DeterministicState*[char] newTransits) {
        foreach (immutable key; newTransits.keys) {
            transits[key] = newTransits[key];
        }
    }

    string getName() const {
        return name;
    }

    bool isFinalState() const {
        return finalState;
    }

    bool hasNext(char input) const {
        return (input in transits) ? true : false;
    }

    DeterministicState* getNext(char input) {
        return transits[input];
    }

    string toString() const {
        return name;
    }

private:
    string name;
    DeterministicState*[char] transits;
    bool finalState;
}

struct DeterministicFiniteAutomaton {
public:
    DeterministicState*[] input(char[] input) {
        DeterministicState*[] trace = [ start ];
        auto currentState = trace[0];
        foreach (immutable c; input) {
            if (!currentState.hasNext(c)) {
                writeln(currentState.toString() ~ " has no next for " ~ to!string(c));
                break;
            } else {
                writeln(currentState.toString() ~ " has next for " ~ to!string(c));
            }
            currentState = currentState.getNext(c);
            trace ~= currentState;
        }
        return trace;
    }

    this(DeterministicState* start) {
        this.start = start;
    }

private:
    DeterministicState* start;
}

void main()
{
    auto s0 = DeterministicState("s0", false);
    auto s1 = DeterministicState("s1", false);
    auto s2 = DeterministicState("s2", true);
    s0.addTransits(['0' : &s1, '1' : &s2]);
    s1.addTransits(['0' : &s0, '1' : &s2]);
    s2.addTransits(['0' : &s2, '1' : &s2]);
    auto dfa = DeterministicFiniteAutomaton(&s0);
    auto trace = dfa.input("0001".dup);
    foreach (t; trace) {
        writeln(t.toString());
    }
    writeln("Trace Length = " ~ to!string(trace.length));
}

Regards,
Rob


Out of curiosity, why did you decide to stick with structs
instead of simply using classes?  To avoid heap allocations?


Re: Code doesn't work - why?

2014-09-17 Thread Cliff via Digitalmars-d-learn

On Wednesday, 17 September 2014 at 21:45:01 UTC, Robin wrote:
This is actually a good question as this code isn't really 
complex or doesn't require the best possible performance.
But in case I will ever need optimum performance I should have 
learned how to handle tasks with value types which is the main 
reason why I chose them instead of reference types - for 
learning purposes.


- can't hurt! ;)

Regards,
Rob


Probably also has applicability when creating compile-time data
structures that have scope-limited lifetimes.


Interop with C++ library - what toolchain do you use?

2014-09-17 Thread Cliff via Digitalmars-d-learn

So I am trying to use a C++ library with D.  My toolchain is
currently Visual Studio 2013 with Visual D, using the DMD
compiler.  When trying to link, I obviously ran into the OMF vs.
COFF issue, which makes using the C++ library a bit of a trial to
say the least (I played around with some lib format converters
but perhaps unsurprisingly this led to somewhat unpredictable
behavior.)  I'd like to fix up my toolchain to avoid having this
issue.

For those of you who are on Windows and who do D and C++ interop,
what toolchain have you had success with?

Additionally, I have heard tell that D now allows calling C++
non-virtual class methods but I have not found documentation on
how to make this work (how do I define the C++ class layout in D
- I know how to do it for vtable entries but not non-virtual
methods.)  Pointers to docs or samples would be much appreciated.

Thanks!


Re: std.experimental.logger: practical observations

2014-09-16 Thread Cliff via Digitalmars-d

On Monday, 15 September 2014 at 22:47:57 UTC, Robert burner
Schadek wrote:
On Monday, 15 September 2014 at 22:39:55 UTC, David Nadlinger 
wrote:
Issues like threading behavior and (a)synchronicity guarantees 
are part of the API, though, and need to be clarified as part 
of the std.logger design.


the threading behavior has been clarified in the api docs.

the (a)synchronicity guarantees is part of the concrete Logger 
impl. the Logger api does not force synchronize or asynchronize 
behavior, it allows both to be implemented by every subclass of 
Logger.


Alright.  BTW, thanks for undertaking this project - every app of
reasonable size needs logging and I think having a standard
logging library is just one more of those ecosystem improvements
that gets people up and running quickly.


Re: Increasing D's visibility

2014-09-16 Thread Cliff via Digitalmars-d

On Tuesday, 16 September 2014 at 21:21:08 UTC, Martin Drasar via
Digitalmars-d wrote:

On 16.9.2014 20:07, Anonymous via Digitalmars-d wrote:

Dlang on 4chan

http://boards.4chan.org/g/thread/44196390/dlang


Yeah, and the discussion is just in line with typical 4chan 
discussions :-)


A1) Andrei is fucking hot and he's not russian

A2) @A1: Andrei will never be your husbando
Why bother living?


Also:

A) GC bad!  I can manage memory myself, and multithreading is
child's-play - people who use D must be slow and stupid...

*snort*  Ok, you and your delusions of competence are excused
from the conversation now...


Re: compile automation, makefile, or whatever?

2014-09-16 Thread Cliff via Digitalmars-d-learn

On Tuesday, 16 September 2014 at 19:00:05 UTC, K.K. wrote:

Hey I have a quick question: Does D have it's own version of
makefiles or anything (preferably simpler)?
So instead of typing in PowerShell dmd file1.d file2.d
lib\foo.lib -Isrc\ . I could just type most of that into a
file and then just type dmd file.X

I've seen some people make really complex .d files that have a
lot of interchangeability but at the moment I wouldn't really
need something of that scale. Also, I'm not using DUB; I'd 
prefer

to just use the command line.

..Can pragma's help with this, aside from linking just the libs?


I want to say somewhere on the forums are some descriptions of
using CMake for this.  Might try searching for that.


Re: compile automation, makefile, or whatever?

2014-09-16 Thread Cliff via Digitalmars-d-learn

On Tuesday, 16 September 2014 at 20:29:12 UTC, K.K. wrote:

On Tuesday, 16 September 2014 at 19:26:29 UTC, Cliff wrote:

I want to say somewhere on the forums are some descriptions of
using CMake for this.  Might try searching for that.


Yeah I just looked up the CMake thing. It definitely seems worth
playing with, though I'm not really sure how extensive it's D
support currently is :S


Out of curiosity, why are you not using dub (on the command-line)?


Re: compile automation, makefile, or whatever?

2014-09-16 Thread Cliff via Digitalmars-d-learn

On Tuesday, 16 September 2014 at 20:45:29 UTC, K.K. wrote:

On Tuesday, 16 September 2014 at 20:31:33 UTC, Cliff wrote:
Out of curiosity, why are you not using dub (on the 
command-line)?


I'm not against using it or anything, but I've found that it
didn't help me significantly nor did I have the patience to
figure out its whole set of issues; D by itself is already
enough trouble xD

Plus with my spastic work style, it kinda slowed me down.

However, it is something I may consider when I have an actually
organized project with a final goal. A lot of what I'm doing
right now is just experiments. Though, if CMake + D does the
trick then I might not use DUB in the end. Hard to say atm.


Would you be willing to provide some more detail on what about it
you didn't like (errors, missing features, etc.)?  I ask because
build systems are a particular interest of mine and I have
projects in this area which can always use more user data.


Re: compile automation, makefile, or whatever?

2014-09-16 Thread Cliff via Digitalmars-d-learn

On Tuesday, 16 September 2014 at 21:05:18 UTC, K.K. wrote:

On Tuesday, 16 September 2014 at 20:53:08 UTC, Cliff wrote:
Would you be willing to provide some more detail on what about it
you didn't like (errors, missing features, etc.)?  I ask because
build systems are a particular interest of mine and I have
projects in this area which can always use more user data.


I'll try, but I haven't used it at all since maybe.. April?

One of the main things that annoyed me about it was how sensitive
it could be. The best comparison I can give is that it reminded
me a lot of Haxe. Both are very flimsy, and very poorly
documented. (Nothing beats a good manual as far as I'm concerned!)


The other thing, as I briefly mentioned, was it really didn't
speed anything up, unless maybe you were working on a larger
project.


Obviously I'm not a master of any sort, but the main point I'd
take from this is it wasn't inviting.

Hope that helps a bit :3


Yep, that's useful information to me.  Over the years I have
found that build systems *generally* tend to be uninviting.  My
suspicion is that comes down to a few reasons:

1. Builds end up being a LOT more complicated than you would
expect as soon as you step out of a single project with a few
source files and default options.
2. Build tooling is typically built and maintained by people who
end up being relative specialists - either they are part of the
small cabal of people who know the tooling intimately, or they
have been forced into it and know just enough to get by for their
organization.
3. Most build tooling is designed to solve a particular subset of
actual build-related problems, with much less thought given to how
it fits holistically into the entire developer workflow.
4. Build tooling is almost never treated like an actual product -
documentation is written for wizards, not lay-people.

As a result, the casual user is a bit SOL.

(NOTE: This is not a rant specifically aimed at DUB, but my
general observation on the state of build tooling.)


Re: std.experimental.logger: practical observations

2014-09-15 Thread Cliff via Digitalmars-d

On Monday, 15 September 2014 at 18:24:07 UTC, Marco Leise wrote:

Ah, so you avoid recursion issues by separating the calls to
error() et altera from the actual process of writing to disk
or sending via the network.

Behind error() there would be a fixed implementation
controlled by the author of the logging library that just
appends the payloads to a list.
Another thread would pick items from that list and push them
into our Logger classes where we can happily use the logging
functionalities ourselves, because they would just get
appended to the list and wait for their time instead of
causing an immediate recursive call.

So basically your idea is message passing between the
application and a physical (probably low priority) logging
thread. This should also satisfy those who don't want to wait
for the logging calls to finish while serving web requests.
How do such systems handle a full inbox? In Phobos we have
http://dlang.org/phobos/std_concurrency.html#.OnCrowding


In MSBuild (where we used a custom but extensible logging system)
we had this exact issue with some of our larger customers who
were logging to remote databases.  Our solution in that case was
to block the caller and force a flush down to a certain backlog
level (we had a limit and hysteresis).  This is not unlike
garbage collector behavior.  In fact, you almost certainly want
to have a configuration of this sort so that users of your
library can help ensure that the logging subsystem does not
starve the rest of the application of memory.
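
Sketched very roughly in D (from memory - this is not the MSBuild
code, and the names and watermark numbers are invented):

import core.sync.condition : Condition;
import core.sync.mutex : Mutex;
import std.algorithm.comparison : min;

final class BoundedLogQueue
{
    private string[] pending;
    private bool throttling;          // true between high- and low-water marks
    private Mutex mtx;
    private Condition drained;
    private enum size_t highWater = 10_000;
    private enum size_t lowWater  = 1_000;

    this()
    {
        mtx = new Mutex;
        drained = new Condition(mtx);
    }

    // Called by application threads; blocks while we are throttling.
    void enqueue(string msg)
    {
        synchronized (mtx)
        {
            if (pending.length >= highWater)
                throttling = true;    // backlog too big: start blocking callers
            while (throttling)
                drained.wait();       // releases mtx while waiting
            pending ~= msg;
        }
    }

    // Called in a loop by the single writer thread, which does the slow
    // I/O (file, network, database) outside the lock.
    string[] drainBatch(size_t max = 1024)
    {
        synchronized (mtx)
        {
            immutable n = min(max, pending.length);
            auto batch = pending[0 .. n];
            pending = pending[n .. $];
            if (throttling && pending.length <= lowWater)
            {
                throttling = false;   // backlog is healthy again
                drained.notifyAll();  // wake any blocked producers
            }
            return batch;
        }
    }
}

Phobos' std.concurrency gives you a similar knob out of the box via
setMaxMailboxSize with OnCrowding.block, which is the link quoted
above.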

Another alternative (for completeness) is to drop the messages,
but this is almost invariably NOT desirable.


Re: std.experimental.logger: practical observations

2014-09-15 Thread Cliff via Digitalmars-d

On Monday, 15 September 2014 at 22:39:55 UTC, David Nadlinger
wrote:
On Monday, 15 September 2014 at 22:33:46 UTC, Robert burner 
Schadek wrote:

and you can do all that with std.logger.

again, the idea of std.logger is not to give you everything, 
because nobody knows what that even is, the idea is to make it 
possible to do everything and have it understandable later and 
use transparently


Issues like threading behavior and (a)synchronicity guarantees 
are part of the API, though, and need to be clarified as part 
of the std.logger design.


David


This is *really* what I am getting at.  Even if not another line
of code is written or another feature added, it's important to state
what is actually intended, so people know where the limits are a
priori rather than finding an implicit contract and then running
with it (leaving you with legacy behavior to potentially maintain
later).


Re: std.experimental.logger: practical observations

2014-09-14 Thread Cliff via Digitalmars-d

On Sunday, 14 September 2014 at 07:22:52 UTC, Marco Leise wrote:

Am Sat, 13 Sep 2014 14:34:16 +
schrieb Robert burner Schadek rburn...@gmail.com:

On Friday, 12 September 2014 at 16:08:42 UTC, Marco Leise 
wrote:


 Remember that the stdlog is __gshared? Imagine we set the
 LogLevel to off and while executing writeLogMsg ...

 * a different thread wants to log a warning to stdlog
 * a different thread wants to inspect/set the log level

 It is your design to have loggers shared between threads.
 You should go all the way to make them thread safe.

 * catch recursive calls from within the same thread,
   while not affecting other threads' logging
 * make Logger a shared class and work with atomicLoad/Store,
   a synchronized class or use the built-in monitor field
   through synchronized(this) blocks.

hm, I don't know of any magic pill for that. I guess this 
would require some dataflow analysis.


Why so complicated? In general - not specific to std.logger -
I'd wrap those calls in some function that acquires a mutex
and then check a recursion flag to abort the logging if this
thread has already been here.

synchronized(loggingMutex) {
  if (isRecursion) return;
  isRecursion = true;
  scope(exit) isRecursion = false;
  logger.writeLogMsg(...);
}


 I know when to throw an exception, but I never used logging
 much. If some function throws, would I also log the same
 message with error() one line before the throw statement?
 Or would I log at the place where I catch the exception?
 What to do about the stack trace when I only have one line per
 log entry?
 You see, I am a total newbie when it comes to logging and from
 the question that arose in my head I figured exceptions and
 logging don't really mix. Maybe only info() and debug() should
 be used and actual problems left to exception handling alone.

that depends on what your program requires. You can write 
more than one line, just indent it by a tab or two. Again, no 
magic pill as far as I know.


Ok, I'll experiment a bit and see what works best.


I'd like to throw my oar in here:

On the subject of recursion, this is only a problem if the 
logging contract is that log methods are fully synchronous - was 
this an explicit design choice?


Loggers are not *necessarily* also debuggers.  When used for 
post-mortem analysis (the typical case), it is not generally 
important that log data has been written by the time any given 
log method has returned - if the caller *intends* that, the 
logging system can have a sync/flush method similar to I/O 
behavior, or a configuration option to force fully synchronized 
behavior.


Personally I am not a huge fan of any potential I/O calls being 
synchronous by default - particularly when those calls may easily 
result in long-running operations, e.g. a network call or a wait on 
a contended resource.  Coming from the .NET world, I have seen far 
too many large programs with user-facing components where blocking 
I/O by default leads to poor user experiences as the program starts 
to stutter or hits timeouts the original author did not test for or 
intend.  With an extensible logging system, the same can - I mean 
*will* - come about.  Logging, to my mind, is usually a 
fire-and-forget utility - I want to see what happened (past tense), 
not what is happening now (that's what a debugger is for).


A way to solve this is to make the (or some) logging methods 
asynchronous - e.g. a logger.writeLogAsync(...) that returns 
immediately.  As an implementation detail, the log request gets 
posted to an internal queue serviced by a logging thread (thread 
pool thread is probably fine for this).  Since requests are 
*conceptually* independent from each other, this breaks the 
unintentional semantic dependence which occurs when recursion is 
introduced within the logging system itself.  I think this is 
*generally* the behavior you want, and specialized methods can be 
used to enforce synchronized semantics on top of this.  This 
system also guarantees log message ordering within a given thread.
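
Concretely, the shape I have in mind is roughly the following
(illustrative only - writeLogAsync, LogMsg and friends are made-up
names, not std.logger API):

import std.concurrency;
import std.stdio : stderr;

struct LogMsg   { string text; }
struct Shutdown { }

// Runs on its own thread; the only place that actually touches the sink.
void logWorker()
{
    bool running = true;
    while (running)
    {
        receive(
            (LogMsg m)   { stderr.writeln(m.text); },
            (Shutdown s) { running = false; }
        );
    }
}

struct AsyncLogger
{
    Tid worker;

    static AsyncLogger start()
    {
        return AsyncLogger(spawn(&logWorker));
    }

    // Fire-and-forget: returns as soon as the message is queued.
    // std.concurrency preserves ordering per sending thread.
    void writeLogAsync(string msg)
    {
        worker.send(LogMsg(msg));
    }

    void shutdown()
    {
        worker.send(Shutdown());
    }
}

// Usage:
//   auto log = AsyncLogger.start();
//   log.writeLogAsync("handled request in 12ms");
//   log.shutdown();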


If this queue is serviced by a threadpool thread, the next 
logical problem is to ensure that thread does not get tied 
up by one of the endpoints.  There are several ways to solve this 
as well.


- Cliff


Re: Idiomatic async programming like C# async/await

2014-09-14 Thread Cliff via Digitalmars-d-learn

On Sunday, 14 September 2014 at 09:19:11 UTC, Kagamin wrote:

On Friday, 12 September 2014 at 03:59:58 UTC, Cliff wrote:
...but std.parallelism.Task requires parameterization on the 
function which the task would execute - that is clearly an 
implementation detail of the store.


I think, you can wrap the Task in a class.

abstract class CTask
{
  abstract void wait();
}

// A template and a non-template class can't share the name CTask,
// so the result-typed layer gets its own name and derives from CTask.
abstract class CTaskOf(TResult) : CTask
{
  abstract TResult result();
}

class CTTask(TResult, TTask) : CTaskOf!TResult
{
  TTask task; // the concrete std.parallelism.Task
  override void wait(){ ... }
  override TResult result(){ ... }
}


Yep, that's what I figured.  Thanks :)



Re: Idiomatic async programming like C# async/await

2014-09-12 Thread Cliff via Digitalmars-d-learn

On Friday, 12 September 2014 at 07:15:33 UTC, Kagamin wrote:
async/await is not so much about futures/promises, but 
optimization of IO-bound operations, i.e. when you wait on 
network/disk, you don't consume stack, threads and similar 
resources, an analog in D is vibe.d


I should have been more clear - it's not the async/await bit I am
interested in so much as the Task behavior - that I have some
object which represents the (future) completed state of a task
without the recipient of that object having to know what the type
of the task function is as they are only interested in the task
result.

I'll take a closer look at vibe.d and see if they already have a
system representing this before I cook up my own.


Idiomatic async programming like C# async/await

2014-09-11 Thread Cliff via Digitalmars-d-learn

(New to D, old hand at software engineering...)

I come from .NET and have made heavy use of the async/await 
programming paradigm there.  In particular, the Task mechanism 
(futures/promises) lets one encapsulate the future result of some 
work and pass that around.  D seems to have something similar in 
std.parallelism.Task, but this seems to additionally encapsulate 
and expose the actual work to do.


What I want to do is be able to define an interface that performs 
certain possibly-slow operations and presents a Task-based 
interface.  Example in C#:


interface MyStore
{
    Task<Key> Store(byte[] content);
    Task<byte[]> Retrieve(Key key);
}

What I feel like I *want* to do in D is something roughly similar:

interface MyDStore
{
Task!Key Store(InputRange!ubyte content);
Task!(InputRange!ubyte) Retrieve(Key key);
}

...but std.parallelism.Task requires parameterization on the 
function which the task would execute - that is clearly an 
implementation detail of the store.


What is the correct D idiom to use in this case?
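
The closest I have come up with myself is to hide the concrete task
type behind a result-typed interface - a rough sketch (all names are
mine, and yieldForce is just one way to block):

import std.parallelism : task, taskPool;

// What callers see: only the result type, never the task's function type.
interface Future(T)
{
    T await();   // blocks until the underlying task has completed
}

// Adapter that owns the concrete, unwieldy Task!(fun, Args)* value.
private final class TaskFuture(T, TTask) : Future!T
{
    private TTask t;
    this(TTask t) { this.t = t; }
    override T await() { return t.yieldForce; }
}

// Helper so the concrete task type is inferred and never written out.
Future!T asFuture(T, TTask)(TTask t)
{
    return new TaskFuture!(T, TTask)(t);
}

// Usage:
int slowAnswer() { return 42; }

void demo()
{
    auto t = task!slowAnswer();
    taskPool.put(t);                 // run it on the pool
    Future!int f = asFuture!int(t);  // callers only see Future!int
    assert(f.await() == 42);
}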



Re: [OT] Microsoft filled patent applications for scoped and immutable types

2014-08-26 Thread Cliff via Digitalmars-d

On Tuesday, 26 August 2014 at 19:47:25 UTC, Casper Færgemand
wrote:

How would this even work?


It looks like this applies only to the inference of immutability
based on the structure of the type and its methods, as opposed to
a declaration of immutability.


Re: [OT] Microsoft filled patent applications for scoped and immutable types

2014-08-26 Thread Cliff via Digitalmars-d

On Tuesday, 26 August 2014 at 20:27:55 UTC, Timon Gehr wrote:

On 08/26/2014 10:13 PM, Cliff wrote:

On Tuesday, 26 August 2014 at 19:47:25 UTC, Casper Færgemand
wrote:

How would this even work?


It looks like this applies only to the inference of immutability
based on the structure of the type and its methods, as opposed to
a declaration of immutability.


It does not look like that to me.


Hmm, I went and re-read more closely, and it appears the Summary
differs from the claims in that very important detail...  that
sucks.