Re: std.benchmark ready for review. Manager sought after

2012-04-09 Thread Denis Shelomovskij

08.04.2012 21:31, Andrei Alexandrescu wrote:

On 4/8/12 11:59 AM, Denis Shelomovskij wrote:

Very good, but the minimum isn't the best guess. Personally I (and there will
be a lot of such maniacs, I suppose) will think that this (minimum) time
can be significantly smaller than the average time.


I've analyzed this quite a bit at work and the average and median are
not very informative. You need the mode, but in most benchmarks the mode
is very close to the minimum, so using the minimum is even better.

In speed measurements, all noise is additive (there's no noise that may
make a benchmark appear to run faster). There are also quite a few
outliers. Recording the average will include a fair amount of noise.


Yes, of course. I mean that the algorithm itself should satisfy some 
restrictions for such a measurement: it should run at the same speed 
every time. Many algorithms follow this convention, but not all.


Why will recording the average produce so much noise? As far as I can see, floating-point 
arithmetic is currently used without a strong reason, so it looks like the 
cost of this part isn't considered significant. Or is it just a temporary solution?


Anyway, it should be configurable using a compile-time parameter, so it won't 
burden those who don't need it.




Clearly there is noise during normal use as well, but incorporating it
in benchmarks as a matter of course reduces the usefulness of benchmarks
as a means to improve performance of the benchmarked code.


A graph is needed exactly because of that. Without a graph it really 
gives very little.





So a parameter (probably with a default value) should be added.
Something like an enum of flags telling what we want to know. At least
these look usable: minTime, maxTime,
standardDeviation, graph (yes, good old ASCII art).


Standard deviation is also not very useful because it includes all
outliers (some of which are very far away from the mode).


So a graph is needed.




Yes, graph is needed.


I am not sure about that. We may provide the raw measurement data for
programs that want to plot things, but plotting is beyond the charter of
std.benchmark.



Sorry, I meant a histogram, not a curve. A histogram can be shown in a 
console very well. And it is needed because it's the easiest way to show 
the benchmarked program's behaviour (and the noise behaviour). It also requires 
only about 80 integers to store its information and shouldn't introduce much 
noise.
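A minimal sketch of such a console histogram, assuming the raw samples are 
already collected as an array of tick counts (all names here are 
illustrative, not part of the proposed std.benchmark API):

import std.algorithm : max, min;
import std.array : replicate;
import std.stdio : writefln;

// Print an ASCII histogram of timing samples; the whole summary fits in a
// fixed number of buckets, i.e. roughly the 80 integers mentioned above.
void printHistogram(const(ulong)[] samples, size_t buckets = 60)
{
    if (samples.length == 0)
        return;

    ulong lo = ulong.max, hi = 0;
    foreach (s; samples) { lo = min(lo, s); hi = max(hi, s); }
    immutable width = (hi - lo) / buckets + 1;

    auto counts = new size_t[buckets];
    foreach (s; samples)
        ++counts[cast(size_t) min((s - lo) / width, buckets - 1)];

    size_t peak = 1;
    foreach (c; counts) peak = max(peak, c);

    foreach (i, c; counts)
        writefln("%10s | %s", lo + i * width, replicate("*", c * 40 / peak));
}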


IMHO, a histogram gives lots of information and will be a good addition.


--
Денис В. Шеломовский
Denis V. Shelomovskij


Re: A modest proposal: eliminate template code bloat

2012-04-09 Thread Dmitry Olshansky

On 09.04.2012 5:11, Daniel Murphy wrote:

"Dmitry Olshansky"  wrote in message
news:jlsmka$22ce$1...@digitalmars.com...


The refinement is merging prefixes and suffixes, of course.
And for that one needs to calculate hashes for all prefixes and all
suffixes. I will define _all_ later on.



I think you'll find that this is better done in the compiler instead of the
linker.  Merging prefixes is problematic because at some point you will need
to work out which tail to execute, so you'll always need to modify the
generated code.


"Easy": just add a hidden pointer argument to functions that have merged 
prefix (call it the dispatch pointer). The prefix part of the code is followed by an 
indirect jump through this pointer.
The compiler arranges things so that every time the function is called, the correct 
dispatch address is passed behind the scenes.


BTW there are no extra checks and such: it's one naked indirect jump, and 
it's totally predictable, unlike, say, a switch jump
(well, unless you use a few copy-paste-susceptible functions in the same 
loop that turn out to have their prefixes merged).


It still implies that prefix merging should be applied with more care 
than suffix merging. Once this is tested and working, even merging arbitrary 
parts of functions is doable with this approach.




Merging suffixes is easier, you can merge all returning blocks with
identical code, and then merge all blocks that always jump to the same
blocks, etc.
This will need to happen after code generation if you want to merge int/uint
code, which would be difficult in dmd's back end.



Any chance to fit this into the IR-->CodeGen step? Like using an alternative 
comparison rather than memcmp. Or better, do basically the same algorithm but on a 
map!()(IR) that morphs things so that some identical ops (e.g. 
uint/int == and !=) are considered the same. In fact this might be even 
faster than generating useless machine code!
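To make the int/uint case concrete, here is a small example where two 
instantiations produce byte-for-byte identical machine code and differ only 
in their mangled names (illustrative only, not tied to any particular 
backend):

// Signed and unsigned instantiations of this template generate the same code.
bool contains(T)(const(T)[] haystack, T needle)
{
    foreach (e; haystack)
        if (e == needle)
            return true;
    return false;
}

// contains!int and contains!uint are candidates for merging, whether that
// happens in the compiler, on the IR as suggested above, or in the linker.
alias containsInt  = contains!int;
alias containsUint = contains!uint;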



Merging functions with identical bodies is of course much easier, and can be
done in the linker without needing to modify any code (just the
relocations).





--
Dmitry Olshansky


Re: Small Buffer Optimization for string and friends

2012-04-09 Thread Don

On 08.04.2012 07:56, Andrei Alexandrescu wrote:


For this to happen, we need to start an effort of migrating built-in
arrays into runtime, essentially making them templates that the compiler
lowers to. So I have two questions:


vote -= real.infinity.

That would kill D.



Re: Discussion on Go and D

2012-04-09 Thread Manu
On 9 April 2012 04:09, Andrej Mitrovic  wrote:

> On 4/9/12, Manu  wrote:
> > I don't follow. Can you give an example that shows this insecurity?
>
> I mean escaping references to locals:
>
> ref int xref;
> void foo() {
>   int x;
>   xref = x;
> }
>
> or
>
> ref int foo() {
>   int x;
>   ref int xref = x;
>   return xref;
> }
>
> I mean a ref would basically be a pointer with some syntax sugar, no?
> It would have the same drawbacks as a pointer.
>

Nobody returns a ref to a local from a function, and the compiler can
easily warn about that.
Sure, but that's all this was ever meant to be, right? alias as sugar to
simplify long expressions... except alias is unsafe too, but in a different
and more subtle way.


Re: Hitchikers Guide to Porting Phobos / D Runtime to other architectures

2012-04-09 Thread Johannes Pfau
On Sun, 08 Apr 2012 21:08:52 +0200,
"Iain Buclaw" wrote:

> I got asked whether there are any porting hints for phobos on 
> other architectures the other day from the debian GCC 
> maintainers.  So I gathered there must be at least a dedicated 
> wiki page or article written up on the subject. :)
> 
> I know there are a few working on porting gdc and associated 
> libraries over to ARM (with my assistance from the compiler 
> side).  So please tell, what are your experiences? Successes?  
> Failures?  What tips would you give to someone wanting to port to 
> their own architecture?
> 
> Regards
> Iain

(This is mostly about porting to a different C library. I don't
remember many issues when porting to a different CPU architecture)

Issues I hit with druntime:

* Adapting the core.stdc bindings to something different than the
  currently supported C libraries sucks: The version blocks are
  sometimes completely wrong. For example Android's bionic is a C
  library based on BSD code, but running on Linux. As a result
  sometimes the version(FreeBSD) blocks apply for bionic, but sometimes
  the version(linux) blocks are right. I basically had to rewrite
  the complete core.stdc bindings. This is an issue because druntime
  and phobos do not distinguish between OS/Kernel and C library.

* Wrong constants or macros in the C bindings are very hard to spot -
  you'll only notice those at runtime

* When statically linking the phobos/druntime library you are not warned
  about missing symbols. For shared libraries -Wl,--no-undefined can
  be used; however, there are some issues with that as well:
  
(http://stackoverflow.com/questions/2356168/force-gcc-to-notify-about-undefined-references-in-shared-libraries
  -- see the second answer)

* Bionic implements some functions only as macros and never exports
  them as functions (htons, etc.). Because of the previous point it's easy
  to miss that.

Ideally all of the core.stdc bindings should be generated
automatically. This is possible if we can run code (using offsetof,
alignof, etc) but it's not that easy for cross compilation. I thought
about hooking into the GCC C frontend to do that, but I had no time to
look at it yet.

* All those issues also apply to phobos, where phobos uses custom C
  bindings / extern(C) declarations.

* I had to edit some stuff in std.stdio (because Android has no wide
  character/fwide support). Templates can be annoying in this case:
  some if(isOutputRange!T) chains hid an error in the IO code; it took
  me some time to find that problem. The reported error was completely
  misleading (cannot put dchar[] into LockingTextWriter or something)

* When adding new, system specific code to a module and using selective
  imports, that may affect other modules (can't remember which compiler
  bug this was). This means that adding an import in one module might
  break another module on another architecture.

* Porting the GC doesn't seem to be too difficult, but some care is
  needed to get stack scanning/TLS scanning right (if you have random
  crashes, it's either the GC not working (probably not scanning the
  stack/TLS) or -fno-section-anchors missing)

* Always use "-fno-section-anchors". It's not needed for simple code,
  but I was chasing a weird bug in derelict until I realized I didn't
  compile derelict with "-fno-section-anchors".

* Right now, issue 284 is a little annoying. At least unittest and
  phobos/druntime as shared libraries won't work at all till that's
  fixed.

* AFAIK the unittests cannot be run when cross-compiling right now?

* There might be more issues like this one where phobos is checking for
  a wrong status code:
  (https://github.com/D-Programming-Language/phobos/pull/487)

* For systems where long double isn't available, fixing core.stdc.math
  is annoying. I have to implement a proper solution which works
  for all systems without long double.

However, all things considered, most issues arise when interfacing with C. The D
code most of the time 'just works'.


Re: Small Buffer Optimization for string and friends

2012-04-09 Thread Timon Gehr

On 04/09/2012 10:24 AM, Don wrote:

On 08.04.2012 07:56, Andrei Alexandrescu wrote:


For this to happen, we need to start an effort of migrating built-in
arrays into runtime, essentially making them templates that the compiler
lowers to. So I have two questions:


vote -= real.infinity.

That would kill D.



Why does this even compile?

void main(){
    long vote;
    vote -= real.infinity;
    assert(vote == long.min);
}


Re: Small Buffer Optimization for string and friends

2012-04-09 Thread Manu
On 9 April 2012 11:24, Don  wrote:

> On 08.04.2012 07:56, Andrei Alexandrescu wrote:
>
>  For this to happen, we need to start an effort of migrating built-in
>> arrays into runtime, essentially making them templates that the compiler
>> lowers to. So I have two questions:
>>
>
> vote -= real.infinity.
>
> That would kill D.


How do you figure?

After thinking on it a bit, I'm becoming a little worried about this move
for 2 rarely considered reasons:
Using lowering to a template, debug (unoptimised) performance will probably
get a lot slower, which is really annoying. And debugging/stepping might
become considerably more annoying too, if every time I press F11 (step in)
over a function call that happens to receive an arg from an array, the
debugger then steps into the array template's index operator... We'd be no
better off than with STL, unless the language has clever ways of hiding
this magic from the debugger too, and optimising/inlining the index even in
debug builds...? But this is the built-in array, and not a library we can
optionally not use.


Re: Discussion on Go and D

2012-04-09 Thread Jacob Carlborg

On 2012-04-09 02:21, Andrej Mitrovic wrote:

On 4/9/12, Andrei Alexandrescu  wrote:

and pass-by-alias


Speaking of alias, one killer feature would be to enable using alias
for expressions. E.g.:

struct Window { struct Point { int x, y; } Point point; }
void test() {
 Window window;
 alias window.point.x x;
 // use 'x' here which is really window.point.x
}

It makes it simpler to manipulate nested structs and their fields by
reference without involving pointers or using with statements. AFAIK
C++ can use references for this purpose (a la int& x =
window.point.x;), but I guess this isn't very efficient unless the
compiler can optimize it.

Besides myself I've also seen other people request it (I think Nick S.
wanted the feature).


I want this feature as well. I also want it to be possible to document.

--
/Jacob Carlborg


Re: Discussion on Go and D

2012-04-09 Thread Jacob Carlborg

On 2012-04-09 02:24, Alex Rønne Petersen wrote:


Google likes to invent random useless languages. See: Dart. Both
languages are solutions looking for problems. ;)


Actually I like the idea behind Dart, to replace JavaScript. But that's 
basically the only thing I like about it.


--
/Jacob Carlborg


Re: Hitchikers Guide to Porting Phobos / D Runtime to other architectures

2012-04-09 Thread Jacob Carlborg

On 2012-04-09 11:05, Johannes Pfau wrote:


* Adapting the core.stdc bindings to something different than the
   currently supported C libraries sucks: The version blocks are
   sometimes completely wrong. For example Android's bionic is a C
   library based on BSD code, but running on Linux. As a result
   sometimes the version(FreeBSD) blocks apply for bionic, but sometimes
   the version(linux) blocks are right. I basically had to rewrite
   the complete core.stdc bindings. This is an issue because druntime
   and phobos do not distinguish between OS/Kernel and C library.


Is it possible to treat bionic as its own platform:


version (bionic) {}

else version (linux) {}

and so on.
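A sketch of how such a split could look inside a binding module, assuming 
the build defines a dedicated version identifier for the C library (the 
identifier name "Bionic" is an assumption here):

// One branch per C library, independent of the OS/kernel identifiers.
version (Bionic)
{
    // bionic-specific declarations (BSD-derived, but running on Linux)
}
else version (linux)
{
    // glibc declarations
}
else version (FreeBSD)
{
    // FreeBSD libc declarations
}
else
    static assert(false, "Unsupported C library");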

--
/Jacob Carlborg


Re: Discussion on Go and D

2012-04-09 Thread Artur Skawina
On 04/09/12 02:21, Andrej Mitrovic wrote:
> On 4/9/12, Andrei Alexandrescu  wrote:
>> and pass-by-alias
> 
> Speaking of alias, one killer feature would be to enable using alias
> for expressions. E.g.:
> 
> struct Window { struct Point { int x, y; } Point point; }
> void test() {
> Window window;
> alias window.point.x x;
> // use 'x' here which is really window.point.x
> }
> 
> It makes it simpler to manipulate nested structs and their fields by
> reference without involving pointers or using with statements. AFAIK
> C++ can use references for this purpose (a la int& x =
> window.point.x;), but I guess this isn't very efficient unless the
> compiler can optimize it.

struct Window { struct Point { int x, y; } Point point; }
void test() {
    Window window;
    @property ref x() { return window.point.x; }
    // use 'x' here which is really window.point.x
}

And, yes, the compiler can and does optimize it away.

artur


Re: Foreach Closures?

2012-04-09 Thread Ary Manzana

On 4/9/12 7:26 AM, Kevin Cox wrote:

I was wondering about the foreach statement and when you implement
opApply() for a class it is implemented using closures.  I was wondering
if this is just how it is expressed or if it is actually syntactic
sugar.  The reason I ask is because if you have a return statement
inside a foreach it returns from the outside function not the "closure".

I was just wondering if anyone could spill the implementation details.

Thanks,
Kevin


In this video you can see what foreach with opApply gets translated to 
(at about minute 1):


http://www.youtube.com/watch?v=oAhrFQVnsrY
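Roughly, the lowering works like this: the loop body becomes a delegate 
returning int, opApply calls it once per element, and a non-zero return 
value is how break/return/goto propagate out of the loop. A sketch of the 
idea (not the exact code dmd emits):

import std.stdio : writeln;

struct Range3
{
    // The foreach body arrives here as a delegate returning int.
    int opApply(scope int delegate(ref int) dg)
    {
        for (int i = 0; i < 3; ++i)
            if (auto r = dg(i))   // non-zero: the body broke out or returned
                return r;         // propagate it out of the loop
        return 0;
    }
}

void main()
{
    foreach (x; Range3())
    {
        writeln(x);
        if (x == 1)
            break;   // in the lowered form: a non-zero return from the delegate
    }
}

A return statement inside the body works the same way: the generated 
delegate returns a distinct non-zero code, opApply passes it through, and 
the calling function performs the actual return afterwards, which is why 
it returns from the outer function rather than from the "closure".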


Re: Hitchikers Guide to Porting Phobos / D Runtime to other architectures

2012-04-09 Thread Iain Buclaw
On 9 April 2012 10:35, Jacob Carlborg  wrote:
> On 2012-04-09 11:05, Johannes Pfau wrote:
>
>> * Adapting the core.stdc bindings to something different than the
>>   currently supported C libraries sucks: The version blocks are
>>   sometimes completely wrong. For example Android's bionic is a C
>>   library based on BSD code, but running on Linux. As a result
>>   sometimes the version(FreeBSD) blocks apply for bionic, but sometimes
>>   the version(linux) blocks are right. I basically had to rewrite
>>   the complete core.stdc bindings. This is an issue because druntime
>>   and phobos do not distinguish between OS/Kernel and C library.
>
>
> Is it possible to treat bionic as its own platform:
>
>
> version (bionic) {}
>
> else version (linux) {}
>
> and so on.
>

Personally I feel that people porting to specific architectures should
maintain their differences in separate files under a /ports directory
structure - let's say core.stdc.stdio as a code example. The version for
bionic would be under /ports/bionic/core/stdc/stdio.d, and that is the
module that gets compiled into the library when building for bionic.
When installing, the build process generates a header file of the
bionic version of core.stdc.stdio and puts the file in the correct
/include/core/stdc/stdio.di location.

Though it is fine to use version {} else version {} else static
assert(false); when dealing with a small set of architectures, I feel
strongly this is not practical when considering there are 23+
architectures and 12+ platforms that could appear in mixed combinations.
The result would either be lots of code duplication everywhere, or
just a very long block of spaghetti code.  Every port in one file
would (eventually) make it difficult for maintainers, IMO.

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


Re: A modest proposal: eliminate template code bloat

2012-04-09 Thread Artur Skawina
On 04/09/12 08:21, Somedude wrote:
> On 08/04/2012 16:18, H. S. Teoh wrote:
>> On Sun, Apr 08, 2012 at 03:01:56PM +0400, Dmitry Olshansky wrote:
>>> I think it's been ages since I meant to ask why nobody (as in
>>> compiler vendors) does what I think is rather simple optimization.
>>>
>>> In the short term the plan is to introduce a "link-time" flavored
>>> optimization at code generation or (better) link step.
>>
>> This would be incompatible with how current (non-dmd) linkers work. But
>> I do like the idea. Perhaps if it works well, other linkers will adopt
>> it? (Just like how the gcc linker adopted duplicate template code
>> elimination due to C++ templates.)
>>
>> T
>>
> 
> Actually, in C++ (as well as D), the added benefit would be a greatly
> improved compilation speed, wouldn't it ?
> I bet if the idea works in D and proves increased compilation, compiler
> writers would be very compelled to implement it in C++.
> 

They already do.

It's a very simple and trivial optimization; the question is only about
programmer expectations. Every (memory) object having a unique address
*is* a valuable feature with clear benefits. (C++ has functions as
non-objects, that's why the compilers can get away with the optimization.)
Note that this does not actually mean that everything has to be placed
at a unique address -- it only needs to behave *AS IF*, as long as the
program can't tell the difference.


On 04/09/12 02:59, Daniel Murphy wrote:
> "Artur Skawina"  wrote in message 
> news:mailman.1480.1333900846.4860.digitalmar...@puremagic.com...
>>
>> Note that my point is just that the compiler needs to emit a dummy
>> so that the addresses remain unique, eg
>>
>>   module.f!uint:
>>   jmp module.f!int
> 
> Or use a nop slide before the start of the function.  Since we're modifying 
> the object file format anyway, it would be trivial for the compiler to mark 
> functions which have their address taken as needing a unique address. 

Nice idea. Given today's amount of alignment nops emitted it would usually
be completely free.

But I now think the optimization would be ok, and should even be on by default
for the case where the identical code sequence was generated from an
identical token sequence. That would handle the template bloat issue while
avoiding most of the problems; having non-unique addresses for this case
should be harmless and would just need to be properly documented.

It's only the random-completely-unrelated-function-replacement that is 
problematic - think such functions randomly appearing in the call chain,
confusing both downstream code and programmers looking at backtraces or
perf profiles, and breakpoints that magically appear out of nowhere at random.

artur


Re: More ddoc complaints

2012-04-09 Thread Stewart Gordon

On 08/04/2012 02:08, Adam D. Ruppe wrote:

I have a pull request up to remove the big misfeature
of embedded html in ddoc, and it is pending action,
from me, to answer some of Walter's concerns.


What have you done - just made it convert < > & in documentation 
comments to &lt; &gt; &amp; before processing?


What is the user who wants some output format other than HTML or XML to do?



http://arsdnet.net/web.d/std_dom.html#Form.addValueArray

It is extremely difficult to document an HTML library
when your HTML examples are misinterpreted as
markup!


Create LT, GT and AMP macros and use them in your code examples.


Also, ddoc should outdent the code examples:

http://arsdnet.net/web.d/std_cgi.html#Cgi.request

The examples are indented in my source to line up
with the declarations, but this indentation doesn't
make sense in the output!


I agree.  Ddoc should remove the lowest common level of indentation from 
each code sample it picks up.
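A sketch of that outdenting step, assuming the sample arrives as a single 
string (similar in spirit to Phobos's std.string.outdent):

import std.algorithm : min;
import std.array : join, split;
import std.string : stripRight;

// Strip the smallest common leading-space count from every non-blank line,
// so samples indented to line up with their declaration come out flush left.
string outdentSample(string sample)
{
    auto lines = sample.split("\n");

    size_t common = size_t.max;
    foreach (line; lines)
    {
        if (line.stripRight.length == 0)
            continue;                       // blank lines don't count
        size_t spaces;
        while (spaces < line.length && line[spaces] == ' ')
            ++spaces;
        common = min(common, spaces);
    }
    if (common == size_t.max)
        common = 0;

    foreach (ref line; lines)
        if (line.length >= common)
            line = line[common .. $];
    return lines.join("\n");
}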


Stewart.


Re: Foreach Closures?

2012-04-09 Thread Kevin Cox
On Apr 9, 2012 5:59 AM, "Ary Manzana"  wrote:

> In this video you can see what foreach with opApply gets translated to
(at about minute 1):
>
> http://www.youtube.com/watch?v=oAhrFQVnsrY

Thanks, that's perfect. I'm definitely going to try out Descent.


Re: DIP16: Transparently substitute module with package

2012-04-09 Thread Steven Schveighoffer
On Fri, 06 Apr 2012 20:25:23 -0400, Jonathan M Davis   
wrote:



DIP15 doesn't fix the explicit path problem though. You can't change
std/algorithm.d into std/algorithm/ (with sorting.d, search.d, etc.)  
without
breaking code. You could make std/algorithm.d publicly import std/alg/*  
and
then DIP15 would allow you to import std.alg to get all of its  
sub-modules,
but you're still forced to use a module to publicly import symbols as  
part of

a migration path, and you can't split a module in place.


I think either you or I am missing something.

In DIP15, if you define std/algorithm/_.d, and then import std.algorithm,  
it imports std/algorithm/_.d, which then 1. publicly imports other  
modules, and 2. aliases symbols to the name std.algorithm.symbol.  At  
least, this is how I understand the intent.  It seems equivalent to me to  
the package.d proposal, it's just using _.d instead of package.d.


If you import std.algorithm.sorting, and try and use std.algorithm.sort,  
yes it will not work.  But this does not break existing code (which does  
not import std.algorithm.sorting), and I find it odd that we want to make  
std.algorithm.sort work if you don't import std.algorithm.


-Steve


Re: custom attribute proposal (yeah, another one)

2012-04-09 Thread Steven Schveighoffer

On Sat, 07 Apr 2012 09:59:27 -0400, Jacob Carlborg  wrote:


On 2012-04-06 19:36, Steven Schveighoffer wrote:

so now I must define a type for every attribute? I'd rather just define
a function.

What if I have 20 string attributes, I must define a new attribute type
for each one? This seems like unneeded bloat.


If we want to be able to pass a key-value list to the attribute, I think  
a struct is needed.


What if they have nothing to do with each other?  What I'm getting at is,  
I don't want to define a struct just so I can pass a string.  It's  
unnecessary.



BTW, could both structs and functions be allowed?


Yes, I replied early on to Timon Gehr, this should be allowed.  Simply  
because a struct ctor is a function like any other function, called by a  
standard D symbol.  It doesn't make sense if you don't allow it, because  
it's so easy to create a factory method that forwards to it.


-Steve


Re: A modest proposal: eliminate template code bloat

2012-04-09 Thread H. S. Teoh
On Mon, Apr 09, 2012 at 08:21:08AM +0200, Somedude wrote:
> On 08/04/2012 16:18, H. S. Teoh wrote:
> > On Sun, Apr 08, 2012 at 03:01:56PM +0400, Dmitry Olshansky wrote:
> >> I think it's been ages since I meant to ask why nobody (as in
> >> compiler vendors) does what I think is rather simple optimization.
> >>
> >> In the short term the plan is to introduce a "link-time" flavored
> >> optimization at code generation or (better) link step.
> > 
> > This would be incompatible with how current (non-dmd) linkers work. But
> > I do like the idea. Perhaps if it works well, other linkers will adopt
> > it? (Just like how the gcc linker adopted duplicate template code
> > elimination due to C++ templates.)
> > 
> > T
> > 
> 
> Actually, in C++ (as well as D), the added benefit would be a greatly
> improved compilation speed, wouldn't it ?
> I bet if the idea works in D and proves increased compilation, compiler
> writers would be very compelled to implement it in C++.

Exactly my point. I *want* to give incentive to toolchain devs to add
these kinds of enhancements to linkers in general.


T

-- 
Why is it that all of the instruments seeking intelligent life in the universe 
are pointed away from Earth? -- Michael Beibl


Re: custom attribute proposal (yeah, another one)

2012-04-09 Thread Steven Schveighoffer
On Fri, 06 Apr 2012 18:40:29 -0400, Piotr Szturmaj   
wrote:



Steven Schveighoffer wrote:



Unused functions do not make it into the EXE.


Are unused structs compiled into EXE?


Their TypeInfo_Struct is.  If they are compiled in their own module, then  
I think it's possible the linker will leave the whole object out.



foreach(name, value; __traits(getAttributes, symbol)) {...}

hereby added to the proposal.


Ok, but how do you filter that and pass the result to another template?  
It should be easy if __traits(getAttributes, symbol) would return an  
expression tuple, which is what I'd like to see.


It has to be a tuple, since the type of value may change on each  
iteration.  It likely must be a tuple of name-value tuples.



No, it doesn't generate more typeinfo that must go into the EXE. When
the EXE is built, all associated bloat should disappear, it's only
needed during compilation.


Those types are only needed during compilation too. However, I don't  
know if they're always included into binary or only when they're used.


I think they are.  I don't know if it's required though.  I don't know  
enough about the link-time optimizations available to see if they can be  
weeded out if unused.




I think you are missing how the metadata is stored as key-value pairs,
with the key being the name of the function that was used.


Ok, but it needs more work in the compiler, comparing to identifier  
search and remembering expression tuple of a symbol.


The compiler can "build" a struct if it wants to, it reduces to the  
equivalent problem.


Also, I just found a major drawback of this approach: consider  
parameterless attributes like @NotNull. What would you return from  
function named NotNull()?


void?  There is no need to store a type, it's just "is NotNull valid or  
not?".  Note that this is somewhat of a red herring, a NotNull attribute  
cannot implement what it purports to.



This is how it's done in C# by the way.


Yes I know. I don't think we need to limit ourselves this way, C# does
not have the compile-time power that D does.


I didn't state that we shouldn't use compile-time :)


My point was, maybe C# took this route specifically because their lack of  
compile-time facilities didn't allow them a better solution like mine ;)


-Steve


Re: custom attribute proposal (yeah, another one)

2012-04-09 Thread Steven Schveighoffer

On Sat, 07 Apr 2012 10:00:19 -0400, Jacob Carlborg  wrote:


On 2012-04-06 19:37, Steven Schveighoffer wrote:

On Fri, 06 Apr 2012 12:53:51 -0400, Piotr Szturmaj

struct Author { string name = "empty"; }
// struct Author { string name; } - this works too


I think the point is, we should disallow:

@Author int x;

-Steve


Why?


I misspoke.  The person who implemented the @Author attribute probably  
wants to disallow specifying an Author attribute without a name.  I don't  
think we should disallow that on principle, I meant in the context it  
should be disallowed.


-Steve


Re: custom attribute proposal (yeah, another one)

2012-04-09 Thread Steven Schveighoffer

On Sat, 07 Apr 2012 07:26:26 -0400, deadalnix  wrote:


On 06/04/2012 22:46, Mafi wrote:

Also, if I see:

@square(5) int foo();

How do I know that I have to use __traits(getAttribute, foo, Area)?

Another possibility:

@attribute Area area(int w, int h) { return Area(w, h);}
@attribute Area area(Area a) { return a;}

Area square(int a) { return Area(a, a);}

@area(5, 5) int foo();
@area(square(5)) int bar();

-Steve


The second possibility looks good. Especially because the lack of
@attribute on square disallows @square.

Mafi


This is adding code just for the pleasure of adding more code. Why can't  
I construct Area directly as an attribute?


See http://forum.dlang.org/post/op.wcct2shqeav7ka@localhost.localdomain

I think you should be able to construct it by @attribute'ing a struct.   
But this sub-thread is about changing the name of the function used for  
construction while keeping the type as the attribute name.


-Steve


Re: Foreach Closures?

2012-04-09 Thread Manu
OMG, DO WANT! :P
Who wrote this? I wonder if they'd be interested in adapting it to VisualD
+ MonoDevelop?

On 9 April 2012 12:56, Ary Manzana  wrote:

> On 4/9/12 7:26 AM, Kevin Cox wrote:
>
>> I was wondering about the foreach statement and when you implement
>> opApply() for a class it is implemented using closures.  I was wondering
>> if this is just how it is expressed or if it is actually syntatic
>> sugar.  The reason I aski is because if you have a return statement
>> inside a foreach it returns from the outside function not the "closure".
>>
>> I was just wondering if anyone could spill the implementation details.
>>
>> Thanks,
>> Kevin
>>
>
> In this video you can see what foreach with opApply gets translated to (at
> about minute 1):
>
> http://www.youtube.com/watch?v=oAhrFQVnsrY
>


Re: custom attribute proposal (yeah, another one)

2012-04-09 Thread Steven Schveighoffer

On Sat, 07 Apr 2012 10:11:16 -0400, Jacob Carlborg  wrote:


On 2012-04-06 20:52, Steven Schveighoffer wrote:


Also, if I see:

@square(5) int foo();

How do I know that I have to use __traits(getAttribute, foo, Area)?


Isn't "square" the name of the attribute? In that case you would use:

__traits(getAttribute, foo, square)


The argument was to use the name of the type returned as the attribute  
name instead of the function.  That is not my proposal.  The suggested  
case is to be able to use a different name to build the same attribute, to  
be more intuitive.


i.e. both area and square create the Area attribute, but square only takes  
one parameter because it's a square.  Kind of like saying "the area is  
square".


So my counter point above is in the context that the type name of the  
return value becomes the attribute name.


-Steve


Re: std.benchmark ready for review. Manager sought after

2012-04-09 Thread Andrei Alexandrescu

On 4/9/12 2:06 AM, Denis Shelomovskij wrote:

Why will recording the average produce so much noise?


As I explained, the average takes noise and outliers (some very large, 
e.g. milliseconds in a benchmark that takes microseconds) into account. 
The minimum is shielded from this issue. In the limit, the minimum for 
infinitely many measurements is the sought-after result.
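A small illustration of that: with purely additive noise, the smallest of 
many observations converges on the true cost. This sketch uses core.time 
rather than the proposed std.benchmark API (names are illustrative):

import core.time : Duration, MonoTime;

// Run fun() many times and keep the fastest observation.
Duration bestOf(alias fun)(size_t runs)
{
    auto best = Duration.max;
    foreach (_; 0 .. runs)
    {
        immutable start = MonoTime.currTime;
        fun();
        immutable elapsed = MonoTime.currTime - start;
        if (elapsed < best)
            best = elapsed;
    }
    return best;
}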



As I see, floating
point arithmetic is now used without a strong reason so it looks like a
time of this part isn't valuable. Or is it just a temporary solution?


I don't understand "time of this part".


Anyway it should be configurable using a CT parameter, so it will not
abuse one who doesn't need it.


We'd like the framework to do the right thing, and the average does not 
seem to be it.



IMHO, a histogram gives lots of information and will be a good addition.


I disagree.


Andrei




Re: Small Buffer Optimization for string and friends

2012-04-09 Thread Andrei Alexandrescu

On 4/9/12 3:24 AM, Don wrote:

On 08.04.2012 07:56, Andrei Alexandrescu wrote:


For this to happen, we need to start an effort of migrating built-in
arrays into runtime, essentially making them templates that the compiler
lowers to. So I have two questions:


vote -= real.infinity.

That would kill D.


Why?

Andrei



Re: custom attribute proposal (yeah, another one)

2012-04-09 Thread Steven Schveighoffer

On Sat, 07 Apr 2012 12:48:00 -0400, Jacob Carlborg  wrote:


On 2012-04-07 05:29, Kapps wrote:


I slightly prefer this function method over the struct method because:
1) No need to generate a custom struct for everything. Plenty of things
are just a true or false, or a string. Saves a little bit of TypeInfo
generation.


But you still need to create a function.


Functions are easier for the linker to deal with.  The main point here is,  
no TypeInfo is needed.





2) The more important one: The possibility to eventually include an
alias template parameter. This allows things like looking up whether the
symbol with the attribute has other attributes applied, or determining
type. This allows things like constraints, and can be a nice benefit.


This can't be done for structs?


IFTI.  It possibly can be added to struct ctors (I argue it should be),  
but is not today.


I think the struct approach is fine for some attributes, and I think it  
should be doable to @attribute either functions or structs.  I just want  
the most generic, basic feature possible.  I think Timon has the best idea  
that any callable CTFE symbol should be able to be an attribute.


At this point it has become a "structs are a good solution, why not also  
allow functions?" argument.


-Steve


Re: Foreach Closures?

2012-04-09 Thread Kevin Cox
On Apr 9, 2012 9:19 AM, "Manu"  wrote:
>
> OMG, DO WANT! :P
> Who wrote this? I wonder if they'd be interested in adapting it to
VisualD + MonoDevelop?
>
>
> On 9 April 2012 12:56, Ary Manzana  wrote:
>>
>> On 4/9/12 7:26 AM, Kevin Cox wrote:
>>>
>>> I was wondering about the foreach statement and when you implement
>>> opApply() for a class it is implemented using closures.  I was wondering
>>> if this is just how it is expressed or if it is actually syntatic
>>> sugar.  The reason I aski is because if you have a return statement
>>> inside a foreach it returns from the outside function not the "closure".
>>>
>>> I was just wondering if anyone could spill the implementation details.
>>>
>>> Thanks,
>>> Kevin
>>
>>
>> In this video you can see what foreach with opApply gets translated to
(at about minute 1):
>>
>> http://www.youtube.com/watch?v=oAhrFQVnsrY
>

Unfortunately I can't get it working.  I'll have to keep fiddling.


Re: More ddoc complaints

2012-04-09 Thread Adam D. Ruppe

On Monday, 9 April 2012 at 11:05:10 UTC, Stewart Gordon wrote:
What have you done - just made it convert < > & in 
documentation comments to &lt; &gt; &amp; before processing?


In ddoc's source code, there was a macro called ESCAPES
already, but it wasn't actually used.

My patch enables the use of that macro and runs the input
text, before macro processing, through it.

The default is this:
ESCAPES = /</&lt;/
          />/&gt;/
          /&/&amp;/

(check out doc.c in dmd's source, it is already there)

And you can redefine it to whatever you want in your
macro file.

My patch also removes other html specific processing
in the compiler, since I'm pretty sure it is all
obsolete with an escaping run.


If you want to output html, you just make a macro:

B = <b>$1</b>

and that still works.

What is the user who wants some output format other than HTML 
or XML to do?


That's the beauty of the ESCAPES macro - you can
redefine it however you want.


Create LT, GT and AMP macros and use them in your code examples.


There are two problems with that: 1) it is hideous
and 2) what if the user wants some format other
than html?

Suppose your format escapes \. Should I defensively
make a $(BACKSLASH) macro too?

What if a dot is special?

And so on, the only correct solution is proper
escaping, and as an added bonus, it looks infinitely
better in source!



Re: TickDuration.to's second template parameter

2012-04-09 Thread Steven Schveighoffer
On Sat, 07 Apr 2012 20:03:25 -0400, Jonathan M Davis   
wrote:



On Saturday, April 07, 2012 15:59:57 Andrei Alexandrescu wrote:

Whenever I use TickDuration.to, I need to add the pesky second argument,
e.g. TickDuration.to!("nsecs", uint). Would a default make sense there?


Well TickDuration.nsecs is a wrapper for TickDuration.to!("nsecs", long),
TickDuration.msecs is a wrapper for TickDuration.to!("msecs", long),
etc. So,
that's basically how defaults were added. I question that it makes sense  
to

add defaults to the to function itself - though having long chosen as the
default doesn't really help you, since you'll either have to be explicit  
like

you have been or cast using the default version.


I think what Andrei is asking for is to change this:

T to(string units, T)() @safe const pure nothrow

Into this:

T to(string units, T = long)() @safe const pure nothrow

Which I don't think will hurt anything.

An additional annoyance that I think this would solve: you always have to  
include the parentheses around the template arguments, i.e.:


td.to!"msecs"()

vs.

td.to!("msecs", long)();

-Steve


Re: Foreach Closures?

2012-04-09 Thread Kapps

On Monday, 9 April 2012 at 13:19:32 UTC, Manu wrote:

OMG, DO WANT! :P
Who wrote this? I wonder if they'd be interested in adapting it 
to VisualD

+ MonoDevelop?

On 9 April 2012 12:56, Ary Manzana  wrote:


On 4/9/12 7:26 AM, Kevin Cox wrote:

I was wondering about the foreach statement and when you 
implement
opApply() for a class it is implemented using closures.  I 
was wondering
if this is just how it is expressed or if it is actually 
syntatic
sugar.  The reason I aski is because if you have a return 
statement
inside a foreach it returns from the outside function not the 
"closure".


I was just wondering if anyone could spill the implementation 
details.


Thanks,
Kevin



In this video you can see what foreach with opApply gets 
translated to (at

about minute 1):

http://www.youtube.com/watch?v=oAhrFQVnsrY



That was Descent, a plugin for Eclipse. They did it by porting
DMD, with changes, to Java. A horribly painful task I'd imagine.
I wonder if it'd be easier by just creating bindings for DMD for
the language of choice.

That being said, if MonoDevelop's parser gets to the point where
it can evaluate this stuff well, I think that'd work just as
nicely. You won't quite be able to see the actual compiler's
representation of it, but you'd be able to expand mixins and such.

Note that Descent hasn't been updated in almost a year now 
unfortunately.


Re: Precise GC

2012-04-09 Thread Steven Schveighoffer
On Sat, 07 Apr 2012 21:56:09 -0400, Walter Bright  
 wrote:


Of course, many of us have been thinking about this for a looong time,  
and what is the best way to go about it. The usual technique is for the  
compiler to emit some sort of table for each TypeInfo giving the layout  
of the object, i.e. where the pointers are.


The general problem with these is the table is non-trivial, as it will  
require things like iterated data blocks, etc. It has to be compressed  
to save space, and the gc then has to execute a fair amount of code to  
decode it.


It also requires some significant work on the compiler end, leading of  
course to complexity, rigidity, development bottlenecks, and the usual  
bugs.


An alternative Andrei and I have been talking about is to put in the  
TypeInfo a pointer to a function. That function will contain customized  
code to mark the pointers in an instance of that type. That custom code  
will be generated by a template defined by the library. All the compiler  
has to do is stupidly instantiate the template for the type, and insert  
an address to the generated function.


The compiler need know NOTHING about how the marking works.

Even better, as ctRegex has demonstrated, the custom generated code can  
be very, very fast compared with a runtime table-driven approach. (The  
slow part will be calling the function indirectly.)


And best of all, the design is pushed out of the compiler into the  
library, so various schemes can be tried out without needing compiler  
work.


I think this is an exciting idea, it will enable us to get a precise gc  
by enabling people to work on it in parallel rather than serially  
waiting for me.


I think this is a really good idea.

I would like to go further and propose that there be an arbitrary way to  
add members to the TypeInfo types using templates.  Not sure how it would  
be implemented, but I don't see why this has to be specific to GCs.  Some  
way to signify "hey compiler, please initialize this member with template  
X given the type being compiled".


This could be a huge bridge between compile-time and runtime type  
information.
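A minimal sketch of the kind of library-defined marking template being 
described, under the assumption of a hypothetical gcMarkPointer hook 
(nothing below is an actual druntime API):

// Hypothetical GC hook: record one reachable pointer. A real GC would mark
// the containing block and recurse; this stub only illustrates the shape.
void gcMarkPointer(void* p) { }

// The compiler would merely instantiate this per type and store the
// resulting function pointer in that type's TypeInfo; the marking logic
// itself lives in the library.
void function(void*) markerFor(T)()
{
    static void mark(void* p)
    {
        auto obj = cast(T*) p;
        foreach (field; (*obj).tupleof)
        {
            alias F = typeof(field);
            static if (is(F == U*, U) || is(F == class))
                gcMarkPointer(cast(void*) field);
            // a complete version would also handle arrays, nested structs, etc.
        }
    }
    return &mark;
}

struct Node { int value; Node* next; string name; }

void main()
{
    auto fn = markerFor!Node();
    Node n;
    fn(&n);   // visits n.next; n.value and n.name are skipped by the static if
}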


-Steve


Re: A modest proposal: eliminate template code bloat

2012-04-09 Thread Daniel Murphy
"H. S. Teoh"  wrote in message 
news:mailman.1518.1333937643.4860.digitalmar...@puremagic.com...
>
> Why is it so important to have unique addresses for functions?
>

Just because I can't think of a use case doesn't mean nobody is relying on 
it!

But I guess there really isn't one. 




Re: malloc in core.memory.GC

2012-04-09 Thread Steven Schveighoffer
On Sun, 08 Apr 2012 16:33:01 -0400, Alex Rønne Petersen  
 wrote:


APPENDABLE is, IIRC, mostly an internal attribute used for the array  
append cache. You can ignore it entirely (we should document this).


It's used to flag that the block of GC data is actually an appendable  
array.  If this flag is missing, and you attempt to append data that  
points at the block, it will always reallocate.  If this flag is present,  
it assumes there is a valid "used" length in the block, and proceeds.  Do  
NOT set this flag unless you know what you are doing.  Let the runtime do  
it.


FINALIZE is only relevant if the type allocated in the block has a  
destructor (it simply specifies that finalization is then desired).


I'll add that it not only indicates that the stored type has a dtor; the GC  
also expects that the layout of the block is that of an Object.  This is an  
important distinction, because it will use that information to traverse  
the vtable looking for dtors.  For example, just setting this flag for a  
struct that has a dtor will not work, because a struct has no vtable.
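For reference, these are the GC.BlkAttr flags passed to (or queried from) 
the GC; a minimal sketch of ordinary usage:

import core.memory : GC;

void example()
{
    // A block of raw bytes the GC should not scan for pointers.
    void* raw = GC.malloc(1024, GC.BlkAttr.NO_SCAN);

    // FINALIZE is meant for blocks holding a class instance (the GC walks
    // the vtable to find the dtor); APPENDABLE is set by the runtime's
    // array-append machinery and, as noted above, should be left to it.
    GC.free(raw);
}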


-Steve


Re: std.benchmark ready for review. Manager sought after

2012-04-09 Thread Manfred Nowak
Andrei Alexandrescu wrote:


> all noise is additive (there's no noise that may make a benchmark
> appear to run faster)

This is in doubt, because you yourself wrote "the machine itself has 
complex interactions". These complex interactions might lower the time 
needed for an operation of the benchmarked program.

Examples that come to mind:
a) needed data is already in a (faster) cache because it belongs to a 
memory block from which some data is needed by some program not 
belonging to the benchmarked set---and that block isn't replaced yet.
b) needed data is stored in a hdd whose I/O scheduler uses the elevator 
algorithm and serves the request by pure chance instantly, because the 
position of the needed data is between two positions accessed by some 
programs not belonging to the benchmarked set.
 
Especially an hdd, if used, will be responsible for a lot of the noise you 
define as "quantization noise (uniform distribution)", even if the head 
stays on the same cylinder. Not recognizing this noise would only mean 
that the data is cached, and interpreting the only true read from the 
hdd as a jerky outlier seems quite wrong.
 

>> 1) The "noise during normal use" has to be measured in order to
>> detect the sensibility of the benchmarked program to that noise.
> How do you measure it, and what 
> conclusions do you draw other than there's a more or less other
> stuff going on on the machine, and the machine itself has complex
> interactions? 
> 
> Far as I can tell a time measurement result is:
> 
> T = A + Q + N

For example by running more than one instance of the benchmarked 
program in parallel and using the statistics gathered thereby 
to split T into the additive components A, Q and N.


>> 2) The noise the benchmarked program produces has to be measured
>> too, because the running benchmarked program probably increases
>> the noise for all other running programs.
> 
> How to measure that?

Similar to the above note.


> Also, that noise does not need to be measured
> as much as eliminated to the extent possible.

I wouldn't define two programs to be equivalent based on the time until 
completion only. That time might be identical for both programs, but if 
only one of the programs increases the response time of the machine 
to an unacceptable level, I would choose the other.

-manfred


Re: Discussion on Go and D

2012-04-09 Thread Steven Schveighoffer
On Sat, 07 Apr 2012 12:45:44 -0400, Rainer Schuetze   
wrote:





On 4/6/2012 6:20 PM, deadalnix wrote:

On 06/04/2012 18:07, Andrei Alexandrescu wrote:

A few more samples of people's perception of the two languages:

http://news.ycombinator.com/item?id=3805302


Andrei


I did some measurement on that point for D lately :
http://www.deadalnix.me/2012/03/05/impact-of-64bits-vs-32bits-when-using-non-precise-gc/



I studied the GC a bit more and noticed a possible issue:

- memory allocations are aligned up to a power of 2 <= page size
- the memory area beyond the actually requested size is left untouched  
when allocating


No, it's zeroed if the block is requested without the NO_SCAN bit set.

see:  
https://github.com/D-Programming-Language/druntime/blob/master/src/gc/gcx.d#L479


Note to Sean, it's not in the no-sync part (makes sense, why hold the lock  
while you are memsetting).


-Steve


Re: std.benchmark ready for review. Manager sought after

2012-04-09 Thread Steven Schveighoffer

Added to trello.

-Steve


Re: custom attribute proposal (yeah, another one)

2012-04-09 Thread Marco Leise
On Mon, 09 Apr 2012 09:13:51 -0400,
"Steven Schveighoffer" wrote:

> On Sat, 07 Apr 2012 10:00:19 -0400, Jacob Carlborg  wrote:
> 
> > On 2012-04-06 19:37, Steven Schveighoffer wrote:
> >> On Fri, 06 Apr 2012 12:53:51 -0400, Piotr Szturmaj
> >>> struct Author { string name = "empty"; }
> >>> // struct Author { string name; } - this works too
> >>
> >> I think the point is, we should disallow:
> >>
> >> @Author int x;
> >>
> >> -Steve
> >
> > Why?
> 
> I misspoke.  The person who implemented the @Author attribute probably  
> wants to disallow specifying an Author attribute without a name.  I don't  
> think we should disallow that on principle, I meant in the context it  
> should be disallowed.
> 
> -Steve

Yes, when libraries start to offer attributes, their authors likely want to add 
some static checking. Either as an invariant() with the struct solution, or 
static asserts in the function.

Java and C# also offer attributes for attributes to:
- allow multiple attributes of the same kind on a symbol
- restrict the attribute to certain symbol types (function, struct, ...)
- inherit attributes down a class hierarchy
I thought I'd just mention it all here in one go as "attribute constraints".

-- 
Marco



Re: A modest proposal: eliminate template code bloat

2012-04-09 Thread H. S. Teoh
On Mon, Apr 09, 2012 at 11:58:01PM +1000, Daniel Murphy wrote:
> "H. S. Teoh"  wrote in message 
> news:mailman.1518.1333937643.4860.digitalmar...@puremagic.com...
> >
> > Why is it so important to have unique addresses for functions?
> >
> 
> Just because I can't think of a use case doesn't mean nobody is
> relying on it!
> 
> But I guess there really isn't one. 
[...]

Somebody brought up the matter of stacktraces. Which could be a valid
concern, I suppose, although I'm tempted to just say, use a
non-optimized build for debugging purposes. (But I suppose that is
arguable.)


T

-- 
Heuristics are bug-ridden by definition. If they didn't have bugs,
they'd be algorithms.


Re: Small Buffer Optimization for string and friends

2012-04-09 Thread Steven Schveighoffer
On Sun, 08 Apr 2012 01:56:38 -0400, Andrei Alexandrescu  
 wrote:


Walter and I discussed today about using the small string optimization  
in string and other arrays of immutable small objects.


On 64 bit machines, string occupies 16 bytes. We could use the first  
byte as discriminator, which means that all strings under 16 chars need  
no memory allocation at all.


It turns out statistically a lot of strings are small. According to a  
variety of systems we use at Facebook, the small buffer optimization is  
king - it just works great in all cases. In D that means better speed,  
better locality, and less garbage.


For this to happen, we need to start an effort of migrating built-in  
arrays into runtime, essentially making them templates that the compiler  
lowers to. So I have two questions:


1. What happened to the new hash project? We need to take that to  
completion.


2. Is anyone willing to start the effort of migrating built-in slices  
into templates?


No, this would suck.

A better solution - make an *actual* string type that does this, and fixes  
all the shitty problems that we have from shoehorning arrays into UTF  
strings.  Then alias that type to string.


I'm so sick of phobos trying to pretend char[] is not an array, and this  
would just be another mark against D.


-Steve
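For reference, a rough sketch of the small-buffer idea Andrei describes 
above: a discriminator selects between inline storage and a (pointer, 
length) pair. This version is deliberately not packed into exactly 16 bytes; 
it only illustrates the mechanism.

struct SmallString
{
    union
    {
        char[15] buf;                                 // inline storage
        struct { immutable(char)* ptr; size_t len; }  // heap-backed view
    }
    ubyte tag;   // 0 .. 15 = inline length, 0xFF = heap-backed

    this(string s)
    {
        if (s.length <= buf.length)
        {
            buf[0 .. s.length] = s[];
            tag = cast(ubyte) s.length;
        }
        else
        {
            ptr = s.ptr;
            len = s.length;
            tag = 0xFF;
        }
    }

    const(char)[] toSlice() const
    {
        return tag == 0xFF ? ptr[0 .. len] : buf[0 .. tag];
    }
}

unittest
{
    auto s = SmallString("hi");
    assert(s.toSlice() == "hi");   // short strings never touch the heap
}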


Re: Small Buffer Optimization for string and friends

2012-04-09 Thread Andrei Alexandrescu

On 4/9/12 4:21 AM, Manu wrote:

After thinking on it a bit, I'm becoming a little worried about this
move for 2 rarely considered reasons:
Using lowering to a template, debug(/unoptimised) performance will
probably get a lot slower, which is really annoying. And
debugging/stepping might become considerably more annoying too, if every
time I press F11 (step in) over a function call that happens to receive
an arg from an array, the debugger then steps into the array templates
index operator... We'd be no better off than with STL, unless the
language has clever ways of hiding this magic from the debugger too, and
optimising/inlining the index even in debug builds...? But this is the
built-in array, and not a library we can optionally not use.


I agree. So we have the counterarguments:

1. Lowering would treat array primitives as sheer D code, subject to 
refusal of inlining. That means worse performance.


2. Unless the compiler takes special measures, source-level debuggers 
will trace through core, uninteresting code for array operations.


3. There are patterns that attempt to optimize by e.g. using .ptr, but 
end up pessimizing code because they trigger multiple memory allocations.



Andrei


Re: Foreach Closures?

2012-04-09 Thread Jacob Carlborg

On 2012-04-09 15:19, Manu wrote:

OMG, DO WANT! :P
Who wrote this? I wonder if they'd be interested in adapting it to
VisualD + MonoDevelop?


That would be Ary Manzana. I think one of the reasons why he stopped 
working on this was that he ported the DMD frontend to Java and it's 
just a pain to stay updated with DMD.


This comes back to us again, again and again. We _badly need_ a compiler 
that is usable as a library. Preferably with a stable API which makes it 
possible to create bindings for other languages. For that compiler to 
stay up to date it needs to be the reference implementation, i.e. the 
one that Walter works on.


Also Walter won't just drop DMD and replace it with something else or 
start a major refactoring process on the existing code base.


BTW, Descent has a compile time debugger as well, if I recall correctly.

--
/Jacob Carlborg


Re: custom attribute proposal (yeah, another one)

2012-04-09 Thread Jacob Carlborg

On 2012-04-09 15:20, Steven Schveighoffer wrote:


The argument was to use the name of the type returned as the attribute
name instead of the function. That is not my proposal. The suggested
case is to be able to use a different name to build the same attribute,
to be more intuitive.

i.e. both area and square create the Area attribute, but square only
takes one parameter because it's a square. Kind of like saying "the area
is square".

So my counter point above is in the context that the type name of the
return value becomes the attribute name.

-Steve



Aha, I see.

--
/Jacob Carlborg


Re: custom attribute proposal (yeah, another one)

2012-04-09 Thread Jacob Carlborg

On 2012-04-09 15:29, Steven Schveighoffer wrote:


I think the struct approach is fine for some attributes, and I think it
should be doable to @attribute either functions or structs. I just want
the most generic, basic feature possible. I think Timon has the best
idea that any callable CTFE symbol should be able to be an attribute.


Using any callable CTFE symbol would make sense.

--
/Jacob Carlborg


Re: Documentation improvements

2012-04-09 Thread David Gileadi

On 4/8/12 7:41 AM, Jonas H. wrote:

Hi everyone,

I decided to give D a try yesterday and had quite some trouble with the
documentation. I want to help improve the docs on dlang.org.


I'm generally in favor of simplifying the sidebar navigation, since I 
already did a bunch of that under the guise of creating a new look for 
the D website :)  Compare with the sidebar at 
http://digitalmars.com/d/1.0/index.html


I think you may find some conflicts between the contents of the proposed 
Development and the existing Community sections; that probably needs 
some further thought.


As a technical note, in order to get the sidebar to expand correctly 
you'll need to change the CATEGORY_* macro on any pages that you move. 
For instance, if you were to hypothetically move a page from being under 
the Articles section to being under the FAQ section, you'd need to 
change its CATEGORY_ARTICLES=$0 macro to CATEGORY_FAQ=$0.


Re: Small Buffer Optimization for string and friends

2012-04-09 Thread Jakob Ovrum
On Monday, 9 April 2012 at 14:55:16 UTC, Andrei Alexandrescu 
wrote:
3. There are patterns that attempt to optimize by e.g. using 
.ptr, but end up pessimizing code because they trigger multiple 
memory allocations.



Andrei


It's important to note that this pattern is probably most common 
in glue code to C libraries, not bounds-checking related 
optimizations. There are countless C library functions which 
receive the equivalent of an array by taking a pointer and a 
length, and implicit allocation on `foo.ptr` is completely 
unacceptable in these cases.


It's also common to avoid the `toStringz` function for strings 
you know are zero-terminated, using `.ptr` directly instead, as 
the toStringz function unconditionally appends a zero these days 
(and for good reasons, its previous optimization was extremely 
optimistic about its input).


Re: Shared library in D on Linux

2012-04-09 Thread Ellery Newcomer
Well, if you're really hankering for a shared lib, try ldc. I have 
gotten it to compile working shared libs in the past.


On 04/09/2012 01:24 AM, Timo Westkämper wrote:

On Sunday, 8 April 2012 at 17:59:28 UTC, Timo Westkämper wrote:

Does someone know why the lib (.a) packaging instead of objects (.o)
works better in this case?


Didn't work after all with -lib. I mixed up outputs.


Re: Foreach Closures?

2012-04-09 Thread Jacob Carlborg

On 2012-04-09 15:44, Kapps wrote:


That was Descent, a plugin for Eclipse. They did it by porting
DMD, with changes, to Java. A horribly painful task I'd imagine.
I wonder if it'd be easier by just creating bindings for DMD for
the language of choice.


That would be horribly painful as well, since DMD is not made to be used 
as a library. It really does not fit.



That being said, if MonoDevelop's parser gets to the point where
it can evaluate this stuff well, I think that'd work just as
nicely. You won't quite be able to see the actual compiler's
representation of it, but you'd be able to expand mixins and such.


The MonoDevelop parser will have the same problem as the one for 
Descent. Either it's a port of DMD and needs to play catch up all the 
time. Or it's a completely new parser that will, most likely, not have 
the same behavior as the compiler. A new parser would also need to play 
catch up with DMD.


See my other reply:

http://forum.dlang.org/thread/mailman.1506.1333927673.4860.digitalmar...@puremagic.com#post-jlutfe:24jal:241:40digitalmars.com

--
/Jacob Carlborg


Re: std.benchmark ready for review. Manager sought after

2012-04-09 Thread Francois Chabot

Why is there so much emphasis on printBenchmarks()?

benchmark() and runBenchmarks() are clearly the core of this 
library, and yet they are relegated to second-class citizens: "Oh, 
I guess you can use this". Normally, I wouldn't be so picky, but 
this is a standard library. Focus should be on functionality.


Providing formatted output is a nice bonus, but to me, it's just 
a bonus. Any benchmarking part of a large project is bound to 
format the output itself (to log benchmark results against 
revisions in a database or something like that).


Also, benchmark() and runBenchmarks() are kind of confusing at 
first glance. Something along the lines of benchmarkModule() and 
benchmarkAllModules() would be more sensible.


Re: std.benchmark ready for review. Manager sought after

2012-04-09 Thread Andrei Alexandrescu

On 4/9/12 10:23 AM, Francois Chabot wrote:

Why is there so much emphasis on printBenchmarks()?

benchmark() and runBenchmarks() are clearly the core of this library,
and yet they are relegated to second-class citizen: "Oh, I guess you can
use this". Normally, I wouldn't be so picky, but this is a standard
library. Focus should be on functionality.


The functionality is to make benchmarking easy to use, meaningful, and easy 
to interpret. I don't want to add a complicated library for 
postprocessing benchmarks because almost nobody will use it.


The first function in the documentation is what most people will want to 
bring themselves to using. The functions that provide the data are 
eminently available so I disagree with the "second-class citizen" 
characterization. You want to use them, use them. They don't need to be 
given rockstar billing.



Andrei


Re: Small Buffer Optimization for string and friends

2012-04-09 Thread Andrej Mitrovic
On 4/9/12, Jakob Ovrum  wrote:
> It's also common to avoid the `toStringz` function for strings
> you know are zero-terminated, using `.ptr` directly instead.

Yup. E.g. WinAPI text drawing functions take a wchar* and a length. I
don't have to call toUTF16z but just pass a pointer, or even a pointer
to a specific element via &arr[index] (after calling std.utf.stride,
of course).
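
A rough sketch of what I mean (the prototype is declared by hand here
rather than taken from a specific bindings module, and the real Win32
declaration uses HDC/BOOL rather than void*/int):

extern(Windows) int TextOutW(void* hdc, int x, int y,
                             const(wchar)* text, int count);

void draw(void* hdc, const(wchar)[] text)
{
    // No toUTF16z and no zero terminator needed: the API takes an
    // explicit character count.
    TextOutW(hdc, 10, 10, text.ptr, cast(int) text.length);
}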

> the toStringz function unconditionally appends a zero these days

The one taking (const(char)[] s) does this, but not the other overload
taking (string s). Whether or not that's safe I don't really know.
I've had an argument over this on github, but I don't know if it was
about toStringz or maybe toUTF16z. I haven't got the link to the
discussion.


Re: std.benchmark ready for review. Manager sought after

2012-04-09 Thread Andrei Alexandrescu

On 4/9/12 9:25 AM, Manfred Nowak wrote:

Andrei Alexandrescu wrote:

all noise is additive (there's no noise that may make a benchmark
appear to run faster)


This is in doubt, because you yourself wrote "the machine itself has
complex interactions". This complex interactions might lower the time
needed for an operation of the benchmarked program.

Examples that come to mind:
a) needed data is already in a (faster) cache because it belongs to a
memory block, from which some data is needed by some program not
belonging to the benchmarked set---and that block isn't replaced yet.


Which is great, unless the program wants to measure the cache memory 
itself, in which case it would use special assembler instructions or 
large memset()s. (We do such at Facebook.)



b) needed data is stored in a hdd whose I/O scheduler uses the elevator
algorithm and serves the request by pure chance instantly, because the
position of the needed data is between two positions accessed by some
programs not belonging to the benchmarked set.

Especially a hdd, if used, will be responsible for a lot of noise you
define as "quantization noise (uniform distribution)" even if the head
stays at the same cylinder. Not recognizing this noise would only mean
that the data is cached and interpreting the only true read from the
hdd as a jerky outlier seems quite wrong.


If the goal is to measure the seek time of the HDD, the benchmark itself 
should make sure the HDD cache is cleared. (What I recall they do on 
Linux is unmounting and remounting the drive.) Otherwise, it adds a 
useless component to the timing.



1) The "noise during normal use" has to be measured in order to
detect the sensibility of the benchmarked program to that noise.

How do you measure it, and what
conclusions do you draw other than there's a more or less other
stuff going on on the machine, and the machine itself has complex
interactions?

Far as I can tell a time measurement result is:

T = A + Q + N


For example by running more than one instance of the benchmarked
program in parallel and use the thereby gathered statistical routines
to spread T into the additive components A, Q and N.


I disagree with running two benchmarks in parallel because that exposes 
them to even more noise (scheduling, CPU count, current machine load 
etc). I don't understand the part of the sentence starting with "...use 
the thereby...", I'd be grateful if you elaborated.



Andrei


Re: Small Buffer Optimization for string and friends

2012-04-09 Thread Jakob Ovrum

On Monday, 9 April 2012 at 15:37:37 UTC, Andrej Mitrovic wrote:

On 4/9/12, Jakob Ovrum  wrote:
The one taking (const(char)[] s) does this, but not the other 
overload
taking (string s). Whether or not that's safe I don't really 
know.
I've had an argument over this on github, but I don't know if 
it was

about toStringz or maybe toUTF16z. I haven't got the link to the
discussion.


You're right, I just confirmed the optimization is still in place 
for the `string` version. The documentation is identical for both 
functions. I think this is a mistake.


It assumes that the string is either a compiler-generated literal 
or a GC allocated string, while the documentation does not 
mention such assumptions. With all the focus on manual memory 
management and pluggable allocators going on, I think the 
optimization must be removed or the documentation for the 
`string` overload changed.
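
To make the concern concrete, here is a very rough sketch of the kind of
optimization being discussed -- not the actual Phobos code -- which is
only valid if reading one byte past the end of the string is legal:

immutable(char)* toStringzIdea(string s)
{
    // Peek one byte past the end: only valid for literals (which are
    // zero-terminated) and GC-allocated strings with a spare byte --
    // exactly the undocumented assumption in question.
    if (s.length && s.ptr[s.length] == '\0')
        return s.ptr;

    // Otherwise copy and append a terminator.
    auto copy = new char[](s.length + 1);
    copy[0 .. s.length] = s[];
    copy[s.length] = '\0';
    return cast(immutable(char)*) copy.ptr;
}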


This optimization can always be put back in without narrowing the 
scope of the `string` overload once the above conditions can be 
reliably checked.


Another option is to add a known-bug section to the `string` 
overload informing users that the function may fail on 
custom-allocated strings.


Re: Discussion on Go and D

2012-04-09 Thread deadalnix

Le 09/04/2012 02:24, Alex Rønne Petersen a écrit :

On 09-04-2012 02:18, Manu wrote:

On 9 April 2012 02:24, Walter Bright mailto:newshou...@digitalmars.com>> wrote:

On 4/8/2012 3:57 PM, Manu wrote:

What do you base that statistic on? I'm not arguing that fact,
just that I
haven't seen any evidence one way or the other. What causes Go
to create
significantly more garbage than D? Are there benchmarks or test
cases I should
be aware of on the topic?


The first ycombinator reference is a person who didn't run out of
memory using D. That implies far less pressure on the gc.

My understanding of Go is that when it does structural conformance,
it builds some of the necessary data at runtime on the gc heap.

Anyhow, D has a lot of facilities for putting things on the stack
rather than the heap, immutable data doesn't need to get copied, and
slices allow lots of reuse of existing objects.


"optimized D was slightly faster than Go at almost anything and consumed
up to 70% less memory"
Interesting... I don't know enough about Go to reason about that finding, I
guess I assumed it has most of the same possibilities available to D.
(no immutable data? no stack structs? no references/pointers/slices?
crazy...)

The only D program I have significant experience with is VisualD, and it
hogs 1-2gb of ram for me under general usage, and eventually crashes,
after paging heavily and bringing my computer to a crawl. Not a good
sign from the first and only productive D app I've run yet ;)
This seems a lot like his experience with Go... but comparisons aside, D
still clearly isn't there yet when it comes to the GC either, and I'm
amazed Google thinks Go is production ready if that guy's findings are
true!


Google likes to invent random useless languages. See: Dart. Both
languages are solutions looking for problems. ;)

And yes, precise GC is more essential than most people think.



I posted an article about that. 64 bits pretty much solve the false 
positive problem.


Still, precise GC is nice, but alternatives like precise on the heap and 
imprecise on the stack are also valid.


Re: Small Buffer Optimization for string and friends

2012-04-09 Thread Andrej Mitrovic
On 4/9/12, Jakob Ovrum  wrote:
> With all the focus on manual memory
> management and pluggable allocators going on, I think the
> optimization must be removed or the documentation for the
> `string` overload changed.

Or add a compile-time argument:
toStringz(bool ForceAllocate = true)(string s)

Or split the unsafe version into another function.


Re: std.benchmark ready for review. Manager sought after

2012-04-09 Thread Francois Chabot
Which is great, unless the program wants to measure the cache 
memory itself, in which case it would use special assembler 
instructions or large memset()s. (We do such at Facebook.)


I disagree. If a regression suddenly causes a function to become
heavily cache-bound, it should show up in benchmarks somehow,
regardless of the previous expected behavior of the function.

--
Francois


Windows 8 Metro support

2012-04-09 Thread Sönke Ludwig
(IMO) one of the biggest obstacles for truly broad adoption of D 
currently is the weak platform support on end user platforms. The two 
mobile platforms that came up recently (iOS and Android) are two 
examples. And indeed I think that support for mobile platforms could be 
a real stepping stone because of D's extraordinary convenience and 
language power - the alternatives to C/C++ are pretty thin here and 
cross-platform development in general has come to a grinding halt 
recently with all the proprietary languages and APIs. If D could step up 
here...


But mobile platforms aside, Windows support is something that in general 
has always been neglected a bit, especially regarding 64-bit support. 
Starting with Windows 8 there will arise additional problems because 
Metro application will only be able/allowed to use the COM based WinRT 
and the VisualStudio runtime. DMD with its use of snn.lib is out of the 
game here, just like any other runtime library.


Right now, if we don't catch up here, D will slowly degrade to a pure 
server and command line application language which surely wouldn't do it 
justice.


In consequence this means that there is one more reason to raise the 
priority of COFF output from DMD (together with 64-bit codegen) - or 
possibly the alternative to make OptLink COFF-capable to at least be 
able to somehow link against the VS runtime.


Another such thing - although this can be worked around - would be 
direct support for Objective-C classes like in Michel Fortin's dmd 
modification. I think these GUI application related functionalities are 
by far the most important things for D's mass adoption. And personally, 
I would even be willing to donate a (for me) considerable amount of 
money to help bringing this forward because many things I would like to 
realize with D are currently (almost) impossible.


Re: std.benchmark ready for review. Manager sought after

2012-04-09 Thread Somedude
Le 09/04/2012 17:23, Francois Chabot a écrit :
> Why is there so much emphasis on printBenchmarks()?
> 
> benchmark() and runBenchmarks() are clearly the core of this library,
> and yet they are relegated to second-class citizen: "Oh, I guess you can
> use this". Normally, I wouldn't be so picky, but this is a standard
> library. Focus should be on functionality.
> 
> Providing formatted output is a nice bonus, but to me, it's just a
> bonus. Any benchmarking part of a large project is bound to format the
> output itself (to log benchmark results against revisions in a database
> or something like that).
> 
> Also, benchmark() and runBenchmarks() are kind of confusing at first
> glance. Something along the lines of benchmarkModule() and
> benchmarkAllModules() would be more sensible.

The printBenchmarks facility is cool and should be included imho.
It helps make benchmarking as standard as unit testing.
We don't want to have to write the same boilerplate code again and again
for such trivial uses.


Re: TickDuration.to's second template parameter

2012-04-09 Thread Jonathan M Davis
On Monday, April 09, 2012 09:36:45 Steven Schveighoffer wrote:
> On Sat, 07 Apr 2012 20:03:25 -0400, Jonathan M Davis 
> 
> wrote:
> > On Saturday, April 07, 2012 15:59:57 Andrei Alexandrescu wrote:
> >> Whenever I use TickDuration.to, I need to add the pesky second argument,
> >> e.g. TickDuration.to!("nsecs", uint). Would a default make sense there?
> > 
> > Well TickDuration.nsecs is a wrapper for TickDuration.to!("nsecs",
> > long"),
> > TickDuration.msecs is a wrapper for TickDuration.to!("msecs, long"),
> > etc. So,
> > that's basically how defaults were added. I question that it makes sense
> > to
> > add defaults to the to function itself - though having long chosen as the
> > default doesn't really help you, since you'll either have to be explicit
> > like
> > you have been or cast using the default version.
> 
> I think what Andrei is asking for is to change this:
> 
> T to(string units, T)() @safe const pure nothrow
> 
> Into this:
> 
> T to(string units, T = long)() @safe const pure nothrow
> 
> Which I don't think will hurt anything.
> 
> An additional annoyance that I would think is solved is you always have to
> include the parentheses. i.e.:
> 
> td.to!"msecs"()
> 
> vs.
> 
> td.to!("msecs", long)();

We could add that, but why? td.msecs already does what td.to!"msecs"() would 
do if to defaulted to long. I don't see any reason to use to directly unless 
you're using something other than long. And if you use a type other than long, 
you're going to have to provide the whole thing anyway - e.g. td.to!("msecs", 
uint)().
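
For instance, with the existing std.datetime.benchmark (someWork below is
just a placeholder for the code being timed):

import std.datetime;

void someWork() { /* code being timed */ }

void demo()
{
    auto results = benchmark!someWork(1_000);   // run it 1000 times
    TickDuration td = results[0];

    long a = td.msecs;                    // the convenience wrapper
    long b = td.to!("msecs", long)();     // the same thing, spelled out
    uint c = td.to!("msecs", uint)();     // the only case that needs the
                                          // second template argument
}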

- Jonathan M Davis


Re: DIP16: Transparently substitute module with package

2012-04-09 Thread Jonathan M Davis
On Monday, April 09, 2012 08:55:27 Steven Schveighoffer wrote:
> On Fri, 06 Apr 2012 20:25:23 -0400, Jonathan M Davis 
> 
> wrote:
> > DIP15 doesn't fix the explicit path problem though. You can't change
> > std/algorithm.d into std/algorithm/ (with sorting.d, search.d, etc.)
> > without
> > breaking code. You could make std/algorithm.d publicly import std/alg/*
> > and
> > then DIP15 would allow you to import std.alg to get all of its
> > sub-modules,
> > but you're still forced to use a module to publicly import symbols as
> > part of
> > a migration path, and you can't split a module in place.
> 
> I think either you or I am missing something.
> 
> In DIP15, if you define std/algorithm/_.d, and then import std.algorithm,
> it imports std/algorithm/_.d, which then 1. publicly imports other
> modules, and 2. aliases symbols to the name std.algorithm.symbol. At
> least, this is how I understand the intent. It seems equivalent to me to
> the package.d proposal, it's just using _.d instead of package.d.
> 
> If you import std.algorithm.sorting, and try and use std.algorithm.sort,
> yes it will not work. But this does not break existing code (which does
> not import std.algorithm.sorting), and I find it odd that we want to make
> std.algorithm.sort work if you don't import std.algorithm.

Okay. I reread DIP15 again. I guess that I scanned over it too quickly before 
and/or misremembered it. I had understood that it was proposing that 
importing std.algorithm where std.algorithm was a package would be the
equivalent of importing std.algorithm.* in Java and that there were no extra
files involved. So clearly, I've been misunderstanding things here.

So, yeah. DIP15 is basically the same as DIP16 except without the std.sort 
nonsense and the fact that it uses _.d instead of package.d. Using package.d 
has the advantage of package being a keyword, making it so that no one is 
going to accidentally create a module that will be treated specially, but it 
has the downside of likely requiring more special handling by the compiler. I 
don't really care which we pick though.

My main point though, misunderstandings aside, is that it would be _really_
nice to be able to split up a package in place and that, without an
enhancement of some kind, we can't do that without breaking code. DIP15
appears to fit the bill quite nicely in that regard though. The part of
DIP16 which is really bad is the std.sort stuff. Public importing
combined with either the first part of DIP16 or with DIP15 seems to take
care of the problem quite nicely.

- Jonathan M Davis


Re: Windows 8 Metro support

2012-04-09 Thread Dmitry Olshansky

On 09.04.2012 20:39, Sönke Ludwig wrote:

(IMO) one of the biggest obstacles for truly broad adoption of D
currently is the weak platform support on end user platforms. The two
mobile platforms that came up recently (iOS and Android) are two
examples. And indeed I think that support for mobile platforms could be
a real stepping stone because of D's extraordinary convenience and
language power - the alternatives to C/C++ are pretty thin here and
cross-platform development in general has come to a grinding halt
recently with all the proprietary languages and APIs. If D could step up
here...




But mobile platforms aside, Windows support is something that in general
has always been neglected a bit, especially regarding 64-bit support.
Starting with Windows 8 there will arise additional problems because
Metro application will only be able/allowed to use the COM based WinRT
and the VisualStudio runtime. DMD with its use of snn.lib is out of the
game here, just as the any other runtime library.


Not true at all; in every talk I've seen on WinRT so far, the C++ CRT is 
still shipped side by side with WinRT. Basically every language has its 
own runtime. It wouldn't be Microsoft if they didn't have a solid 
reserve of backwards compatibility. Simply put, WinRT is a major update 
of the COM technology, and even here it's backwards compatible with the old COM.
The fact that the OS API is exposed through this new COM interface is just 
a nice feature. I was kind of wondering when they would finally ditch the 
Win32 API.




Right now, if we don't catch up here, D will slowly degrade to a pure
server and command line application language which surely wouldn't do it
justice.

In consequence this means that there is one more reason to raise the
priority of COFF output from DMD (together with 64-bit codegen) - or
possibly the alternative to make OptLink COFF-capable to at least be
able to somehow link against the VS runtime.

Another such thing - although this can be worked around - would be
direct support for Objective-C classes like in Michel Fortin's dmd
modification. I think these GUI application related functionalities are
by far the most important things for D's mass adoption. And personally,
I would even be willing to donate a (for me) considerable amount of
money to help bringing this forward because many things I would like to
realize with D are currently (almost) impossible.



--
Dmitry Olshansky


Re: TickDuration.to's second template parameter

2012-04-09 Thread Steven Schveighoffer
On Mon, 09 Apr 2012 13:05:09 -0400, Jonathan M Davis   
wrote:



On Monday, April 09, 2012 09:36:45 Steven Schveighoffer wrote:

I think what Andrei is asking for is to change this:

T to(string units, T)() @safe const pure nothrow

Into this:

T to(string units, T = long)() @safe const pure nothrow


We could add that, but why?


I don't know.  Andrei has the use case ;)  Perhaps he has a template  
string instead of directly calling the symbol.


I was just clarifying what I think he was asking for, it seemed to be  
misunderstood...


-Steve


Re: TickDuration.to's second template parameter

2012-04-09 Thread Andrei Alexandrescu

On 4/9/12 12:05 PM, Jonathan M Davis wrote:

We could add that, but why? td.msecs already does what td.to!"msecs"() would
do if to defaulted to long. I don't see any reason to use to directly unless
you're using something other than long. And if you use a type other than long,
you're going to have to provide the whole thing anyway - e.g. td.to!("msecs",
uint)().


My bad, I didn't know about td.msecs and friends.

Andrei



Re: std.benchmark ready for review. Manager sought after

2012-04-09 Thread Andrei Alexandrescu

On 4/9/12 11:29 AM, Francois Chabot wrote:

Which is great, unless the program wants to measure the cache memory
itself, in which case it would use special assembler instructions or
large memset()s. (We do such at Facebook.)


I disagree. If a regression suddenly causes a function to become
heavily cache-bound, it should show up in benchmarks somehow,
regardless of the previous expected behavior of the function.


But cache binding depends on a variety of cache characteristics, i.e. 
the machine. The question is whether we accept a heavy dependence of the 
benchmark on the machine.


Andrei




Re: std.benchmark ready for review. Manager sought after

2012-04-09 Thread Andrei Alexandrescu

On 4/9/12 11:44 AM, Somedude wrote:

It helps make benchmarking as standard as unit testing.
We don't want to have to write the same boilerplate code again and again
for such trivial uses.


Yes, I had unittest in mind when writing the library. If one needs more 
than one statement to get an informative benchmark off the ground, we 
failed.


Andrei


Re: malloc in core.memory.GC

2012-04-09 Thread Alex Rønne Petersen

On 09-04-2012 16:16, Steven Schveighoffer wrote:

On Sun, 08 Apr 2012 16:33:01 -0400, Alex Rønne Petersen
 wrote:


APPENDABLE is, IIRC, mostly an internal attribute used for the array
append cache. You can ignore it entirely (we should document this).


It's used to flag that the block of GC data is actually an appendable
array. If this flag is missing, and you attempt to append data that
points at the block, it will always reallocate. If this flag is present,
it assumes it has a valid "used" length in the block, and proceeds.
Do NOT set this flag unless you know what you are doing. Let the runtime
do it.


FINALIZE is only relevant if the type allocated in the block has a
destructor (it simply specifies that finalization is then desired).


I'll add that it not only identifies that the stored type has a dtor, the GC also
expects that the layout of the block is of type Object. This is an
important distinction, because it will use that information to traverse
the vtable looking for dtors. For example, just setting this flag for a
struct that has a dtor will not work, because a struct has no vtable.
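
For reference, a small sketch of allocating raw blocks with explicit
attributes via core.memory (the size and flags here are arbitrary):

import core.memory : GC;

void allocExample()
{
    // A block the GC should not scan for pointers (e.g. raw byte data).
    void* raw = GC.malloc(1024, GC.BlkAttr.NO_SCAN);

    // Do NOT add GC.BlkAttr.FINALIZE here unless the block really holds
    // a class instance (Object layout), as explained above.
    GC.free(raw);
}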


Something that I've always disliked about the current GC.

In MCI, I can't provide finalization support when programs running in 
the VM use the D GC, because that *requires* me to use the Object layout 
for runtime objects. That's just not nice, since it adds (IIRC) 3 words 
of data that's basically useless to *me*. It would be nice if the GC 
supported finalization callbacks similar to how Boehm does it.




-Steve



--
- Alex


Re: malloc in core.memory.GC

2012-04-09 Thread Steven Schveighoffer
On Mon, 09 Apr 2012 13:39:10 -0400, Alex Rønne Petersen  
 wrote:


In MCI, I can't provide finalization support when programs running in  
the VM use the D GC, because that *requires* me to use the Object layout  
for runtime objects. That's just not nice, since it adds (IIRC) 3 words  
of data that's basically useless to *me*. It would be nice if the GC  
supported finalization callbacks similar to how Boehm does it.


Well, considering that there is no reason whatsoever to expect anything  
stored in the block except the exact object it was created with, I see no  
reason why you couldn't store a TypeInfo reference in the block somewhere  
(not part of the object/struct).


This would mean you could determine the type without having the type  
system, and it would mean you would not carry around the extra baggage for  
calling the finalizer from the GC, when the struct is stored on the stack.


I had great success with this when fixing array appending, I don't see why  
it couldn't be done with finalization.  We do need compiler support to  
make sure the struct dtor function pointer gets stored in the TypeInfo,  
not sure if this has already been done.


-Steve


Re: Windows 8 Metro support

2012-04-09 Thread Jacob Carlborg

On 2012-04-09 18:39, Sönke Ludwig wrote:

(IMO) one of the biggest obstacles for truly broad adoption of D
currently is the weak platform support on end user platforms. The two
mobile platforms that came up recently (iOS and Android) are two
examples. And indeed I think that support for mobile platforms could be
a real stepping stone because of D's extraordinary convenience and
language power - the alternatives to C/C++ are pretty thin here and
cross-platform development in general has come to a grinding halt
recently with all the proprietary languages and APIs. If D could step up
here...

But mobile platforms aside, Windows support is something that in general
has always been neglected a bit, especially regarding 64-bit support.
Starting with Windows 8 there will arise additional problems because
Metro application will only be able/allowed to use the COM based WinRT
and the VisualStudio runtime. DMD with its use of snn.lib is out of the
game here, just as the any other runtime library.

Right now, if we don't catch up here, D will slowly degrade to a pure
server and command line application language which surely wouldn't do it
justice.


It's possible to use D with WinRT, as someone posted in an other thread:

http://www.reddit.com/tb/ow7qc


In consequence this means that there is one more reason to raise the
priority of COFF output from DMD (together with 64-bit codegen) - or
possibly the alternative to make OptLink COFF-capable to at least be
able to somehow link against the VS runtime.

Another such thing - although this can be worked around - would be
direct support for Objective-C classes like in Michel Fortin's dmd
modification. I think these GUI application related functionalities are
by far the most important things for D's mass adoption. And personally,
I would even be willing to donate a (for me) considerable amount of
money to help bringing this forward because many things I would like to
realize with D are currently (almost) impossible.


I agree.

--
/Jacob Carlborg


Re: Precise GC

2012-04-09 Thread deadalnix

Le 08/04/2012 14:02, Alex Rønne Petersen a écrit :

On 08-04-2012 11:42, Manu wrote:

On 8 April 2012 11:56, Timon Gehr mailto:timon.g...@gmx.ch>> wrote:

On 04/08/2012 10:45 AM, Timon Gehr wrote:

That actually sounds like a pretty awesome idea.


Make sure that the compiler does not actually rely on the fact that
the template generates a function. The design should include the
possibility of just generating tables. It all should be completely
transparent to the compiler, if that is possible.


This sounds important to me. If it is also possible to do the work with
generated tables, and not calling thousands of indirect functions in
someone's implementation, it would be nice to reserve that possibility.
Indirect function calls in hot loops make me very nervous for non-x86
machines.


Yes, I agree here. The last thing we need is a huge amount of
kinda-sorta-virtual function calls on ARM, MIPS, etc. It may work fine
on x86, but anywhere else, it's really not what you want in a GC.



Nothing prevents the generated function from itself calling other generated 
functions when things are predictable. That avoids many indirect calls, and 
it is done purely in the library, which is good (it can be tuned per application/platform).


Re: Windows 8 Metro support

2012-04-09 Thread Nick Sabalausky
"Sönke Ludwig"  wrote in message 
news:jlv3c2$10rn$1...@digitalmars.com...
> (IMO) one of the biggest obstacles for truly broad adoption of D currently 
> is the weak platform support on end user platforms. The two mobile 
> platforms that came up recently (iOS and Android) are two examples. And 
> indeed I think that support for mobile platforms could be a real stepping 
> stone because of D's extraordinary convenience and language power - the 
> alternatives to C/C++ are pretty thin here and cross-platform development 
> in general has come to a grinding halt recently with all the proprietary 
> languages and APIs. If D could step up here...
>
> But mobile platforms aside, Windows support is something that in general 
> has always been neglected a bit, especially regarding 64-bit support. 
> Starting with Windows 8 there will arise additional problems because Metro 
> application will only be able/allowed to use the COM based WinRT and the 
> VisualStudio runtime. DMD with its use of snn.lib is out of the game here, 
> just as the any other runtime library.
>
> Right now, if we don't catch up here, D will slowly degrade to a pure 
> server and command line application language which surely wouldn't do it 
> justice.
>
> In consequence this means that there is one more reason to raise the 
> priority of COFF output from DMD (together with 64-bit codegen) - or 
> possibly the alternative to make OptLink COFF-capable to at least be able 
> to somehow link against the VS runtime.
>
> Another such thing - although this can be worked around - would be direct 
> support for Objective-C classes like in Michel Fortin's dmd modification. 
> I think these GUI application related functionalities are by far the most 
> important things for D's mass adoption. And personally, I would even be 
> willing to donate a (for me) considerable amount of money to help bringing 
> this forward because many things I would like to realize with D are 
> currently (almost) impossible.

Aside from the Win8 stuff (only because I have a hard time believing Win32 
won't work on Win8), I strongly agree with all of this.




Re: Precise GC

2012-04-09 Thread deadalnix

Le 08/04/2012 03:56, Walter Bright a écrit :

Of course, many of us have been thinking about this for a looong time,
and what is the best way to go about it. The usual technique is for the
compiler to emit some sort of table for each TypeInfo giving the layout
of the object, i.e. where the pointers are.

The general problem with these is the table is non-trivial, as it will
require things like iterated data blocks, etc. It has to be compressed
to save space, and the gc then has to execute a fair amount of code to
decode it.

It also requires some significant work on the compiler end, leading of
course to complexity, rigidity, development bottlenecks, and the usual
bugs.

An alternative Andrei and I have been talking about is to put in the
TypeInfo a pointer to a function. That function will contain customized
code to mark the pointers in an instance of that type. That custom code
will be generated by a template defined by the library. All the compiler
has to do is stupidly instantiate the template for the type, and insert
an address to the generated function.

The compiler need know NOTHING about how the marking works.

Even better, as ctRegex has demonstrated, the custom generated code can
be very, very fast compared with a runtime table-driven approach. (The
slow part will be calling the function indirectly.)

And best of all, the design is pushed out of the compiler into the
library, so various schemes can be tried out without needing compiler work.

I think this is an exciting idea, it will enable us to get a precise gc
by enabling people to work on it in parallel rather than serially
waiting for me.


This is a good idea. However, it doesn't handle type qualifiers. And 
this is important!


The D2 type system is made in such a way that most data is either thread 
local or immutable, and only a small amount is shared. Both thread-local 
storage and immutability are sources of BIG improvements for the GC. Doing 
without them is a huge design error.


For instance, OCaml's GC is known to be more performant than Java's, 
because in OCaml most data is immutable, and the GC takes advantage of 
this. Immutability means 100% concurrent garbage collection.


On the other hand, TLS can be collected independently and only influences 
the thread that owns the data. Both are very powerful improvements, and 
the design you propose « as is » cannot provide any means to handle 
that. This is a big missed opportunity, and it will be hard to change in 
the future.


Re: std.benchmark ready for review. Manager sought after

2012-04-09 Thread Manfred Nowak
Andrei Alexandrescu wrote:

>> For example by running more than one instance of the benchmarked
>> program in parallel and use the thereby gathered statistical 
>> routines to spread T into the additive components A, Q and N.

> I disagree with running two benchmarks in parallel because that
> exposes them to even more noise (scheduling, CPU count, current
> machine load etc).

I did not mean to run two or more benchmarks in parallel, but only 
more instances of the benchmarked program _without_ the environment 
supplied by the benchmark. Of course there may be more noise. But if 
so, that noise depends on the additional running instances and 
on nothing else.

Now let T(n) be the time needed to run n instances in parallel on a 
single processor. According to your definition and your remark above:

§   T(n) = n*A + Q + N + P(n)

where P(n)>=0 and P(1)==0 is the additional noise with which the 
benchmarked program disturbs itself.

Please observe that Q and N eliminate themselves on subtraction:
§  T(2) - T(1) = A + P(2) - P(1)
§  T(3) - T(2) = A + P(3) - P(2)
...

P(n+1)-P(n) measures the sensitivity of the benchmarked program to 
its own noise. As you wrote yourself, its value should be close to 
zero until the machine is close to falling flat.


> I don't understand the part of the sentence starting with "...use
> the thereby...", I'd be grateful if you elaborated. 

Ohh..., an unrecognized deletion. I have written:
| use the thereby gathered _data to feed_ statistical routines to
| spread T
as elaborated above.

-manfred


Re: Precise GC

2012-04-09 Thread Manu
On 9 April 2012 21:20, deadalnix  wrote:

> Le 08/04/2012 14:02, Alex Rønne Petersen a écrit :
>
>  On 08-04-2012 11:42, Manu wrote:
>>
>>> On 8 April 2012 11:56, Timon Gehr >> > wrote:
>>>
>>> On 04/08/2012 10:45 AM, Timon Gehr wrote:
>>>
>>> That actually sounds like a pretty awesome idea.
>>>
>>>
>>> Make sure that the compiler does not actually rely on the fact that
>>> the template generates a function. The design should include the
>>> possibility of just generating tables. It all should be completely
>>> transparent to the compiler, if that is possible.
>>>
>>>
>>> This sounds important to me. If it is also possible to do the work with
>>> generated tables, and not calling thousands of indirect functions in
>>> someone's implementation, it would be nice to reserve that possibility.
>>> Indirect function calls in hot loops make me very nervous for non-x86
>>> machines.
>>>
>>
>> Yes, I agree here. The last thing we need is a huge amount of
>> kinda-sorta-virtual function calls on ARM, MIPS, etc. It may work fine
>> on x86, but anywhere else, it's really not what you want in a GC.
>>
>>
> Nothing prevent the generated function to itself call other generated
> functions, when things are predictable. It avoid many indirect calls, and
> purely by lib, which is good (can be tuned for application/plateform).
>

Eh?
Not sure what you mean. The idea is the template would produce a
struct/table of data instead of being a pointer to a function, this way the
GC could work without calling anything. If the GC was written to assume GC
info in a particular format/structure, it could be written without any
calls.
I'm just saying to leave that as a possibility, and not REQUIRE an indirect
function call for every single allocation in the system. Some GC might be
able to make better use of that sort of setup.
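
A minimal sketch of what such a table-producing template could look like
(the names are invented; a real design would also have to handle classes,
arrays, unions and nested aggregates):

// Compute, at compile time, the offsets of raw pointer members of a struct.
size_t[] pointerOffsets(T)()
{
    size_t[] offs;
    foreach (i, member; T.init.tupleof)
    {
        static if (is(typeof(member) == U*, U))
            offs ~= T.tupleof[i].offsetof;
    }
    return offs;
}

struct Node
{
    int value;
    Node* next;   // the only pointer the GC needs to follow
}

// A plain data table the GC could consume without calling anything.
enum size_t[] nodeOffsets = pointerOffsets!Node();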


Re: uploading with curl

2012-04-09 Thread Gleb

Works perfectly! Thanks a lot!


Can't assign to static array in ctor?

2012-04-09 Thread H. S. Teoh
What's the reason the following code doesn't compile?

struct S {
const(int)[4] data;
this(const(int)[4] d) {
data = d;   // this is line 4
}
}

void main() {
S s;
}

Compiler error:

test.d(4): Error: slice this.data[] is not mutable

Shouldn't the assignment be valid in the ctor?


T

-- 
MSDOS = MicroSoft's Denial Of Service


Re: More ddoc complaints

2012-04-09 Thread Stewart Gordon

On 09/04/2012 14:34, Adam D. Ruppe wrote:

On Monday, 9 April 2012 at 11:05:10 UTC, Stewart Gordon wrote:



Create LT, GT and AMP macros and use them in your code examples.


There's two problems with that: 1) it is hideous
and 2) what if the user wants some format other
than html?


Then they would define LT, GT and AMP differently.
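
For example, a .ddoc macro file aimed at HTML output could (hypothetically)
define

LT  = &lt;
GT  = &gt;
AMP = &amp;

while a macro file for another output format would map the same names to
whatever escapes that format needs.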


Suppose your format escapes \. Should I defensively
make a $(BACKSLASH) macro too?



No, since you don't have any idea in what formats I might want to 
generate docs for your lib.  But I would know I need to be on the 
lookout for characters that have a special meaning in my chosen output 
format.


But indeed, the ESCAPES macro is a much better solution.

Stewart.


Re: Small Buffer Optimization for string and friends

2012-04-09 Thread Manu
On 9 April 2012 17:55, Andrei Alexandrescu wrote:

> On 4/9/12 4:21 AM, Manu wrote:
>
>> After thinking on it a bit, I'm becoming a little worried about this
>> move for 2 rarely considered reasons:
>> Using lowering to a template, debug(/unoptimised) performance will
>> probably get a lot slower, which is really annoying. And
>> debugging/stepping might become considerably more annoying too, if every
>> time I press F11 (step in) over a function call that happens to receive
>> an arg from an array, the debugger then steps into the array templates
>> index operator... We'd be no better off than with STL, unless the
>> language has clever ways of hiding this magic from the debugger too, and
>> optimising/inlining the index even in debug builds...? But this is the
>> built-in array, and not a library we can optionally not use.
>>
>
> I agree. So we have the counterarguments:
>
> 1. Lowering would treat array primitives as sheer D code, subject to
> refusal of inlining. That means worse performance.
>
> 2. Unless the compiler takes special measures, source-level debuggers will
> trace through core, uninteresting code for array operations.
>
> 3. There are patterns that attempt to optimize by e.g. using .ptr, but end
> up pessimizing code because they trigger multiple memory allocations.


Indeed. I don't think the small array optimisation would benefit us in any
way that could even come close to balancing the loss. I don't think it
would benefit us much at all regardless, since we've already proven the
need for a string class anyway, and we can make such small-string
optimisations already.
I am very wary of removing a fundamental language primitive, that is, the
basic ability to express an array with absolutely no frills.

This idea only really benefits utf-8 strings, so why not just make a real
d-string class in phobos and be done with it? People complain about UTF
issues with raw strings, and this is what you intend to do anyway to
implement the optimisation.
d-string could perhaps be made to appear as a first-class language feature
via the 'string' keyword, and lower to the library in the way you describe.
Just don't blanket this across all arrays in general.
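
As an aside, the kind of user-level small-string optimisation alluded to
above can be sketched roughly like this (field names and the buffer size
are arbitrary; an illustration, not a concrete proposal):

struct SmallString
{
    private union
    {
        struct
        {
            immutable(char)* ptr;
            size_t len;
        }
        char[15] small;           // in-place storage for short strings
    }
    private ubyte smallLen = ubyte.max;   // ubyte.max means "not small"

    this(string s)
    {
        if (s.length <= small.length)
        {
            small[0 .. s.length] = s[];
            smallLen = cast(ubyte) s.length;
        }
        else
        {
            ptr = s.ptr;
            len = s.length;
        }
    }

    // The whole content, regardless of which representation is in use.
    const(char)[] opSlice() const
    {
        return smallLen == ubyte.max ? ptr[0 .. len] : small[0 .. smallLen];
    }
}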


Re: Windows 8 Metro support

2012-04-09 Thread Sönke Ludwig


It's possible to use D with WinRT, as someone posted in an other thread:

http://www.reddit.com/tb/ow7qc



But that does not suffice to make a Metro app. For desktop apps there 
shouldn't be a problem, but the Metro side poses more restrictions on 
the app.


Re: Windows 8 Metro support

2012-04-09 Thread Sönke Ludwig

Am 09.04.2012 19:12, schrieb Dmitry Olshansky:


Not true at all, in every talk I've seen on WinRT so far C++ CRT is
still shipped side by side with WinRT. Basically every language has his
own runtime. It wouldn't be Microsoft if they haven't got a solid
reserve of backwards compatibility. Simply put WinRT is a major update
on COM technology and even here it's backwards compatible with the old COM.
The fact that OS API is expossed through this new COM interface is just
a nice feature. I was kind of wondering when they will finally ditch
Win32 API.



I've got my information directly from Microsoft Metro guys. Not totally 
sure how good their knowledge actually is, but for _Metro_ apps they 
said that because of sandboxing it is only allowed to access functions 
of the WinRT - the C++ runtime is an exception but I would guess that 
this does not apply for foreign runtimes. You also have to recompile 
C/C++ libraries with the new runtime. Also, the sandboxing model seemed 
to be a part of WinRT - so DMD executables would not be sandboxed at all.


The desktop world will of course be working exactly like it used to do 
and the Win32 API will probably live on for at least a few OS generations.


Re: Windows 8 Metro support

2012-04-09 Thread Sönke Ludwig

Am 09.04.2012 20:21, schrieb Nick Sabalausky:


Aside from the Win8 stuff (only because I have a hard time believing Win32
won't work on Win8), I strongly agree will all of this.



No, sorry for the confusion, Win32 will work in general! Just Metro apps 
may not use Win32 and other libraries with the exception of the runtime 
and WinRT.





Re: Shared library in D on Linux

2012-04-09 Thread Timo Westkämper

On Monday, 9 April 2012 at 15:14:45 UTC, Ellery Newcomer wrote:
Well, if you're really hankering for a shared lib, try ldc. I 
have gotten it to compile working shared libs in the past.


On 04/09/2012 01:24 AM, "Timo Westkämper" 
" wrote:
On Sunday, 8 April 2012 at 17:59:28 UTC, Timo Westkämper 
wrote:
Does someone know why the lib (.a) packaging instead of 
objects (.o)

works better in this case?


Didn't work after all with -lib. I mixed up outputs.


Thanks, I might switch to ldc, if dmd and gdc fail here.

I found this tls.S script in the druntime sources (src/rt/tls.S). 
Do you think it could be included in the library to make tls 
initialization work?


#if linux

/* The memory between the addresses of _tlsstart and _tlsend is 
the storage for
 * thread-local data in D 2.0.  Both of these rely on the default 
linker script

 * of:
 *  .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) }
 *  .tbss  : { *(.tbss .tbss.* .gnu.linkonce.tb.*) 
*(.tcommon) }

 * to group the sections in that order.
 *
 * Sadly, this does not work because ld orders .tdata after 
.tdata.*, despite

 * what the linker script says.
 */

.file "tls.S"

.globl _tlsstart
.section .tdata,"awT",@progbits
.align 4
.type   _tlsstart, @object
.size   _tlsstart, 4
_tlsstart:
.long   3

.globl _tlsend
.section .tcommon,"awT",@nobits
.align 4
.type   _tlsend, @object
.size   _tlsend, 4
_tlsend:
.zero   4

#endif


I will see if I can copy the exception handling parts from the D 
main wrapping code.


As a temporary solution I did this

extern(C) void _deh_beg() { }
extern(C) void _deh_end() { }


Re: Windows 8 Metro support

2012-04-09 Thread Michel Fortin

On 2012-04-09 19:20:49 +, Sönke Ludwig  said:

I've got my information directly from Microsoft Metro guys. Not totally 
sure how good their knowledge actually is, but for _Metro_ apps they 
said that because of sandboxing it is only allowed to access functions 
of the WinRT - the C++ runtime is an exception but I would guess that 
this does not apply for foreign runtimes. You also have to recompile 
C/C++ libraries with the new runtime. Also, the sandboxing model seemed 
to be a part of WinRT - so DMD executables would not be sandboxed at 
all.


The desktop world will of course be working exactly like it used to do 
and the Win32 API will probably live on for at least a few OS 
generations.


Interesting.

Apple too is moving to a sandboxed model. On the Mac, the sandbox is 
only imposed (or soon will be imposed) to apps distributed through 
their App Store. On iOS, the sandbox is always active. It'd be nice to 
check that the D runtime still runs fine in a sandboxed Mac app; 
otherwise it restricts even more where you can use D code. Fortunately, 
Apple's sandbox doesn't require you to use a whole different set of 
APIs; it just prevents the code from doing the things it has no 
entitlements for.


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Can't assign to static array in ctor?

2012-04-09 Thread bearophile

H. S. Teoh:


struct S {
const(int)[4] data;
this(const(int)[4] d) {
data = d;   // this is line 4
}
}

void main() {
S s;
}


I think this used to work (do you have an older DMD to verify 
it?). So maybe this is regression.


Bye,
bearophile


Re: Windows 8 Metro support

2012-04-09 Thread Michel Fortin

On 2012-04-09 16:39:30 +, Sönke Ludwig  said:

Right now, if we don't catch up here, D will slowly degrade to a pure 
server and command line application language which surely wouldn't do 
it justice.


I share your feeling. In fact, I'm not using D anywhere right now 
because it'd be too inconvenient for what I do most of the time.


Another such thing - although this can be worked around - would be 
direct support for Objective-C classes like in Michel Fortin's dmd 
modification. I think these GUI application related functionalities are 
by far the most important things for D's mass adoption.


And the reason GUI apps are so important is because they're the front 
end of most back ends. If using D on the back end makes it harder to 
build the user interface because of the language barrier, then that's a 
huge downside to using D on the back end of any project where the goal 
includes a user interface. For me at least, C++ is a much better choice 
for the backend of a GUI app at the moment, mostly because it intermixes 
easily with Objective-C.


And personally, I would even be willing to donate a (for me) 
considerable amount of money to help bringing this forward because many 
things I would like to realize with D are currently (almost) impossible.


I started the D/Objective-C project, patching DMD, because after a huge 
attempt at making a bridge I found out it wasn't going to cut it. The 
need for an intermediate layer at the language level is a huge 
liability: it costs compilation time, slows down the program, bloats 
the executable size, and it increases the memory footprint.


The problem is that D/Objective-C still needs a huge investment in 
development time to become really useful. It's more a proof of concept 
as it is now. The most important blocks have been shown to work, but 
the difficulty lies in getting all the details/variants right, 
integrating with the GC, automatic reference counting, Apple's Modern 
runtime, ARM, etc.


I'd love to take your money to free some of my time so I can continue 
working on this project. But I'm not too confident it will ever reach a 
satisfactory state without a huge time investment on my part. And I 
can't spare that investment myself, hence why the project is stalled.


As for WinRT and the C++ extensions Microsoft has created for it, it 
looks very similar to what I've been doing to integrate Objective-C 
into D. No doubt my work could be reused to also add similar WinRT 
support.


--
Michel Fortin
michel.for...@michelf.com
http://michelf.com/



Re: Shared library in D on Linux

2012-04-09 Thread Iain Buclaw
On 9 April 2012 20:37,  <"Timo Westkämper\"
"@puremagic.com> wrote:
> On Monday, 9 April 2012 at 15:14:45 UTC, Ellery Newcomer wrote:
>>
>> Well, if you're really hankering for a shared lib, try ldc. I have gotten
>> it to compile working shared libs in the past.
>>
>> On 04/09/2012 01:24 AM, "Timo Westkämper" "
>> wrote:
>>>
>>> On Sunday, 8 April 2012 at 17:59:28 UTC, Timo Westkämper wrote:

 Does someone know why the lib (.a) packaging instead of objects (.o)
 works better in this case?
>>>
>>>
>>> Didn't work after all with -lib. I mixed up outputs.
>
>
> Thanks, I might switch to ldc, if dmd and gdc fail here.
>
> I found this tls.S script in the druntime sources (src/rt/tls.S). Do you
> think it could be included in the library to make tls initialization work?
>
> #if linux
>
> /* The memory between the addresses of _tlsstart and _tlsend is the storage
> for
>  * thread-local data in D 2.0.  Both of these rely on the default linker
> script
>  * of:
>  *      .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) }
>  *      .tbss  : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) }
>  * to group the sections in that order.
>  *
>  * Sadly, this does not work because ld orders .tdata after .tdata.*,
> despite
>  * what the linker script says.
>  */
>
> .file "tls.S"
>
> .globl _tlsstart
>    .section .tdata,"awT",@progbits
>    .align 4
>    .type   _tlsstart, @object
>    .size   _tlsstart, 4
> _tlsstart:
>    .long   3
>
> .globl _tlsend
>    .section .tcommon,"awT",@nobits
>    .align 4
>    .type   _tlsend, @object
>    .size   _tlsend, 4
> _tlsend:
>    .zero   4
>
> #endif
>
>

That assembly file does nothing for shared library support.  I have
been meaning to finish up a solution to help support shared libs,
which would mean more deviation from the dmd compiler's runtime library, but
that's fine.

-- 
Iain Buclaw

*(p < e ? p++ : p) = (c & 0x0f) + '0';


The Downfall of Imperative Programming

2012-04-09 Thread Mirko Pilger

I guess this might be of interest to some.

http://fpcomplete.com/the-downfall-of-imperative-programming/

http://www.reddit.com/r/programming/comments/s112h/the_downfall_of_imperative_programming_functional/


Re: Shared library in D on Linux

2012-04-09 Thread Timo Westkämper

On Monday, 9 April 2012 at 19:59:18 UTC, Iain Buclaw wrote:

On 9 April 2012 20:37,  <"Timo Westkämper\"
"@puremagic.com> wrote:

On Monday, 9 April 2012 at 15:14:45 UTC, Ellery Newcomer wrote:


Well, if you're really hankering for a shared lib, try ldc. I 
have gotten

it to compile working shared libs in the past.

On 04/09/2012 01:24 AM, "Timo Westkämper" 
"

wrote:


On Sunday, 8 April 2012 at 17:59:28 UTC, Timo Westkämper 
wrote:


Does someone know why the lib (.a) packaging instead of 
objects (.o)

works better in this case?



Didn't work after all with -lib. I mixed up outputs.



Thanks, I might switch to ldc, if dmd and gdc fail here.

I found this tls.S script in the druntime sources 
(src/rt/tls.S). Do you
think it could be included in the library to make tls 
initialization work?


#if linux

/* The memory between the addresses of _tlsstart and _tlsend 
is the storage

for
 * thread-local data in D 2.0.  Both of these rely on the 
default linker

script
 * of:
 *      .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) }
 *      .tbss  : { *(.tbss .tbss.* .gnu.linkonce.tb.*) 
*(.tcommon) }

 * to group the sections in that order.
 *
 * Sadly, this does not work because ld orders .tdata after 
.tdata.*,

despite
 * what the linker script says.
 */

.file "tls.S"

.globl _tlsstart
   .section .tdata,"awT",@progbits
   .align 4
   .type   _tlsstart, @object
   .size   _tlsstart, 4
_tlsstart:
   .long   3

.globl _tlsend
   .section .tcommon,"awT",@nobits
   .align 4
   .type   _tlsend, @object
   .size   _tlsend, 4
_tlsend:
   .zero   4

#endif




That assembly file does nothing for shared library support.  I 
have
been meaning to finish up a solution to help support shared 
libs,
would mean more deviation from the dmd compiler's runtime 
library, but

that's fine.


Ok. Good to know. Here is what I came up with for now.

I don't have much knowledge of DMD internals yet, so I just played 
around with declarations:


import std.stdio;

// FIXME
__gshared extern(C) void* __data_start;

// FIXME tls marks
extern(C) int _tlsstart;
extern(C) int _tlsend;

// FIXME exception handling markers
extern(C) void _deh_beg() { }
extern(C) void _deh_end() { }

// hooks for init and term
extern (C) void rt_init();
extern (C) void rt_term();

extern (C) void hiD() {
  rt_init();
  writeln("hi from D lib");
  rt_term();
}

//void main();

/*extern(C) {

  void _init() {
rt_init();
  }

  void _fini() {
rt_term();
  }

}*/


For some reason, the _init and _fini parts don't yet work 
properly.


And here the part of the Makefile that created the library:

  dmd -c -g test.d -fPIC
  ld -shared -o libtest.so test.o -lrt -lphobos2 -lpthread

This mostly reflects Jacob Carlborg's comment in the beginning about 
which features are still missing:


* Proper initialization of TLS data
* Setting up exception handling tables
* Setting up module info
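
For completeness, a rough sketch of a D host program loading the
libtest.so built above and calling the exported hiD symbol (error
handling kept minimal):

import core.sys.posix.dlfcn;

void main()
{
    void* lib = dlopen("./libtest.so", RTLD_LAZY);
    assert(lib !is null);

    // Strictly this should be an extern(C) function pointer type; for a
    // no-argument void function the difference is harmless in practice.
    auto hiD = cast(void function()) dlsym(lib, "hiD");
    assert(hiD !is null);
    hiD();                 // prints "hi from D lib"

    dlclose(lib);
}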


Re: Windows 8 Metro support

2012-04-09 Thread Jacob Carlborg

On 2012-04-09 21:20, Sönke Ludwig wrote:


I've got my information directly from Microsoft Metro guys. Not totally
sure how good their knowledge actually is, but for _Metro_ apps they
said that because of sandboxing it is only allowed to access functions
of the WinRT - the C++ runtime is an exception but I would guess that
this does not apply for foreign runtimes. You also have to recompile
C/C++ libraries with the new runtime. Also, the sandboxing model seemed
to be a part of WinRT - so DMD executables would not be sandboxed at all.

The desktop world will of course be working exactly like it used to do
and the Win32 API will probably live on for at least a few OS generations.


D is statically linked to the standard library and runtime.

--
/Jacob Carlborg


Re: Windows 8 Metro support

2012-04-09 Thread Jacob Carlborg

On 2012-04-09 21:23, Sönke Ludwig wrote:


It's possible to use D with WinRT, as someone posted in an other thread:

http://www.reddit.com/tb/ow7qc



But that does not suffice to make a Metro app. For desktop apps there
shouldn't be a problem, but the Metro side poses more restrictions on
the app.


Aha, ok. Don't know about that. BTW, have you seen this: 
http://channel9.msdn.com/Events/Lang-NEXT/Lang-NEXT-2012/The-Windows-Runtime


He's encouraging other language developers to bring their languages to 
Windows 8 and be compatible with WinRT.


--
/Jacob Carlborg


Re: Shared library in D on Linux

2012-04-09 Thread Timo Westkämper

On Monday, 9 April 2012 at 20:31:44 UTC, Timo Westkämper wrote:

On Monday, 9 April 2012 at 19:59:18 UTC, Iain Buclaw wrote:

On 9 April 2012 20:37,  <"Timo Westkämper\"
"@puremagic.com> wrote:
On Monday, 9 April 2012 at 15:14:45 UTC, Ellery Newcomer 
wrote:


Well, if you're really hankering for a shared lib, try ldc. 
I have gotten

it to compile working shared libs in the past.

On 04/09/2012 01:24 AM, "Timo Westkämper" 
"

wrote:


On Sunday, 8 April 2012 at 17:59:28 UTC, Timo Westkämper 
wrote:


Does someone know why the lib (.a) packaging instead of 
objects (.o)

works better in this case?



Didn't work after all with -lib. I mixed up outputs.



Thanks, I might switch to ldc, if dmd and gdc fail here.

I found this tls.S script in the druntime sources 
(src/rt/tls.S). Do you
think it could be included in the library to make tls 
initialization work?


#if linux

/* The memory between the addresses of _tlsstart and _tlsend 
is the storage

for
 * thread-local data in D 2.0.  Both of these rely on the 
default linker

script
 * of:
 *      .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) }
 *      .tbss  : { *(.tbss .tbss.* .gnu.linkonce.tb.*) 
*(.tcommon) }

 * to group the sections in that order.
 *
 * Sadly, this does not work because ld orders .tdata after 
.tdata.*,

despite
 * what the linker script says.
 */

.file "tls.S"

.globl _tlsstart
   .section .tdata,"awT",@progbits
   .align 4
   .type   _tlsstart, @object
   .size   _tlsstart, 4
_tlsstart:
   .long   3

.globl _tlsend
   .section .tcommon,"awT",@nobits
   .align 4
   .type   _tlsend, @object
   .size   _tlsend, 4
_tlsend:
   .zero   4

#endif




That assembly file does nothing for shared library support.  I 
have
been meaning to finish up a solution to help support shared 
libs,
would mean more deviation from the dmd compiler's runtime 
library, but

that's fine.


Ok. Good to know. Here is what I came up with for now.

I have not yet much knowledge of DMD internals so I just played 
around with declarations:


import std.stdio;

// FIXME
__gshared extern(C) void* __data_start;

// FIXME tls marks
extern(C) int _tlsstart;
extern(C) int _tlsend;

// FIXME exception handling markers
extern(C) void _deh_beg() { }
extern(C) void _deh_end() { }

// hooks for init and term
extern (C) void rt_init();
extern (C) void rt_term();

extern (C) void hiD() {
  rt_init();
  writeln("hi from D lib");
  rt_term();
}

//void main();

/*extern(C) {

  void _init() {
rt_init();
  }

  void _fini() {
rt_term();
  }

}*/


For some reasons, the _init and _fini parts don't yet work 
properly.


And here the part of the Makefile that created the library:

  dmd -c -g test.d -fPIC
  ld -shared -o libtest.so test.o -lrt -lphobos2 -lpthread

This mostly reflects Jacob Carlborg's comment in the beginning 
of what features are still missing


* Proper initialization of TLS data
* Setting up exception handling tables
* Setting up module info


I just figured out this alternative approach which works as well:

import std.stdio;

// FIXME
__gshared extern(C) void* __data_start;

// hooks for init and term
extern (C) void rt_init();
extern (C) void rt_term();

extern (C) void hiD() {
  rt_init();
  writeln("hi from D lib");
  rt_term();
}

void main() {}


The declaration of the main function adds the TLS and DEH (exception
handling) parts.
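
For completeness, here is a minimal host-program sketch of my own (not
from the thread) that exercises the exported hiD symbol via dlopen. It
assumes the libtest.so built above is in the current directory and that
the host is built with something like "dmd host.d -L-ldl"; whether the
rt_init/rt_term calls inside the library interact well with the host's
own runtime is exactly the open question in this thread.

// host.d - hypothetical loader for the libtest.so built above
import core.sys.posix.dlfcn;
import std.stdio;

void main() {
  // Open the shared library from the current directory.
  void* lib = dlopen("./libtest.so", RTLD_LAZY);
  if (lib is null) {
    writeln("dlopen failed");
    return;
  }
  scope(exit) dlclose(lib);

  // dlsym returns void*; cast it to a plain function pointer. For a
  // no-argument void function the calling-convention difference between
  // extern(C) and extern(D) should not matter here.
  auto hiD = cast(void function()) dlsym(lib, "hiD");
  if (hiD !is null)
    hiD(); // should print "hi from D lib"
}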



Re: Precise GC

2012-04-09 Thread deadalnix

On 09/04/2012 20:33, Manu wrote:

Eh?
Not sure what you mean. The idea is the template would produce a
struct/table of data instead of being a pointer to a function; this way
the GC could work without calling anything. If the GC was written to
assume GC info in a particular format/structure, it could be written
without any calls.
I'm just saying to leave that as a possibility, and not REQUIRE an
indirect function call for every single allocation in the system. Some
GC might be able to make better use of that sort of setup.


If you have references to objects, you can't avoid a function call. If
you have something you know at compile time, the generated function can
directly call the other function that marks the pointed-to data (or even
do it itself, if you don't fear code bloat) without going back to the GC
and its indirect call.


So it makes no difference in the number of indirect calls you have, but
the struct proposal is a stronger constraint on the GC than the function
one.
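
For illustration only, here is a rough sketch of the two shapes being
contrasted - a passive per-type table the GC walks itself versus a
per-type mark function it calls indirectly. The names are hypothetical;
neither is the actual druntime interface.

// (a) "struct/table" form: the GC scans using a pointer bitmap,
//     with no call into type-specific code.
struct TypeScanInfo
{
    size_t size;              // object size in bytes
    immutable(size_t)[] bits; // one bit per word: 1 = word may hold a pointer
}

// (b) "function" form: the compiler emits a mark routine per type,
//     and the GC makes one indirect call per scanned object.
alias void function(void* obj, void delegate(void* p) mark) MarkScanFn;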


BTW, starting your answer with "Not sure what you mean." should have
been a red flag.


Re: The Downfall of Imperative Programming

2012-04-09 Thread Gour
On Mon, 09 Apr 2012 22:28:01 +0200
Mirko Pilger wrote:

> i guess this might be of interest to some.

Yes, it is... and I wonder if D's FP features are good enough. The author
mentions D, but says: "...This is all good, but not enough..."



Sincerely,
Gour


-- 
Everyone is forced to act helplessly according to the qualities 
he has acquired from the modes of material nature; therefore no 
one can refrain from doing something, not even for a moment.

http://atmarama.net | Hlapicina (Croatia) | GPG: 52B5C810




Re: The Downfall of Imperative Programming

2012-04-09 Thread Sean Cavanaugh

On 4/9/2012 3:28 PM, Mirko Pilger wrote:

i guess this might be of interest to some.

http://fpcomplete.com/the-downfall-of-imperative-programming/

http://www.reddit.com/r/programming/comments/s112h/the_downfall_of_imperative_programming_functional/




I would counter that a flow-based programming approach solves a lot of
the same concurrency problems and doesn't tie you to a programming style
for the actual code (functional vs. declarative), as each module can be
made to do whatever it wants or needs to do.


Re: Precise GC

2012-04-09 Thread Walter Bright

On 4/9/2012 11:30 AM, deadalnix wrote:

On the other hand, TLS can be collected independently and only influences
the thread that owns the data. Both are very powerful improvements, and
the design you propose "as is" cannot provide any means to handle that.
Which is a big missed opportunity, and will be hard to change in the
future.


I think this is an orthogonal issue.


Re: Custom attributes (again)

2012-04-09 Thread deadalnix

On 08/04/2012 12:44, Jacob Carlborg wrote:

On 2012-04-08 09:27, Marco Leise wrote:

I don't want this thread to disappear. The ideas presented here share
some common basic features among the nice-to-haves.



For these to work it would require:
- user annotations to functions/methods/structs/classes
- only CTFE support (as annotations don't change at runtime)


I don't see why the attributes shouldn't be accessible at runtime. Even
if they're read-only it's still good to be able to read the attributes
at runtime.



If it is available at compile time, it is implementable at runtime as a
library. So you pay for it only if you use it, and you don't add a
feature to the language just because it is convenient.
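
As a rough illustration of that point (my own sketch, using existing
compile-time reflection rather than any proposed attribute syntax): a
library template can capture compile-time metadata into plain runtime
data, so runtime access needs no extra language support.

import std.stdio;

// Compile-time introspection result captured as ordinary runtime data.
string[] memberNamesOf(T)()
{
    enum names = [__traits(allMembers, T)]; // computed at compile time
    return names.dup;                       // usable at run time
}

struct Foo { int x; string y; }

void main()
{
    writeln(memberNamesOf!Foo()); // e.g. ["x", "y"]
}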


Re: Shared library in D on Linux

2012-04-09 Thread Ellery Newcomer
On 04/09/2012 03:31 PM, "Timo Westkämper" wrote:



For some reasons, the _init and _fini parts don't yet work properly.



What's wrong with them? If it is a link problem, use gcc -nostartfiles.

Well, I'm doing that, and it compiles and rt_init is called, but it
doesn't seem to be calling the module constructors, so maybe that is a
very bad idea.
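
For reference, a possible variant of the earlier Makefile lines with gcc
driving the link instead of ld (flags are illustrative, mirroring the
command shown before; not a tested recipe):

  dmd -c -g test.d -fPIC
  gcc -nostartfiles -shared -o libtest.so test.o -lrt -lphobos2 -lpthread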


Re: Custom attributes (again)

2012-04-09 Thread Walter Bright

On 4/6/2012 3:49 AM, Timon Gehr wrote:

On 04/06/2012 12:23 PM, Walter Bright wrote:

On 4/6/2012 2:54 AM, Timon Gehr wrote:

Should add additional information to the type Foo. I don't see any
issues with
it, and not supporting it would be very strange.


How would:

@attr(foo) int x;
int y;

work? Are x and y the same type or not?


Yes, they are.

(But a future extension might leave this choice up to 'foo')


Now, consider:

auto c = b ? x : y;

What type does c have? int or @attr(foo)int ? And that's really just the
beginning. How about:

struct S(T) {
T t;
}

Instantiate it with S!int and S!(@attr(foo)int). Are those the same
instantiation, or different? If the same, does S.t have the attribute or
not?


There is no such thing as an @attr(foo) int, because @attr is not a type
constructor.


But you said it was added to the *type*.


Re: Custom attributes (again)

2012-04-09 Thread Walter Bright

On 4/6/2012 4:20 AM, Manu wrote:

On 4/6/2012 2:54 AM, Timon Gehr wrote:
 Should add additional information to the type Foo.
Attributes are on the declaration, and not passed around.


Right, they are not added to the *type*.


Re: Discussion on Go and D

2012-04-09 Thread SomeDude

On Monday, 9 April 2012 at 00:18:22 UTC, Manu wrote:
On 9 April 2012 02:24, Walter Bright wrote:


still clearly isn't there yet when it comes to the GC either, and I'm
amazed Google think Go is production ready if that guy's findings are
true!


They use it on 64-bit servers with tons of RAM. But they can't target
Android-powered mobile devices with that language, and D may have
similar issues.

On what kind of server does the web forum run?


The new std.process?

2012-04-09 Thread Nick Sabalausky
Wasn't someone working on a std.process overhaul? Whatever happened to
that?



