variable x cannot be read at compile time - how to get around this?

2014-11-12 Thread Sergey via Digitalmars-d

  Hello everyone!

I need to create a two-dimensional array in this way, for example:

auto x = 10;
auto y = 10;
auto some_array = new string[x][y];
variable x cannot be read at compile time

I tried this:
enum columns_array = 
[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20];

auto y = 10;
int i = 1;
auto some_array = new string[columns_array[i]][y];
Error: columns_array is used as a type

And another question: what if I have a function like this?
string[x][] some_function (some par) {
   auto x = 10;
   auto y = 10;
   auto some_array = new string[x][y];
   return some_array;
   }
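
When the dimensions are only known at runtime, D's array-of-arrays form of `new` avoids the compile-time requirement that a fixed-size `string[x][y]` imposes. A minimal sketch:

```d
void main()
{
    auto x = 10;
    auto y = 10;
    // string[x][y] requires x and y to be compile-time constants;
    // the array-of-arrays form accepts runtime values.
    auto some_array = new string[][](y, x); // y rows, x columns each
    assert(some_array.length == y);
    assert(some_array[0].length == x);
}
```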

Thanks in advance.


Byron Scott is not a fan

2014-11-12 Thread Joakim via Digitalmars-d

"Byron Scott on D: 'It was just terrible'" :)

http://espn.go.com/los-angeles/nba/story/_/id/11868151/los-angeles-lakers-coach-byron-scott-rips-team-lack-defense


Re: Microsoft now giving away VS 2013

2014-11-12 Thread ZombineDev via Digitalmars-d

Probably more important:

Microsoft starts to open source .NET and take it cross-platform 
to Mac, Linux:


http://venturebeat.com/2014/11/12/microsoft-starts-to-open-source-net-and-take-it-cross-platform-to-mac-linux/

https://github.com/Microsoft/dotnet

License

.NET open source projects typically use either the MIT or Apache 
2 licenses for code. Some projects license documentation and 
other forms of content under Creative Commons Attribution 4.0. 
See specific projects to understand the license used.




Re: Microsoft now giving away VS 2013

2014-11-12 Thread David Nadlinger via Digitalmars-d
On Thursday, 13 November 2014 at 00:58:41 UTC, Walter Bright 
wrote:
This is good news for D! It lowers the bar for writing 64 bit D 
code on Windows, and it also enables us to abandon support for 
versions of VS prior to 2013.


Does it? Weren't the relevant parts available as part of VS 
Express and/or the Windows SDK? (Dreamspark Premium makes you 
rather oblivious in that regard.)


In any case, this is good news for Windows development indeed.

David


Re: GC: memory collected but destructors not called

2014-11-12 Thread Steven Schveighoffer via Digitalmars-d

On 11/12/14 3:00 PM, Uranuz wrote:

If we had something like a *scoped destructor* (that would be
executed at scope exit), could it help to release some resources? I worry
about this problem too, because even when using a class to hold a resource I
experience some *delays* in releasing it. For example, I have a database
connection open, and I want to close it when I have finished my job.
Relying on the GC, I sometimes experience problems like *too many DB
connections*, because the GC does not free them quickly enough.


I would use the GC only as a last resort.

Let's say you leak the object and forget to dispose of it. Do you want it to 
leak the DB resource too?


Basically, you want a close() or dispose() method on your object; then 
you can let the GC clean up the memory while you close the DB 
connection synchronously.


-Steve
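
The dispose pattern described above might be sketched like this; the DbConnection type and its internals are hypothetical, not a real library API:

```d
// Hypothetical wrapper: close() releases the external resource
// deterministically; the GC later reclaims only the wrapper's memory.
class DbConnection
{
    private bool open = true;

    void close()
    {
        if (open)
        {
            open = false;
            // ... release the underlying database handle here ...
        }
    }
}

void runQueries()
{
    auto conn = new DbConnection;
    scope(exit) conn.close(); // closed even if an exception is thrown
    // ... use the connection ...
}
```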


Microsoft now giving away VS 2013

2014-11-12 Thread Walter Bright via Digitalmars-d

http://techcrunch.com/2014/11/12/microsoft-makes-visual-studio-free-for-small-teams/

This is good news for D! It lowers the bar for writing 64 bit D code on Windows, 
and it also enables us to abandon support for versions of VS prior to 2013.


Re: Why is `scope` planned for deprecation?

2014-11-12 Thread Andrei Alexandrescu via Digitalmars-d

On 11/12/14 2:10 PM, deadalnix wrote:

On Wednesday, 12 November 2014 at 15:57:18 UTC, Nick Treleaven
wrote:

I think Rust's lifetimes would be a huge change if ported to D. In
Rust user types often need annotations as well as function parameters.
People tend to want Rust's guarantees without the limitations. I think
D does need some kind of scope attribute verification, but we need to
throw out some of the guarantees Rust makes to get an appropriate fit
for existing D code.



Rust is not the first language to go down that road. The problem is
that you get great complexity if you don't want to be too
limiting in what you can do. This complexity ultimately ends up
costing more than what you gain.

I think the sane road is to support ownership/borrowing for the
common cases, and fall back on the GC or unsafe constructs for the rest.

One has to admit there is no silver bullet; shoehorning
everything into the same solution is not going to work.


I agree. This is one of those cases in which a good engineering solution 
may be a lot better than the "perfect" solution (and linear types are 
not even perfect...).


Andrei


Re: Why is `scope` planned for deprecation?

2014-11-12 Thread deadalnix via Digitalmars-d

On Wednesday, 12 November 2014 at 15:57:18 UTC, Nick Treleaven
wrote:
I think Rust's lifetimes would be a huge change if ported to D. 
In Rust user types often need annotations as well as function 
parameters. People tend to want Rust's guarantees without the 
limitations. I think D does need some kind of scope attribute 
verification, but we need to throw out some of the guarantees 
Rust makes to get an appropriate fit for existing D code.




Rust is not the first language to go down that road. The problem is
that you get great complexity if you don't want to be too
limiting in what you can do. This complexity ultimately ends up
costing more than what you gain.

I think the sane road is to support ownership/borrowing for the
common cases, and fall back on the GC or unsafe constructs for the rest.

One has to admit there is no silver bullet; shoehorning
everything into the same solution is not going to work.


Re: On heap segregation, GC optimization and @nogc relaxing

2014-11-12 Thread deadalnix via Digitalmars-d
On Wednesday, 12 November 2014 at 20:36:32 UTC, Dmitry Olshansky 
wrote:

Seems sane. owned(Exception) would be implicitly assumed i.e.:
catch(Exception e){ ... }

would be seen by compiler as:
catch(owned(Exception) e){ ... }

What happens if I throw an l-value exception? Do I need to cast it 
or assumeOwned it?


It's easy to see how it goes with r-values, such as new 
Exception(...), since they are "unique expressions" whatever 
that means ;)




Yes, the unsafe road must always be open; we are a systems 
programming language :)


I take it that owned(T) is implicitly deduced by the compiler in 
the case of pure functions? Also, it seems templates should not take 
owned(T) into consideration and should let it decay... How does owned 
compose with other qualifiers?




You mean, what if I have an owned field in an object? In the 
case where you pass the owned where a TL, shared or immutable is 
expected, the island is merged, so the question does not make sense.


An owned field in an object is interpreted as follows:
 - immutable => immutable
 - shared => owned (and can be touched only if the shared object 
is synchronized, which allows hiding a whole hierarchy behind a 
mutex. That is another selling point, but I didn't want to get 
into all the details as the post was already quite big).
 - const => const owned (essentially unusable, except via 
borrowing if we ever want to go down that road one day).


Seems absolutely cool. But doesn't allocating an exception touch the 
heap anyway? I take it that if I don't save the exception 
explicitly anywhere, the owned island is destroyed at catch 
scope?




Yes, it touches the heap. But as long as things are owned, they'll 
be freed automatically when going out of scope. That means, with 
that definition of things, what is forbidden in @nogc code is to 
consume the owned in such a fashion that its island is merged 
into the TL, shared or immutable heap. If you don't do this, then 
your isolated data will be freed when going out of scope, the GC 
won't need to kick in, and no garbage will be produced.


Doing so allows relaxing the constraint in @nogc and allows the 
same library code to be used with or without the GC.


Re: Program logic bugs vs input/environmental errors

2014-11-12 Thread via Digitalmars-d
On Wednesday, 12 November 2014 at 20:52:28 UTC, Ola Fosheim 
Grøstad wrote:
In order to be consistent with your line of reasoning, it 
should simply HLT; then a SIGSEGV handler should set up a 
preallocated stack, obtain the information and send it off to a 
logging service using pure system calls before terminating (or 
send it to the parent process).


Btw, in C you should get SIGABRT on assert()


Re: On heap segregation, GC optimization and @nogc relaxing

2014-11-12 Thread deadalnix via Digitalmars-d

On Wednesday, 12 November 2014 at 12:49:41 UTC, Marc Schütz wrote:
All this is unfortunately only true if there are no references 
between heaps, i.e. if the heaps are indeed "islands". 
Otherwise, there need to be at least write barriers.




Yes, that is exactly why I'm listing the cases where these can be 
created in @safe code and proposing a solution to plug the hole 
(which brings other benefits along the road).


Re: C++ overloaded operators and D

2014-11-12 Thread IgorStepanov via Digitalmars-d

On Wednesday, 12 November 2014 at 20:49:42 UTC, Marc Schütz wrote:
On Wednesday, 12 November 2014 at 19:32:32 UTC, IgorStepanov 
wrote:
On Wednesday, 12 November 2014 at 14:41:17 UTC, Marc Schütz 
wrote:
On Wednesday, 12 November 2014 at 11:43:36 UTC, IgorStepanov 
wrote:
C++ and D provide different behaviour for operator 
overloading.
D has opIndex + opIndexAssign overloads, and if we want to 
map opIndex to operator[], we must do something with 
opIndexAssign.


operator[] can be mapped to opIndex just fine, right? Only 
opIndexAssign wouldn't be accessible from C++ via an 
operator, but that's because the feature doesn't exist. We 
can still call it via its name opIndexAssign.


operator< and operator> can't be mapped to D. Same for 
operator&.


That's true. Maybe we can just live with pragma(mangle) for 
them, but use D's op... for all others?


Binary arithmetic operators can't be mapped to D if they are 
implemented as static functions:


Foo operator+(int a, Foo f); // unable to map it to D, 
because a static module-level Foo opAdd(int, Foo) will not 
provide the same behaviour as operator+ in D.
Thus: C++ and D overloaded operators should live in 
different worlds.


Can't we map both static and member operators to opBinary 
resp. opBinaryRight members in this case? How likely is it 
that both are defined on the C++ side, and if they are, how 
likely is it that they will behave differently?


opBinary(Right) is a template function. You can't add a prior 
declaration for it to a struct:


//C++
struct Foo
{
   Foo operator+(const Foo&);
};

Foo operator+(int, const Foo&);

//D
extern(C++)
struct Foo
{
   Foo opBinary!"+"(const ref Foo); //???


I see...


}

Foo opBinary!"+"(int, const ref Foo); //???


But this would of course be opBinaryRight, and inside struct 
Foo.


What if
Foo operator+(const Bar&, const Foo&);?
Is it Foo.opBinaryRight, or Bar.opBinary, or both?


std.datetime, destructors, and dustmite

2014-11-12 Thread Walter Bright via Digitalmars-d
In order to make D viable for refcounting, I have to fix all the issues with 
destructors. Some problems with destructors have shown up with std.datetime, but 
getting a reduced example from that tangle is difficult.


If anyone wants to lend a hand and dustmite them down to canonical examples, it 
would be most helpful.


The relevant bug reports are:

https://issues.dlang.org/show_bug.cgi?id=13719
https://issues.dlang.org/show_bug.cgi?id=13720
https://issues.dlang.org/show_bug.cgi?id=13721

Thanks for any help!


Re: Program logic bugs vs input/environmental errors

2014-11-12 Thread via Digitalmars-d
On Wednesday, 12 November 2014 at 20:40:45 UTC, Walter Bright 
wrote:
Forgive me for being snarky, but there are text editing 
utilities where one can:


Snarky is ok. :)

In any case, compiler switches should not change behavior like 
that. assert() and enforce() are completely different.


Well, I don't understand how assert() can unwind the stack if 
everyone should assume that the stack might be trashed and 
therefore invalid.


In order to be consistent with your line of reasoning, it 
should simply HLT; then a SIGSEGV handler should set up a 
preallocated stack, obtain the information and send it off to a 
logging service using pure system calls before terminating (or 
send it to the parent process).




Re: C++ overloaded operators and D

2014-11-12 Thread via Digitalmars-d
On Wednesday, 12 November 2014 at 19:32:32 UTC, IgorStepanov 
wrote:
On Wednesday, 12 November 2014 at 14:41:17 UTC, Marc Schütz 
wrote:
On Wednesday, 12 November 2014 at 11:43:36 UTC, IgorStepanov 
wrote:
C++ and D provide different behaviour for operator 
overloading.
D has opIndex + opIndexAssign overloads, and if we want to 
map opIndex to operator[], we must do something with 
opIndexAssign.


operator[] can be mapped to opIndex just fine, right? Only 
opIndexAssign wouldn't be accessible from C++ via an operator, 
but that's because the feature doesn't exist. We can still 
call it via its name opIndexAssign.


operator< and operator> can't be mapped to D. Same for 
operator&.


That's true. Maybe we can just live with pragma(mangle) for 
them, but use D's op... for all others?


Binary arithmetic operators can't be mapped to D if they are 
implemented as static functions:


Foo operator+(int a, Foo f); // unable to map it to D, because a 
static module-level Foo opAdd(int, Foo) will not provide the 
same behaviour as operator+ in D.
Thus: C++ and D overloaded operators should live in different 
worlds.


Can't we map both static and member operators to opBinary 
resp. opBinaryRight members in this case? How likely is it 
that both are defined on the C++ side, and if they are, how 
likely is it that they will behave differently?


opBinary(Right) is a template function. You can't add a prior 
declaration for it to a struct:


//C++
struct Foo
{
Foo operator+(const Foo&);
};

Foo operator+(int, const Foo&);

//D
extern(C++)
struct Foo
{
Foo opBinary!"+"(const ref Foo); //???


I see...


}

Foo opBinary!"+"(int, const ref Foo); //???


But this would of course be opBinaryRight, and inside struct Foo.


Re: D support in Exuberant Ctags 5.8 for Windows

2014-11-12 Thread Gary Willoughby via Digitalmars-d
On Wednesday, 12 November 2014 at 20:33:32 UTC, Brian Schott 
wrote:

On Tuesday, 11 November 2014 at 18:44:23 UTC, ANtlord wrote:
I want to ask about Dscanner. Does it provide the same formats as 
ctags? I use ctags with --excmd=pattern --fields=nksSa, and 
their output is different from Dscanner's output with the 
--ctags flag.


The ctags output is implemented in this file: 
https://github.com/Hackerpilot/Dscanner/blob/master/src/ctags.d. 
It's less than 200 lines long so you should be able to modify 
it easily.


Are there any tutorial articles for Dscanner/libdparse? They look 
like awesome tools!


Re: GC: memory collected but destructors not called

2014-11-12 Thread via Digitalmars-d

On Wednesday, 12 November 2014 at 20:00:31 UTC, Uranuz wrote:
If we had something like a *scoped destructor* (that would 
be executed at scope exit), could it help to release some 
resources?


Don't know what you mean here. Isn't that just a normal 
destructor?


I worry about this problem too, because even when using a class to hold 
a resource I experience some *delays* in releasing it. For 
example, I have a database connection open, and I want to close 
it when I have finished my job. Relying on the GC, I sometimes experience 
problems like *too many DB connections*, because the GC does not 
free them quickly enough.


I'd say database connections and file descriptors simply 
shouldn't be managed by the GC. It is good at managing memory, 
but not at managing other things. It's better to have a connection pool: 
take a reference to a DB connection from that pool, use it 
as long as you need it, and then give it back. This can be 
implemented nicely with scope(exit).
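
A minimal sketch of that pool idea; the types are hypothetical, and a real pool would also handle thread safety and connection creation:

```d
class DbConnection {} // stand-in for a real connection type

struct ConnectionPool
{
    private DbConnection[] idle;

    DbConnection acquire()
    {
        assert(idle.length > 0, "pool exhausted");
        auto c = idle[$ - 1];
        idle = idle[0 .. $ - 1];
        return c;
    }

    void release(DbConnection c) { idle ~= c; }
}

void handleRequest(ref ConnectionPool pool)
{
    auto conn = pool.acquire();
    scope(exit) pool.release(conn); // always returned, even on exception
    // ... run queries with conn ...
}
```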


Re: Program logic bugs vs input/environmental errors

2014-11-12 Thread Walter Bright via Digitalmars-d
On 11/12/2014 11:40 AM, Ola Fosheim Grøstad wrote:

On Sunday, 9 November 2014 at 21:44:53 UTC, Walter Bright wrote:

Having assert() not throw Error would be a reasonable design choice.


What if you could turn assert() in libraries into enforce() using a compiler
switch?


Forgive me for being snarky, but there are text editing utilities where one can:

   s/assert/enforce/

because if one can use a compiler switch, then one has the source which can be 
edited.


In any case, compiler switches should not change behavior like that. assert() 
and enforce() are completely different.


Re: On heap segregation, GC optimization and @nogc relaxing

2014-11-12 Thread Dmitry Olshansky via Digitalmars-d

12-Nov-2014 05:34, deadalnix wrote:

Hi all,

I want to get back to the subject of ownership and lifetime and propose
a solution, but first, to state the problem in a way that I
haven't seen before (even if I have no doubt some have come to the same
conclusion in the past).


[snip nice summary]


In that world, D has a bizarro position where it uses a combination of
annotations (immutable, shared) and GC. Ultimately, this is a good
solution: use annotations for the common cases, and fall back on GC/unsafe
code when these annotations fall short.


Aye.


Before going into why it falls short, a digression on the GC and the
benefits of segregating the heap. In D, the heap is almost segregated into
3 groups: thread-local, shared and immutable. These groups are very
interesting for the GC:
  - The thread-local heap can be collected while disturbing only one thread.
It should be possible to use different strategies in different threads.
  - The immutable heap can be collected 100% concurrently, without any
synchronization with the program.
  - The shared heap is the only one that requires disturbing the whole
program, but as a matter of good practice this heap should be small
anyway.

Various ML-family languages (like OCaml) have adopted a segregated-heap
strategy and get great benefit out of it. For instance, OCaml's GC is
known to outperform Java's in most scenarios.


+1000
We should take advantage of a segregated heap to make all the complexity of 
shared/immutable/TL finally pay off.



I'd argue for the introduction of a basic ownership system. Something
much simpler than Rust's, which does not cover all use cases. But the good
thing is that we can fall back on GC or unsafe code when the system shows
its limits. That means we rely less on the GC, while being able to
provide a better GC.

We already pay a cost at the interface with type qualifiers, so let's make
the best of it! I'm proposing to introduce a new type qualifier for owned
data.

Now it means that the throw statement expects an owned(Throwable), that pure
functions which currently return an implicitly unique object will return
owned(Object), and that message passing will accept passing owned values
around.

The GC heap can be segregated into islands. We currently have 3 types of
islands: thread-local, shared and immutable. These are builtin islands
with special characteristics in the language. The new qualifier
introduces a new type of island, the owned island.



Seems sane. owned(Exception) would be implicitly assumed i.e.:
catch(Exception e){ ... }

would be seen by compiler as:
catch(owned(Exception) e){ ... }

What happens if I throw an l-value exception? Do I need to cast it or 
assumeOwned it?


It's easy to see how it goes with r-values, such as new Exception(...), 
since they are "unique expressions" whatever that means ;)



An owned island can only refer to other owned islands and to immutable data.
They can be merged into any other island at any time (that is why they
can't refer to TL or shared data).

owned(T) can be passed around as a function parameter, returned, or
stored as a field. When doing so, it is consumed. When an owned is not
consumed and goes out of scope, the whole island is freed.

That means owned(T) can implicitly decay into T, immutable(T) or
shared(T) at any time. When doing so, a call to the runtime is made to
merge the owned island into the corresponding island. If it is passed around
as owned, then the ownership is transferred and all local references to
the island are invalidated (using them is an error).

On an implementation level, a call to a pure function that returns an
owned value could look like this:

{
   IslandID __saved = gc_switch_new_island();
   scope(exit) gc_restore_island(__saved);

   call_pure_function();
}

This allows us to rely much less on the GC and allows for a better GC
implementation.


I take it that owned(T) is implicitly deduced by the compiler in the case of 
pure functions? Also, it seems templates should not take owned(T) into 
consideration and should let it decay... How does owned compose with other 
qualifiers?




@nogc. Remember? It was in the title. What does a @nogc function look
like? A @nogc function does not produce any garbage or trigger a
collection cycle. There is no reason per se to prevent @nogc code from
allocating on the GC heap, as long as you know it won't produce garbage.
That means the only operations you need to ban are the ones that merge
owned things into the TL, shared or immutable heap.

This solves the problem of @nogc + Exception. As Exceptions are
isolated, they can be allocated, thrown and caught in @nogc code
without generating garbage. They can safely bubble out of the @nogc
section of the code and still be safe.



Seems absolutely cool. But doesn't allocating an exception touch the heap 
anyway? I take it that if I don't save the exception explicitly anywhere, the 
owned island is destroyed at catch scope?



In the same way, it opens the door for a LOT of code that is not @nogc to
become @nogc. If the code allocates memory in an owned island and returns it,
then i

Re: D support in Exuberant Ctags 5.8 for Windows

2014-11-12 Thread Brian Schott via Digitalmars-d

On Tuesday, 11 November 2014 at 18:44:23 UTC, ANtlord wrote:
I want to ask about Dscanner. Does it provide the same formats as 
ctags? I use ctags with --excmd=pattern --fields=nksSa, and 
their output is different from Dscanner's output with the 
--ctags flag.


The ctags output is implemented in this file: 
https://github.com/Hackerpilot/Dscanner/blob/master/src/ctags.d. 
It's less than 200 lines long so you should be able to modify it 
easily.




Re: Program logic bugs vs input/environmental errors

2014-11-12 Thread via Digitalmars-d
On the other hand, I guess HLT will signal SIGSEGV, which can be 
caught using a signal handler, but then D should provide the 
OS-specific infrastructure for obtaining the necessary 
information before exiting.


Re: GC: memory collected but destructors not called

2014-11-12 Thread Uranuz via Digitalmars-d
If we had something like a *scoped destructor* (that would be 
executed at scope exit), could it help to release some resources? 
I worry about this problem too, because even when using a class to hold 
a resource I experience some *delays* in releasing it. For example, 
I have a database connection open, and I want to close it when I have 
finished my job. Relying on the GC, I sometimes experience problems 
like *too many DB connections*, because the GC does not free them 
quickly enough.


Re: Program logic bugs vs input/environmental errors

2014-11-12 Thread via Digitalmars-d

On Sunday, 9 November 2014 at 21:44:53 UTC, Walter Bright wrote:
Having assert() not throw Error would be a reasonable design 
choice.


What if you could turn assert() in libraries into enforce() using 
a compiler switch?


Servers should be able to record failure and free network 
resources/locks even on fatal failure.






Re: C++ overloaded operators and D

2014-11-12 Thread IgorStepanov via Digitalmars-d

On Wednesday, 12 November 2014 at 14:41:17 UTC, Marc Schütz wrote:
On Wednesday, 12 November 2014 at 11:43:36 UTC, IgorStepanov 
wrote:
C++ and D provide different behaviour for operator 
overloading.
D has opIndex + opIndexAssign overloads, and if we want to 
map opIndex to operator[], we must do something with 
opIndexAssign.


operator[] can be mapped to opIndex just fine, right? Only 
opIndexAssign wouldn't be accessible from C++ via an operator, 
but that's because the feature doesn't exist. We can still call 
it via its name opIndexAssign.


operator< and operator> can't be mapped to D. Same for 
operator&.


That's true. Maybe we can just live with pragma(mangle) for 
them, but use D's op... for all others?


Binary arithmetic operators can't be mapped to D if they are 
implemented as static functions:


Foo operator+(int a, Foo f); // unable to map it to D, because a 
static module-level Foo opAdd(int, Foo) will not provide the 
same behaviour as operator+ in D.
Thus: C++ and D overloaded operators should live in different 
worlds.


Can't we map both static and member operators to opBinary resp. 
opBinaryRight members in this case? How likely is it that both 
are defined on the C++ side, and if they are, how likely is it 
that they will behave differently?


opBinary(Right) is a template function. You can't add a prior 
declaration for it to a struct:


//C++
struct Foo
{
Foo operator+(const Foo&);
};

Foo operator+(int, const Foo&);

//D
extern(C++)
struct Foo
{
Foo opBinary!"+"(const ref Foo); //???
}

Foo opBinary!"+"(int, const ref Foo); //???

Maybe some cases can be mapped to D, but these cases require 
special consideration.


I suggest a generic rule.

extern(C++)
struct Foo
{
    pragma(mangle, "cppOpAdd") Foo op_add(const ref Foo);
}

extern(C++)
pragma(mangle, "cppOpAdd") Foo op_add2(int, const ref Foo);

Now, if you want to use these overloaded operators as D operators, 
you may wrap them in D operator functions.


extern(C++)
struct Foo
{
    pragma(mangle, "cppOpAdd") Foo op_add(const ref Foo);

    Foo opBinary(string s)(const ref Foo rvl) if (s == "+")
    {
        return op_add(rvl);
    }

    Foo opBinaryRight(string s)(int lvl) if (s == "+")
    {
        return op_add2(lvl, this);
    }
}

extern(C++)
pragma(mangle, "cppOpAdd") Foo op_add2(int, const ref Foo);

This approach allows access to C++ operators and doesn't add new rules 
to the language.


Re: Inspecting GC memory/stats

2014-11-12 Thread Iain Buclaw via Digitalmars-d
On Tuesday, 11 November 2014 at 20:20:26 UTC, Rainer Schuetze 
wrote:



On 11.11.2014 08:36, Iain Buclaw wrote:

Hi,

I find myself wondering: what do people use to inspect the GC on a
running process?

Last night, I had a look at a couple of vibe.d servers that had been
running for a little over 100 days. Both run the same code, but one is
used less (or not at all).

Given that the main app logic is rather simple, and things that might
otherwise be held in memory (files) are offloaded onto a Redis server,
I'd have thought that its consumption would have stayed pretty much
stable. But to my surprise, I found that the server that is under more
(but not heavy) use is consuming a whopping 200MB.

Initially I tried to see if this could be shrunk in some way or form.
I attached gdb to the process and called gc_minimize() and gc_collect(),
but it didn't have any effect.


The GC allocates pools of memory of increasing size. It starts 
with 1 MB, then adds 3 MB for every new pool. (The numbers might be 
slightly different depending on the druntime version.) These 
pools are then used to service any allocation request.


gc_minimize can only return memory to the system if all the 
allocations in a pool have been collected, which is very unlikely.




I'm aware of roughly how the GC grows. But it seems an unlikely 
scenario to have 200MB worth of 3MB pools with at least one live 
object in each.


And if it did get to that state, the next question would be: how? 
I'd expect that if a large number of requests had come in all at 
once, but then I would have been alerted by the network graphs.





When I noticed gc_stats with an informative *experimental* 
warning, I thought "let's just run it anyway and see what 
happens"... SEGV. Wonderful.


I suspect calling gc_stats from the debugger is "experimental" 
because it returns a struct. With a bit of casting, you might 
be able to call "_gc.getStats( *cast(GCStats*)some_mem_adr );" 
instead.




No, that is not the reason. More likely, the iterative scan is 
unsafe. I should have looked closer at the backtrace / memory 
location that was violated (I was in a hurry to get the site back 
up), but a probable cause of the SEGV is that one of the pools 
in gcx.pooltable[n] or pages in pool.pagetable[n] was pointing to 
a freed, stomped, or null location.



Iain.


Re: std.experimental.logger formal review round 3

2014-11-12 Thread Dicebot via Digitalmars-d
On Wednesday, 12 November 2014 at 12:39:24 UTC, Robert burner 
Schadek wrote:
Only one thread can write to one Logger at a time; that's also known 
as synchronization. Anything else is probably wrong. But you 
can have as many Loggers as you have memory.


One thing we can improve is to use thread-local stdlog singletons 
(that forward to the global shared one by default) instead of using 
the shared one as the entry point. In server applications, one would 
typically want to have logs buffered per thread, or even sent to a 
remote process / machine in parallel; while this can be done 
right now, it would require applications to refrain from using 
stdlog completely.


A better approach that comes to my mind is to have both a 
thread-local and a global configurable stdlog, and to have means to 
explicitly acquire a lock on the global one from the local proxies 
for optimized bulk logging.
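
The thread-local proxy idea might be sketched like this; all names are hypothetical, and this is not the actual std.experimental.logger API:

```d
// Per-thread proxy buffers messages and flushes them to a
// process-wide sink under a single lock (bulk logging).
class SharedSink
{
    void writeAll(string[] msgs)
    {
        synchronized (this)
        {
            // ... write the whole batch to a file or socket ...
        }
    }
}

__gshared SharedSink globalSink; // process-wide
ThreadLog threadLog;             // module-scope variables are thread-local in D

class ThreadLog
{
    private string[] buffer;

    void log(string msg) { buffer ~= msg; }

    void flush()
    {
        globalSink.writeAll(buffer); // one lock acquisition per batch
        buffer.length = 0;
    }
}
```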


Re: std.string import cleanup: how to fix regression?

2014-11-12 Thread Dicebot via Digitalmars-d

Will this work?

static import std.string;
deprecated public alias split = std.string.split;


Re: Splitting stdio, etc.

2014-11-12 Thread David Nadlinger via Digitalmars-d
On Wednesday, 12 November 2014 at 15:40:49 UTC, Adam D. Ruppe 
wrote:
But regardless, I still think we should do one thing at a time. 
If the cleaned up import solves the speed+size problem, no need 
to spend the time trying to split the module.


One point that tends to be ignored every time this discussion 
comes up is that of encapsulation. Since access modifiers work on 
a per-module basis in D, I think there is a strong incentive to 
make modules as small as reasonably possible by default. If a 
more coarse-grained structure is desired for imports from user 
code, one can always use package modules.
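
For illustration, a package module lets users keep a coarse-grained import while the implementation stays split into small modules; the module names below are hypothetical:

```d
// foo/package.d -- "import foo;" pulls in the whole package
module foo;

public import foo.parsing;  // hypothetical submodule
public import foo.printing; // hypothetical submodule
```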


David


Re: std.string import cleanup: how to fix regression?

2014-11-12 Thread Ilya Yaroshenko via Digitalmars-d
One solution is a new `extern import std.array : split;` syntax. 
Like `public`, but *not* visible to the module itself.
If the user uses selective imports with std.string, the 
compiler can then deduce the dependencies without full imports.


Furthermore, we can deprecate it later with `deprecated extern 
import std.array : split;`


What should we do? Does anybody have a good idea for getting rid 
of the gratuitous dependency on std.algorithm / std.array without 
breaking user code with no warning?


T


std.string import cleanup: how to fix regression?

2014-11-12 Thread H. S. Teoh via Digitalmars-d
Recently, Ilya has been helping to clean up import dependencies between
Phobos modules. In the course of cleaning up std.string, a few public
imports were removed because they were not referenced by the module
itself. However, this caused a regression:

https://issues.dlang.org/show_bug.cgi?id=13717

The code that got removed was:

--
//Remove when repeat is finally removed. They're only here as part of the
//deprecation of these functions in std.string.
public import std.algorithm : startsWith, endsWith, cmp, count;
public import std.array : join, split;
--

From the comment, it seems clear that the intent is to move these
functions out of std.string into std.algorithm and std.array. However,
there is currently no way to deprecate public imports, so we can't get
rid of this dependency without breaking user code (one of my projects
already doesn't compile because of this).

What should we do? Anybody has a good idea for getting rid of the
gratuitous dependency on std.algorithm / std.array without breaking user
code with no warning?
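One possible way out, sketched here as a suggestion rather than an agreed-upon fix (the deprecation messages and renamed imports are illustrative): replace the public imports with deprecated aliases, which keep old code compiling but warn users to import the symbol's new home:

```d
// Hypothetical replacement inside std/string.d for the removed
// public imports; each alias forwards to the symbol's new home
// and emits a deprecation message when used.
import std.algorithm : algStartsWith = startsWith;
import std.array : arrJoin = join;

deprecated("import std.algorithm to use startsWith")
alias startsWith = algStartsWith;

deprecated("import std.array to use join")
alias join = arrJoin;
```

This would give users at least one release cycle of warnings before the dependency is actually dropped.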


T

-- 
EMACS = Extremely Massive And Cumbersome System


Re: Why is `scope` planned for deprecation?

2014-11-12 Thread via Digitalmars-d
On Wednesday, 12 November 2014 at 15:57:18 UTC, Nick Treleaven 
wrote:

On 11/11/2014 18:01, bearophile wrote:
I agree it's a very important topic (more important/urgent 
than the GC,
also because it reduces the need of the GC). But I think 
Walter thinks
this kind of change introduces too much complexity in D 
(despite it may
eventually become inevitable for D once Rust becomes more 
popular and

programmers get used to that kind of static enforcement).


I think Rust's lifetimes would be a huge change if ported to D. 
In Rust user types often need annotations as well as function 
parameters. People tend to want Rust's guarantees without the 
limitations. I think D does need some kind of scope attribute 
verification, but we need to throw out some of the guarantees 
Rust makes to get an appropriate fit for existing D code.


Have you seen my proposal?

http://wiki.dlang.org/User:Schuetzm/scope

It takes a slightly different approach from Rust. Instead of 
specifying lifetimes, it uses owners, and it's also otherwise 
more simple than Rust's system. E.g. there is no full blown 
borrow checker (and no need for it).




For example, taking a mutable borrowed pointer for a variable 
means you can't even *read* the original variable whilst the 
pointer lives. I think no one would try to make D do that, but 
Rust's reason for adding it is actually memory safety (I don't 
quite understand it, but it involves iterator invalidation 
apparently). It's possible their feature can be refined, but 
basically 'mut' in Rust really means 'unique'.


In my proposal, there's "const borrowing". It still allows access 
to the owner, but not mutation. This is necessary for safe 
implementation of move semantics, and to guard against iterator 
invalidation. It also has other uses, like the problems with 
"transient range", e.g. stdin.byLine(), which overwrite their 
buffer in popFront(). On the other hand, it's opt-in; by default, 
owners are mutable while borrowed references exist.
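The byLine() case mentioned above is easy to trip over; a minimal illustration of the buffer reuse (the `.dup` line is the usual workaround):

```d
// stdin.byLine reuses one internal buffer, so every saved slice
// ends up aliasing the most recently read line.
import std.stdio;

void main()
{
    char[][] lines;
    foreach (line; stdin.byLine)
    {
        lines ~= line;        // bug: all elements alias the same buffer
        // lines ~= line.dup; // fix: copy the line out of the buffer
    }
}
```

Const borrowing would catch exactly this kind of aliasing mistake statically.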


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread via Digitalmars-d
On Wednesday, 12 November 2014 at 16:47:47 UTC, ketmar via 
Digitalmars-d wrote:

On Wed, 12 Nov 2014 16:40:10 +
Sean Kelly via Digitalmars-d  
wrote:


Try following the big allocation with a really small 
allocation to clear out any registers that may be referencing 
the large block.
but this is clearly not an issue with the sample which does 
`GC.free()`: it stops at 1.7GB, while the C sample does the same 
and stops at 2.9GB.


You should get debuginfo by compiling the runtime with the PRINTF 
debugging flag set:


https://github.com/D-Programming-Language/druntime/blob/master/src/gc/gc.d#L20


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread ketmar via Digitalmars-d
On Wed, 12 Nov 2014 16:40:10 +
Sean Kelly via Digitalmars-d  wrote:

> Try following the big allocation with a really small allocation 
> to clear out any registers that may be referencing the large 
> block.
but this is clearly not an issue with the sample which does `GC.free()`:
it stops at 1.7GB, while the C sample does the same and stops at 2.9GB.


signature.asc
Description: PGP signature


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread Sean Kelly via Digitalmars-d
It's been a while since I've looked at this, but I think the GC will 
effectively call minimize after collecting, so any collected 
large allocations should be returned to the OS. Allocations 
larger than 4K get their own dedicated pool, so fragmentation 
shouldn't come into play here.


Re: Splitting stdio, etc.

2014-11-12 Thread Andrei Alexandrescu via Digitalmars-d

On 11/12/14 7:40 AM, Adam D. Ruppe wrote:

I *might* be wrong about the cost of parsing templates vs instantiating,
I've used several thousand line programs of plain code but not that much
template without pulling in all of Phobos, which makes the speed hard to
isolate.


The cost of D parsing is negligible in most instances. -- Andrei


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread via Digitalmars-d
On Wednesday, 12 November 2014 at 16:20:48 UTC, Steven 
Schveighoffer wrote:
I don't know the internals of C malloc. But I think it should 
be possible to make D merge segments when it needs to.


Yes, but then D must provide a malloc replacement.


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread Sean Kelly via Digitalmars-d
Try following the big allocation with a really small allocation 
to clear out any registers that may be referencing the large 
block.


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread ketmar via Digitalmars-d
On Wed, 12 Nov 2014 16:13:39 +
via Digitalmars-d  wrote:

> On Wednesday, 12 November 2014 at 16:06:32 UTC, ketmar via 
> Digitalmars-d wrote:
> > if i use libc malloc() for allocating, everything works as i
> > expected: address space consumption is on par with allocation 
> > size.
> 
> The GC uses C's calloc rather than implementing memory handling itself 
> using the OS, so you get fragmentation:
> 
> https://github.com/D-Programming-Language/druntime/blob/master/src/gc/gc.d#L2223
ah, and sorry once again, i was tired. ;-) the C program stops at
2,936,012,800, which is much more realistic. i checked three times and
it's really using libc `malloc()` now. so libc malloc is perfectly able
to merge segments, while D GC is not.




Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread ketmar via Digitalmars-d
On Wed, 12 Nov 2014 11:20:48 -0500
Steven Schveighoffer via Digitalmars-d 
wrote:

> One thing I am curious about -- it needs to allocate space to deal with 
> metadata in the heap. That data should be moveable, but I bet it doesn't 
> get moved. That may be why it can't merge segments.
looks like a good GC improvement. ;-)




Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread ketmar via Digitalmars-d
On Wed, 12 Nov 2014 16:23:01 +
Kagamin via Digitalmars-d  wrote:

> On Wednesday, 12 November 2014 at 15:36:48 UTC, ketmar via 
> Digitalmars-d wrote:
> > so heap fragmentation from other allocations can't be the issue.
> 
> Why do you think so?
> Try to go in opposite direction: start from 700MB and decrease 
> allocation size.
i mean "from allocations in other places of the program", not the
"previous allocations in this code". sorry.




Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread Kagamin via Digitalmars-d
On Wednesday, 12 November 2014 at 15:36:48 UTC, ketmar via 
Digitalmars-d wrote:

so heap fragmentation from other allocations can't be the issue.


Why do you think so?
Try to go in opposite direction: start from 700MB and decrease 
allocation size.


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread Steven Schveighoffer via Digitalmars-d

On 11/12/14 11:06 AM, ketmar via Digitalmars-d wrote:

i posted the second sample where i'm doing `GC.free()` to reclaim
memory. as i said, RES is jumping between "almost nothing" and "several
GB" as the sample allocates and frees, but VIRT is growing constantly.

i believe that the GC just can't merge segments, so it keeps asking for more
and more address space for new segments, leaving old ones unused and
unmerged. this way the GC has a lot of free memory, but when it can't
allocate another segment, it throws "out of memory error".


Yes, this is what I think is happening.



if i use libc malloc() for allocating, everything works as i
expected: address space consumption is on par with allocation size.


I don't know the internals of C malloc. But I think it should be 
possible to make D merge segments when it needs to.


One thing I am curious about -- it needs to allocate space to deal with 
metadata in the heap. That data should be moveable, but I bet it doesn't 
get moved. That may be why it can't merge segments.


-Steve




Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread ketmar via Digitalmars-d
On Wed, 12 Nov 2014 16:13:39 +
via Digitalmars-d  wrote:

> On Wednesday, 12 November 2014 at 16:06:32 UTC, ketmar via 
> Digitalmars-d wrote:
> > if i use libc malloc() for allocating, everything works as i
> > expected: address space consumption is on par with allocation 
> > size.
> 
> The GC uses C's calloc rather than implementing memory handling itself 
> using the OS, so you get fragmentation:
> 
> https://github.com/D-Programming-Language/druntime/blob/master/src/gc/gc.d#L2223
hm. my bad, i was checking malloc() with C program and it happens to
use custom allocator. i just carefully re-checked it and it really
works the same as D GC. sorry.

so this seems to be the libc memory manager's fault after all. sorry for
the noise.




Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread via Digitalmars-d
On Wednesday, 12 November 2014 at 16:06:32 UTC, ketmar via 
Digitalmars-d wrote:

if i use libc malloc() for allocating, everything works as i
expected: address space consumption is on par with allocation 
size.


The GC uses C's calloc rather than implementing memory handling itself 
using the OS, so you get fragmentation:


https://github.com/D-Programming-Language/druntime/blob/master/src/gc/gc.d#L2223


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread ketmar via Digitalmars-d
On Wed, 12 Nov 2014 15:19:51 +
Kagamin via Digitalmars-d  wrote:

> On Wednesday, 12 November 2014 at 11:05:11 UTC, ketmar via 
> Digitalmars-d wrote:
> > the question is: am i doing something wrong here? how can i 
> > force GC to
> > stop eating my address space and reuse what it already has?
> 
> Try to allocate the arrays with NO_SCAN flag.
btw, the compiler is smart enough to allocate the array with the NO_SCAN
flag; i checked this with `GC.getAttr()`.
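For reference, a minimal way to perform that check with druntime's core.memory API (a sketch; on 32-bit the allocation size is kept small here):

```d
// verify that a freshly allocated ubyte[] block is marked NO_SCAN,
// i.e. the GC will not scan its contents for pointers
import core.memory;

void main()
{
    auto buf = new ubyte[](1024);
    assert(GC.getAttr(buf.ptr) & GC.BlkAttr.NO_SCAN);
}
```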




Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread ketmar via Digitalmars-d
On Wed, 12 Nov 2014 10:51:31 -0500
Steven Schveighoffer via Digitalmars-d 
wrote:

> On 11/12/14 6:04 AM, ketmar via Digitalmars-d wrote:
> > Hello.
> >
> > let's run this program:
> >
> >import core.sys.posix.unistd;
> >import std.stdio;
> >import core.memory;
> >
> >void main () {
> >  uint size = 1024*1024*300;
> >  for (;;) {
> >auto buf = new ubyte[](size);
> >writefln("%s", size);
> >sleep(1);
> >size += 1024*1024*100;
> >buf = null;
> >GC.collect();
> >GC.minimize();
> >  }
> >}
> >
> > pretty innocent, right? i even trying to help GC here. but...
> >
> >314572800
> >419430400
> >524288000
> >629145600
> >734003200
> >core.exception.OutOfMemoryError@(0)
> >
> > ps.
> >
> > by the way, this is not actually "no more memory", this is "i'm out of
> > address space" (yes, i'm on 32-bit system, GNU/Linux).
> >
> > the question is: am i doing something wrong here? how can i force GC to
> > stop eating my address space and reuse what it already has?
> >
> > sure, i can use libc malloc(), refcounting, and so on, but the question
> > remains: why is the GC not reusing already allocated and freed memory?
> >
> 
> I think I might know what's going on.
> 
> You are continually adding 100MB to the allocation size. Memory is 
> contiguous from the OS, but can get fragmented inside the GC.
> 
> So let's say, you allocate 300MB. Fine. It needs more space from the OS, 
> allocates it, and assigns a pool to that 300MB. Now, you add another 
> 100MB. At this point, it can't fit into the original pool, so it 
> allocates another 400MB. BUT, it doesn't merge the 300MB into that (I 
> don't think), so when it adds another 100MB, it has a 300MB space, and a 
> 400MB space, neither of which can hold 500MB. And it goes on and on. 
> Keep in mind also that it is a frequent error that people make to set a 
> pointer to null and expect the data will be collected. For example, buf 
> could still be in a register.
> 
> I would be interested in how much memory the GC has vs. how much is 
> actually used.
i posted the second sample where i'm doing `GC.free()` to reclaim
memory. as i said, RES is jumping between "almost nothing" and "several
GB" as the sample allocates and frees, but VIRT is growing constantly.

i believe that the GC just can't merge segments, so it keeps asking for more
and more address space for new segments, leaving old ones unused and
unmerged. this way the GC has a lot of free memory, but when it can't
allocate another segment, it throws "out of memory error".

if i use libc malloc() for allocating, everything works as i
expected: address space consumption is on par with allocation size.




Re: Why is `scope` planned for deprecation?

2014-11-12 Thread Nick Treleaven via Digitalmars-d

On 11/11/2014 18:01, bearophile wrote:

I agree it's a very important topic (more important/urgent than the GC,
also because it reduces the need of the GC). But I think Walter thinks
this kind of change introduces too much complexity in D (despite it may
eventually become inevitable for D once Rust becomes more popular and
programmers get used to that kind of static enforcement).


I think Rust's lifetimes would be a huge change if ported to D. In Rust 
user types often need annotations as well as function parameters. People 
tend to want Rust's guarantees without the limitations. I think D does 
need some kind of scope attribute verification, but we need to throw out 
some of the guarantees Rust makes to get an appropriate fit for existing 
D code.


For example, taking a mutable borrowed pointer for a variable means you 
can't even *read* the original variable whilst the pointer lives. I 
think no one would try to make D do that, but Rust's reason for adding 
it is actually memory safety (I don't quite understand it, but it 
involves iterator invalidation apparently). It's possible their feature 
can be refined, but basically 'mut' in Rust really means 'unique'.


Re: convert static arrays to dynamic arrays and return, have wrong data.

2014-11-12 Thread via Digitalmars-d

On Sunday, 9 November 2014 at 10:04:16 UTC, bearophile wrote:
Yeah, what do you suggest to change in the language to avoid 
this problem?


1. Deprecate dynamic arrays.

2. Implement dynamic arrays as a library type with its own 
fat-slice implementation which supports reallocation (slices with 
indirection and stored begin/end indices).


3. Provide conversion to regular shrinkable slices for operations 
on the array with a warning similar to those provided by 
iterators in C++.


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread Steven Schveighoffer via Digitalmars-d

On 11/12/14 6:04 AM, ketmar via Digitalmars-d wrote:

Hello.

let's run this program:

   import core.sys.posix.unistd;
   import std.stdio;
   import core.memory;

   void main () {
     uint size = 1024*1024*300;
     for (;;) {
       auto buf = new ubyte[](size);
       writefln("%s", size);
       sleep(1);
       size += 1024*1024*100;
       buf = null;
       GC.collect();
       GC.minimize();
     }
   }

pretty innocent, right? i even trying to help GC here. but...

   314572800
   419430400
   524288000
   629145600
   734003200
   core.exception.OutOfMemoryError@(0)

ps.

by the way, this is not actually "no more memory", this is "i'm out of
address space" (yes, i'm on 32-bit system, GNU/Linux).

the question is: am i doing something wrong here? how can i force GC to
stop eating my address space and reuse what it already has?

sure, i can use libc malloc(), refcounting, and so on, but the question
remains: why is the GC not reusing already allocated and freed memory?



I think I might know what's going on.

You are continually adding 100MB to the allocation size. Memory is 
contiguous from the OS, but can get fragmented inside the GC.


So let's say, you allocate 300MB. Fine. It needs more space from the OS, 
allocates it, and assigns a pool to that 300MB. Now, you add another 
100MB. At this point, it can't fit into the original pool, so it 
allocates another 400MB. BUT, it doesn't merge the 300MB into that (I 
don't think), so when it adds another 100MB, it has a 300MB space, and a 
400MB space, neither of which can hold 500MB. And it goes on and on. 
Keep in mind also that it is a frequent error that people make to set a 
pointer to null and expect the data will be collected. For example, buf 
could still be in a register.


I would be interested in how much memory the GC has vs. how much is 
actually used.
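A simple control experiment for this hypothesis (a sketch, not a proof): keep the allocation size constant instead of growing it by 100MB each round. If pool fragmentation is the culprit, this variant should keep reusing the same pool, and address space should stay flat:

```d
// same loop as the original report, but with a fixed allocation size
import core.memory;

void main()
{
    enum size = 1024 * 1024 * 300; // constant, unlike the growing original
    foreach (i; 0 .. 100)
    {
        auto buf = new ubyte[](size);
        buf = null;      // drop the only reference
        GC.collect();
        GC.minimize();   // give unused pools back to the OS
    }
}
```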


-Steve


Re: Splitting stdio, etc.

2014-11-12 Thread Adam D. Ruppe via Digitalmars-d
On Wednesday, 12 November 2014 at 15:34:37 UTC, H. S. Teoh via 
Digitalmars-d wrote:

Ilya has been working on localizing imports, which will help in
splitting up these modules.


I think if the imports can be localized, there's no need to split 
the module, especially if it is template heavy like 
std.algorithm, which is de-facto lazy and minimal anyway - they 
are always imported, but the real work is only done upon being 
used.


I *might* be wrong about the cost of parsing templates vs 
instantiating, I've used several thousand line programs of plain 
code but not that much template without pulling in all of Phobos, 
which makes the speed hard to isolate.


But regardless, I still think we should do one thing at a time. 
If the cleaned up import solves the speed+size problem, no need 
to spend the time trying to split the module.



"A one-question geek test. If you get the joke, you're a geek: 
Seen on a California license plate on a VW Beetle: 
'FEATURE'..." -


LOL


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread Steven Schveighoffer via Digitalmars-d

On 11/12/14 10:19 AM, Kagamin wrote:

On Wednesday, 12 November 2014 at 11:05:11 UTC, ketmar via Digitalmars-d
wrote:

the question is: am i doing something wrong here? how can i force GC to
stop eating my address space and reuse what it already has?


Try to allocate the arrays with NO_SCAN flag.


Really that shouldn't matter. The arrays should all be 0-initialized.

-Steve


Re: convert static arrays to dynamic arrays and return, have wrong data.

2014-11-12 Thread Nick Treleaven via Digitalmars-d

On 09/11/2014 10:34, bearophile wrote:

If you just disallow that kind of operations indiscriminately, you
reduce a lot the usefulness of D (because fixed size => dynamic slice
array is a conversion useful in many cases) and probably force the
introduction of many casts, and I don't know if this will increase the
overall safety of the D code.


Seeing as the 'scope' attribute doesn't seem to be happening any time 
soon, shouldn't the compiler reject static array slicing in @safe code? 
The user is then forced to think about the operation, and put the code 
in a @trusted delegate if they think it is actually safe.



It would help a bit if we had @trusted blocks instead of having to call 
a @trusted delegate inline (which is non-obvious). The status quo 
encourages people to just mark whole functions as @trusted, skipping 
much otherwise acceptable safety enforcement.
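For readers unfamiliar with it, the inline @trusted-delegate idiom mentioned above looks roughly like this (a sketch; function and variable names are illustrative):

```d
// only the slicing is wrapped in @trusted; the rest of the
// function remains under @safe checking
@safe int sumFirstTwo()
{
    int[4] fixed = [1, 2, 3, 4];
    // the programmer vouches that the slice doesn't outlive `fixed`
    int[] slice = () @trusted { return fixed[]; }();
    return slice[0] + slice[1];
}
```

Compared to marking the whole function @trusted, this keeps the unsafe surface down to one expression.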




Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread ketmar via Digitalmars-d
On Wed, 12 Nov 2014 15:19:51 +
Kagamin via Digitalmars-d  wrote:

> On Wednesday, 12 November 2014 at 11:05:11 UTC, ketmar via 
> Digitalmars-d wrote:
> > the question is: am i doing something wrong here? how can i 
> > force GC to
> > stop eating my address space and reuse what it already has?
> 
> Try to allocate the arrays with NO_SCAN flag.
why would this make any difference in the demonstrated cases? ubyte
arrays are initialized to zeroes, so they can't contain false pointers.




Re: Splitting stdio, etc.

2014-11-12 Thread H. S. Teoh via Digitalmars-d
On Wed, Nov 12, 2014 at 03:25:40PM +, Adam D. Ruppe via Digitalmars-d wrote:
[...]
> So the stdio splitup would probably get the biggest gain by making
> sure std.format isn't imported unless the user specifically uses it
> (making sure it is locally imported in writefln but not writeln
> would probably do it, since they are templates, no need to split the
> module for this).
> 
> Actually, looking at the code, std.format is already a local import,
> but it is in a plain function in some places, like the private void
> writefx. Why is that function even there?

It's a legacy function that is on the way out. If it hasn't been
deprecated yet, it should be, and it should be deleted within the next
release or two.


> std.format is also imported in module scope in std.conv which is
> imported in std.traits.
> 
> What a mess.
>
> Bottom line, we shouldn't split up modules for its own sake. It should
> be done as a step toward the larger goal of cleaning up the import
> web.

Yeah, Ilya has been working on cleaning up imports in Phobos. Things
should improve in the next release. Hopefully.


T

-- 
There are 10 kinds of people in the world: those who can count in binary, and 
those who can't.


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread ketmar via Digitalmars-d
On Wed, 12 Nov 2014 15:24:08 +
Kagamin via Digitalmars-d  wrote:

> On Wednesday, 12 November 2014 at 12:30:15 UTC, ketmar via 
> Digitalmars-d wrote:
> > this shouldn't fail so soon, right? i'm freeing the memory, so... it's
> > still dying on 1,887,436,800. 1.7GB and that's all? this can't be true,
> > i have 3GB of free RAM (with 1.2GB used) and 8GB of unused swap. and
> > yes, it consumed all of the process address space again.
> 
> Maybe you fragmented the heap and don't have 1.7GB of contiguous 
> memory?
i gave two example programs which demonstrate the effect, and they
aren't excerpts. the only allocating `writef` can be removed too, but
the effect stays. so heap fragmentation from other allocations can't be
the issue.




Re: Splitting stdio, etc.

2014-11-12 Thread H. S. Teoh via Digitalmars-d
On Wed, Nov 12, 2014 at 06:59:03AM -0800, Andrei Alexandrescu via Digitalmars-d 
wrote:
> On 11/12/14 6:51 AM, Shammah Chancellor wrote:
> >Will a PR for splitting stdio up into a package require a DIP?  It
> >should not be a breaking change -- correct?  Some of the standard
> >module files are very substantial at this point and require quite a
> >bit of work to compile a simple "Hello World" application.
> 
> $ wc -l std/stdio.d
> 4130 std/stdio.d
> 
> Looks reasonably sized to me.
> 
> $ wc -l std/**/*.d | sort -nr | head
> 
>   212342 total
>    33275 std/datetime.d
>    14589 std/algorithm.d
>    10703 std/range.d
>     9010 std/uni.d
>     6540 std/math.d
>     6452 std/traits.d
>     6254 std/format.d
>     5235 std/typecons.d
>     5183 std/conv.d
> 
> Better choose from these. Jonathan has long planned to break
> std.datetime into smaller parts, but he's as busy as the next guy so
> feel free to take up on that.
[...]

Ilya has been working on localizing imports, which will help in
splitting up these modules.

I've tried splitting up std.algorithm before but it was way too huge to
be doable in a short amount of time (I didn't have enough free time to
go through the entire module, and there were many problems with
interdependencies, import dependencies, etc.). I'm thinking perhaps it
should be done piecemeal -- start introducing one or two submodules
under std.algorithm and rename the current algorithm.d to package.d,
then gradually migrate functions over to the submodules, keeping them as
public imports in package.d.
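That piecemeal migration could start with a package module along these lines (the submodule names here are hypothetical, not the actual split that was later chosen):

```d
// std/algorithm/package.d -- keeps `import std.algorithm;` working
// while functions gradually move into submodules
module std.algorithm;

public import std.algorithm.searching; // e.g. find, startsWith
public import std.algorithm.sorting;   // e.g. sort, isSorted

// functions not yet migrated keep living in this file
```

Since the package module publicly re-exports everything, user code compiles unchanged throughout the migration.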


T

-- 
"A one-question geek test. If you get the joke, you're a geek: Seen on a 
California license plate on a VW Beetle: 'FEATURE'..." -- Joshua D. Wachs - 
Natural Intelligence, Inc.


Re: Splitting stdio, etc.

2014-11-12 Thread Adam D. Ruppe via Digitalmars-d
On Wednesday, 12 November 2014 at 14:59:02 UTC, Andrei 
Alexandrescu wrote:

$ wc -l std/stdio.d
4130 std/stdio.d

Looks reasonably sized to me.


I think line count shouldn't be the metric. My simpledisplay.d is 
almost 6,000 lines, but it compiles lightning fast and makes a 
small binary. It doesn't import anything in Phobos.


When splitting a module, we need to think about independent 
functionality with the goal of reducing the total number of 
imports a program uses. I think local imports generally help more 
than splitting modules.


So the stdio splitup would probably get the biggest gain by 
making sure std.format isn't imported unless the user 
specifically uses it (making sure it is locally imported in 
writefln but not writeln would probably do it, since they are 
templates, no need to split the module for this).
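The local-import trick relies on templates only compiling their bodies on instantiation; roughly (a sketch, not the actual std.stdio code):

```d
// writefln only drags in std.format when somebody instantiates it;
// writeln stays free of that dependency
void writefln(Args...)(in char[] fmt, Args args)
{
    import std.format; // local import: paid for on use, not on module import
    // ... formatting and output ...
}
```

A program that never calls writefln never pays the std.format compile cost.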



Actually, looking at the code, std.format is already a local 
import, but it is in a plain function in some places, like the 
private void writefx. Why is that function even there?


std.format is also imported in module scope in std.conv which 
is imported in std.traits.


What a mess.



Bottom line, we shouldn't split up modules for its own sake. It 
should be done as a step toward the larger goal of cleaning up 
the import web.


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread Kagamin via Digitalmars-d
On Wednesday, 12 November 2014 at 12:30:15 UTC, ketmar via 
Digitalmars-d wrote:
this shouldn't fail so soon, right? i'm freeing the memory, so... it's
still dying on 1,887,436,800. 1.7GB and that's all? this can't be true,
i have 3GB of free RAM (with 1.2GB used) and 8GB of unused swap. and
yes, it consumed all of the process address space again.


Maybe you fragmented the heap and don't have 1.7GB of contiguous 
memory?


Re: GC: memory collected but destructors not called

2014-11-12 Thread Steven Schveighoffer via Digitalmars-d

On 11/11/14 11:59 PM, Shachar Shemesh wrote:

On 10/11/14 16:19, Steven Schveighoffer wrote:


Only classes call dtors from the GC. Structs do not. There are many
hairy issues with structs calling dtors from GC. Most struct dtors
expect to be called synchronously, and are not expecting to deal with
multithreading issues.

Note that structs inside classes WILL call dtors.


How is this any different? If one should not be allowed, how is the
other okay?


I'm not defending the status quo, I'm just saying what happens today.

But adding struct dtor calls to the GC will not solve the problems 
identified here.


-Steve


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread Kagamin via Digitalmars-d
On Wednesday, 12 November 2014 at 11:05:11 UTC, ketmar via 
Digitalmars-d wrote:
the question is: am i doing something wrong here? how can i 
force GC to

stop eating my address space and reuse what it already has?


Try to allocate the arrays with NO_SCAN flag.


Re: GC: memory collected but destructors not called

2014-11-12 Thread Steven Schveighoffer via Digitalmars-d

On 11/12/14 12:05 AM, Shachar Shemesh wrote:

On 11/11/14 22:41, Steven Schveighoffer wrote:


At this point, I am not super-concerned about this. I cannot think of
any bullet-proof way to ensure that struct dtors for structs that were
meant only for stack variables can be called correctly from the GC.

Isn't "structs meant only for stack variables" a semantic thing? The D
compiler cannot possibly know. Shouldn't that be the programmer's choice?


This
pull doesn't change that, and it does have some nice new features that
we do need for other reasons.

In other words, putting a struct in the GC heap that was written to be
scope-destroyed is an error before and after this pull. Before the pull,
the dtor doesn't run, which is wrong, and after the pull the dtor may
cause race issues, which is wrong. So either way, it's wrong :)

I disagree.

The first is wrong. The second is a corner case the programmer needs to
be aware of, and account for.


The programmer being the user of the struct or the designer? It's 
impossible to force the user to avoid using a struct on the GC, it would 
be enforcement by comment.


But even then, in your dtor, there are issues with accessing what a dtor 
would normally access. Tell me how you implement a reference-counting 
smart pointer when you can't access the reference count in the dtor...



The difference is that, in the first case,
the programmer is left with no tools to fix the problem, while in the
second case this is simply a bug in the program (which, like I said in
another email, also happens with the current implementation when the
struct is inside a class).


Sure there are tools, you can wrap the struct in a class if you like 
pain and suffering. Having the struct dtor called without the wrapper is 
the same issue.



In other words, the second case exposes a second (more direct and more
likely to be handled) path to an already existing problem, while the
first puts the programmer up against a new problem with no work around.


This means all struct dtors need to be thread aware, and this is not 
what you want in a language where the type information dictates whether 
it can be shared or not.


-Steve


Re: std.experimental.logger formal review round 3

2014-11-12 Thread via Digitalmars-d
On Wednesday, 12 November 2014 at 12:39:24 UTC, Robert burner 
Schadek wrote:
Only one thread can write to one Logger at a time, also known 
as synchronization. Anything else is probably wrong. But you 
can have as many Loggers as you have memory.


Taking a lock when the logging call doesn't flush to disk sounds 
rather expensive.


Re: 'partial' keyword in C# is very good for project , what's the same thing in D?

2014-11-12 Thread Regan Heath via Digitalmars-d

On Mon, 10 Nov 2014 18:09:12 -, deadalnix  wrote:


On Monday, 10 November 2014 at 10:21:34 UTC, Regan Heath wrote:
On Fri, 31 Oct 2014 09:30:25 -, Dejan Lekic   
wrote:
In D apps I work on I prefer all my classes in a single module, as is the  
common "D way", or shall I call it the "modular way"?


Sure, but that's not the point of partial.  It's almost never used by  
the programmer directly, and when it is used you almost never need to  
look at the generated partial class code as "it just works".  So, you  
effectively get what you "prefer" but you also get clean separation  
between generated and user code, which is very important if the  
generated code needs to be re-generated and it also means the user code  
stays simpler, cleaner and easier to work with.


Basically it's just a good idea(TM).  Unfortunately as many have said,  
it's not something D2.0 is likely to see.  String mixins aren't the  
nicest thing to use, but at least they can achieve the same/similar  
thing.


R


I don't get how the same can't be achieved with mixin template
for instance.


Someone raised concerns.. I haven't looked into it myself.  If it can,  
great :)
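For what it's worth, a mixin-template sketch of the generated/user-code split (all names are made up):

```d
// "generated" half, regenerated by a tool into its own module;
// hand edits never touch this part
mixin template GeneratedPart()
{
    int generatedField;
    void generatedMethod() { }
}

// hand-written half mixes the generated members in
class MyForm
{
    mixin GeneratedPart;  // pulls in the generated members
    void userMethod() { } // user code stays in this file
}
```

Regenerating the tool output then only rewrites the mixin template's module, which is the main property people want from C#'s `partial`.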


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: Splitting stdio, etc.

2014-11-12 Thread Andrei Alexandrescu via Digitalmars-d

On 11/12/14 6:51 AM, Shammah Chancellor wrote:

Will a PR for splitting stdio up into a package require a DIP?  It
should not be a breaking change -- correct?  Some of the standard module
files are very substantial at this point and require quite a bit of work
to compile a simple "Hello World" application.


$ wc -l std/stdio.d
4130 std/stdio.d

Looks reasonably sized to me.

$ wc -l std/**/*.d | sort -nr | head

  212342 total
   33275 std/datetime.d
   14589 std/algorithm.d
   10703 std/range.d
    9010 std/uni.d
    6540 std/math.d
    6452 std/traits.d
    6254 std/format.d
    5235 std/typecons.d
    5183 std/conv.d

Better choose from these. Jonathan has long planned to break 
std.datetime into smaller parts, but he's as busy as the next guy so 
feel free to take up on that.



Thanks,

Andrei



Splitting stdio, etc.

2014-11-12 Thread Shammah Chancellor via Digitalmars-d
Will a PR for splitting stdio up into a package require a DIP?  It 
should not be a breaking change -- correct?  Some of the standard 
module files are very substantial at this point and require quite a bit 
of work to compile a simple "Hello World" application.




Re: C++ overloaded operators and D

2014-11-12 Thread via Digitalmars-d
On Wednesday, 12 November 2014 at 11:43:36 UTC, IgorStepanov 
wrote:

C++ and D provide different behaviour for operator overloading.
D has opIndex + opIndexAssign overloads, and if we want to 
map operator[] to opIndex, we must do something with 
opIndexAssign.


operator[] can be mapped to opIndex just fine, right? Only 
opIndexAssign wouldn't be accessible from C++ via an operator, 
but that's because the feature doesn't exist. We can still call 
it via its name opIndexAssign.


operator< and operator> can't be mapped to D. Same for 
operator&.


That's true. Maybe we can just live with pragma(mangle) for them, 
but use D's op... for all others?


Binary arithmetic operators can't be mapped to D if they are 
implemented as static functions:


Foo operator+(int a, Foo f); //unable to map it to D, because 
static module-level Foo opAdd(int, Foo) will not provide the 
same behaviour as operator+ in D.
Thus: C++ and D overloaded operators should live in different 
worlds.


Can't we map both static and member operators to opBinary resp. 
opBinaryRight members in this case? How likely is it that both 
are defined on the C++ side, and if they are, how likely is it 
that they will behave differently?
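For reference, D's opBinary/opBinaryRight pair already distinguishes the two operand orders, which is what would let one member cover a C++ member operator and the other cover a C++ free-function operator. A small pure-D sketch of the mechanism (not an interop example):

```d
struct Foo
{
    int value;

    // covers f + 1, analogous to a C++ member operator+
    Foo opBinary(string op : "+")(int rhs) const
    {
        return Foo(value + rhs);
    }

    // covers 1 + f, analogous to a C++ free function operator+(int, Foo)
    Foo opBinaryRight(string op : "+")(int lhs) const
    {
        return Foo(lhs + value);
    }
}

unittest
{
    auto f = Foo(2);
    assert((f + 1).value == 3);
    assert((10 + f).value == 12);
}
```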


Re: GC: memory collected but destructors not called

2014-11-12 Thread via Digitalmars-d
On Wednesday, 12 November 2014 at 13:56:08 UTC, Shachar Shemesh 
wrote:
On 12/11/14 11:29, "Marc Schütz" wrote:


Supposedly, a struct destructor will only access resources that 
the struct itself manages. As long as that's the case, it will 
be safe. In practice, there's still a lot that can go wrong.


Either a struct's destructor can be run from the context of a 
GC, in which case it should run when the struct is directly 
allocated on the heap, or it is not, in which case the fact it 
is run when the struct is inside a class should be considered a 
bug.



Today it happens for structs nested in classes, but not 
allocated directly. I don't see any situation in which this is 
not a bug.


Shachar


I think it's helpful to ask the question of who's responsible for 
destroying an object. If it's the GC, then it's finalization; if 
it's not the GC, it's destruction. Both a destructor and a 
finalizer need to clean up the object itself, and any other 
object it owns. This includes embedded structs, but not 
GC-managed objects created by the constructor.


This applies to both structs and classes as the owning objects, 
and manual/automatic management as well as GC.


But indeed, what's implemented today is inconsistent.
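To make the distinction concrete, here is a small sketch: destroy() performs destruction (the owner decides when it runs), while leaving the object to the GC means finalization at an unspecified later time. The embedded struct's dtor runs in both cases:

```d
import std.stdio;

struct Resource
{
    // cleans up only what the struct itself manages
    ~this() { writeln("Resource released"); }
}

class Owner
{
    Resource res; // embedded struct, destroyed together with the class
}

void main()
{
    auto o = new Owner;
    destroy(o); // destruction: runs the dtor chain now, deterministically
    // without destroy(), the GC would eventually run the same chain
    // as a finalizer, at an unpredictable time and possibly on
    // a different thread
}
```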


Re: GC: memory collected but destructors not called

2014-11-12 Thread Shachar Shemesh via Digitalmars-d

On 12/11/14 11:29, "Marc Schütz" wrote:


Supposedly, a struct destructor will only access resources that the
struct itself manages. As long as that's the case, it will be safe. In
practice, there's still a lot that can go wrong.


Either a struct's destructor can be run from the context of a GC, in 
which case it should run when the struct is directly allocated on the 
heap, or it is not, in which case the fact it is run when the struct is 
inside a class should be considered a bug.



Today it happens for structs nested in classes, but not allocated 
directly. I don't see any situation in which this is not a bug.


Shachar


Re: Why is `scope` planned for deprecation?

2014-11-12 Thread Manu via Digitalmars-d
On 12 November 2014 04:01, bearophile via Digitalmars-d
 wrote:
> Dicebot:
>
>> ixid:
>>>
>>> The ship will have sailed by the time it's ready to fly (gloriously mixed
>>> metaphors), this would seem like such a fundamental issue with a big
>>> knock-on effect on everything else that it should surely be prioritized
>>> higher than that? I am aware you're not the one setting priorities. =)
>>
>>
>> It is going to take such long time not because no one considers it
>> important but because designing and implementing such system is damn hard.
>> Prioritization does not make a difference here.
>
>
> I agree it's a very important topic (more important/urgent than the GC, also
> because it reduces the need of the GC). But I think Walter thinks this kind
> of change introduces too much complexity in D (despite it may eventually
> become inevitable for D once Rust becomes more popular and programmers get
> used to that kind of static enforcement).

I agree. scope is top of my wishlist these days. Above RC/GC, or
anything else you hear me talking about.
I don't think quality RC is practical without scope implemented, and
rvalue temps -> references will finally be solved too.
Quite a few things I care about rest on this, but it doesn't seem to
be a particularly popular topic :(

> Regarding the design and implementation difficulties, is it possible to ask
> for help to one of the persons that designed (or watched closely design) the
> similar thing for Rust?
>
> Bye,
> bearophile


Re: Strengthening contract

2014-11-12 Thread David Nadlinger via Digitalmars-d

On Wednesday, 12 November 2014 at 10:15:45 UTC, Kagamin wrote:
I mean, it's something to keep in mind, as I guess, such 
limitation is not documented and not obvious.


It is documented: http://dlang.org/contracts (although it just 
says "no useful effect" instead of "illegal")
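Concretely, the "no useful effect" comes from D OR-ing the in-contract of an override with that of the overridden method, so a derived class can only loosen a precondition, never strengthen it. A small sketch:

```d
class Base
{
    void f(int x)
    in { assert(x > 0); }
    body { }
}

class Derived : Base
{
    override void f(int x)
    in { assert(x > 10); } // attempted strengthening: no useful effect
    body { }
}

void main()
{
    Base b = new Derived;
    b.f(5); // accepted: in-contracts are OR'ed, and Base's (x > 0) holds
}
```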


David


Re: D support in Exuberant Ctags 5.8 for Windows

2014-11-12 Thread Jussi Jumppanen via Digitalmars-d

On Thursday, 26 July 2012 at 22:06:08 UTC, Gary Willoughby wrote:

I'm looking at this page and trying to download the latest 
CTags 5.8 with D patch compiled for Windows but i'm getting a 
dead link.


FYI the Zeus IDE uses ctags and many years ago I updated ctags
to have some understanding of the D language.

Those code changes were made against the last 5.8 version and
they can be found here:

http://www.zeusedit.com/z300/ctags_src.zip

But as I said earlier those changes are quite old, so I don't
know how well they work with the current D language.



Re: On heap segregation, GC optimization and @nogc relaxing

2014-11-12 Thread via Digitalmars-d

On Wednesday, 12 November 2014 at 06:48:47 UTC, deadalnix wrote:
On Wednesday, 12 November 2014 at 03:13:20 UTC, Rikki 
Cattermole wrote:

[...]


Yes and no. The idea is similar, but it is not doable at the 
library level if we want to get safety and the full benefit out 
of it, as it would require the compiler to introduce some calls 
to the runtime at strategic places, and it does interact with 
@nogc.


I'm not sure. A library implementation may be feasible. For 
invalidation, the objects can be returned to their init state. 
This is "safe", but maybe not ideal, as a compiler error might 
indeed be better. Even implicit conversion to shared & immutable 
will be possible with multiple alias this, though it's worth 
discussing whether an explicit conversion isn't preferable.


As for @nogc, I see it as a crutch that's needed while no "real" 
solution is available, and as a tool for aiding transition once 
we have one.


That said, some compiler support will definitely be necessary.


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread ketmar via Digitalmars-d
On Wed, 12 Nov 2014 12:42:10 +
Matthias Bentrup via Digitalmars-d  wrote:

> On Wednesday, 12 November 2014 at 12:30:15 UTC, ketmar via 
> Digitalmars-d wrote:
> > On Wed, 12 Nov 2014 12:05:25 +
> > thedeemon via Digitalmars-d  wrote:
> >
> >> On Wednesday, 12 November 2014 at 11:05:11 UTC, ketmar via 
> >> Digitalmars-d wrote:
> >> >   734003200
> >> > address space" (yes, i'm on 32-bit system, GNU/Linux).
> >> >
> >> > the question is: am i doing something wrong here? how can i 
> >> > force GC to stop eating my address space and reuse what it 
> >> > already has?
> >> 
> >> Sure: just make the GC precise, not conservative. ;)
> >> With current GC implementation and array this big chances of 
> >> having a word on the stack that looks like a pointer to it and 
> >> prevents it from being collected are almost 100%. Just don't 
> >> store big arrays in GC heap or switch to 64 bits where the 
> >> problem is not that bad since address space is much larger and 
> >> chances of false pointers are much smaller.
> > but 'mkay, let's change the sample a little:
> >
> >   import core.memory;
> >   import std.stdio;
> >
> >   void main () {
> > uint size = 1024*1024*300;
> > for (;;) {
> >   auto buf = new ubyte[](size);
> >   writefln("%s", size);
> >   size += 1024*1024*100;
> >   GC.free(GC.addrOf(buf.ptr));
> >   buf = null;
> >   GC.collect();
> >   GC.minimize();
> > }
> >   }
> >
> > this shouldn't fail so soon, right? i'm freeing the memory, 
> > so... it
> > still dying on 1,887,436,800. 1.7GB and that's all? this can't 
> > be true,
> > i have 3GB of free RAM (with 1.2GB used) and 8GB of unused 
> > swap. and
> > yes, it consumed all of the process address space again.
> 
> On Linux/x86 you have only 3 GB virtual address space, and this 
> has to include the program code + all loaded libraries too. Check 
> out /proc//maps, to see where the dlls are loaded, and look 
> at the largest chunk of free space available. That is the 
> theoretical limit that could be allocated.
i know it. what i can't get is why D allocates more and more address
space with each 'new'. what i'm expecting is address space consumption
on par with 'size', but it grows a lot faster.

seems that i should either read the GC code to see what's going on (oh,
boring!) or write a memory region dumper (it's funnier).

i bet that something is wrong with the GC memory manager though, but
can't prove it for now.


signature.asc
Description: PGP signature


Re: On heap segregation, GC optimization and @nogc relaxing

2014-11-12 Thread via Digitalmars-d

On Wednesday, 12 November 2014 at 02:34:55 UTC, deadalnix wrote:
Before going into why it is fallign short, a digression on GC 
and the benefits of segregating the heap. In D, the heap is 
almost segregated in 3 groups: thread local, shared and 
immutable. These group are very interesting for the GC:
 - Thread local heap can be collected while disturbing only one 
thread. It should be possible to use different strategy in 
different threads.
 - Immutable heap can be collected 100% concurrently without 
any synchronization with the program.
 - Shared heap is the only one that requires disturbing the 
whole program, but as a matter of good practice, this heap 
should be small anyway.


All this is unfortunately only true if there are no references 
between heaps, i.e. if the heaps are indeed "islands". Otherwise, 
there need to be at least write barriers.


I'd argue for the introduction of a basic ownership system. 
Something much simpler than Rust's, that does not cover all use 
cases. But the good thing is that we can fall back on GC or 
unsafe code when the system shows its limits. That means we rely 
less on the GC, while being able to provide a better GC.


We already pay a cost at interface with type qualifier, let's 
make the best of it ! I'm proposing to introduce a new type 
qualifier for owned data.


Now it means that the throw statement expects an 
owned(Throwable), that pure functions that currently return an 
implicitly unique object will return owned(Object), and that 
message passing will accept passing around owned stuff.


The GC heap can be segregated into islands. We currently have 3 
types of islands: thread local, shared and immutable. These 
are built-in islands with special characteristics in the 
language. The new qualifier introduces a new type of island, 
the owned island.


Owned islands can only refer to other owned islands and to 
immutable. They can be merged into any other island at any time 
(that is why they can't refer to TL or shared).


owned(T) can be passed around as a function parameter, returned, 
or stored as a field. When doing so, it is consumed. When an 
owned is not consumed and goes out of scope, the whole island is 
freed.


That means that owned(T) can implicitly decay into T, 
immutable(T), shared(T) at any time. When doing so, a call to 
the runtime is done to merge the owned island to the 
corresponding island. It is passed around as owned, then the 
ownership is transferred and all local references to the island 
are invalidated (using them is an error).


On an implementation level, a call to a pure function that 
returns an owned could look like this:


{
  IslandID __saved = gc_switch_new_island();
  scope(exit) gc_restore_island(__saved);

  call_pure_function();
}


This is nice. Instead of calling fixed helpers in Druntime, it 
can also make an indirect call to allow for pluggable (and 
runtime switchable) allocators.


The solution of passing a policy at compile time for allocation 
is close to what C++'s stdlib is doing, and even if the approach 
proposed by Andrei is better, I don't think this is a good one. 
The proposed approach allows a lot of code to be marked as 
@nogc and allows the caller to decide. That is ultimately 
what we want libraries to look like.


+1

Andrei's approach mixes up memory allocation and memory 
management. Library functions shouldn't know about the latter. 
This proposal is clearly better and cleaner in this respect.


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread Matthias Bentrup via Digitalmars-d
On Wednesday, 12 November 2014 at 12:30:15 UTC, ketmar via 
Digitalmars-d wrote:

On Wed, 12 Nov 2014 12:05:25 +
thedeemon via Digitalmars-d  wrote:

On Wednesday, 12 November 2014 at 11:05:11 UTC, ketmar via 
Digitalmars-d wrote:

>   734003200
> address space" (yes, i'm on 32-bit system, GNU/Linux).
>
> the question is: am i doing something wrong here? how can i 
> force GC to stop eating my address space and reuse what it 
> already has?


Sure: just make the GC precise, not conservative. ;)
With current GC implementation and array this big chances of 
having a word on the stack that looks like a pointer to it and 
prevents it from being collected are almost 100%. Just don't 
store big arrays in GC heap or switch to 64 bits where the 
problem is not that bad since address space is much larger and 
chances of false pointers are much smaller.

but 'mkay, let's change the sample a little:

  import core.memory;
  import std.stdio;

  void main () {
uint size = 1024*1024*300;
for (;;) {
  auto buf = new ubyte[](size);
  writefln("%s", size);
  size += 1024*1024*100;
  GC.free(GC.addrOf(buf.ptr));
  buf = null;
  GC.collect();
  GC.minimize();
}
  }

this shouldn't fail so soon, right? i'm freeing the memory, 
so... it
still dying on 1,887,436,800. 1.7GB and that's all? this can't 
be true,
i have 3GB of free RAM (with 1.2GB used) and 8GB of unused 
swap. and

yes, it consumed all of the process address space again.


On Linux/x86 you have only 3 GB virtual address space, and this 
has to include the program code + all loaded libraries too. Check 
out /proc//maps, to see where the dlls are loaded, and look 
at the largest chunk of free space available. That is the 
theoretical limit that could be allocated.


Re: std.experimental.logger formal review round 3

2014-11-12 Thread Robert burner Schadek via Digitalmars-d

On Wednesday, 12 November 2014 at 05:36:40 UTC, Jose wrote:

On Tuesday, 11 November 2014 at 15:06:49 UTC, Dicebot wrote:

https://github.com/Dicebot/phobos/tree/logger-safety


One shared Logger:
https://github.com/burner/phobos/blob/logger/std/experimental/logger/core.d#L1696

One global function that references that shared Logger:
https://github.com/burner/phobos/blob/logger/std/experimental/logger/core.d#L292

And more importantly a Mutex that is always acquired when 
writing:

https://github.com/burner/phobos/blob/logger/std/experimental/logger/core.d#L1035

Does this mean that if I use this library on a machine that has 
32 cores, 100s of threads and 1000s of fibers, only one 
thread/fiber can write at a time no matter the concrete 
implementation of my logger? Am I missing something or isn't 
this a non-starter?


Thanks,
-Jose


Only one thread can write to one Logger at a time; that is also 
known as synchronization. Anything else is probably wrong. But 
you can have as many Loggers as you have memory.
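So one way to sidestep the shared mutex on a many-core machine is simply not to share: give each thread (or fiber) its own Logger instance. A hedged sketch, assuming std.experimental.logger's FileLogger with its file-name constructor:

```d
import std.experimental.logger;

void worker(string name)
{
    // one Logger per thread: writes never contend with other threads'
    // loggers; only calls on the *same* instance synchronize
    auto log = new FileLogger(name ~ ".log");
    log.info("logging without cross-thread blocking");
}
```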


Re: On heap segregation, GC optimization and @nogc relaxing

2014-11-12 Thread via Digitalmars-d

On Wednesday, 12 November 2014 at 11:51:11 UTC, ponce wrote:
Haswell does not have buffered transactions so you wait for 
the commit, but there are presentations out where Intel has 
put buffered transactions at around 2017… (but I would expect 
a delay).


I wasn't arguing of the current performance (which is not 
impressive).
My point is that transactional memory has limited 
applicability, since locks already fit the bill well.


Yes, Intel style HTM is only an optimization for high contention 
where you already have locking code in place, since you need to 
take a lock as a fallback anyway. But useful for database-like 
situations where you might have 7 readers traversing and 1 writer 
updating a leaf node.


It is of course difficult to say how it will perform in 2./3. 
generation implementations or if the overall hardware 
architecture will change radically (as we see in Phi and 
Parallella).


I can easily imagine that the on-die architecture will change 
radically, within a decade, with the current x86 architecture 
remaining at a coordination level. This is the direction Phi 
seems to be going.


In that case, maybe the performance of the x86 will be less 
critical (if it spends most time waiting and buffering is done in 
hardware).



And I'd argue the same with most lockfree structures actually.


I think in general that you need to create application specific 
data-structures to get performance and convenience. (I seldom 
reuse lists and graph-like data structures for this reason, it is 
just easier to create them from scratch.)


I also agree that you usually can get away with regular locks and 
very simple lockfree structures where performance matters (such 
as a lockfree stack where only one thread removes nodes).


Where performance truly matters you probably need to split up the 
dataset based on the actual computations and run over the data in 
a batch-like SIMD way anyway. (E.g. physics simulation).


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread ketmar via Digitalmars-d
On Wed, 12 Nov 2014 12:05:25 +
thedeemon via Digitalmars-d  wrote:

> On Wednesday, 12 November 2014 at 11:05:11 UTC, ketmar via 
> Digitalmars-d wrote:
> >   734003200
> > address space" (yes, i'm on 32-bit system, GNU/Linux).
> >
> > the question is: am i doing something wrong here? how can i 
> > force GC to stop eating my address space and reuse what it 
> > already has?
> 
> Sure: just make the GC precise, not conservative. ;)
> With current GC implementation and array this big chances of 
> having a word on the stack that looks like a pointer to it and 
> prevents it from being collected are almost 100%. Just don't 
> store big arrays in GC heap or switch to 64 bits where the 
> problem is not that bad since address space is much larger and 
> chances of false pointers are much smaller.
for information: yes, RES is jumping high and low as it should. but
VIRT is steadily growing until there is no more address space available.

so the problem is clearly not in false pointers this time.


signature.asc
Description: PGP signature


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread ketmar via Digitalmars-d
On Wed, 12 Nov 2014 12:05:25 +
thedeemon via Digitalmars-d  wrote:

> On Wednesday, 12 November 2014 at 11:05:11 UTC, ketmar via 
> Digitalmars-d wrote:
> >   734003200
> > address space" (yes, i'm on 32-bit system, GNU/Linux).
> >
> > the question is: am i doing something wrong here? how can i 
> > force GC to stop eating my address space and reuse what it 
> > already has?
> 
> Sure: just make the GC precise, not conservative. ;)
> With current GC implementation and array this big chances of 
> having a word on the stack that looks like a pointer to it and 
> prevents it from being collected are almost 100%. Just don't 
> store big arrays in GC heap or switch to 64 bits where the 
> problem is not that bad since address space is much larger and 
> chances of false pointers are much smaller.
but 'mkay, let's change the sample a little:

  import core.memory;
  import std.stdio;

  void main () {
uint size = 1024*1024*300;
for (;;) {
  auto buf = new ubyte[](size);
  writefln("%s", size);
  size += 1024*1024*100;
  GC.free(GC.addrOf(buf.ptr));
  buf = null;
  GC.collect();
  GC.minimize();
}
  }

this shouldn't fail so soon, right? i'm freeing the memory, so... it
still dying on 1,887,436,800. 1.7GB and that's all? this can't be true,
i have 3GB of free RAM (with 1.2GB used) and 8GB of unused swap. and
yes, it consumed all of the process address space again.


signature.asc
Description: PGP signature


Re: What's blocking DDMD?

2014-11-12 Thread Daniel Murphy via Digitalmars-d

"Suliman"  wrote in message news:tzwckeesoiotlabdy...@forum.dlang.org...

>As of a few hours ago DDMD has gone green in the autotester on the main 
>platforms.


> https://auto-tester.puremagic.com/?projectid=10

I do not see DDMD here. Was it moved to another location?


It is there at the moment, although I do occasionally use that autotester 
project for testing other branches.  As you can see, it's passing everywhere 
except windows. 



Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread ketmar via Digitalmars-d
On Wed, 12 Nov 2014 12:05:25 +
thedeemon via Digitalmars-d  wrote:

> On Wednesday, 12 November 2014 at 11:05:11 UTC, ketmar via 
> Digitalmars-d wrote:
> >   734003200
> > address space" (yes, i'm on 32-bit system, GNU/Linux).
> >
> > the question is: am i doing something wrong here? how can i 
> > force GC to stop eating my address space and reuse what it 
> > already has?
> 
> Sure: just make the GC precise, not conservative. ;)
> With current GC implementation and array this big chances of 
> having a word on the stack that looks like a pointer to it and 
> prevents it from being collected are almost 100%. Just don't 
> store big arrays in GC heap or switch to 64 bits where the 
> problem is not that bad since address space is much larger and 
> chances of false pointers are much smaller.
even with NO_INTERIOR the sample keeps failing (though after more
iterations). no, really, is there ALWAYS a pointer exactly to the start
of the allocated buffer somewhere? i smell something fishy here.


signature.asc
Description: PGP signature


Re: either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread thedeemon via Digitalmars-d
On Wednesday, 12 November 2014 at 11:05:11 UTC, ketmar via 
Digitalmars-d wrote:

  734003200
address space" (yes, i'm on 32-bit system, GNU/Linux).

the question is: am i doing something wrong here? how can i 
force GC to stop eating my address space and reuse what it 
already has?


Sure: just make the GC precise, not conservative. ;)
With the current GC implementation and an array this big, the 
chances of having a word on the stack that looks like a pointer 
to it and prevents it from being collected are almost 100%. Just 
don't store big arrays in the GC heap, or switch to 64 bits, 
where the problem is not that bad, since the address space is 
much larger and the chances of false pointers are much smaller.
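One mitigation worth trying (it reduces, but cannot eliminate, false retention under a conservative GC) is to allocate the big block with attributes that exclude it from scanning and from interior-pointer pinning:

```d
import core.memory;

void main()
{
    enum size_t size = 1024 * 1024 * 300;
    // NO_SCAN: the block holds no pointers, so don't scan its contents;
    // NO_INTERIOR: only a pointer to the block's start keeps it alive
    auto p = cast(ubyte*) GC.malloc(size,
            GC.BlkAttr.NO_SCAN | GC.BlkAttr.NO_INTERIOR);
    auto buf = p[0 .. size];
    // ... use buf ...
    GC.free(p);
}
```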


Re: On heap segregation, GC optimization and @nogc relaxing

2014-11-12 Thread ponce via Digitalmars-d
On Wednesday, 12 November 2014 at 11:19:59 UTC, Ola Fosheim 
Grøstad wrote:
STM = software based transactional memory (without hardware 
support)


I was meaning HTM instead, good catch.

Haswell does not have buffered transactions so you wait for the 
commit, but there are presentations out where Intel has put 
buffered transactions at around 2017… (but I would expect a 
delay).


I wasn't arguing of the current performance (which is not 
impressive).
My point is that transactional memory has limited applicability, 
since locks already fit the bill well. And I'd argue the same 
with most lockfree structures actually.




Re: C++ overloaded operators and D

2014-11-12 Thread IgorStepanov via Digitalmars-d

On Wednesday, 12 November 2014 at 02:37:52 UTC, deadalnix wrote:
On Tuesday, 11 November 2014 at 22:26:48 UTC, IgorStepanov 
wrote:

Now D provides very powerful means to link C++ code with D.
However, D doesn't allow calling C++ overloaded operators.
It's very annoying, because C++ code may not provide 
non-operator analogues.
What do we know about C++ overloadable operators? An overloaded 
operator in C++ is a trivial function/method with a special 
name. Thus operator[](int) differs from an op_index(int) 
function only by its mangling.
C++ operator overloading behaves differently from D's (for 
example, C++ allows separate < and > operator overloads, or 
static functions for binary operators), thus we should avoid 
the temptation of mapping C++ overloaded operators to D's, or 
vice versa.


Also, D provides a pragma(mangle) which allows redefining a 
symbol's mangled name. It takes a string argument and sets the 
mangled name to it:

pragma(mangle, "foo") void bar();//bar.mangleof == foo

I suggest modifying pragma(mangle) to support C++ operators.
If the argument of this pragma is an identifier (for example 
cppOpAdd) and the pragma is applied to an extern(C++) function 
or method, the compiler mangles the function in accordance with 
this identifier.


//C++
struct Foo
{
   int& operator[](int);
   // other fields
};

//D
extern(C++) struct Foo
{
   pragma(mangle, cppOpIndex) ref int op_index(int);
   // other fields
}

//using:
Foo f;
f.op_index(1)++; //OK, op_index is linked with Foo::operator[]
f[1]++; //Error, no special behaviour for op_index

I think this approach is simple, doesn't modify the language, 
can be easily implemented, and is useful. Destroy!


Why would you want to go down that road? Shouldn't extern(C++) 
structs mangle this the right way by themselves?


C++ and D provide different behaviour for operator overloading.
D has opIndex + opIndexAssign overloads, and if we want to map 
operator[] to opIndex, we must do something with opIndexAssign.
operator< and operator> can't be mapped to D. Same for operator&. 
Binary arithmetic operators can't be mapped to D if they are 
implemented as static functions:


Foo operator+(int a, Foo f); //unable to map it to D, because 
static module-level Foo opAdd(int, Foo) will not provide the same 
behaviour as operator+ in D.
Thus: C++ and D overloaded operators should live in different 
worlds.


Re: On heap segregation, GC optimization and @nogc relaxing

2014-11-12 Thread via Digitalmars-d

On Wednesday, 12 November 2014 at 11:08:41 UTC, ponce wrote:
I actually tested Haswell HLE and was underwhelmed (not the 
full STM, was just trying to get more out of some locks).


STM = software based transactional memory (without hardware 
support)


Haswell does not have buffered transactions so you wait for the 
commit, but there are presentations out where Intel has put 
buffered transactions at around 2017… (but I would expect a 
delay).


Re: On heap segregation, GC optimization and @nogc relaxing

2014-11-12 Thread ponce via Digitalmars-d
On Wednesday, 12 November 2014 at 09:56:57 UTC, Paulo  Pinto 
wrote:

On Wednesday, 12 November 2014 at 08:55:30 UTC, deadalnix wrote:

On Wednesday, 12 November 2014 at 08:38:14 UTC, Ola Fosheim
In addition, the whole
CPU industry is backpedaling on the transactional memory 
concept. That is awesome on paper, but it didn't work.


Given the support on Haskell, Clojure and C++ I am not sure if 
they are really backpedaling on it.


The Haskell bugs are supposed to have been fixed in the next 
generation.


And there is PowerPC A2 as well.

Not that I have any use for it, though.

--
Paulo


I actually tested Haswell HLE and was underwhelmed (not the full 
STM, was just trying to get more out of some locks).
The trouble with STM is that to be applicable, you need huge 
contention (else it wouldn't be a bottleneck) and a small task 
to do.
And this use case is already well served by spinlock-guarded 
locks, which allow you to stay in user space most of the time.
That, added to the usual restrictions and gotchas of lock-free 
code, makes it something not very life-changing, at least in my 
limited experience.


either me or GC sux badly (GC don't reuse free memory)

2014-11-12 Thread ketmar via Digitalmars-d
Hello.

let's run this program:

  import core.sys.posix.unistd;
  import std.stdio;
  import core.memory;

  void main () {
uint size = 1024*1024*300;
for (;;) {
  auto buf = new ubyte[](size);
  writefln("%s", size);
  sleep(1);
  size += 1024*1024*100;
  buf = null;
  GC.collect();
  GC.minimize();
}
  }

pretty innocent, right? i even trying to help GC here. but...

  314572800
  419430400
  524288000
  629145600
  734003200
  core.exception.OutOfMemoryError@(0)

ps.

by the way, this is not actually "no more memory", this is "i'm out of
address space" (yes, i'm on 32-bit system, GNU/Linux).

the question is: am i doing something wrong here? how can i force GC to
stop eating my address space and reuse what it already has?

sure, i can use libc malloc(), refcounting, and so on, but the question
remains: why is the GC not reusing already allocated and freed memory?
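for completeness, the libc fallback mentioned above would look roughly like this; the buffer lives outside the GC heap, so the GC's address-space bookkeeping is not involved at all:

```d
import core.stdc.stdlib : free, malloc;

void main()
{
    size_t size = 1024 * 1024 * 300;
    foreach (i; 0 .. 10)
    {
        auto p = cast(ubyte*) malloc(size);
        if (p is null) break;    // truly out of memory/address space
        auto buf = p[0 .. size]; // slice for convenient D-side use
        // ... use buf ...
        free(p);                 // released immediately, reusable
        size += 1024 * 1024 * 100;
    }
}
```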


signature.asc
Description: PGP signature


Re: Connection Problems with forum.dlang.org

2014-11-12 Thread Kagamin via Digitalmars-d
As I understand it, HTTP proxies work at the application layer: 
if the target site refuses the connection, they report it to the 
client as such, sometimes even with a custom page. I'd say 
Wireshark diagnoses the proxy itself, not the target site.


Re: Strengthening contract

2014-11-12 Thread Kagamin via Digitalmars-d
I mean, it's something to keep in mind; as far as I can tell, 
such a limitation is not documented and not obvious.


Re: GC: memory collected but destructors not called

2014-11-12 Thread eles via Digitalmars-d
On Monday, 10 November 2014 at 14:19:26 UTC, Steven Schveighoffer 
wrote:



Is the resource a GC resource? If so, don't worry about it.


I might be wrong, but my view is that in the presence of a GC, 
and under the abstraction of R. Chen, it is wrong to think about 
memory as a resource anymore.


If the abstraction is that of a machine with infinite memory, 
then the very mechanism of freeing memory shall be abstracted 
away.


This asks for decoupling memory management from the management 
of other resources and, in particular, from RAII-like management.


The management of memory and any other resources is pretty much 
identical under C++ since they share the same paradigm. In GC 
languages, this is no longer the case.


So, the real question (that will also cover destructors throwing 
exceptions and so on) is not what is allowed inside a destructor 
(aka finalizer) but *what is allowed in a constructor* of a 
GC-managed object.


Since for symmetry purposes the destructor should undo what the 
constructor did, this cuts the problem as follows:


 1) in GC entities, the destructor should not deal with releasing 
memory. Not only is this job no longer its own, but *there is no 
need for this kind of job*. Memory is infinite.


 2) by symmetry, the constructor should not allocate memory as 
if it had to care about its lifetime.


 3) the question in that case is how the other resources are 
managed?


 3a) If they are to be released in the destructor, then the 
destructor's job should be only this (no memory dealloc). In this 
case, the resources should be taken in the constructor.


 3b) if the destructor disappears completely, then the place to 
acquire resources is no longer the constructor, since those 
resources would never be released.


 Ideally, a transparent mechanism to allocate memory would be 
needed, without explicit allocation. In this case, the symmetry 
would be conserved: the constructor will only acqire resources 
(but not memory!) and those resources are released, 
symmetrically, in the destructor, at the end of the lifetime.
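A minimal D sketch of this symmetry (LogFile and the file path are 
illustrative assumptions, not a proposed API): the constructor 
acquires only a non-memory resource, and the destructor releases 
only that resource, leaving the memory itself entirely to the GC.

```d
import core.stdc.stdio : FILE, fopen, fclose;

// Illustrative only: the constructor acquires a non-memory
// resource; the destructor releases that resource and nothing
// else. No memory management happens in either place.
class LogFile
{
    private FILE* fp;

    this(const(char)* path)
    {
        fp = fopen(path, "w");   // acquire the resource
    }

    ~this()
    {
        if (fp !is null)
            fclose(fp);          // release only the resource
        // memory is the GC's job, not the destructor's
    }
}

void main()
{
    auto log = new LogFile("example.log");
    destroy(log);                // deterministic release; a plain
                                 // GC finalization is not guaranteed
}
```

Note that D does not guarantee a class finalizer ever runs, so 
deterministic release still needs `destroy`, a scope guard, or a 
struct wrapper; the sketch only illustrates the division of labour.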


Re: On heap segregation, GC optimization and @nogc relaxing

2014-11-12 Thread Paulo Pinto via Digitalmars-d

On Wednesday, 12 November 2014 at 08:55:30 UTC, deadalnix wrote:

On Wednesday, 12 November 2014 at 08:38:14 UTC, Ola Fosheim 
Grøstad wrote:
In addition, the whole
CPU industry is backpedaling on the transactional memory 
concept. That is awesome on paper, but it didn't work.


Given the support in Haskell, Clojure and C++, I am not sure 
they are really backpedaling on it.


The Haswell bugs are supposed to have been fixed in the next 
generation.


And there is PowerPC A2 as well.

Not that I have any use for it, though.

--
Paulo


Re: GC: memory collected but destructors not called

2014-11-12 Thread via Digitalmars-d
On Wednesday, 12 November 2014 at 04:59:33 UTC, Shachar Shemesh 
wrote:

On 10/11/14 16:19, Steven Schveighoffer wrote:


Only classes call dtors from the GC. Structs do not. There are 
many hairy issues with structs calling dtors from GC. Most 
struct dtors expect to be called synchronously, and are not 
expecting to deal with multithreading issues.

Note that structs inside classes WILL call dtors.


How is this any different? If one should not be allowed, how is 
the other okay?


Supposedly, a struct destructor will only access resources that 
the struct itself manages. As long as that's the case, it will be 
safe. In practice, there's still a lot that can go wrong.
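To make the hazard concrete, a small D sketch (names are 
illustrative): the struct destructor is safe only while it touches 
state the struct itself manages, yet once the struct sits inside a 
class, the GC decides when, and on which thread, it runs.

```d
// Illustrative sketch: S manages only its own state, so its
// destructor is safe even when triggered by class finalization.
struct S
{
    static int dtorRuns;   // for demonstration only

    ~this()
    {
        // Safe: touches only state the struct itself manages.
        ++dtorRuns;
        // Unsafe here would be: taking locks shared with other
        // threads, or touching other GC objects that may already
        // have been finalized.
    }
}

class C
{
    S s;   // s's destructor WILL run when C is finalized
}

void main()
{
    auto c = new C;
    destroy(c);   // forces finalization deterministically,
                  // running S.~this via the class finalizer
    assert(S.dtorRuns == 1);
}
```

Using `destroy` here only makes the demo deterministic; in normal 
GC operation the same destructor may run at an arbitrary point 
during a collection, which is exactly where things can go wrong.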


Re: Connection Problems with forum.dlang.org

2014-11-12 Thread via Digitalmars-d
On Tuesday, 11 November 2014 at 18:50:15 UTC, Jonathan Marler 
wrote:
On Tuesday, 11 November 2014 at 00:59:59 UTC, Jonathan Marler 
wrote:
I started having issues today connecting to forum.dlang.org from
the proxy server we use at work (I work at HP). I've never had
this problem before today.  I can connect if I use a 3rd party
proxy server (such as https://hide.me/en/proxy).  I captured a
wireshark trace and I'm just not getting any response from the
initial HTTP request.  I get the ACK from the request but never
get any response.  I've been trying it all day and it hasn't
worked.

I first tried using different proxy servers, and it seems that 
none of the HP proxy servers work. So, my best guess is that 
the forum.dlang.org server might be blocking or limiting 
packets from ip addresses it gets too many requests from.

We have a variety of proxies we can use to connect outside our
private network, I've tried three of them and they all can't
connect.  Here's there host names and ip addresses:

1. proxy-txn.austin.hp.com (16.85.175.70)
2. proxy.houston.hp.com (16.85.88.10)
3. web-proxy.corp.hp.com (16.85.175.150)

I don't know who manages the dlang servers, but if someone sees
this could you grep the logs/error logs for these ip addresses
and see if they are being blocked in some way?  There are a lot 
of machines behind these ip addresses, so maybe the server 
thinks it might be getting DOS'd from these ip addresses when, 
in reality, it's just a lot of machines using these ip addresses 
as proxy connections.

Thanks.

P.S.  I didn't post this using my regular login because I don't
want to login through the 3rd party proxy server I'm connecting
through.


I'm still having this issue; it's quite an annoyance.  Can
someone tell me who to contact for this?  Feel free to send me 
an email at johnnymar...@gmail.com

I'm looking to talk to whoever manages the dlang servers.  This
problem occurs on forum.dlang.org and wiki.dlang.org; it does 
not happen when connecting to dlang.org


That would be Vladimir Panteleev.


Re: On heap segregation, GC optimization and @nogc relaxing

2014-11-12 Thread via Digitalmars-d

On Wednesday, 12 November 2014 at 08:55:30 UTC, deadalnix wrote:
I'm sorry to be blunt, but there is nothing actionable in your 
comment. You are just throwing more and more into the pot until 
nobody knows what is in it. But ultimately, the crux of the 
problem is the thing quoted above.


My point is that you are making too many assumptions about both
applications and hardware.

 2. The transactional memory thing is completely orthogonal to 
the subject at hand so, as the details of implementation of 
modern chip, this doesn't belong here. In addition, the whole 
CPU industry is backpedaling on the transactional memory 
concept. That is awesome on paper, but it didn't work.


STM is used quite a bit. Hardware-backed TM is used by IBM.

For many computationally intensive applications, high levels of
parallelism are achieved using speculative computation. TM
supports that.

There are only 2 ways to achieve good design: you remove useless 
things until there is obviously nothing wrong, or you add more 
and more until there is nothing obviously wrong. I won't follow 
you down the second road, so please stay on track.


Good design is achieved by understanding the different patterns of
concurrency in applications and how they can reach peak performance
in the environment (hardware).

If D is locked to a narrow memory model then you can only reach
high performance on a subset of applications.

If D wants to support system level programming then it needs to 
take an open approach to the memory model.


Re: What's blocking DDMD?

2014-11-12 Thread Jacob Carlborg via Digitalmars-d

On 2014-11-12 09:01, Suliman wrote:


I do not see DDMD here. Was it moved to another location?


I would guess it's "DMD Yebblies". "yebblies" is Daniel Murphy's name on 
Github.


--
/Jacob Carlborg


Re: On heap segregation, GC optimization and @nogc relaxing

2014-11-12 Thread deadalnix via Digitalmars-d
On Wednesday, 12 November 2014 at 08:38:14 UTC, Ola Fosheim 
Grøstad wrote:
That changes over time. The current focus in upcoming hardware 
is on:


1. Heterogenous architecture with high performance co-processors

2. Hardware support for transactional memory

Intel CPUs might have buffered transactional memory within 5 
years.




I'm sorry to be blunt, but there is nothing actionable in your 
comment. You are just throwing more and more into the pot until 
nobody knows what is in it. But ultimately, the crux of the 
problem is the thing quoted above.


 1. No, that does not change much over time. The 
implementation details are changing, and recent schemes become 
more complex to accommodate heterogeneous chips, but that is 
irrelevant here. What I've mentioned is true for all of them, 
and has been for at least 2 decades now. There is no sign that 
this is gonna change.
 2. The transactional memory thing is completely orthogonal to 
the subject at hand, so, like the implementation details of 
modern chips, it doesn't belong here. In addition, the whole CPU 
industry is backpedaling on the transactional memory concept. 
That is awesome on paper, but it didn't work.


There are only 2 ways to achieve good design: you remove useless 
things until there is obviously nothing wrong, or you add more 
and more until there is nothing obviously wrong. I won't follow 
you down the second road, so please stay on track.


Re: On heap segregation, GC optimization and @nogc relaxing

2014-11-12 Thread via Digitalmars-d

On Wednesday, 12 November 2014 at 02:34:55 UTC, deadalnix wrote:

The problem at hand here is ownership of data.


"ownership of data" is one possible solution, but not the problem.

We are facing 2 problems:

1. A performance problem: Concurrency in writes (multiple 
writers, one writer, periodical locking during clean up etc).


2. A structural problem: Releasing resources correctly.

I suggest that the ownership focus is on the latter, to support 
solid non-GC implementations. Then rely on conventions for 
multi-threading.


 - Being unsafe and relying on convention. This is the C++ road 
(and a possible road in D). It allows implementing almost any 
desired scheme, but comes at a great cost for the developer.


All performant solutions are going to be "unsafe" in the sense 
that you need to select a duplication/locking level that is 
optimal for the characteristics of the actual application. 
Copying data when you have no writers is too inefficient in real 
applications.


Hardware support for transactional memory is going to be the easy 
approach for speeding up locking.




 - Annotations. This is the Rust road. It also come a great


I think Rust's approach would favour an STM approach where you 
create thread-local copies for processing, then merge the result 
back into the "shared" memory.



Immutability+GC allows safety while keeping interfaces 
simple. That is of great value. It also comes with some nice 
goodies, in the sense that it is easy and safe to share data 
without bookkeeping, allowing one to fit more in cache and 
reduce the amount of garbage created.


How does GC fit more data in the cache? A GC usually has overhead 
and would typically generate more cache-misses due to unreachable 
in-cache ("hot") memory not being available for reallocation.



Relying on convention has the advantage that any scheme can be 
implemented without constraint, while keeping interfaces simple. 
The obvious drawback is that it is time consuming and error 
prone. It also makes a lot of things unclear, and devs choose 
the better-safe-than-sorry road. That means excessive copying 
to make sure one owns the data, which is wasteful (in terms of 
the work for the copy itself, garbage generation and cache 
pressure). If this must be an option locally for system code, 
it doesn't seem like the right option at program scale, and we 
do it in C++ simply because we have to.


Finally, annotations are a great way to combine safety and 
speed, but they generally come at a great cost when implementing 
uncommon ownership strategies, where you end up having to 
express complex lifetime and ownership relations.


The core problem is that if you are unhappy with single-threaded 
applications then you are looking for high throughput using 
multi-threading. And in that case sacrificing performance by not 
using the optimal strategy becomes problematic.


The optimal strategy is entirely dependent on the application and 
the dataset.


Therefore you need to support multiple approaches:

- per data structure GC
- thread local GC
- lock annotations of types or variables
- speculative lock optimisations (transactional memory)

And in the future you will also need to support the integration 
of GPUs/co-processors into mainstream CPUs. Metal and OpenCL are 
only a beginning…



Ideally, we want to map with what the hardware does. So what 
does the hardware do ?


That changes over time. The current focus in upcoming hardware is 
on:


1. Heterogenous architecture with high performance co-processors

2. Hardware support for transactional memory

Intel CPUs might have buffered transactional memory within 5 
years.



from one core to the other. They are bad at shared writable 
data (as effectively, the cache line will have to bounce back 
and forth between cores, and all memory access will need to be 
serialized instead of performed out of order).


This will vary a lot. On x86 you can write to a whole cache line 
(buffered) without reading it first, and it uses a convenient 
cache coherency protocol (so that read/write ops stay in order). 
This is not true for all CPUs.


I agree with others that say that a heterogeneous approach, like 
C++, is the better alternative.  If parity with C++ is important 
then D needs to look closer at OpenMP, but that probably goes 
beyond what D can achieve in terms of implementation.



Some observations:

1. If you are not to rely on conventions for sync'ing threads 
then you need a pretty extensive framework if you want good 
performance.


2. Safety will harm performance.

3. Safety with high performance levels requires a very 
complicated static analysis that will probably not work very well 
for larger programs.


4. For most applications performance will come through 
co-processors (GPGPU etc).


5. If hardware progresses faster than compiler development, then 
you will never reach the performance frontier…



I think D needs to cut down on implementation complexity and 
ensure that the implementation

Re: What's blocking DDMD?

2014-11-12 Thread Suliman via Digitalmars-d
As of a few hours ago DDMD has gone green in the autotester on 
the main platforms.



https://auto-tester.puremagic.com/?projectid=10


I do not see DDMD here. Was it moved to another location?