Re: bug with CTFE std.array.array ?

2013-07-10 Thread monarch_dodra

On Thursday, 11 July 2013 at 01:06:12 UTC, Timothee Cour wrote:

import std.array;

void main(){
  //enum a1=[1].array;//NG: Error: gc_malloc cannot be interpreted at compile time
  enum a2=" ".array;//OK

  import std.string;
  //enum a3=" ".splitLines.array;//NG
  enum a4="".splitLines.array;//OK
  enum a5=" ".split.array;//OK
  //enum a6=" a ".split.array;//NG
  import std.algorithm:filter;
  enum a7=" a ".split.filter!(a=>true).array;
  auto a8=" a ".split.array;
  assert(a8==a7);
  enum a9=[1].filter!(a=>true).array;//OK
}


I don't understand why the NG cases above fail (with "Error: gc_malloc
cannot be interpreted at compile time").

Furthermore, it seems we can bypass the compile-time error by
interleaving filter!(a=>true) (see above), which is even weirder.


Funny, the same question was asked in learn not 3 days ago.
http://forum.dlang.org/thread/frmaptrpnrgnuvcdf...@forum.dlang.org
And yeah, it was fixed.
https://github.com/D-Programming-Language/phobos/pull/1305

To answer your question about "filter": a filter range doesn't have 
length, so instead of taking the efficient code branch in array, 
array simply becomes:

foreach(e; range)
    arr ~= e;

which is more CTFE-friendly than the optimized length-based 
implementation.
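
The distinction can be sketched like so (a simplified illustration of the idea, not the actual Phobos source; arraySketch is a hypothetical name):

```d
import std.range : isInputRange, hasLength, ElementType;

// Simplified sketch: ranges with a known length take an
// allocate-up-front branch; ranges without one (e.g. filter) take a
// plain append loop, which CTFE handles fine.
auto arraySketch(Range)(Range r)
    if (isInputRange!Range)
{
    alias E = ElementType!Range;
    static if (hasLength!Range)
    {
        // length-optimized branch: in 2013 this path went through GC
        // primitives (gc_malloc) that the CTFE interpreter rejected
        auto result = new E[](r.length);
        size_t i;
        foreach (e; r)
            result[i++] = e;
        return result;
    }
    else
    {
        // fallback branch: simple appending
        E[] result;
        foreach (e; r)
            result ~= e;
        return result;
    }
}

void main()
{
    import std.algorithm : filter;
    assert(arraySketch([1, 2, 3]) == [1, 2, 3]);                  // hasLength path
    assert(arraySketch([1, 2, 3].filter!(a => a > 1)) == [2, 3]); // append path
}
```

This is why inserting filter!(a=>true) sidesteps the error: it hides the length and forces the append branch.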


Re: The interaction of encapsulation and properties in D

2013-07-10 Thread Jonathan M Davis
On Thursday, July 11, 2013 04:55:09 Luís Marques wrote:
> So, summing it up: even assuming that performance is not an
> issue, does the advice to always encapsulate your member
> variables (as one would do, for instance, in idiomatic Java)
> actually make sense for D, or would you recommend using public
> member variables when that is more straightforward, and the
> indirection is not yet needed?

In general, I would strongly advise against using public fields if you ever 
might want to replace them with property functions. The reason for this is 
that property functions really don't do all that great a job of emulating 
variables. For instance, taking the address of a variable gives you a 
completely different type than taking the address of a function does, and 
variables can be passed by ref, whereas the result of a property function 
can't (unless it returns by ref, which would ruin the encapsulation that it 
provides). In general, if you use public variables, you're just going to cause 
yourself trouble in the long run unless they stay public variables forever.
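
A minimal sketch of the mismatch described above (S and bump are hypothetical names):

```d
struct S
{
    int field;                              // plain public field
    private int _x;
    @property int prop() { return _x; }     // read property
    @property void prop(int v) { _x = v; }  // write property
}

void bump(ref int v) { ++v; }

void main()
{
    S s;
    bump(s.field);          // OK: a field is an lvalue
    assert(s.field == 1);
    // bump(s.prop);        // error: the getter's result is an rvalue
    int* p = &s.field;      // address of a field is an int*
    // &s.prop denotes the function itself, not an int*
    assert(*p == 1);
    s.prop = 5;             // the setter still reads like assignment
    assert(s.prop == 5);
}
```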

- Jonathan M Davis


Re: Is the compiler supposed to accept this?

2013-07-10 Thread Kenji Hara
I filed the website bug in bugzilla, and posted pull request.

http://d.puremagic.com/issues/show_bug.cgi?id=10605
https://github.com/D-Programming-Language/dlang.org/pull/351

Kenji Hara


2013/7/11 Kenji Hara 

> This is accepts-valid behavior.
>
> function(parameters) => expr
>
> means the combination of:
>
> 1. specifying "context pointer is not necessary"
> 2. lambda syntax "(parameters) => expr"
>
> I think website documentation has a bug.
>
> Kenji Hara
>
>
>
> 2013/7/10 Brian Schott 
>
>> While finishing up work on my parser and grammar specification I found
>> this in container.d:
>>
>> return equal!(function(Elem a, Elem b) => !_less(a,b) && !_less(b,a))
>>  (thisRange, thatRange);
>>
>> It seems to be some strange hybrid of the function literal syntax and the
>> lambda syntax. It's not documented anywhere (surprise!) and I'm not sure if
>> I should support it or file an accepts-invalid bug against DMD.
>>
>
>


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread Jonathan Dunlap
There is, but I don't think this was one of those cases. That 
manifests as an entirely new thread in the forum interface.


You're correct, John; totally my bad. I'll be more vigilant about which 
post I reply to, as I didn't realize that anyone really read into 
the reply's source.


I'm critical of D's GC because I am super passionate about being 
able to one day use D for professional game development. =)


Re: isCallableCTFE trait to test whether an expression is callable during CTFE

2013-07-10 Thread timotheecour

On Thursday, 11 July 2013 at 03:29:13 UTC, Kenji Hara wrote:

On Thursday, 11 July 2013 at 03:10:38 UTC, timotheecour wrote:

On Thursday, 11 July 2013 at 02:17:13 UTC, Timothee Cour wrote:

[snip]


can we add this to std.traits?
it allows (among other things) to write unittests for CTFE, 
etc.


Phobos has an internal helper function for testing CTFEable.
https://github.com/D-Programming-Language/phobos/blob/master/std/exception.d#L1322

Kenji Hara



Mine is more flexible. The one you mention can only work inside a 
static assert statement. It could be reimplemented in terms of 
static assert(isCallableCTFE!dg), but not vice versa.
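
As an illustration of that flexibility, the boolean result can drive a static if to select a code path, which a bare assert-style helper cannot (using the isCallableCTFE template from the proposal; one and val are hypothetical names):

```d
template isCallableCTFE(alias fun)
{
    template isCallableCTFE_aux(alias T)
    {
        enum isCallableCTFE_aux = T;
    }
    // true iff fun() can be evaluated at compile time
    enum isCallableCTFE = __traits(compiles, isCallableCTFE_aux!(fun()));
}

int one() { return 1; }

// select a compile-time or runtime initialization path
static if (isCallableCTFE!one)
    enum val = one();        // precomputed at compile time
else
    immutable val = 1;       // hypothetical runtime fallback

void main()
{
    static assert(isCallableCTFE!one);
    assert(val == 1);
}
```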


Re: bug with CTFE std.array.array ?

2013-07-10 Thread Kenji Hara
This issue is already fixed in git head.

Kenji Hara

2013/7/11 Timothee Cour 

> import std.array;
>
> void main(){
>   //enum a1=[1].array;//NG: Error: gc_malloc cannot be interpreted at
> compile time
>   enum a2=" ".array;//OK
>
>   import std.string;
>   //enum a3=" ".splitLines.array;//NG
>   enum a4="".splitLines.array;//OK
>   enum a5=" ".split.array;//OK
>   //enum a6=" a ".split.array;//NG
>   import std.algorithm:filter;
>   enum a7=" a ".split.filter!(a=>true).array;
>   auto a8=" a ".split.array;
>   assert(a8==a7);
>   enum a9=[1].filter!(a=>true).array;//OK
> }
>
>
> I don't understand why the NG above fail (with Error: gc_malloc cannot be
> interpreted at compile time)
>
> furthermore, it seems we can bypass the CT error with interleaving
> filter!(a=>true) (see above), which is even weirder.
>


Re: Is the compiler supposed to accept this?

2013-07-10 Thread Kenji Hara
This is accepts-valid behavior.

function(parameters) => expr

means the combination of:

1. specifying "context pointer is not necessary"
2. lambda syntax "(parameters) => expr"

I think website documentation has a bug.

Kenji Hara
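
A sketch of the combination described above, side by side with the longhand form:

```d
void main()
{
    // shorthand lambda: may capture context (inferred as needed)
    auto d = (int x) => x + 1;

    // 'function' keyword: asserts no context pointer is needed
    auto f = function(int x) => x + 1;

    // the equivalent longhand body form
    auto g = function(int x) { return x + 1; };

    assert(d(2) == 3 && f(2) == 3 && g(2) == 3);
}
```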



2013/7/10 Brian Schott 

> While finishing up work on my parser and grammar specification I found
> this in container.d:
>
> return equal!(function(Elem a, Elem b) => !_less(a,b) && !_less(b,a))
>  (thisRange, thatRange);
>
> It seems to be some strange hybrid of the function literal syntax and the
> lambda syntax. It's not documented anywhere (surprise!) and I'm not sure if
> I should support it or file an accepts-invalid bug against DMD.
>


Re: The interaction of encapsulation and properties in D

2013-07-10 Thread Jesse Phillips

On Thursday, 11 July 2013 at 02:55:24 UTC, Luís Marques wrote:
So, summing it up: even assuming that performance is not an 
issue, does the advice to always encapsulate your member 
variables (as one would do, for instance, in idiomatic Java) 
actually make sense for D, or would you recommend using public 
member variables when that is more straightforward, and the 
indirection is not yet needed?


In general it seems people just use public fields (I do, sometimes 
annotated with @property). Considering the history this makes sense; 
optional parens "addressed" properties (see the property discussions).


One of the main problems has been that public fields can be passed 
to ref parameters while this is not true for getters. I see this 
as a deficiency in @property, but I don't recall all the details of 
the problem.


Re: The interaction of encapsulation and properties in D

2013-07-10 Thread Luís Marques
(Of course I forgot to change the variable to private in the 
versions with accessors)


Re: isCallableCTFE trait to test whether an expression is callable during CTFE

2013-07-10 Thread Kenji Hara

On Thursday, 11 July 2013 at 03:10:38 UTC, timotheecour wrote:

On Thursday, 11 July 2013 at 02:17:13 UTC, Timothee Cour wrote:

[snip]


can we add this to std.traits?
it allows (among other things) to write unittests for CTFE, etc.


Phobos has an internal helper function for testing CTFEable.
https://github.com/D-Programming-Language/phobos/blob/master/std/exception.d#L1322

Kenji Hara


Re: isCallableCTFE trait to test whether an expression is callable during CTFE

2013-07-10 Thread timotheecour

On Thursday, 11 July 2013 at 02:17:13 UTC, Timothee Cour wrote:

template isCallableCTFE(alias fun){
    template isCallableCTFE_aux(alias T){
        enum isCallableCTFE_aux=T;
    }
    enum isCallableCTFE=__traits(compiles,isCallableCTFE_aux!(fun()));
}

template isCallableCTFE2(fun...){
    enum isCallableCTFE2=true;
}


unittest{
    int fun1(){
        return 1;
    }
    auto fun1_N(){
        import std.array;
        // would trigger "Error: gc_malloc cannot be interpreted at
        // compile time", because it has no available source code due
        // to a bug
        return [1].array;
    }
    int fun2(int x){
        return 1;
    }
    auto fun2_N(int x){
        import std.array;
        // same as fun1_N
        return [1].array;
    }

    int a1;
    enum a2=0;

    static assert(!isCallableCTFE!(()=>a1));
    static assert(isCallableCTFE!(()=>a2));

    static assert(isCallableCTFE!fun1);
    static assert(!isCallableCTFE!fun1_N);

    static assert(isCallableCTFE!(()=>fun2(0)));
    static assert(!isCallableCTFE!(()=>fun2_N(0)));
    // NOTE: an alternate syntax which could be implemented would be:
    // static assert(!isCallableCTFE!(fun2_N, 0));
}



can we add this to std.traits?
it allows (among other things) to write unittests for CTFE, etc.


Re: isCallableCTFE trait to test whether an expression is callable during CTFE

2013-07-10 Thread Timothee Cour
On Wed, Jul 10, 2013 at 7:16 PM, Timothee Cour wrote:

> template isCallableCTFE(alias fun){
> template isCallableCTFE_aux(alias T){
> enum isCallableCTFE_aux=T;
>  }
> enum isCallableCTFE=__traits(compiles,isCallableCTFE_aux!(fun()));
> }
>
> template isCallableCTFE2(fun...){
> enum isCallableCTFE2=true;
> }
>
>
> unittest{
>  int fun1(){
> return 1;
> }
>  auto fun1_N(){
> import std.array;
> //would return Error: gc_malloc cannot be interpreted at compile time,
> because it has no available source code due to a bug
>  return [1].array;
> }
> int fun2(int x){
>  return 1;
> }
> auto fun2_N(int x){
>  import std.array;
> //same as fun1_N
> return [1].array;
>  }
>
> int a1;
> enum a2=0;
>
> static assert(!isCallableCTFE!(()=>a1));
> static assert(isCallableCTFE!(()=>a2));
>
> static assert(isCallableCTFE!fun1);
> static assert(!isCallableCTFE!fun1_N);
>
> static assert(isCallableCTFE!(()=>fun2(0)));
> static assert(!isCallableCTFE!(()=>fun2_N(0)));
>  //NOTE: an alternate syntax which could be implemented would be:
>  // static assert(!isCallableCTFE!(fun2_N, 0));
> }
>


can we add this to std.traits?
it allows (among other things) to write unittests for CTFE, etc.


The interaction of encapsulation and properties in D

2013-07-10 Thread Luís Marques
Maybe this has already been discussed (if so sorry), but I would 
like to ask your opinion about the following.


The condensed version: because we have properties, is it wise to 
use public member variables in D, in some circumstances?


The long version, starting with the context: AFAIK, standard 
encapsulation wisdom says you should never expose member 
variables directly, like this:


class X
{
public int phone;
}

void main()
{
X x = new X();
x.phone = 555123;
}

The usual admonitions apply, such as that you might later want to 
change the internal representation of the 'phone'. So, as with every 
problem in life, you decide to add one more level of indirection, 
and you implement some form of getters and setters:


// in Java or in D without using properties...
class X
{
public int _phone;

void setPhone(int v)
{
_phone = v;
}

int getPhone()
{
return _phone;
}
}

void main()
{
X x = new X();
x.setPhone(555123);
}

With such encapsulation, once you realise the silliness of having 
a phone number stored in an int, this is easy to change without 
breaking the client code:


// in D without using properties
class X
{
public string _phone;

void setPhone(int v)
{
_phone = to!string(v);
}

void setPhone(string v)
{
_phone = v;
}

int getPhone()
{
return to!int(_phone);
}
}

void main()
{
X x = new X();
x.setPhone(555123); // did not break the old client code
x.setPhone("555 123"); // the new stringified API also works
}

Of course, in D we would do this with properties instead:

class X
{
public string _phone;

@property void phone(int v)
{
_phone = to!string(v);
}

@property int phone()
{
return to!int(_phone);
}

@property void phone(string v)
{
_phone = v;
}

@property string phoneString()
{
return _phone;
}
}

void main()
{
X x = new X();
x.phone = 555123;
x.phone = "555 123";
}

But when we use the property syntax to implement the 
encapsulation, the client code does not have to change; there is 
source code compatibility on the client side: the same client 
code can be used for both the direct member variable 
implementation and the property accessors implementation, as long 
as we are willing to recompile. The client code is already 
shielded from such implementation issues; in a certain sense the 
'phone' "field" was already encapsulated.
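
The source-compatibility argument can be demonstrated directly (X1, X2, and client are hypothetical condensed versions of the examples above):

```d
import std.conv : to;

// version 1: a plain public field
class X1
{
    public int phone;
}

// version 2: property accessors over a changed representation
class X2
{
    private string _phone;
    @property void phone(int v) { _phone = to!string(v); }
    @property int phone() { return to!int(_phone); }
}

// identical client code compiles against either implementation
void client(T)(T x)
{
    x.phone = 555123;
    assert(x.phone == 555123);
}

void main()
{
    client(new X1);
    client(new X2);
}
```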


Why, then, should one use (traditional, property based) 
encapsulation *proactively*? After all, if we later decide that 
we want to sanitize the setter inputs, or add logging, or trigger 
an action, or something like that, we can always refactor the 
public member variable into the respective property accessors.


So why not, instead, just use a simple public member variable, 
when that is cleaner (less boilerplate code) and more 
straightforward, and it is not obvious that we will really need 
the extra indirection?


A case where such reasoning would not apply is when source code 
compatibility is not enough, such as when you have a D library. 
But this could be solved by something like...


class X
{
@property public int phone;
}

...which would under the covers generate getter and setter 
properties, so that if you later wanted to encapsulate it without 
breaking binary compatibility you could do so.


So, summing it up: even assuming that performance is not an 
issue, does the advice to always encapsulate your member 
variables (as one would do, for instance, in idiomatic Java) 
actually make sense for D, or would you recommend using public 
member variables when that is more straightforward, and the 
indirection is not yet needed?


isCallableCTFE trait to test whether an expression is callable during CTFE

2013-07-10 Thread Timothee Cour
template isCallableCTFE(alias fun){
    template isCallableCTFE_aux(alias T){
        enum isCallableCTFE_aux=T;
    }
    enum isCallableCTFE=__traits(compiles,isCallableCTFE_aux!(fun()));
}

template isCallableCTFE2(fun...){
    enum isCallableCTFE2=true;
}


unittest{
    int fun1(){
        return 1;
    }
    auto fun1_N(){
        import std.array;
        // would trigger "Error: gc_malloc cannot be interpreted at
        // compile time", because it has no available source code due
        // to a bug
        return [1].array;
    }
    int fun2(int x){
        return 1;
    }
    auto fun2_N(int x){
        import std.array;
        // same as fun1_N
        return [1].array;
    }

    int a1;
    enum a2=0;

    static assert(!isCallableCTFE!(()=>a1));
    static assert(isCallableCTFE!(()=>a2));

    static assert(isCallableCTFE!fun1);
    static assert(!isCallableCTFE!fun1_N);

    static assert(isCallableCTFE!(()=>fun2(0)));
    static assert(!isCallableCTFE!(()=>fun2_N(0)));
    // NOTE: an alternate syntax which could be implemented would be:
    // static assert(!isCallableCTFE!(fun2_N, 0));
}


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread Brian Rogoff

On Wednesday, 10 July 2013 at 14:01:51 UTC, Paulo Pinto wrote:

On Wednesday, 10 July 2013 at 13:30:26 UTC, bearophile wrote:

thedeemon:


No mature GCed languages behave that bad.


I think there is one JavaVM that manages to avoid part of the 
problem you explain (and maybe it needs special kernel 
support). I think all other JavaVMs suffer it, more or less.


Bye,
bearophile


Yes, the C4 garbage collector in the Azul JVM

http://www.azulsystems.com/products/zing/whatisit

http://www.azulsystems.com/products/zing/c4-java-garbage-collector-wp

They used to have special hardware, but now they use standard 
kernels.


RedHat is also planning to have a go at it for the OpenJDK,

http://rkennke.wordpress.com/2013/06/10/shenandoah-a-pauseless-gc-for-openjdk/


--
Paulo


Interesting. There was some discussion of adding a 'pauseless' GC 
to Go as well, here


https://groups.google.com/forum/#!topic/golang-dev/GvA0DaCI2BU

and in that discussion Gil Tene, one of the authors of Azul, 
opines


"Starting with a precise and generational stop-the-world 
implementation that is robust is a must, and a good launching pad 
towards a concurrent compacting collector (which is what a 
"pauseless" collector must be in server-scale environments). Each 
of those qualities (precise, generational) slaps serious 
requirements on the execution environment and on the compilers 
(whether they are pre-compilers or JIT compilers doesn't matter): 
precise collectors require full identification of all references 
at code safepoints, and also require a robust safepoint 
mechanism. Code safepoints must be frequent (usually devolve to 
being at every method entry and loop back edge), and support in 
non-compiler-generated code (e.g. library and runtime code 
written in C/C++) usually involves some form of reference handle 
support around safepoints. Generational collectors require a 
write barrier (a ref-store barrier to be precise) with full 
coverage for any heap reference store operations (in 
compiler-generated code and in all runtime code).


It is my opinion that investing in the above capabilities early 
in the process (i.e. start now!) is critical. Environments that 
skip this step for too long and try to live with conservative GC 
in order to avoid putting in the required work for supporting 
precise collectors in the compilers and runtime and libraries 
find themselves heavily invested in compiler code that would need 
to be completely re-vamped to move forward. ..."


Sounds like that precise GC talk at DConf was quite timely. Let's 
hope that prediction about being too heavily invested in 
conservative GC dependencies isn't too true!


-- Brian




Re: bug with CTFE std.array.array ?

2013-07-10 Thread Timothee Cour
On Wed, Jul 10, 2013 at 6:11 PM, Jonathan M Davis wrote:

> On Wednesday, July 10, 2013 18:06:00 Timothee Cour wrote:
> > import std.array;
> >
> > void main(){
> > //enum a1=[1].array;//NG: Error: gc_malloc cannot be interpreted at
> > compile time
> > enum a2=" ".array;//OK
> >
> > import std.string;
> > //enum a3=" ".splitLines.array;//NG
> > enum a4="".splitLines.array;//OK
> > enum a5=" ".split.array;//OK
> > //enum a6=" a ".split.array;//NG
> > import std.algorithm:filter;
> > enum a7=" a ".split.filter!(a=>true).array;
> > auto a8=" a ".split.array;
> > assert(a8==a7);
> > enum a9=[1].filter!(a=>true).array;//OK
> > }
> >
> >
> > I don't understand why the NG above fail (with Error: gc_malloc cannot be
> > interpreted at compile time)
> >
> > furthermore, it seems we can bypass the CT error with interleaving
> > filter!(a=>true) (see above), which is even weirder.
>
> I expect that it's a matter of code path. Some code paths don't hit
> anything
> which is illegal in CTFE, but apparently at least one of them uses
> gc_malloc,
> which is apparently illegal in CTFE, so when it gets hit, CTFE fails.
> Presumably, to fix it, either it needs to be changed to not use gc_malloc
> or
> changed so that it doesn't use gc_malloc with CTFE (by using __ctfe). I'd
> have
> to actually dig through the code to verify exactly what it's doing though.
>
> - Jonathan M Davis
>

in the meantime, one can do:

auto ctfe_array(T)(T a){
    import std.algorithm:filter;
    import std.array:array;
    return a.filter!(a=>true).array;
}
unittest{
    //enum a1=[1].array;//NG
    enum a2=[1].ctfe_array;//OK
}

but why not change std.array.array to something like this:

auto array(T)(T a){
    if(__ctfe){
        import std.algorithm:filter;
        return a.filter!(a=>true).arrayImpl;
    }
    else{
        return arrayImpl(a);
    }
}

auto arrayImpl(T)(T a){...} // move current implementation here

That would at least temporarily solve the problem, until a better fix is
found.
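
The general shape of the __ctfe branching pattern suggested above looks like this (a sketch; makeArray is a hypothetical stand-in for a function with a CTFE-hostile fast path; note that __ctfe is an ordinary runtime variable tested with a plain if, not static if):

```d
T[] makeArray(T)(size_t n, T fill)
{
    if (__ctfe)
    {
        // CTFE-friendly path: plain appending
        T[] result;
        foreach (i; 0 .. n)
            result ~= fill;
        return result;
    }
    else
    {
        // runtime path: allocate once, then fill
        auto result = new T[](n);
        result[] = fill;
        return result;
    }
}

enum ct = makeArray(3, 7);          // forces the CTFE branch
static assert(ct == [7, 7, 7]);

void main()
{
    assert(makeArray(3, 7) == [7, 7, 7]);   // runtime branch
}
```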


Re: Is the compiler supposed to accept this?

2013-07-10 Thread deadalnix

On Wednesday, 10 July 2013 at 21:33:00 UTC, Brian Schott wrote:

On Wednesday, 10 July 2013 at 21:16:30 UTC, Timon Gehr wrote:

// (parameters) => expression ?

In any case, please consider that it actually makes no sense 
to restrict the expressiveness of the type signature based on 
how the function body is specified. (Why on earth should one 
have to use the { return expression; } syntax just in order to 
be able to assert that no context pointer is required?)


The documentation is in error here.


"(parameters) => expression" is mentioned in the source and I 
agree it's valid. I must have forgotten to copy-paste it.


I don't agree that "function(parameters) => expression" is 
valid though. Can any of the DMD devs clear up if this is 
intended?


I don't see how the DMD implementation matters here. This is a 
language design issue.


Re: bug with CTFE std.array.array ?

2013-07-10 Thread Jonathan M Davis
On Wednesday, July 10, 2013 18:06:00 Timothee Cour wrote:
> import std.array;
> 
> void main(){
> //enum a1=[1].array;//NG: Error: gc_malloc cannot be interpreted at
> compile time
> enum a2=" ".array;//OK
> 
> import std.string;
> //enum a3=" ".splitLines.array;//NG
> enum a4="".splitLines.array;//OK
> enum a5=" ".split.array;//OK
> //enum a6=" a ".split.array;//NG
> import std.algorithm:filter;
> enum a7=" a ".split.filter!(a=>true).array;
> auto a8=" a ".split.array;
> assert(a8==a7);
> enum a9=[1].filter!(a=>true).array;//OK
> }
> 
> 
> I don't understand why the NG above fail (with Error: gc_malloc cannot be
> interpreted at compile time)
> 
> furthermore, it seems we can bypass the CT error with interleaving
> filter!(a=>true) (see above), which is even weirder.

I expect that it's a matter of code path. Some code paths don't hit anything 
which is illegal in CTFE, but apparently at least one of them uses gc_malloc, 
which is apparently illegal in CTFE, so when it gets hit, CTFE fails. 
Presumably, to fix it, either it needs to be changed to not use gc_malloc or 
changed so that it doesn't use gc_malloc with CTFE (by using __ctfe). I'd have 
to actually dig through the code to verify exactly what it's doing though.

- Jonathan M Davis


bug with CTFE std.array.array ?

2013-07-10 Thread Timothee Cour
import std.array;

void main(){
  //enum a1=[1].array;//NG: Error: gc_malloc cannot be interpreted at compile time
  enum a2=" ".array;//OK

  import std.string;
  //enum a3=" ".splitLines.array;//NG
  enum a4="".splitLines.array;//OK
  enum a5=" ".split.array;//OK
  //enum a6=" a ".split.array;//NG
  import std.algorithm:filter;
  enum a7=" a ".split.filter!(a=>true).array;
  auto a8=" a ".split.array;
  assert(a8==a7);
  enum a9=[1].filter!(a=>true).array;//OK
}


I don't understand why the NG cases above fail (with "Error: gc_malloc
cannot be interpreted at compile time").

Furthermore, it seems we can bypass the compile-time error by interleaving
filter!(a=>true) (see above), which is even weirder.


Re: D graph library -- update

2013-07-10 Thread Joseph Rushton Wakeling
On 07/11/2013 02:18 AM, bearophile wrote:
> If you want a meaningful memory comparison then perhaps you need a 10 or 100 
> (or
> more) times larger graph.

I know, and it's coming. :-)  The main memory-related issues will probably not
show up in a situation like this where all we're doing is storing the graph
data, but in the case where algorithms are being performed on the data.



Re: D graph library -- update

2013-07-10 Thread bearophile

Joseph Rushton Wakeling:

The memory usage for the example D code is slightly higher than 
for its
comparable igraph C code, clocking in at about 2MB as opposed 
to 1.7.


If you want a meaningful memory comparison then perhaps you need 
a 10 or 100 (or more) times larger graph.


Bye,
bearophile


Re: D graph library -- update

2013-07-10 Thread Joseph Rushton Wakeling
On 07/11/2013 01:59 AM, Joseph Rushton Wakeling wrote:
> There is also a simple test file that can be used for benchmarking against
> comparable igraph code.  With the current method of adding edges one by one,
> this code already benchmarks as faster than its igraph equivalent, running in
> 2.4s on my machine when compiled with gdmd -O -release -inline and 1.4s when
> compiled with ldmd2 and the same flags -- compared to 3s for the igraph code
> written in C.

Comparable igraph code attached. :-)  You'll need to download and install igraph
0.6.5 if you want to try this out, then compile with

gcc -O3 -o graph50 graph50.c -ligraph

There's some commented out stuff in the foo() function which implements the
alternative means of adding edges to an igraph_graph_t, namely by adding them
all in a big vector.  This makes igraph's performance _much_ faster than the
current D implementation.

A note on the data model.  The _head and _tail arrays I guess are fairly
self-explanatory, being the start and end vertices of edges in the graph.
_indexHead and _indexTail are edge IDs sorted respectively by head and tail
values.  Finally, _sumHead and _sumTail are vectors whose v'th entry corresponds
to the total number of edges whose head (or tail) vertices have IDs less than v.

In other words, if we want to find out the number of outgoing edges from a
vertex v, we can do so by calculating _sumHead[v + 1] - _sumHead[v].  If we want
the number of incoming edges, we do the same but with _sumTail.
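
In code, the degree lookup described above amounts to (a sketch with hypothetical names mirroring the post's _sumHead description):

```d
// sumHead[v] = number of edges whose head vertex ID is < v, so the
// out-degree of v is the difference between adjacent entries
size_t outDegree(const size_t[] sumHead, size_t v)
{
    return sumHead[v + 1] - sumHead[v];
}

void main()
{
    // 3 vertices; edge heads are 0, 0, 2, giving sumHead = [0, 2, 2, 3]
    size_t[] sumHead = [0, 2, 2, 3];
    assert(outDegree(sumHead, 0) == 2);
    assert(outDegree(sumHead, 1) == 0);
    assert(outDegree(sumHead, 2) == 1);
}
```

The same function applied to _sumTail gives the in-degree.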

By similar tricks, we can cheaply list the neighbours of a vertex.  I'm
reasonably confident that the D implementation here will allow it to reliably
beat igraph for algorithms that rely on access to this information.

The cost of all this is that it's expensive to add edges, or at least, it's
expensive to add edges one at a time -- but this is arguably a cost worth
bearing for the speed and efficiency gains elsewhere.
#include <igraph.h>
#include <stdio.h>

void foo(void)
{

	int edges[200] =
	  { 0, 24,
		0, 25,
		0, 16,
		0, 26,
		1, 37,
		1, 38,
		1, 11,
		1, 14,
		2, 19,
		2, 33,
		2, 35,
		2, 27,
		3, 40,
		3, 39,
		3, 23,
		3, 19,
		4, 18,
		4, 31,
		4, 37,
		4, 7,
		5, 17,
		5, 49,
		5, 18,
		5, 28,
		6, 27,
		6, 42,
		6, 9,
		6, 44,
		7, 19,
		7, 18,
		7, 49,
		8, 36,
		8, 43,
		8, 41,
		8, 33,
		9, 14,
		9, 11,
		9, 46,
		10, 26,
		10, 36,
		10, 35,
		10, 41,
		11, 32,
		11, 21,
		12, 21,
		12, 23,
		12, 29,
		12, 35,
		13, 45,
		13, 25,
		13, 38,
		13, 29,
		14, 16,
		14, 39,
		15, 48,
		15, 49,
		15, 44,
		15, 38,
		16, 47,
		16, 43,
		17, 25,
		17, 41,
		17, 49,
		18, 28,
		19, 40,
		20, 33,
		20, 21,
		20, 24,
		20, 31,
		21, 32,
		22, 32,
		22, 46,
		22, 34,
		22, 37,
		23, 30,
		23, 26,
		24, 43,
		24, 30,
		25, 42,
		26, 35,
		27, 39,
		27, 46,
		28, 47,
		28, 34,
		29, 33,
		29, 36,
		30, 38,
		30, 34,
		31, 48,
		31, 45,
		32, 46,
		34, 45,
		36, 48,
		37, 48,
		39, 40,
		40, 47,
		41, 44,
		42, 47,
		42, 44,
		43, 45
	  };

	int i;
	igraph_t g;
//	igraph_vector_t e;

	igraph_empty(&g, 50, IGRAPH_UNDIRECTED);
/*	igraph_vector_init(&e, 200);
	for(i = 0; i < 200; ++i)
		VECTOR(e)[i] = edges[i];
	igraph_add_edges(&g, &e, 0);*/
	for(i = 0; i < 100; ++i)
		igraph_add_edge(&g, edges[2*i], edges[2*i + 1]);

/*	printf("Number of vertices: %d\n", (int) igraph_vcount(&g));
	printf("Number of edges: %d\n", (int) igraph_ecount(&g));*/

	//igraph_vector_destroy(&e);
	igraph_destroy(&g);
}

int main(void)
{
	int i;
	for(i = 0; i < 1; ++i)
		foo();
}


D graph library -- update

2013-07-10 Thread Joseph Rushton Wakeling
Hi all,

Following earlier discussion about a D graph library, this evening I
sat down and had a go at coming up with some basic code to support such a
venture.

You can find it here: https://github.com/WebDrake/Dgraph

This takes the basic data structure from the widely-used igraph library,
but builds around it using idiomatic D structures and algorithms.

The code currently consists of the basic data structure, implemented as a final
class with methods to extract the number of vertices, the list of edges, and the
degree and neighbours of a vertex.

There is also a simple test file that can be used for benchmarking against
comparable igraph code.  With the current method of adding edges one by one,
this code already benchmarks as faster than its igraph equivalent, running in
2.4s on my machine when compiled with gdmd -O -release -inline and 1.4s when
compiled with ldmd2 and the same flags -- compared to 3s for the igraph code
written in C.

However, when igraph's option to add the edges all in one go in a vector is
enabled, igraph is significantly faster.  This surely reflects a mix of memory
management (how many allocs/appends?) and also the amount of sorting and other
updates that occur when edges are added.  So, I think that igraph can probably
still be beaten here.

The memory usage for the example D code is slightly higher than for its
comparable igraph C code, clocking in at about 2MB as opposed to 1.7.

I've chosen igraph as a point of comparison because it's known for being both
the fastest and most scalable graph library out there.  Many of igraph's design
choices seem focused on minimizing memory usage, sometimes at the expense of
all-out speed: for example, if an algorithm needs an adjacency list
representation of the graph, igraph will generate one at that moment, and
destroy it afterwards.

However, on the basis of the simple work done here, I'm fairly confident that D
can do better.  The code here is _much_ simpler than the equivalent functions in
igraph, and has not yet been optimized in any way either for speed or for memory
management.  Yet it seems to be performing on par with or better than igraph
within the limits of its current design constraints.

I'll be trying to implement a few additional little pieces of functionality in
the next days, perhaps some graph metrics which can give another point of
performance comparison.

Anyway, comments welcome, both positive and negative.

Thanks & best wishes,

-- Joe


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread John Colvin

On Wednesday, 10 July 2013 at 22:22:41 UTC, H. S. Teoh wrote:

On Wed, Jul 10, 2013 at 11:32:29PM +0200, John Colvin wrote:
On Wednesday, 10 July 2013 at 21:05:32 UTC, Jonathan A Dunlap 
wrote:

>My 2cents: for D to be successful for the game development
>community, it has to be possible to mostly sidestep the GC or opt
>into a minimal one like ARC. Granted, this is a bit premature
>considering that OpenGL library support is still in alpha quality.


I've noticed that when you reply to a thread, you reply to the most
recent response, irrespective of context. This is a bit confusing to
those of us who view the group in a threaded layout. For example here
Walter is talking about not having inflight audio on a discontinued?
military recon plane and then there's you listed as replying to him,
talking about ARC and GCs!


There's a long-standing bug in the mailing list interface to the
forum that rewrites message IDs when it shouldn't, thus breaking
threads. This problem has been irking me for a long time now, but it
seems nobody is interested in fixing it. :-(


T


There is, but I don't think this was one of those cases. That 
manifests as an entirely new thread in the forum interface.


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread H. S. Teoh
On Wed, Jul 10, 2013 at 11:32:29PM +0200, John Colvin wrote:
> On Wednesday, 10 July 2013 at 21:05:32 UTC, Jonathan A Dunlap wrote:
> >My 2cents: for D to be successful for the game development
> >community, it has to be possible to mostly sidestep the GC or opt
> >into a minimal one like ARC. Granted, this is a bit premature
> >considering that OpenGL library support is still in alpha quality.
> 
> I've noticed that when you reply to a thread, you reply to the most
> recent response, irrespective of context. This is a bit confusing to
> those of us who view the group in a threaded layout. For example
> here Walter is talking about not having inflight audio on a
> discontinued? military recon plane and then there's you listed as
> replying to him, talking about ARC and GCs!

There's a long-standing bug in the mailing list interface to the forum
that rewrites message IDs when it shouldn't, thus breaking threads. This
problem has been irking me for a long time now, but it seems nobody is
interested to fix it. :-(


T

-- 
There are four kinds of lies: lies, damn lies, and statistics.


Re: Is the compiler supposed to accept this?

2013-07-10 Thread Timon Gehr

On 07/10/2013 11:32 PM, Brian Schott wrote:

On Wednesday, 10 July 2013 at 21:16:30 UTC, Timon Gehr wrote:

// (parameters) => expression ?

In any case, please consider that it actually makes no sense to
restrict the expressiveness of the type signature based on how the
function body is specified. (Why on earth should one have to use the {
return expression; } syntax just in order to be able to assert that no
context pointer is required?)

The documentation is in error here.


"(parameters) => expression" is mentioned in the source and I agree it's
valid. I must have forgotten to copy-paste it.

I don't agree that "function(parameters) => expression" is valid though.


Yes, you said that. What I do not understand is why. I think common 
sense would mandate that it should be valid syntax.



Can any of the DMD devs clear up if this is intended?


This is the relevant pull:

https://github.com/D-Programming-Language/dmd/commit/675898721c04d0bf155a85abf986eae99c37c0dc



Re: Is the compiler supposed to accept this?

2013-07-10 Thread Ali Çehreli

On 07/10/2013 02:32 PM, Brian Schott wrote:
> On Wednesday, 10 July 2013 at 21:16:30 UTC, Timon Gehr wrote:
>> The documentation is in error here.
>
> "(parameters) => expression" is mentioned in the source and I agree it's
> valid. I must have forgotten to copy-paste it.
>
> I don't agree that "function(parameters) => expression" is valid though.
> Can any of the DMD devs clear up if this is intended?

According to spec "function(parameters) => expression" is not valid.

  http://dlang.org/expression.html

Lambda:
Identifier => AssignExpression
ParameterAttributes => AssignExpression

Neither of those allow 'function' or 'delegate' keyword. However, I 
agree with Timon Gehr that the spec should be changed to match the 
current behavior.
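
For reference, the forms in question can be written out side by side. This is a sketch; the last form is the disputed `function (parameters) => expression` one, which dmd accepts even though the grammar above does not list it:

```d
import std.stdio;

void main()
{
    // Forms covered by the spec grammar:
    auto a = (int x) => x + 1;                   // ParameterAttributes => AssignExpression
    auto b = (int x) { return x + 1; };          // (parameters) { statements... }
    auto c = function (int x) { return x + 1; }; // function (parameters) { statements... }

    // The disputed form: the 'function' keyword combined with '=>'.
    // dmd accepts it, but it is absent from the Lambda grammar rule.
    auto d = function (int x) => x + 1;

    writeln(a(1), " ", b(1), " ", c(1), " ", d(1)); // each yields 2
}
```

The `function` keyword only pins down that no context pointer is captured; the body syntax is orthogonal, which is Timon Gehr's point above.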


Ali



Re: [OT] Why mobile web apps are slow

2013-07-10 Thread John Colvin
On Wednesday, 10 July 2013 at 21:05:32 UTC, Jonathan A Dunlap 
wrote:
My 2cents: for D to be successful for the game development 
community, it has to be possible to mostly sidestep the GC or 
opt into a minimal one like ARC. Granted, this is a bit 
premature considering that OpenGL library support is still in 
alpha quality.


I've noticed that when you reply to a thread, you reply to the 
most recent response, irrespective of context. This is a bit 
confusing to those of us who view the group in a threaded layout. 
For example here Walter is talking about not having in-flight 
audio on a discontinued? military recon plane and then there's 
you listed as replying to him, talking about ARC and GCs!


Re: Is the compiler supposed to accept this?

2013-07-10 Thread Brian Schott

On Wednesday, 10 July 2013 at 21:16:30 UTC, Timon Gehr wrote:

// (parameters) => expression ?

In any case, please consider that it actually makes no sense to 
restrict the expressiveness of the type signature based on how 
the function body is specified. (Why on earth should one have 
to use the { return expression; } syntax just in order to be 
able to assert that no context pointer is required?)


The documentation is in error here.


"(parameters) => expression" is mentioned in the source and I 
agree it's valid. I must have forgotten to copy-paste it.


I don't agree that "function(parameters) => expression" is valid 
though. Can any of the DMD devs clear up if this is intended?


Re: Is the compiler supposed to accept this?

2013-07-10 Thread Timon Gehr

On 07/10/2013 07:47 PM, Brian Schott wrote:

There are several comments in the part of the dmd front end that show
the syntax that the parser is looking for. Here's a listing:

// function type (parameters) { statements... }
// delegate type (parameters) { statements... }
// function (parameters) { statements... }
// delegate (parameters) { statements... }
// function { statements... }
// delegate { statements... }
// (parameters) { statements... }
// { statements... }
// identifier => expression

Based on the fact that "function (parameters) => expression" isn't
written out like the others,


You mean, like some others.


I'm going to file an [...] bug for this.


// (parameters) => expression ?

In any case, please consider that it actually makes no sense to restrict 
the expressiveness of the type signature based on how the function body 
is specified. (Why on earth should one have to use the { return 
expression; } syntax just in order to be able to assert that no context 
pointer is required?)


The documentation is in error here.


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread Jonathan A Dunlap
My 2cents: for D to be successful for the game development 
community, it has to be possible to mostly sidestep the GC or opt 
into a minimal one like ARC. Granted, this is a bit premature 
considering that OpenGL library support is still in alpha quality.


Re: Is the compiler supposed to accept this?

2013-07-10 Thread Timon Gehr

On 07/10/2013 08:47 PM, Brian Schott wrote:

On Wednesday, 10 July 2013 at 18:17:07 UTC, Timon Gehr wrote:

Accepts-valid is not a bug.


I think you know what I meant. :-)


Well, I am going to guess you meant accepts-invalid, though I'd prefer 
if you didn't. :o)


Re: Rust moving away from GC into reference counting

2013-07-10 Thread Jonathan A Dunlap
Interesting read on the subject of ARC and GC: 
http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/


It does seem that ARC would be a preferred strategy for D to 
pursue (even if it's a first pass before resorting to GC sweeps).


Re: Rust moving away from GC into reference counting

2013-07-10 Thread Jonathan A Dunlap

Oops, just saw that this link was already posted here:
http://forum.dlang.org/thread/krhjq8$2r8l$1...@digitalmars.com


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread Walter Bright

On 7/10/2013 12:40 PM, Nick Sabalausky wrote:

On Wed, 10 Jul 2013 10:47:16 -0700
Walter Bright  wrote:


On 7/10/2013 7:52 AM, qznc wrote:

On Tuesday, 9 July 2013 at 20:39:30 UTC, Brian Rogoff wrote:

PS: That silhouette of the SR-71 at the point allocators are
mentioned sets a high bar for the design!


I did not like that analogy at all. Have you seen the user
interface of an SR-71?

http://www.likecool.com/Gear/Pic/Spy%20Plane%20SR71%20Blackbird%20Cockpit/big/Spy-Plane-SR71-Blackbird-Cockpit.jpg



I always love the utilitarian design of military cockpits. No logos,
no fake wood grain paneling, no styling, no color scheme, no
cupholder. All business.


No cupholders is all fine and good until the sky gets clogged with
aerial traffic and you have to put down your coffee.

*Then* you'll regret not opting for the deluxe package!



Ok, I'll concede the cupholder! But I'm holding the line on the 8-track.


Re: Current version of D.

2013-07-10 Thread Andrei Alexandrescu

On 7/10/13 11:27 AM, Rob T wrote:

On Wednesday, 10 July 2013 at 04:15:12 UTC, Kapps wrote:

The download page has the wrong link, it doesn't seem to have been
updated for 2.063.2. Can just manually add a .2 at the end, such as
http://downloads.dlang.org/releases/2013/dmd.2.063.2.zip


Thanks, that worked.

Yup the download page needs to be fixed asap.

--rt


Fixed and synced: 
https://github.com/D-Programming-Language/dlang.org/commit/a5ecc6be856fe203e619fe59f96cc5ad2c844c3a


Andrei


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread Nick Sabalausky
On Wed, 10 Jul 2013 10:47:16 -0700
Walter Bright  wrote:

> On 7/10/2013 7:52 AM, qznc wrote:
> > On Tuesday, 9 July 2013 at 20:39:30 UTC, Brian Rogoff wrote:
> >> PS: That silhouette of the SR-71 at the point allocators are
> >> mentioned sets a high bar for the design!
> >
> > I did not like that analogy at all. Have you seen the user
> > interface of an SR-71?
> >
> > http://www.likecool.com/Gear/Pic/Spy%20Plane%20SR71%20Blackbird%20Cockpit/big/Spy-Plane-SR71-Blackbird-Cockpit.jpg
> >
> 
> I always love the utilitarian design of military cockpits. No logos,
> no fake wood grain paneling, no styling, no color scheme, no
> cupholder. All business.

No cupholders is all fine and good until the sky gets clogged with
aerial traffic and you have to put down your coffee.

*Then* you'll regret not opting for the deluxe package!



Re: Is the compiler supposed to accept this?

2013-07-10 Thread Brian Schott

On Wednesday, 10 July 2013 at 18:17:07 UTC, Timon Gehr wrote:

Accepts-valid is not a bug.


I think you know what I meant. :-)


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread Paulo Pinto

On 10.07.2013 20:32, Jacob Carlborg wrote:

On 2013-07-10 19:25, Sean Kelly wrote:

On Jul 9, 2013, at 11:12 AM, Paulo Pinto  wrote:


A bit off-topic, but well worth reading,

http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/


Oh, regarding ObjC (and I'll qualify this by saying that I'm not an
ObjC programmer).  My understanding is that ObjC was originally
reference counted (ARC = Automatic Reference Counting).  Apple then
introduced a mark & sweep GC for ObjC and then in the following
release deprecated it and switched back to ARC for reasons I don't
recall.  However, reference counting *is* garbage collection, despite
what that slide suggests.  It just behaves in a manner that tends to
spread the load out more evenly across the application lifetime.


Objective-C originally used manual reference counting. Then Apple
created a GC (never available on iOS). Then they implemented ARC in
Clang. And now they have deprecated the GC and one should use ARC.



What sometimes goes unmentioned is that one of the reasons for going 
with ARC instead of a GC is that the Objective-C GC never worked 
properly, and ARC offers a better fit for the current state of the 
Objective-C world.

First of all, the GC was opt-in and very few libraries supported it.

Then there are the typical issues with a conservative GC in a C-based 
language, which led to tons of problems if one looks into developer 
forums.

Of course, it is not good PR to admit that the real technical reason 
they decided to go with ARC instead was that the GC implementation 
was a failure.



--
Paulo


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread Jacob Carlborg

On 2013-07-10 19:25, Sean Kelly wrote:

On Jul 9, 2013, at 11:12 AM, Paulo Pinto  wrote:


A bit off-topic, but well worth reading,

http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/


Oh, regarding ObjC (and I'll qualify this by saying that I'm not an ObjC 
programmer).  My understanding is that ObjC was originally reference counted (ARC = 
Automatic Reference Counting).  Apple then introduced a mark & sweep GC for 
ObjC and then in the following release deprecated it and switched back to ARC for 
reasons I don't recall.  However, reference counting *is* garbage collection, 
despite what that slide suggests.  It just behaves in a manner that tends to spread 
the load out more evenly across the application lifetime.


Objective-C originally used manual reference counting. Then Apple 
created a GC (never available on iOS). Then they implemented ARC in 
Clang. And now they have deprecated the GC and one should use ARC.


--
/Jacob Carlborg


Re: Current version of D.

2013-07-10 Thread Rob T

On Wednesday, 10 July 2013 at 04:15:12 UTC, Kapps wrote:
The download page has the wrong link, it doesn't seem to have 
been updated for 2.063.2. Can just manually add a .2 at the 
end, such as 
http://downloads.dlang.org/releases/2013/dmd.2.063.2.zip


Thanks, that worked.

Yup the download page needs to be fixed asap.

--rt


Re: Is the compiler supposed to accept this?

2013-07-10 Thread Timon Gehr

On 07/10/2013 07:47 PM, Brian Schott wrote:

There are several comments in the part of the dmd front end that show
the syntax that the parser is looking for. Here's a listing:

// function type (parameters) { statements... }
// delegate type (parameters) { statements... }
// function (parameters) { statements... }
// delegate (parameters) { statements... }
// function { statements... }
// delegate { statements... }
// (parameters) { statements... }
// { statements... }
// identifier => expression

Based on the fact that "function (parameters) => expression" isn't
written out like the others, I'm going to file an accepts-valid bug for
this.


Accepts-valid is not a bug.


Re: Memory management design

2013-07-10 Thread BLM768

On Wednesday, 10 July 2013 at 07:50:17 UTC, JS wrote:


One can already choose their own memory model in their own 
code. The issue is with the core library and pre-existing code 
that forces you to use the GC model.


It's possible to use your own memory model, but that doesn't mean 
it's necessarily convenient or safe, and there's no standardized 
method of going about it. If it becomes standardized, there's a 
much higher chance of the core library using it.


@nogc was proposed several years ago but never gained any 
footing. By having the ability to mark stuff as @nogc, Phobos 
could be migrated slowly and, at least, some libraries would be 
weaned off the GC and available.


I think the use of custom allocators would be better. Plug your 
own memory management model into D.


Memory management and memory allocation are not the same issue; 
from a purely theoretical standpoint, they're nearly orthogonal, 
at least without a compacting collector. If both the GC and 
the allocators are designed in a sufficiently flexible and 
modular manner, it would be possible to tie several 
general-purpose allocators to the GC at once. There are some 
allocators that can't be shoehorned into the GC model, but those 
would just return non-GC references.


On Wednesday, 10 July 2013 at 07:59:41 UTC, Dicebot wrote:
I think merging "scope" and "owned" can be usable enough to be 
interesting without introducing any new concepts. Simply make 
it that "scope" in a variable declaration means it is a 
stack-allocated entity with unique ownership and "scope" as a 
function parameter attribute is required to accept scope data, 
verifying no references to it are taken / stored. Expecting 
mandatory deadalnix comment about lifetime definition ;)


Most of the functionality of "owned" is redundant, but there are 
still some corner cases where it could be useful. The idea behind 
it is to have it function very much like a pointer in C++ code. 
For non-reference types, you could just use a pointer, but using 
a pointer with reference types introduces an extra dereference 
operation to get to the real data.


This is something that could be implemented as a library type 
rather than an intrinsic part of the language, and that would 
probably be better because it's really sort of a low-level tool.


Only thing I have no idea about is if "scope" attribute should 
be shallow or transitive. Former is dangerous, latter severely 
harms usability.


I'm not sure how shallow "scope" would be dangerous. If a "scope" 
object contains non-scope references to GC-allocated data, it's 
perfectly safe to stash those references somewhere because the 
target of the reference won't be collected. If the object 
contains "scope" members (if the language even allows that), then 
references to those members should actually inherit the 
container's "scope" status, not the members' "scope" status, 
because "scope" would be "overloaded" in that case to mean 
"packed into a (potentially heap allocated) object". "scope" is 
all about where the objects might be allocated, which is not a 
transitive property.


Re: Is the compiler supposed to accept this?

2013-07-10 Thread Brian Schott
There are several comments in the part of the dmd front end that 
show the syntax that the parser is looking for. Here's a listing:


// function type (parameters) { statements... }
// delegate type (parameters) { statements... }
// function (parameters) { statements... }
// delegate (parameters) { statements... }
// function { statements... }
// delegate { statements... }
// (parameters) { statements... }
// { statements... }
// identifier => expression

Based on the fact that "function (parameters) => expression" 
isn't written out like the others, I'm going to file an 
accepts-valid bug for this.


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread Walter Bright

On 7/10/2013 7:52 AM, qznc wrote:

On Tuesday, 9 July 2013 at 20:39:30 UTC, Brian Rogoff wrote:

PS: That silhouette of the SR-71 at the point allocators are mentioned sets a
high bar for the design!


I did not like that analogy at all. Have you seen the user interface of an 
SR-71?

http://www.likecool.com/Gear/Pic/Spy%20Plane%20SR71%20Blackbird%20Cockpit/big/Spy-Plane-SR71-Blackbird-Cockpit.jpg



I always love the utilitarian design of military cockpits. No logos, no fake 
wood grain paneling, no styling, no color scheme, no cupholder. All business.


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread bearophile

Sean Kelly:


However, reference counting *is* garbage collection,


Of course, it's explained well here:

http://www.cs.virginia.edu/~cs415/reading/bacon-garbage.pdf

Bye,
bearophile


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread Sean Kelly
On Jul 9, 2013, at 11:12 AM, Paulo Pinto  wrote:

> A bit off-topic, but well worth reading,
> 
> http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/

Oh, regarding ObjC (and I'll qualify this by saying that I'm not an ObjC 
programmer).  My understanding is that ObjC was originally reference counted 
(ARC = Automatic Reference Counting).  Apple then introduced a mark & sweep GC 
for ObjC and then in the following release deprecated it and switched back to 
ARC for reasons I don't recall.  However, reference counting *is* garbage 
collection, despite what that slide suggests.  It just behaves in a manner that 
tends to spread the load out more evenly across the application lifetime.

Re: Memory management design

2013-07-10 Thread Johannes Pfau
On Wed, 10 Jul 2013 18:12:42 +0200, Paulo Pinto wrote:

> Who is going to write two versions of the library then?
> 
> Throwing exceptions with @nogc pointers floating around would just
> lead to the same headache as in C++.

This will really be an issue if/once we support systems which just
can't run a GC. (Because of really limited memory or because of code
size limitations...)

Once we have ARC we might check whether switching all exceptions to ARC
would work. If we manage to combine ARC with different custom
allocators it can be integrated perfectly with the GC as well, although
for an ARC-object allocated from the GC the reference count code would
just be no-ops.


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread Oleg Kuporosov

 Thanks for the link.  In my experience, mobile networking is
slow in general.  When I run Speedtest on my phone vs. a laptop 
sitting right next to it, the phone has a fraction of the 
bandwidth of the laptop.  So there's even more at issue than 
raw JS performance.


Sean is right; this is only one side of the problem, and very often 
not the major one. The other side is mobile network bandwidth and 
infrastructure load. In corporate offices and cities (shared cells), 
in weak signal conditions, or while moving, you will have quite 
limited bandwidth even in the LTE/3G case.


Re: Unhelpful error messages

2013-07-10 Thread H. S. Teoh
On Wed, Jul 10, 2013 at 03:05:25PM +0200, Don wrote:
> On Monday, 8 July 2013 at 20:46:35 UTC, H. S. Teoh wrote:
> >On Mon, Jul 08, 2013 at 09:47:46PM +0200, Peter Alexander wrote:
[...]
> >>Maybe the compiler could just spew out every possible error for
> >>every instantiation, and expect the user to grep, but that's not
> >>going to be a pleasant experience.
> 
> The compiler could join all the constraints and then simplify it to
> create the error message. (eg by creating a Binary Decision Diagram
> (BDD))
> 
> eg given constraints:
> 
> if ( A  && B && C )
> if ( (A  && D) || ( A && E && F) )
> if ( E && G )
> 
> Suppose A is true but the conditions fail. The compiler could then
> write that no templates match because ( B || D || E ) is false.

+1. Now that's cool, and is something actually implementable.  Can we
have this pretty please? :-)


T

-- 
The right half of the brain controls the left half of the body. This
means that only left-handed people are in their right mind. -- Manoj
Srivastava


Re: Memory management design

2013-07-10 Thread Paulo Pinto

On 10.07.2013 15:57, John Colvin wrote:

On Wednesday, 10 July 2013 at 13:00:53 UTC, Kagamin wrote:

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:

most functions may actually be @nogc


Most functions can't be @nogc because they throw exceptions.


I think I mentioned before, elsewhere, that @nogc could allow
exceptions. No one who is sensitive to memory usage is going to use
exceptions for anything other than exceptional circumstances, which
perhaps don't need the same stringent memory control and high
performance as the normal code path.


How much of the exception model would have to change in order to free
them from the GC? I don't see high performance as a concern for
exceptions so even an inefficient situation would be fine.


Who is going to write two versions of the library then?

Throwing exceptions with @nogc pointers floating around would just lead 
to the same headache as in C++.


--
Paulo


Re: Memory management design

2013-07-10 Thread bearophile

sclytrack:


Why not just go with manual memory? Just store everything
in a tree-like structure.

SuperOwner
--Child1
--Child2
SubChild1
SubChild2
--Container1
--Container2
--TStringList

Freeing a Child2 disposes of everything below.


Something like this?
http://swapped.cc/?_escaped_fragment_=/halloc#!/halloc

A manual hierarchical allocator would possibly be handy to have in 
Phobos. But it doesn't replace an owning scheme for automatic 
memory management.
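
As a rough illustration of the idea (the names Node, makeNode and 
freeTree are made up for this sketch, not halloc's actual API), an 
owner tree where freeing a node frees its whole subtree might look 
like this:

```d
import core.stdc.stdlib : malloc, free;

// Each allocation is a node in an ownership tree, in the spirit
// of halloc: children are linked off their owner.
struct Node
{
    Node* firstChild;
    Node* nextSibling;
}

// Allocate a node and attach it to its owner (null for a root).
Node* makeNode(Node* parent)
{
    auto n = cast(Node*) malloc(Node.sizeof);
    n.firstChild = null;
    n.nextSibling = null;
    if (parent)
    {
        n.nextSibling = parent.firstChild;
        parent.firstChild = n;
    }
    return n;
}

// Freeing a node recursively disposes of everything below it,
// as in the "Freeing a Child2 disposes of everything below" example.
void freeTree(Node* n)
{
    for (Node* c = n.firstChild; c !is null; )
    {
        Node* next = c.nextSibling;
        freeTree(c);
        c = next;
    }
    free(n);
}
```

A real allocator would carry a payload and handle malloc failure; the 
point here is only the ownership discipline, which is orthogonal to 
automatic lifetime management.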


Bye,
bearophile


Re: Memory management design

2013-07-10 Thread sclytrack

On Tuesday, 9 July 2013 at 23:32:13 UTC, BLM768 wrote:
Given all of this talk about memory management, it would seem 
that it's time for people to start putting forward some ideas 
for improved memory management designs. I've got an idea or two 
of my own, but I'd like to discuss my ideas before I draft a 
DIP so I can try to get everything fleshed out and polished.


Anyway the core idea behind my design is that object lifetimes 
tend to be managed in one of three ways:


1. Tied to a stack frame
2. Tied to an "owner" object
3. Not tied to any one object (managed by the GC)

To distinguish between these types of objects, one could use a 
set of three storage classes:


1. "scope": refers to stack-allocated memory (which seems to be 
the original design behind "scope"). "scope" references may not 
be stashed anywhere where they might become invalid. Since this 
is the "safest" type of reference, any object may be passed by 
"scope ref".


2. "owned": refers to an object that is heap-allocated but 
manually managed by another object or by a stack frame. "owned" 
references may only be stashed in other "owned" references. Any 
non-scope object may be passed by "owned ref". This storage 
class might not be usable in @safe code without further 
restrictions.


3. GC-managed: the default storage class. Fairly 
self-explanatory. GC-managed references may not refer to 
"scope" or "owned" objects.


Besides helping with the memory management issue, this design 
could also help tame the hairy mess of "auto ref"; "scope ref" 
can safely take any stack-allocated object, including 
temporaries, so a function could have "scope auto ref" 
parameters without needing to be a template function and with 
greater safety than "auto ref" currently provides.


--


2. Tied to an "owner" object

Why not just go with manual memory? Just store everything
in a tree-like structure.

SuperOwner
--Child1
--Child2
SubChild1
SubChild2
--Container1
--Container2
--TStringList

Freeing a Child2 disposes of everything below.






Re: A thread-safe weak reference implementation

2013-07-10 Thread Andrej Mitrovic
On 3/14/12, Alex Rønne Petersen  wrote:
>  auto ptr = cast(size_t)cast(void*)object;

I think this should be:

auto ptr = cast(size_t)*cast(void**)&object;

The object might have an opCast method, and this is one way to avoid
calling it by accident.
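
A small sketch of the difference (the class and its opCast are made 
up for illustration):

```d
class C
{
    // A user-defined conversion that a plain cast may invoke:
    T opCast(T)() if (is(T == void*))
    {
        return null; // deliberately lies about the reference
    }
}

void main()
{
    auto object = new C;

    // This can route through opCast and yield 0 instead of the
    // real reference:
    auto wrong = cast(size_t) cast(void*) object;
    assert(wrong == 0);

    // Reinterpreting the reference variable itself bypasses any
    // user-defined conversion and reads the raw pointer bits:
    auto ptr = cast(size_t) *cast(void**) &object;
    assert(ptr != 0);
}
```

For a weak reference, getting the raw address is essential, so the 
reinterpreting form is the safe choice.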


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread qznc

On Tuesday, 9 July 2013 at 20:39:30 UTC, Brian Rogoff wrote:
PS: That silhouette of the SR-71 at the point allocators are 
mentioned sets a high bar for the design!


I did not like that analogy at all. Have you seen the user 
interface of an SR-71?


http://www.likecool.com/Gear/Pic/Spy%20Plane%20SR71%20Blackbird%20Cockpit/big/Spy-Plane-SR71-Blackbird-Cockpit.jpg


Re: Memory management design

2013-07-10 Thread Dicebot

On Wednesday, 10 July 2013 at 13:57:50 UTC, John Colvin wrote:
How much of the exception model would have to change in order 
to free them from the GC? I don't see high performance as a 
concern for exceptions so even an inefficient situation would 
be fine.


Well, you can just throw malloc'ed exceptions. The problem is that 
druntime and Phobos use "new" for exceptions, and that is hard-wired 
into the GC. That has generally the same issues as a global 
customized "new" allocator hook.
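
A rough sketch of what a malloc'ed exception could look like 
(NoGcException and makeException are hypothetical names; the catch 
site has to free the memory by hand, which is exactly the part that 
"new"-based code gets for free from the GC):

```d
import core.stdc.stdlib : malloc, free;
import std.conv : emplace;

class NoGcException : Exception
{
    this(string msg) { super(msg); }
}

NoGcException makeException(string msg)
{
    // Allocate the class instance outside the GC heap...
    enum size = __traits(classInstanceSize, NoGcException);
    void[] mem = malloc(size)[0 .. size];
    // ...and construct it in place.
    return emplace!NoGcException(mem, msg);
}

void main()
{
    try
        throw makeException("out of band");
    catch (NoGcException e)
    {
        // The GC never saw this allocation, so it must be
        // released manually (skipping destructor concerns here).
        free(cast(void*) e);
    }
}
```
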


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread bearophile

Paulo Pinto:

They used to have special hardware, but now they use standard 
kernels.


This is very good. (Maybe in the meantime someone has folded 
their needs inside the standard kernel).


Bye,
bearophile


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread Paulo Pinto

On Wednesday, 10 July 2013 at 13:30:26 UTC, bearophile wrote:

thedeemon:


No mature GCed language behaves that badly.


I think there is one JavaVM that manages to avoid part of the 
problem you explain (and maybe it needs special kernel 
support). I think all other JavaVMs suffer it, more or less.


Bye,
bearophile


Yes, the C4 garbage collector in the Azul JVM

http://www.azulsystems.com/products/zing/whatisit

http://www.azulsystems.com/products/zing/c4-java-garbage-collector-wp

They used to have special hardware, but now they use standard 
kernels.


RedHat is also planning to have a go at it for the OpenJDK,

http://rkennke.wordpress.com/2013/06/10/shenandoah-a-pauseless-gc-for-openjdk/


--
Paulo


Re: Memory management design

2013-07-10 Thread John Colvin

On Wednesday, 10 July 2013 at 13:00:53 UTC, Kagamin wrote:

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:

most functions may actually be @nogc


Most functions can't be @nogc because they throw exceptions.


I think I mentioned before, elsewhere, that @nogc could allow 
exceptions. No one who is sensitive to memory usage is going to 
use exceptions for anything other than exceptional circumstances, 
which perhaps don't need the same stringent memory control and 
high performance as the normal code path.



How much of the exception model would have to change in order to 
free them from the GC? I don't see high performance as a concern 
for exceptions so even an inefficient situation would be fine.


Re: Memory management design

2013-07-10 Thread bearophile

Kagamin:


Most functions can't be @nogc because they throw exceptions.


Probably about half of my functions/methods are tagged with 
"nothrow". And as ".dup" becomes nothrow and a few more functions 
become nothrow (iota, etc.), that percentage will increase. I have 
also proposed adding to Phobos some non-throwing functions, like a 
maybeTo, that help increase the percentage of nothrow functions:


http://d.puremagic.com/issues/show_bug.cgi?id=6840

Bye,
bearophile


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread bearophile

thedeemon:


No mature GCed language behaves that badly.


I think there is one JavaVM that manages to avoid part of the 
problem you explain (and maybe it needs special kernel support). 
I think all other JavaVMs suffer it, more or less.


Bye,
bearophile


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread Michel Fortin

On 2013-07-09 18:12:25 +, Paulo Pinto  said:


A bit off-topic, but well worth reading,

http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/


What I'm retaining from this is that garbage collectors are wasteful. 
They're viable if you have a lot of RAM to spare. They cause noticeable 
hiccups at unpredictable times unless you have a battery-hungry 
overpowered CPU that makes pauses impossible to notice. And while those 
pauses are not that bad for non-realtime apps, all iOS apps are 
considered realtime by Apple because you don't want hiccups messing 
smooth scrolling and animations.


Also, non-deterministic deallocation makes it hard for an app to fit 
within a fixed memory limit.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Unhelpful error messages

2013-07-10 Thread Don

On Monday, 8 July 2013 at 20:46:35 UTC, H. S. Teoh wrote:

On Mon, Jul 08, 2013 at 09:47:46PM +0200, Peter Alexander wrote:

On Monday, 8 July 2013 at 18:10:45 UTC, H. S. Teoh wrote:
>On Sun, Jul 07, 2013 at 02:06:46PM +0200, Peter Alexander wrote:
>>It's a tough situation and I think the only way this could even
>>reasonably be resolved is through some sophisticated IDE
>>integration. There is no way to display this kind of error report
>>in a blob of command line text.
>
>I don't see how an IDE could do better than the compiler.
>Combinatorial explosion is a nasty problem, and if an IDE could
>solve it, so could the compiler. Sure, the IDE could give you a
>nice scrollable GUI widget to look through all the various reasons
>of the instantiation failure, but fundamentally speaking, that's
>not much different from running grep through 50 pages of compiler
>output. You still haven't solved the root problem, which is to
>narrow down the exponential set of possible problem causes to a
>manageable, human-comprehensible number.

I was thinking more of an interactive diagnostic: you choose which
overload you intended to instantiate and then get a list of reasons
why that failed to compile. Repeat recursively for any sub-calls.


The problem is, this presumes knowledge of Phobos internals, which
most D users probably would have no clue about. How is one supposed
to know which of the 25 overloads should be used in this particular
case anyway?

For all one knows, it may be a Phobos bug or something.

Whereas a message like "cannot instantiate S in std.blah.internal.func",
where S is the user-defined struct, would be a good indication as to
what might be wrong without needing to understand how Phobos works.


Maybe the compiler could just spew out every possible error for
every instantiation, and expect the user to grep, but that's not
going to be a pleasant experience.


The compiler could join all the constraints and then simplify them 
to create the error message (e.g. by building a Binary Decision 
Diagram (BDD)).


eg given constraints:

if ( A  && B && C )
if ( (A  && D) || ( A && E && F) )
if ( E && G )

Suppose A is true but the conditions fail. The compiler could 
then write that no templates match because ( B || D || E ) is 
false.
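
The direction that matters can be checked by brute force: with A 
fixed to true, whenever (B || D || E) is false, all three 
constraints fail, so the simplified condition really does explain 
the failure (the converse need not hold, since e.g. B alone does 
not satisfy the first constraint). A sketch:

```d
void main()
{
    enum A = true; // the variable known to be true in the example

    // Enumerate all assignments of the remaining variables.
    foreach (mask; 0 .. 64)
    {
        bool B = (mask & 1) != 0;
        bool C = (mask & 2) != 0;
        bool D = (mask & 4) != 0;
        bool E = (mask & 8) != 0;
        bool F = (mask & 16) != 0;
        bool G = (mask & 32) != 0;

        bool c1 = A && B && C;
        bool c2 = (A && D) || (A && E && F);
        bool c3 = E && G;

        // If B, D and E are all false, no constraint can match.
        if (!(B || D || E))
            assert(!c1 && !c2 && !c3);
    }
}
```
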





Re: [OT] Why mobile web apps are slow

2013-07-10 Thread thedeemon

On Tuesday, 9 July 2013 at 18:12:24 UTC, Paulo Pinto wrote:

A bit off-topic, but well worth reading,

http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/


The chart of different kinds of GC there is well worth a look. 
It shows how much faster generational GCs are than simple 
scan-whole-heap ones. Without a generational GC, D will remain 
rather slow, as each collection cycle with a 1 GB heap of small 
objects now takes ~5 seconds. And after you've accumulated 1 GB of 
data in the heap, each 8 MB of newly allocated data costs you 
another 5 seconds of full scan/collect. No mature GCed language 
behaves that badly.


Re: Memory management design

2013-07-10 Thread Kagamin

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:

most functions may actually be @nogc


Most functions can't be @nogc because they throw exceptions.


Re: A thread-safe weak reference implementation

2013-07-10 Thread David
On 09.07.2013 23:35, Robert wrote:
> 
>> if (GC.addrOf(cast(void*)obj))
>> return obj;
> 
> 
> Smart :-) You are waiting for the collection to complete, if we are one
> of the threads started before destructor calls happen. Brilliant.
> 
> Is it ok if I shamelessly copy parts of your implementation for the new
> std.signals? I will of course mention your name.

My first thought when I saw this title on the NG was that this 
could work for std.signals! Thanks for reading it ;)



Re: access CTFE variables at compile time.

2013-07-10 Thread bearophile

Don:


Although that's a problem, it's not at all what's happened here.


Right. I presume that several of the patches that have been 
waiting for some time have some significant problems.


Bye,
bearophile


Re: Memory management design

2013-07-10 Thread Paulo Pinto

On Wednesday, 10 July 2013 at 11:38:35 UTC, JS wrote:

On Wednesday, 10 July 2013 at 10:56:48 UTC, Paulo Pinto wrote:

On Wednesday, 10 July 2013 at 10:40:10 UTC, JS wrote:

On Wednesday, 10 July 2013 at 09:06:10 UTC, Paulo Pinto wrote:

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:

On 10 July 2013 17:53, Dicebot  wrote:


On Wednesday, 10 July 2013 at 07:50:17 UTC, JS wrote:


...



I am pretty sure stuff like @nogc (or probably @noheap, or 
both) will have no problems in being accepted into the 
mainstream once properly implemented. It is mostly a matter of 
a volunteer willing to get their hands dirty with the compiler.



I'd push for an ARC implementation. I've become convinced 
that's what I
actually want, and that GC will never completely satisfy my 
requirements.


Additionally, while I can see some value in @nogc, I'm not 
actually sold on
that personally... it feels explicit attribution is a 
backwards way of
going about it. ie, most functions may actually be @nogc, 
but only the ones
that are explicitly attributed will enjoy that 
recognition... seems kinda

backwards.


That is the approach taken by other languages with untraced 
pointers.


Actually I prefer to have GC by default with something like 
@nogc where it really makes a difference.


Unless D wants to cater for the micro-optimizations folks 
before anything else, that is so common in the C and C++ 
communities.




It's not about any micro-optimizations. Many real-time 
applications simply can't use D because of its stop-the-world 
GC (at least not without a great amount of work or severe 
limitations).

By having a @nogc attribute people can start marking their 
code, the sooner the better (otherwise, at some point, it 
becomes useless because there is too much old code to mark). 
@nogc respects function composition... so if two functions do 
not rely on the GC, then if one calls the other it will not 
break anything.

So, as libraries are updated, more and more functions become 
available to those that can't use GC code, making D more 
useful for real-time applications. If custom allocation 
methods ever come about, then @nogc may be obsolete or 
extremely useful, depending on how the alternate memory models 
are implemented.

Code that only uses stack allocation or static heap allocation 
has no business being lumped in with code that is GC 
dependent.


I do agree D needs something like @nogc, something like the 
untraced pointers I mentioned.

What I am speaking against is making the GC opt-in instead of 
the default allocation mode.




I agree but it's not going to happen ;/

In that case it looks more like a workaround instead of fixing 
the real problem, which is having a better GC.


Note that by GC, I also mean some form of reference counting 
with compiler support to minimize increment/decrement 
operations.


I don't know if that is a solid statement. ARC is pretty 
different from AGC.


Reference counting is pretty much seen as a primitive form of 
garbage collection in the CS literature.


In some books it is usually the first chapter, hence the way I 
phrased my comment.



--
Paulo


Re: Memory management design

2013-07-10 Thread JS

On Wednesday, 10 July 2013 at 10:56:48 UTC, Paulo Pinto wrote:

On Wednesday, 10 July 2013 at 10:40:10 UTC, JS wrote:

On Wednesday, 10 July 2013 at 09:06:10 UTC, Paulo Pinto wrote:

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:

On 10 July 2013 17:53, Dicebot  wrote:


On Wednesday, 10 July 2013 at 07:50:17 UTC, JS wrote:


...



I am pretty sure stuff like @nogc (or probably @noheap, or 
both) will have no problems in being accepted into the 
mainstream once properly implemented. It is mostly a matter of 
a volunteer willing to get their hands dirty with the compiler.



I'd push for an ARC implementation. I've become convinced 
that's what I
actually want, and that GC will never completely satisfy my 
requirements.


Additionally, while I can see some value in @nogc, I'm not 
actually sold on
that personally... it feels explicit attribution is a 
backwards way of
going about it. ie, most functions may actually be @nogc, 
but only the ones
that are explicitly attributed will enjoy that 
recognition... seems kinda

backwards.


That is the approach taken by other languages with untraced 
pointers.


Actually I prefer to have GC by default with something like 
@nogc where it really makes a difference.


Unless D wants to cater for the micro-optimizations folks 
before anything else, that is so common in the C and C++ 
communities.




It's not about any micro-optimizations. Many real-time 
applications simply can't use D because of its stop-the-world 
GC (at least not without a great amount of work or severe 
limitations).

By having a @nogc attribute people can start marking their 
code, the sooner the better (otherwise, at some point, it 
becomes useless because there is too much old code to mark). 
@nogc respects function composition... so if two functions do 
not rely on the GC, then if one calls the other it will not 
break anything.

So, as libraries are updated, more and more functions become 
available to those that can't use GC code, making D more 
useful for real-time applications. If custom allocation 
methods ever come about, then @nogc may be obsolete or 
extremely useful, depending on how the alternate memory models 
are implemented.

Code that only uses stack allocation or static heap allocation 
has no business being lumped in with code that is GC 
dependent.


I do agree D needs something like @nogc, something like the 
untraced pointers I mentioned.

What I am speaking against is making the GC opt-in instead of 
the default allocation mode.




I agree but it's not going to happen ;/

In that case it looks more like a workaround instead of fixing 
the real problem, which is having a better GC.


Note that by GC, I also mean some form of reference counting 
with compiler support to minimize increment/decrement 
operations.


I don't know if that is a solid statement. ARC is pretty 
different from AGC.


I personally think memory management should be up to the 
programmer with some sort of GC as a fallback, ideally 
optional... maybe even selectable at run-time.




Re: Memory management design

2013-07-10 Thread JS

On Wednesday, 10 July 2013 at 10:49:04 UTC, Dicebot wrote:

On Wednesday, 10 July 2013 at 10:40:10 UTC, JS wrote:

...


@nogc itself does not help here as this code will still be 
affected by stop-the-world. Those issues are related, but not 
directly.


It will help to avoid memory leaks when you switch the GC off 
though.


Of course; I never said stop-the-world only affects certain 
parts of the program. Having @nogc allows one to use only 
those functions, disable the GC, and not worry about running 
out of memory.


e.g., "import @nogc std.string;" would only import @nogc 
functions. One could then disable the GC, use reference 
counting, and write an RT app. (It would be better to specify 
the module as @nogc; then all imports could only import @nogc 
functions.)





Re: Memory management design

2013-07-10 Thread Michel Fortin

On 2013-07-10 08:00:42 +, Manu  said:


I'd push for an ARC implementation. I've become convinced that's what I
actually want, and that GC will never completely satisfy my requirements.


There are a few ways to implement ARC. You can implement it 
instead of the GC, but things with cycles in them will leak. 
You can implement it as a supplement to the GC, where the GC 
is used to collect cycles which ARC cannot release. Or you can 
implement it only for a subset of the language by having a 
base reference-counted class, where things derived from it are 
reference counted.


The first two ideas, which implement ARC globally, would have 
to call a function at each and every pointer assignment. 
Implementing this would require a different codegen from 
standard D, and libraries compiled with and without those 
calls on pointer assignment would be incompatible with each 
other.


On the other hand, having a reference-counted base class is of 
more limited utility, because you can't reuse D code that 
relies on the GC if your requirements do not allow the GC to 
run when it needs to. But it does not create codegen 
fragmentation.
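
A minimal sketch of what the counting part of that third 
option could look like (names are made up for illustration; 
allocation strategy and thread safety are deliberately 
ignored):

```d
// Sketch of a reference-counted base class: counting logic only.
// Hypothetical -- real code would also need custom allocation so the
// object lives outside the GC heap, plus atomic counts for sharing.
class RefCounted
{
    private size_t refs = 1;

    final void retain() { ++refs; }

    final void release()
    {
        if (--refs == 0)
            destroy(this); // run the destructor deterministically
    }
}

// Things derived from it are reference counted:
class Texture : RefCounted
{
    ~this() { /* free the pixel data here */ }
}
```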


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: access CTFE variables at compile time.

2013-07-10 Thread Don

On Wednesday, 10 July 2013 at 01:23:20 UTC, bearophile wrote:

Meta:

I think there's been mention a couple times of a ctfeWrite 
function that can print values at compile-time, but so far 
nobody's implemented it.


That's not true. This is the ER:
http://d.puremagic.com/issues/show_bug.cgi?id=3952

And the patch:
https://github.com/D-Programming-Language/dmd/pull/237



A significant problem with GitHub is that it's managed mostly 
as a LIFO (last in, first out): the last added patches get 
exposure, are discussed, and often get added to the code. But 
if they are not merged in a few weeks, they fall off the radar 
and get forgotten, and they rot for years. This is a 
significant problem for D development that so far has no 
solution. I don't remember Andrei or Walter ever discussing 
this problem or proposing solutions...


Although that's a problem, it's not at all what's happened here. 
That patch would not have worked, essentially because it mixes 
values-inside-CTFE with literals-outside-CTFE. It worked in 
simple cases, but not in general. That's not a simple problem 
with the patch that could have been fixed at the time.




Re: Memory management design

2013-07-10 Thread Paulo Pinto

On Wednesday, 10 July 2013 at 10:40:10 UTC, JS wrote:

On Wednesday, 10 July 2013 at 09:06:10 UTC, Paulo Pinto wrote:

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:

On 10 July 2013 17:53, Dicebot  wrote:


On Wednesday, 10 July 2013 at 07:50:17 UTC, JS wrote:


...



I am pretty sure stuff like @nogc (or probably @noheap, or 
both) will have no problems in being accepted into the 
mainstream once properly implemented. It is mostly a matter of 
a volunteer willing to get their hands dirty with the compiler.



I'd push for an ARC implementation. I've become convinced 
that's what I
actually want, and that GC will never completely satisfy my 
requirements.


Additionally, while I can see some value in @nogc, I'm not 
actually sold on
that personally... it feels explicit attribution is a 
backwards way of
going about it. ie, most functions may actually be @nogc, but 
only the ones
that are explicitly attributed will enjoy that recognition... 
seems kinda

backwards.


That is the approach taken by other languages with untraced 
pointers.


Actually I prefer to have GC by default with something like 
@nogc where it really makes a difference.


Unless D wants to cater for the micro-optimizations folks 
before anything else, that is so common in the C and C++ 
communities.




It's not about any micro-optimizations. Many real-time 
applications simply can't use D because of its stop-the-world 
GC (at least not without a great amount of work or severe 
limitations).

By having a @nogc attribute people can start marking their 
code, the sooner the better (otherwise, at some point, it 
becomes useless because there is too much old code to mark). 
@nogc respects function composition... so if two functions do 
not rely on the GC, then if one calls the other it will not 
break anything.

So, as libraries are updated, more and more functions become 
available to those that can't use GC code, making D more 
useful for real-time applications. If custom allocation 
methods ever come about, then @nogc may be obsolete or 
extremely useful, depending on how the alternate memory models 
are implemented.

Code that only uses stack allocation or static heap allocation 
has no business being lumped in with code that is GC dependent.


I do agree D needs something like @nogc, something like the 
untraced pointers I mentioned.

What I am speaking against is making the GC opt-in instead of 
the default allocation mode.

In that case it looks more like a workaround instead of fixing 
the real problem, which is having a better GC.


Note that by GC, I also mean some form of reference counting with 
compiler support to minimize increment/decrement operations.


--
Paulo


Re: Memory management design

2013-07-10 Thread Dicebot

On Wednesday, 10 July 2013 at 10:40:10 UTC, JS wrote:

...


@nogc itself does not help here as this code will still be 
affected by stop-the-world. Those issues are related, but not 
directly.


It will help to avoid memory leaks when you switch the GC off 
though.


Re: Memory management design

2013-07-10 Thread JS

On Wednesday, 10 July 2013 at 09:06:10 UTC, Paulo Pinto wrote:

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:

On 10 July 2013 17:53, Dicebot  wrote:


On Wednesday, 10 July 2013 at 07:50:17 UTC, JS wrote:


...



I am pretty sure stuff like @nogc (or probably @noheap, or 
both) will have no problems in being accepted into the 
mainstream once properly implemented. It is mostly a matter of 
a volunteer willing to get their hands dirty with the compiler.



I'd push for an ARC implementation. I've become convinced 
that's what I
actually want, and that GC will never completely satisfy my 
requirements.


Additionally, while I can see some value in @nogc, I'm not 
actually sold on
that personally... it feels explicit attribution is a 
backwards way of
going about it. ie, most functions may actually be @nogc, but 
only the ones
that are explicitly attributed will enjoy that recognition... 
seems kinda

backwards.


That is the approach taken by other languages with untraced 
pointers.


Actually I prefer to have GC by default with something like 
@nogc where it really makes a difference.


Unless D wants to cater for the micro-optimizations folks 
before anything else, that is so common in the C and C++ 
communities.




It's not about any micro-optimizations. Many real-time 
applications simply can't use D because of its stop-the-world 
GC (at least not without a great amount of work or severe 
limitations).

By having a @nogc attribute people can start marking their 
code, the sooner the better (otherwise, at some point, it 
becomes useless because there is too much old code to mark). 
@nogc respects function composition... so if two functions do 
not rely on the GC, then if one calls the other it will not 
break anything.

So, as libraries are updated, more and more functions become 
available to those that can't use GC code, making D more 
useful for real-time applications. If custom allocation 
methods ever come about, then @nogc may be obsolete or 
extremely useful, depending on how the alternate memory models 
are implemented.

Code that only uses stack allocation or static heap allocation 
has no business being lumped in with code that is GC dependent.
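
The composition property being described could look like this 
(hypothetical: @nogc was not part of the language at the time 
of this thread, so this only shows the intended semantics):

```d
// Hypothetical @nogc semantics: a @nogc function may freely call
// other @nogc functions, but nothing that allocates on the GC heap.
@nogc int add(int a, int b) { return a + b; }

@nogc int addTwice(int a, int b)
{
    return add(a, b) + add(a, b); // ok: @nogc calling @nogc
}

int[] makeArray(int n)
{
    return new int[](n); // GC allocation -- could not be marked @nogc
}
```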




Re: Memory management design

2013-07-10 Thread Paulo Pinto

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:

On 10 July 2013 17:53, Dicebot  wrote:


On Wednesday, 10 July 2013 at 07:50:17 UTC, JS wrote:


...



I am pretty sure stuff like @nogc (or probably @noheap, or 
both) will have no problems in being accepted into the 
mainstream once properly implemented. It is mostly a matter of 
a volunteer willing to get their hands dirty with the compiler.



I'd push for an ARC implementation. I've become convinced 
that's what I
actually want, and that GC will never completely satisfy my 
requirements.


Additionally, while I can see some value in @nogc, I'm not 
actually sold on
that personally... it feels explicit attribution is a backwards 
way of
going about it. ie, most functions may actually be @nogc, but 
only the ones
that are explicitly attributed will enjoy that recognition... 
seems kinda

backwards.


That is the approach taken by other languages with untraced 
pointers.


Actually I prefer to have GC by default with something like @nogc 
where it really makes a difference.


Unless D wants to cater for the micro-optimizations folks before 
anything else, that is so common in the C and C++ communities.


--
Paulo


Re: Memory management design

2013-07-10 Thread Dicebot

On Wednesday, 10 July 2013 at 08:16:55 UTC, Mr. Anonymous wrote:
I thought about allowing attributes to be applied to a whole 
module, such as:

@safe @nogc module foo_bar;

Then, "@system", "@allowheap" and friends could be used where 
needed.


You can do it, but it is not always possible to "disable" an 
attribute/qualifier.


@safe:
@system void foo() {} // ok

immutable:
int a; // oh, where is my "mutable" keyword?

pure:
void foo(); // oops, no "impure"

If a generic notion becomes accepted that even default behavior 
should have its own attribute, this will become less of an issue.


Re: Memory management design

2013-07-10 Thread Mr. Anonymous

On Wednesday, 10 July 2013 at 08:09:46 UTC, Dicebot wrote:
Yes, this is a common issue not unique to @nogc. I am 
personally much in favor of having restrictive attributes 
enabled by default and then adding "mutable", "@system", and 
"@allowheap" where those are actually needed. But unfortunately 
there is no way to add something that backwards-incompatible, 
and attribute inference seems to be the only practical way 
(though I hate it).


I thought about allowing attributes to be applied to a whole 
module, such as:

@safe @nogc module foo_bar;

Then, "@system", "@allowheap" and friends could be used where 
needed.
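
Today's closest approximation is an attribute with a colon at 
the top of the module; it does not cover everything the 
proposed "@safe @nogc module foo_bar;" syntax would, but it 
illustrates the intent:

```d
module foo_bar;

@safe:              // applies to the declarations that follow

void f() {}         // @safe by the label above

@system void g()    // opt back out where actually needed
{
    int* p = cast(int*) 0xDEAD; // allowed only in @system code
}
```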


Re: Memory management design

2013-07-10 Thread Dicebot

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:
I'd push for an ARC implementation. I've become convinced 
that's what I
actually want, and that GC will never completely satisfy my 
requirements.


I think those issues are actually orthogonal. I'd love to have 
a verified @noheap attribute even in my old C code. Sometimes 
the very fact that allocation happens is more important than 
the algorithm by which it is later collected.


Additionally, while I can see some value in @nogc, I'm not 
actually sold on
that personally... it feels explicit attribution is a backwards 
way of
going about it. ie, most functions may actually be @nogc, but 
only the ones
that are explicitly attributed will enjoy that recognition... 
seems kinda

backwards.


Yes, this is a common issue not unique to @nogc. I am personally 
much in favor of having restrictive attributes enabled by default 
and then adding "mutable", "@system", and "@allowheap" where 
those are actually needed. But unfortunately there is no way to 
add something that backwards-incompatible, and attribute 
inference seems to be the only practical way (though I hate it).


Re: Memory management design

2013-07-10 Thread Manu
On 10 July 2013 17:53, Dicebot  wrote:

> On Wednesday, 10 July 2013 at 07:50:17 UTC, JS wrote:
>
>> ...
>>
>
> I am pretty sure stuff like @nogc (or probably @noheap, or both) will
> have no problems in being accepted into the mainstream once properly
> implemented. It is mostly a matter of a volunteer willing to get their
> hands dirty with the compiler.
>

I'd push for an ARC implementation. I've become convinced that's what I
actually want, and that GC will never completely satisfy my requirements.

Additionally, while I can see some value in @nogc, I'm not actually sold on
that personally... it feels explicit attribution is a backwards way of
going about it. ie, most functions may actually be @nogc, but only the ones
that are explicitly attributed will enjoy that recognition... seems kinda
backwards.


Re: Memory management design

2013-07-10 Thread Dicebot

On Tuesday, 9 July 2013 at 23:32:13 UTC, BLM768 wrote:
1. "scope": refers to stack-allocated memory (which seems to be 
the original design behind "scope"). "scope" references may not 
be stashed anywhere where they might become invalid. Since this 
is the "safest" type of reference, any object may be passed by 
"scope ref".


2. "owned": refers to an object that is heap-allocated but 
manually managed by another object or by a stack frame. "owned" 
references may only be stashed in other "owned" references. Any 
non-scope object may be passed by "owned ref". This storage 
class might not be usable in @safe code without further 
restrictions.


I think merging "scope" and "owned" can be usable enough to be 
interesting without introducing any new concepts. Simply make 
it so that "scope" in a variable declaration means a 
stack-allocated entity with unique ownership, while "scope" as 
a function parameter attribute is required to accept scope 
data, verifying that no references to it are taken / stored. 
Expecting the mandatory deadalnix comment about lifetime 
definition ;)


The only thing I have no idea about is whether the "scope" 
attribute should be shallow or transitive. The former is 
dangerous; the latter severely harms usability.
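
A sketch of how the merged notion might read in code 
(hypothetical syntax; the checks described are the proposal's 
intent, not current compiler behavior):

```d
struct Node { int value; }

// "scope" parameter: may use n during the call, but may not store a
// reference to it anywhere that outlives the call.
void use(scope ref Node n) { n.value += 1; }

void example()
{
    scope Node local;   // stack-allocated entity with unique ownership
    use(local);         // ok: scope data passed to a scope parameter

    // Node* escaped;
    // escaped = &local; // error under the proposal: reference escapes
}
```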


Re: Memory management design

2013-07-10 Thread JS

On Tuesday, 9 July 2013 at 23:32:13 UTC, BLM768 wrote:
Given all of this talk about memory management, it would seem 
that it's time for people to start putting forward some ideas 
for improved memory management designs. I've got an idea or two 
of my own, but I'd like to discuss my ideas before I draft a 
DIP so I can try to get everything fleshed out and polished.


Anyway the core idea behind my design is that object lifetimes 
tend to be managed in one of three ways:


1. Tied to a stack frame
2. Tied to an "owner" object
3. Not tied to any one object (managed by the GC)

To distinguish between these types of objects, one could use a 
set of three storage classes:


1. "scope": refers to stack-allocated memory (which seems to be 
the original design behind "scope"). "scope" references may not 
be stashed anywhere where they might become invalid. Since this 
is the "safest" type of reference, any object may be passed by 
"scope ref".


2. "owned": refers to an object that is heap-allocated but 
manually managed by another object or by a stack frame. "owned" 
references may only be stashed in other "owned" references. Any 
non-scope object may be passed by "owned ref". This storage 
class might not be usable in @safe code without further 
restrictions.


3. GC-managed: the default storage class. Fairly 
self-explanatory. GC-managed references may not refer to 
"scope" or "owned" objects.


Besides helping with the memory management issue, this design 
could also help tame the hairy mess of "auto ref"; "scope ref" 
can safely take any stack-allocated object, including 
temporaries, so a function could have "scope auto ref" 
parameters without needing to be a template function and with 
greater safety than "auto ref" currently provides.


One can already choose their own memory model in their own code. 
The issue is with the core library and pre-existing code that 
forces you to use the GC model.


@nogc was proposed several years ago but never got any 
footing. By having the ability to mark stuff as @nogc, Phobos 
could be migrated slowly and, at least, some libraries would 
be weaned off the GC and available.


I think the use of custom allocators would be better. Plug your 
own memory management model into D.


IMHO nothing will be done, because this kind of talk has been 
going on for years (nearly a decade, as some posts go back to 
2006).


Re: Memory management design

2013-07-10 Thread Dicebot

On Wednesday, 10 July 2013 at 07:50:17 UTC, JS wrote:

...


I am pretty sure stuff like @nogc (or probably @noheap, or both) 
will have no problems in being accepted into the mainstream once 
properly implemented. It is mostly a matter of a volunteer 
willing to get their hands dirty with the compiler.


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread Paulo Pinto

On Tuesday, 9 July 2013 at 22:40:31 UTC, bearophile wrote:

Walter Bright:


It isn't off-topic at all. It's very relevant to D.

I also agree with what he says about GC.


There's a long way from recognizing those problems, to having 
good enough solutions in D.


Some possible attack strategies for D are:
- A less allocating Phobos to produce a bit less garbage;
- A more precise GC to avoid some memory leaks;
- Perhaps an annotation to disallow heap allocations in a 
function or a piece of code;
- Some good way to allocate variable length arrays on the stack 
(http://d.puremagic.com/issues/show_bug.cgi?id=9832 ) (so some 
arrays produce no garbage);

- The implementation of "scope" maybe helps a bit;
- Multiple alias this opens some opportunities to use structs 
in more places, avoiding some heap allocations.

- Currently Phobos Scoped/emplace are not very good.

Is that enough? Rust language designers seem to think that's 
not enough. Opinions are welcome.


Bye,
bearophile


I agree.

Additionally, I think it might be worthwhile to also have a 
look at other systems languages with GC.


The ones I had some contact with:

- Oberon, Oberon-2, Oberon-7, Component Pascal, Active Oberon 
(Basically Oberon Family)


- Modula-3

- Sing# (C# 2.0 with extensions for Singularity)


The main problem is that they failed to enter the industry; as 
such, for some of them (Modula-3) it is very hard to get 
proper information nowadays.


In the Oberon family, for example, the GC only kicks in when 
New or string manipulations are involved. The rest of the 
memory is the usual static allocation at compile time, or VLA 
arrays as bearophile mentions.


Both Modula-3 and Active Oberon allow for the declaration of 
untraced pointers, which boils down to manual memory management 
with compiler help.


In all of them it is also possible to circumvent the type 
system and do manual memory management via the SYSTEM 
package/unsafe constructs, although that is seen as something 
to do with one's expert hat on and a big red alert. :)


--
Paulo


Re: [OT] Why mobile web apps are slow

2013-07-10 Thread Paulo Pinto

On Tuesday, 9 July 2013 at 19:44:26 UTC, Joakim wrote:

On Tuesday, 9 July 2013 at 19:27:22 UTC, QAston wrote:

On Tuesday, 9 July 2013 at 18:12:24 UTC, Paulo Pinto wrote:

A bit off-topic, but well worth reading,

http://sealedabstract.com/rants/why-mobile-web-apps-are-slow/

--
Paulo


I think that the garbage collection part of the article is 
very relevant to the usage of D on mobile.

Nobody uses D on mobile, so it's not very relevant. ;)

If Intel ever makes a comeback on mobile- Samsung is putting 
x86 chips in their next generation of Galaxy Tab tablets, so 
it's possible- it might not take much effort to port D/ldc to 
Android/x86 though, so maybe it will be relevant someday.


Good article, with some good data.


There are embedded devices that can be targeted with 
GC-enabled systems programming languages.


For example Oberon-7 for ARM Cortex-M3 and NXP LPC2000 
microcontrollers:


http://www.astrobe.com/default.htm

What happens is that you can also control memory outside the GC 
if really needed, via the usual stack/global memory allocation 
and the SYSTEM package.


--
Paulo