Knowledge of managed memory pointers

2014-04-16 Thread Manu via Digitalmars-d
It occurs to me that a central issue regarding the memory management
debate, and a major limiting factor with respect to options, is the fact
that, currently, it's impossible to tell a raw pointer apart from a gc
pointer.

Is this is a problem worth solving? And would it be as big an enabler to
address some tricky problems as it seems to be at face value?

What are some options? Without turning to fat pointers or convoluted
changes in the type system, are there any clever mechanisms that could be
applied to distinguish managed from unmanaged pointers. If an API could be
provided in druntime, it may be used by GC's, ARC, allocators, or systems
that operate at the barrier between languages.

Obviously it needs to be super trivial to gather this information from the
pointer...
On embedded systems with fixed/limited memory it's easy: just make the GC
allocate pages in coarse, physically aligned blocks, and check a bit, indexed
by a couple of bits of the pointer, to see whether that page is owned by the
GC or not.

On large-scale OSes with an unknown (perhaps large) quantity of memory, it's
not so simple, but they have other advantages, like virtual memory
managers. Can virtual pages be attributed with a bit of data somehow that's
easy to look up?

What about 'hacks' like an unlikely sentinel value at ptr[-1]?
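
A minimal sketch of what the embedded-case check could look like, assuming
the GC carves its heap out of coarsely aligned blocks and the runtime keeps
an ownership bitmap (all names here are hypothetical):

enum BLOCK_SHIFT = 20; // hypothetical 1 MB aligned GC blocks

__gshared ubyte[] gcBlockBitmap; // one bit per block, set if GC-owned

bool isGcPointer(const(void)* p)
{
    // Index the ownership bitmap by the block number derived from the
    // pointer's high bits; a couple of shifts and one load.
    immutable size_t block = cast(size_t)p >> BLOCK_SHIFT;
    return ((gcBlockBitmap[block >> 3] >> (block & 7)) & 1) != 0;
}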


Re: DIP60: @nogc attribute

2014-04-16 Thread via Digitalmars-d
On Thursday, 17 April 2014 at 03:14:21 UTC, Manu via 
Digitalmars-d wrote:
Obviously, a critical part of ARC is the compiler's ability to reduce
redundant inc/dec sequences.


You need whole-program optimization to do this well. Which I am 
strongly in favour of, btw.


I've never heard of Obj-C users complaining about the inc/dec 
costs.


Obj-C has lots of overhead.

Further problems with ARC are inability to mix ARC references
with non-ARC references, seriously hampering generic code.



That's why the only workable solution is that all references 
are ARC references.


I never understood why you cannot mix. If your module owns a 
shared object you should be able to use regular pointers from 
that module.


So then consider ARC seriously. If it can't work, articulate 
why.


It can work if the language is designed for it, and code is 
written to enable optimizations.


IMHO you need a separate layer to enable compile-time proofs if 
you want safe and efficient system-level programming. A bit more 
than @safe, @pure, etc.


iOS is a competent realtime platform, Apple are well known for their
commitment to silky-smooth, jitter-free UI and general feel.


Foundational libraries do not use ARC? Only higher-level stuff?

Ola


Re: DIP60: @nogc attribute

2014-04-16 Thread via Digitalmars-d

On Wednesday, 16 April 2014 at 23:14:27 UTC, Walter Bright wrote:
I've written several myself that do not use malloc.


If it is shared or can call brk() it should be annotated.

Even the Linux kernel does not use malloc. Windows offers many 
ways to allocate memory without malloc. Trying to have a core 
language detect attempts to write a storage allocator is way, 
way beyond the scope of what is reasonable for it to do.


Library calls and syscalls can be marked; besides, you can have dynamic 
tracing in debug mode.



And, frankly, I don't see a point for such a capability.


Safe and contention-free use of libraries in critical code paths. 
The alternative is to guess whether a library is safe to use.


malloc is hardly the only problem people will encounter with 
realtime callbacks. You'll want to avoid disk I/O, network 
access, etc., too.


Yes, all syscalls. But malloc is easier to overlook, and it might 
call brk() only rarely, so detecting it without support might be 
difficult.




Re: DIP60: @nogc attribute

2014-04-16 Thread Manu via Digitalmars-d
On 17 April 2014 10:06, Michel Fortin via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:

> On 2014-04-16 23:20:07 +0000, Walter Bright said:
>
>  On 4/16/2014 3:42 PM, Adam Wilson wrote:
>>
>>> ARC may in fact be the most advantageous for a specific use case, but
>>> that in no
>>> way means that all use cases will see a performance improvement, and in
>>> all
>>> likelihood, may see a decrease in performance.
>>>
>>
>> Right on. Pervasive ARC is very costly, meaning that one will have to
>> define alongside it all kinds of schemes to mitigate those costs, all of
>> which are expensive for the programmer to get right.
>>
>
> It's not just ARC. As far as I know, most GC algorithms require some
> action to be taken when changing the value of a pointer. If you're seeing
> this as unnecessary bloat, then there's not much hope in a better GC for D
> either.
>

Indeed.

But beyond that I wonder if @nogc won't entrench that stance even more.


This is *precisely* my concern. I'm really worried about this.


Re: DIP60: @nogc attribute

2014-04-16 Thread Manu via Digitalmars-d
On 17 April 2014 09:20, Walter Bright via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:

> On 4/16/2014 3:42 PM, Adam Wilson wrote:
>
>> ARC may in fact be the most advantageous for a specific use case, but
>> that in no
>> way means that all use cases will see a performance improvement, and in
>> all
>> likelihood, may see a decrease in performance.
>>
>
> Right on. Pervasive ARC is very costly, meaning that one will have to
> define alongside it all kinds of schemes to mitigate those costs, all of
> which are expensive for the programmer to get right.
>

GC is _very_ costly. From my experience comparing iOS and Android, it's
clear that GC is vastly more costly and troublesome than ARC. What measure
do you use to make that assertion?
You're also making a hidden assertion that the D GC will never improve,
since most GC implementations require some sort of work similar to ref
fiddling anyway...


Re: DIP60: @nogc attribute

2014-04-16 Thread Manu via Digitalmars-d
On 17 April 2014 08:42, Adam Wilson via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:

> On Wed, 16 Apr 2014 04:50:51 -0700, Manu via Digitalmars-d <
> digitalmars-d@puremagic.com> wrote:
>
>  I am convinced that ARC would be acceptable, and I've never heard anyone
>>
>> suggest any proposal/fantasy/imaginary GC implementation that would be
>> acceptable...
>> In complete absence of a path towards an acceptable GC implementation, I'd
>> prefer to see people that know what they're talking about explore how
>> refcounting could be used instead.
>> GC backed ARC sounds like it would acceptably automate the circular
>> reference catching that people fuss about, while still providing a
>> workable
>> solution for embedded/realtime users; disable(/don't link) the backing GC,
>> make sure you mark weak references properly.
>>
>
> I'm just going to leave this here. I mentioned it previously in a debate
> over ARC vs. GC but I couldn't find the link at the time.
>
> http://www.cs.virginia.edu/~cs415/reading/bacon-garbage.pdf
>
> The paper is pretty math heavy.
>
> Long story short, Tracing vs. Ref-counting are algorithmic duals and
> therefore do not significantly differ. My read of the article is that all
> the different GC styles are doing is pushing the cost somewhere else.
>

Of course, I generally agree. Though realtime/embedded work values smooth,
predictable, reliable behaviour over burst+stutter operation.

That said, I do think that GC incurs greater cost than ARC in aggregate.
The scanning process, and the cache implications of scanning the heap, are
cataclysmic. I don't imagine that some trivial inc/decs would sum to the
same amount of work, even though they happen more frequently.

GC has a nasty property: its workload is inversely proportional to
available memory. As free memory decreases, the frequency of scans
increases. Low-memory systems are an important class of native-language
use that shouldn't be ignored (embedded, games consoles, etc).

Further, the cost of a GC sweep increases with the size of the heap. So, as
free memory decreases, you expect longer scans, more often... Yeah, win!

There are some other disturbing considerations; over time, as device memory
grows, GC costs will increase proportionally.
This is silly, and I'm amazed a bigger deal isn't made about the
future-proof-ness of GC. In 5 years when we all have 512 GB of RAM in our
devices, how much time is the GC going to spend scanning that much memory?

GC might work okay in the modern sweet spot of hundreds of MB to a few GB
of total memory, but I think as memory grows with time, GC will become more
problematic.

ARC on the other hand has a uniform, predictable, constant cost, that never
changes with respect to any of these quantities. ARC will always perform
the same speed, even 10 years from now, even on my Nintendo Wii, even on my
PIC microcontroller. As an embedded/realtime programmer, I can work with
this.


ARC may in fact be the most advantageous for a specific use case, but that
> in no way means that all use cases will see a performance improvement, and
> in all likelihood, may see a decrease in performance.
>

If you had to choose one as a default foundation, would you choose one that
eliminates a whole class of language users, or one that is an acceptable
compromise for all parties?
I'd like to see an argument for "I *need* GC. GC-backed-ARC is unacceptable
for my use case!". I'll put money on that requirement never emerging, and I
have no idea who that user would be.

Also, if you do see a decrease in performance, I suspect that it's only
under certain conditions. As said above, if your device begins to run low
on memory, or your users are working on unusually large projects/workloads,
all of a sudden your software starts performing radically differently than
you observe during development.
Naturally you don't typically profile that environment, but it's not
unlikely to occur in the wild.


That makes ARC a specialization for a certain type of programming, which
> would then remove D from the "Systems" category and place it in a
> "Specialist" category.


What it does is NOT eliminate a whole class of users. Are you going to
tell me that you have a hard dependency on the GC, and that something else
that does exactly the same thing is incompatible with your requirements?
There's nothing intrinsically "systems" about GC over ARC, whatever that
means.


One could argue that due to the currently non-optional status of the GC
> that D is currently a "Specialist" language, and I would be hard pressed to
> argue against that.
>

So what's wrong with a choice that does exactly the same thing, but is less
exclusive?


@nogc removes the shackles of the GC from the language and thus brings it
> closer to the definition of "Systems". @nogc allows programmers to revert
> to C-style resource management without enforcing a specialized RM system,
> be it GC or ARC. @nogc might not make you run through the fields singing
> D's praises, but it is entirely consistent with the goals and direction of D.

Re: DIP60: @nogc attribute

2014-04-16 Thread Manu via Digitalmars-d
On 17 April 2014 03:37, Walter Bright via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:

> On 4/16/2014 4:50 AM, Manu via Digitalmars-d wrote:
>
>> I am convinced that ARC would be acceptable,
>>
>
> ARC has very serious problems with bloat and performance.
>

This is the first I've heard of it, and I've been going on about it for
ages.


Every time a copy is made of a pointer, the ref count must be dealt with,
> engendering bloat and slowdown. C++ deals with this by providing all kinds
> of ways to bypass doing this, but the trouble is such is totally unsafe.
>

Obviously, a critical part of ARC is the compiler's ability to reduce
redundant inc/dec sequences. At which point your 'every time' assertion is
false. C++ can't do ARC, so it's not comparable.
With proper elimination, transferring ownership results in no cost, only
duplication/destruction, and those are moments where I've deliberately
committed to creation/destruction of an instance of something, at which
point I'm happy to pay for an inc/dec; creation/destruction are rarely
high-frequency operations.

Have you measured the impact? I can say that in realtime code and embedded
code in general, I'd be much happier to pay a regular inc/dec cost (a
known, constant quantity) than commit to unknown costs at unknown times.
I've never heard of Obj-C users complaining about the inc/dec costs.

If an inc/dec becomes a limiting factor in hot loops, there are lots of
things you can do to eliminate them from your loops. I just don't buy that
this is a significant performance penalty, but I can't say that
experimentally... can you?

How often does ref fiddling occur in reality? My guess is that with
redundancy elimination, it would be surprisingly rare, and insignificant.
I can imagine that I would be happy with this known, controlled, and
controllable cost. It comes with a whole bunch of benefits for
realtime/embedded use (immediate destruction, works in
little-to-no-free-memory environments, predictable costs, etc).


Further problems with ARC are inability to mix ARC references with non-ARC
> references, seriously hampering generic code.


That's why the only workable solution is that all references are ARC
references.
The obvious complication is reconciling malloc pointers, but I'm sure this
can be addressed with some creativity.

I imagine it would look something like:
By default, pointers are fat: struct ref { void* ptr, ref_t* rc; }
malloc pointers could conceivably just have a null entry for 'rc' and
therefore interact comfortably with rc pointers.
I imagine that a 'raw-pointer' type would be required to refer to a thin
pointer. Raw pointers would implicitly cast to fat pointers, and a
fat->thin cast may throw if the fat pointer's rc is non-null, or be a
compile error if it can be known at compile time.
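
A rough sketch of that fat-pointer idea (purely illustrative, names invented
here), where a null rc marks an unmanaged malloc pointer so both kinds can
flow through the same generic code:

struct RcPtr(T)
{
    T* ptr;
    size_t* rc; // null => not ref-counted (e.g. straight from malloc)

    this(this) // postblit: copying the fat pointer bumps the count
    {
        if (rc) ++*rc;
    }

    ~this()
    {
        if (rc && --*rc == 0)
        {
            // last reference: release ptr and rc (elided)
        }
    }
}

Compiler-driven elimination of redundant postblit/destructor pairs would then
be exactly the inc/dec optimization discussed above.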

Perhaps a solution is possible where an explicit rc record is not required
(such that all pointers remain 'thin' pointers)...
A clever hash of the pointer itself can look up the rc?
Perhaps the rc can be found at ptr[-1]? But then how do you know if the
pointer is rc allocated or not? An unlikely sentinel value at ptr[-1]?
Perhaps the virtual memory page can imply whether pointers allocated in
that region are ref counted or not? Some clever method of assigning the
virtual address space so that recognition of rc memory can amount to
testing a couple of bits in pointers?

I'm just making things up, but my point is, there are lots of creative
possibilities, and I have never seen any work to properly explore the
options.

 and I've never heard anyone suggest
>> any proposal/fantasy/imaginary GC implementation that would be
>> acceptable...
>>
>
> Exactly.


So then consider ARC seriously. If it can't work, articulate why. I still
don't know, nobody has told me.
It works well in other languages, and as far as I can tell, it has the
potential to produce acceptable results for _all_ D users.
iOS is a competent realtime platform, Apple are well known for their
commitment to silky-smooth, jitter-free UI and general feel. Android on the
other hand is a perfect example of why GC is not acceptable.


 In complete absence of a path towards an acceptable GC implementation, I'd
>> prefer to see people that know what they're talking about explore how
>> refcounting could be used instead.
>> GC backed ARC sounds like it would acceptably automate the circular
>> reference
>> catching that people fuss about, while still providing a workable
>> solution for
>> embedded/realtime users; disable(/don't link) the backing GC, make sure
>> you mark
>> weak references properly.
>>
>
> I have, and I've worked with a couple others here on it, and have
> completely failed at coming up with a workable, safe, non-bloated,
> performant way of doing pervasive ARC.
>

Okay. Where can I read about that? It doesn't seem to have surfaced; at
least, it was never presented in response to my many instances of raising
the topic.
What are the impasses?

I'm very worried about this. ARC is the 

Re: DIP60: @nogc attribute

2014-04-16 Thread Mike via Digitalmars-d

On Wednesday, 16 April 2014 at 22:42:23 UTC, Adam Wilson wrote:

Long story short, Tracing vs. Ref-counting are algorithmic 
duals and therefore do not significantly differ. My read of the 
article is that all the different GC styles are doing is 
pushing the cost somewhere else.


All memory management schemes cost, even manual memory 
management.  IMO that's not the point.  The point is that each 
memory management scheme distributes the cost differently.  One 
distribution may be more suitable for a certain problem domain 
than another.


ARC may in fact be the most advantageous for a specific use 
case, but that in no way means that all use cases will see a 
performance improvement, and in all likelihood, may see a 
decrease in performance.


The same can be said about stop-the-world mark-and-sweep.  It is 
also specialized to a specific problem domain.  As an example, it 
doesn't scale well to the real-time/embedded domain.


That makes ARC a specialization for a certain type of 
programming, which would then remove D from the "Systems" category 
and place it in a "Specialist" category. One could argue that 
due to the currently non-optional status of the GC that D is 
currently a "Specialist" language, and I would be hard pressed 
to argue against that.


D is currently in the "Specialist" category.  It is already 
specialized/biased to PC/Server applications.  C/C++ are the only 
languages I know of that scale reasonably well to all systems.  I 
think D has the potential to change that, but it will require, 
first, recognition that D is not yet a "Systems" language like 
C/C++ are, and second, the will to change it.


@nogc removes the shackles of the GC from the language and thus 
brings it closer to the definition of "Systems". @nogc allows 
programmers to revert to C-style resource management without 
enforcing a specialized RM system, be it GC or ARC. @nogc might 
not make you run through the fields singing D's praises, but it 
is entirely consistent with the goals and direction of D.


@nogc doesn't allow users to revert to C-style resource 
management because they don't have control over implicit 
allocations in druntime and elsewhere.  It just disables them.  
Users still have to build alternatives.  There's no escaping the 
cost of memory management, but one could choose how to distribute 
the cost.


Mike


Re: on interfacing w/C++

2014-04-16 Thread Daniel Murphy via Digitalmars-d

"John Colvin"  wrote in message news:qbwxwxekffpegmbck...@forum.dlang.org...

Which, if you did, would enable you to use C++ classes from D somewhat 
transparently, no?


Potentially, yes.  You'd need to be very careful that there was always a 
gc-visible reference to the class to keep it alive, so no using malloc for 
arrays of class pointers etc in the C++ code.  This is done in DDMD by using 
a wrapper which forwards to GC.malloc. 
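
For illustration, the forwarding wrapper could be as small as this (a sketch,
not DDMD's actual code; the exported names are invented), with the C++ side
overriding its global operator new to call into it:

import core.memory : GC;

extern (C) void* dgc_malloc(size_t size)
{
    // Memory from GC.malloc is scanned by the D GC, so class references
    // stored in it keep their targets alive.
    return GC.malloc(size);
}

extern (C) void dgc_free(void* p)
{
    GC.free(p);
}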



Re: DIP60: @nogc attribute

2014-04-16 Thread Walter Bright via Digitalmars-d

On 4/16/2014 5:06 PM, Michel Fortin wrote:

It's not just ARC. As far as I know, most GC algorithms require some action to
be taken when changing the value of a pointer. If you're seeing this as
unnecessary bloat, then there's not much hope in a better GC for D either.


Yeah, those are called write gates. The write gate is used to tell the GC that 
"I wrote to this section of memory, so that bucket is dirty now." They're fine 
in a language without pointers, but I just don't see how one could write fast 
loops using pointers with write gates.
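
A sketch of what such a write gate (write barrier) amounts to, using a
hypothetical card table; every pointer store pays for the extra mark:

enum CARD_SHIFT = 9; // hypothetical 512-byte cards

__gshared ubyte[] cardTable; // one dirty byte per card
__gshared void* heapBase;

void gcWriteBarrier(void** slot, void* newValue)
{
    *slot = newValue; // the actual pointer store
    // Mark the card containing the written slot as dirty so an
    // incremental collector revisits it on the next scan.
    // (Bounds handling for non-heap slots elided.)
    immutable size_t card = (cast(size_t)slot - cast(size_t)heapBase) >> CARD_SHIFT;
    cardTable[card] = 1;
}

It's that extra bookkeeping on every pointer store that the fast-loop
objection is about.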



But beyond that I wonder if @nogc won't entrench that stance even more. Here's
the question: is assigning to a pointer allowed in a @nogc function?  Of course
it's allowed! Assigning to a pointer does not involve the GC in its current
implementation... but what if another GC implementation to be used later needs
something to be done every time a pointer is modified, is this "something to be
done" allowed in a @nogc function?


It would have to be.



Re: DIP60: @nogc attribute

2014-04-16 Thread Michel Fortin via Digitalmars-d

On 2014-04-16 23:20:07 +0000, Walter Bright said:


On 4/16/2014 3:42 PM, Adam Wilson wrote:
ARC may in fact be the most advantageous for a specific use case, but
that in no way means that all use cases will see a performance
improvement, and in all likelihood, may see a decrease in performance.


Right on. Pervasive ARC is very costly, meaning that one will have to 
define alongside it all kinds of schemes to mitigate those costs, all 
of which are expensive for the programmer to get right.


It's not just ARC. As far as I know, most GC algorithms require some 
action to be taken when changing the value of a pointer. If you're 
seeing this as unnecessary bloat, then there's not much hope in a 
better GC for D either.


But beyond that I wonder if @nogc won't entrench that stance even more. 
Here's the question: is assigning to a pointer allowed in a @nogc 
function? Of course it's allowed! Assigning to a pointer does not 
involve the GC in its current implementation... but what if another GC 
implementation to be used later needs something to be done every time a 
pointer is modified, is this "something to be done" allowed in a @nogc 
function?


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: DIP60: @nogc attribute

2014-04-16 Thread Rikki Cattermole via Digitalmars-d

On Wednesday, 16 April 2014 at 15:32:05 UTC, sclytrack wrote:
What about adding custom annotations that don't do any checking
by themselves? Like a @nogc that doesn't actually verify that
~ is not used for strings.

void hello() require(@nogc)
{
}

Just a verification by the compiler that you use only routines
that are marked with certain annotations.

void boe()
{
}

@(nasaverified)
void test()
{
}

//

void hello() require(@(nasaverified))
{
  test(); // ok
  boe();  // not ok.
}


I really REALLY like this.
I can see it being rather useful, assuming it's expanded to
support UDAs.
Not quite sure what a use case is for it though.


Re: DIP60: @nogc attribute

2014-04-16 Thread bearophile via Digitalmars-d

Walter Bright:


malloc is hardly the only problem people will encounter with
realtime callbacks. You'll want to avoid disk I/O, network
access, etc., too.


It seems a good idea to offer a way to extend the type system 
with new semantically meaningful annotations in user code. (The Koka 
language does this too, with its effects management.) I have seen 
an almost nice idea in this same thread.


Bye,
bearophile


Re: DIP60: @nogc attribute

2014-04-16 Thread Walter Bright via Digitalmars-d

On 4/16/2014 3:42 PM, Adam Wilson wrote:

ARC may in fact be the most advantageous for a specific use case, but that in no
way means that all use cases will see a performance improvement, and in all
likelihood, may see a decrease in performance.


Right on. Pervasive ARC is very costly, meaning that one will have to define 
alongside it all kinds of schemes to mitigate those costs, all of which are 
expensive for the programmer to get right.


Re: DIP60: @nogc attribute

2014-04-16 Thread Walter Bright via Digitalmars-d
On 4/16/2014 3:45 PM, "Ola Fosheim Grøstad" wrote:

On Wednesday, 16 April 2014 at 22:34:35 UTC, Walter Bright wrote:

malloc is hardly the only storage allocator.


Except for syscalls such as brk/sbrk, which ones are you thinking of?


I've written several myself that do not use malloc. Even the Linux kernel does 
not use malloc. Windows offers many ways to allocate memory without malloc. 
Trying to have a core language detect attempts to write a storage allocator is 
way, way beyond the scope of what is reasonable for it to do.


And, frankly, I don't see a point for such a capability. malloc is hardly the 
only problem people will encounter with realtime callbacks. You'll want to avoid 
disk I/O, network access, etc., too.




Re: DIP60: @nogc attribute

2014-04-16 Thread via Digitalmars-d

On Wednesday, 16 April 2014 at 22:34:35 UTC, Walter Bright wrote:

malloc is hardly the only storage allocator.


Except for syscalls such as brk/sbrk, which ones are you thinking 
of?




Re: DIP60: @nogc attribute

2014-04-16 Thread Adam Wilson via Digitalmars-d
On Wed, 16 Apr 2014 04:50:51 -0700, Manu via Digitalmars-d wrote:



I am convinced that ARC would be acceptable, and I've never heard anyone
suggest any proposal/fantasy/imaginary GC implementation that would be
acceptable...
In complete absence of a path towards an acceptable GC implementation, I'd
prefer to see people that know what they're talking about explore how
refcounting could be used instead.
GC backed ARC sounds like it would acceptably automate the circular
reference catching that people fuss about, while still providing a workable
solution for embedded/realtime users; disable(/don't link) the backing GC,
make sure you mark weak references properly.


I'm just going to leave this here. I mentioned it previously in a debate  
over ARC vs. GC but I couldn't find the link at the time.


http://www.cs.virginia.edu/~cs415/reading/bacon-garbage.pdf

The paper is pretty math heavy.

Long story short, Tracing vs. Ref-counting are algorithmic duals and  
therefore do not significantly differ. My read of the article is that all  
the different GC styles are doing is pushing the cost somewhere else.


ARC may in fact be the most advantageous for a specific use case, but that  
in no way means that all use cases will see a performance improvement, and  
in all likelihood, may see a decrease in performance.


That makes ARC a specialization for a certain type of programming, which
would then remove D from the "Systems" category and place it in a
"Specialist" category. One could argue that due to the currently
non-optional status of the GC that D is currently a "Specialist" language,
and I would be hard pressed to argue against that.


@nogc removes the shackles of the GC from the language and thus brings it  
closer to the definition of "Systems". @nogc allows programmers to revert  
to C-style resource management without enforcing a specialized RM system,  
be it GC or ARC. @nogc might not make you run through the fields singing  
D's praises, but it is entirely consistent with the goals and direction of  
D.


--
Adam Wilson
GitHub/IRC: LightBender
Aurora Project Coordinator


Re: DIP60: @nogc attribute

2014-04-16 Thread Walter Bright via Digitalmars-d
On 4/16/2014 2:14 PM, "Ola Fosheim Grøstad" wrote:

If the custom allocators are in D then you
should be able to track all the way down to malloc.


malloc is hardly the only storage allocator.


Re: DIP60: @nogc attribute

2014-04-16 Thread froglegs via Digitalmars-d



I am really looking forward to .NET Native becoming widespread.

Then this type of comparisons (C# vs C++) will be quite 
different.



 I don't think it will make a major difference. Taking a GC-based 
language and giving it a native compiler doesn't automatically 
make it performance-competitive with C++ (see Haskell, and D without 
dumping the GC, on anything besides micro-benchmarks).


C# is still GC-based, and still makes heavy use of indirection (see 
Herb Sutter's recent talk on arrays).


C++ exposes SSE/AVX intrinsics, C# does not.
Many programs don't use these, but if you have a few hot spots 
involving number crunching, they can make a major difference.


My current project spends about 80% of its CPU time in SSE-amenable 
locations; with some template magic mixed with SSE intrinsics, those 
spots now run 4x faster.
 You might be thinking auto-vectorization can compete, but I've 
yet to see the one in VS2013 accomplish much of anything.
  Also I doubt very much that an auto-vectorizer can squash 
branches, which is very possible with intrinsics. True branches 
and vectorized code don't mix well...
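
The branch-squashing trick is worth spelling out. In scalar D, as a sketch
of what the SSE and/or/blend instructions do across 4 or 8 lanes at once:

// mask must be 0 (all bits clear) or -1 (all bits set),
// e.g. produced by a lane-wise comparison.
int select(int mask, int a, int b)
{
    // No branch: the mask picks a or b purely with bitwise ops.
    return (a & mask) | (b & ~mask);
}

An auto-vectorizer has to prove this transformation is safe; with intrinsics
you just write the blend directly.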




Re: DIP60: @nogc attribute

2014-04-16 Thread Timon Gehr via Digitalmars-d

On 04/16/2014 10:10 PM, Peter Alexander wrote:


However, that raises a second question: since err is allocated when a
new thread is created, does that mean @nogc functions cannot create
threads in the presence of such static initialisation?


This does not allocate on the GC heap.


Re: DIP60: @nogc attribute

2014-04-16 Thread Paulo Pinto via Digitalmars-d

On 16.04.2014 22:49, froglegs wrote:

Well, most of the new games (Unity3D) are done in C# nowadays and
people live with it even though game development is one of the biggest
C++ loving and GC hating crowd there is.


  Unity3D the engine is written primarily in C++, not C#.

The Unity editor and gameplay code is written in C# because that type of
code is generally not performance sensitive.


I am really looking forward to .NET Native becoming widespread.

Then this type of comparisons (C# vs C++) will be quite different.

--
Paulo


Re: DIP60: @nogc attribute

2014-04-16 Thread via Digitalmars-d

On Wednesday, 16 April 2014 at 17:39:32 UTC, Walter Bright wrote:
Not practical. malloc() is only one way of allocating memory - 
user defined custom allocators are commonplace.


Not sure why this is not practical. If the custom allocators are 
in D then you should be able to track all the way down to malloc. 
In sensitive code like NMIs you DO want to use custom allocators 
(allocating from a pool, ring buffer etc) or none at all.


However, I think it falls into the same group as tracking 
syscalls in a call chain. And I guess you would have to think 
about library/syscall tracers such as ltrace, dtrace/truss, 
strace, ktrace, SystemTap etc too…




Re: DIP60: @nogc attribute

2014-04-16 Thread Walter Bright via Digitalmars-d

On 4/16/2014 12:44 PM, Peter Alexander wrote:

* Is it perhaps too early to introduce this? We don't have allocators yet, so it
can be quite hard to avoid the GC in some situations.


Not that hard.


* Many Phobos functions use 'text' and 'format' in asserts. What should be done
about those?


Redo to use output ranges instead.
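
For instance, a message could be formatted into a caller-provided sink
instead of a GC-allocated string (a sketch; StackSink is invented here,
formattedWrite is the Phobos primitive, and with a recent Phobos the sink
is taken by auto ref):

import std.format : formattedWrite;

struct StackSink
{
    char[256] buf;
    size_t len;

    void put(const(char)[] s)
    {
        // Copy into the fixed buffer: no allocation.
        // (Overflow handling elided for brevity.)
        buf[len .. len + s.length] = s[];
        len += s.length;
    }
}

// usage:
// StackSink sink;
// sink.formattedWrite("expected %s, got %s", a, b);
// assert(cond, sink.buf[0 .. sink.len]);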



* Does @nogc => nothrow?


No. They are orthogonal.



If I'm not mistaken, throw must throw a GC-allocated Throwable.
* If the above is true, does that mean exceptions cannot be used at all in @nogc
code?


They can use preallocated exceptions, or be templatized and infer the 
attributes. It is a problem, though.




* I worry about the number of attributes being added. Where do we draw the line?
Are we going to add every attribute that someone finds a use for? @logicalconst
@nonrecursive @nonreentrant @guaranteedtermination @neverreturns


That's essentially true of every language feature.



Re: DIP60: @nogc attribute

2014-04-16 Thread froglegs via Digitalmars-d
Well, most of the new games (Unity3D) are done in C# nowadays 
and people live with it even though game development is one of 
the biggest C++ loving and GC hating crowd there is.


 Unity3D the engine is written primarily in C++, not C#.

The Unity editor and gameplay code is written in C# because that 
type of code is generally not performance sensitive.


Re: DIP60: @nogc attribute

2014-04-16 Thread bearophile via Digitalmars-d

Peter Alexander:


(I assume that nothrow isn't meant to be there?)


In D nothrow functions can throw errors.



You could do something like this:

void foo() @nogc
{
    static err = new Error();
    if (badthing)
    {
        err.setError("badthing happened");
        throw err;
    }
}


To be mutable err also needs to be __gshared.

Bye,
bearophile


Re: DIP60: @nogc attribute

2014-04-16 Thread bearophile via Digitalmars-d

Peter Alexander:


   err.setError("badthing happened");


And that is usually written:

err.msg = "badthing happened";

Bye,
bearophile


Re: DIP60: @nogc attribute

2014-04-16 Thread Peter Alexander via Digitalmars-d

On Wednesday, 16 April 2014 at 20:29:17 UTC, bearophile wrote:

Peter Alexander:


(I assume that nothrow isn't meant to be there?)


In D nothrow functions can throw errors.


Of course, ignore me :-)



You could do something like this:

void foo() @nogc
{
    static err = new Error();
    if (badthing)
    {
        err.setError("badthing happened");
        throw err;
    }
}


To be mutable err also needs to be __gshared.


But then it isn't thread safe. Two threads trying to set and 
throw the same Error is a recipe for disaster.




Re: DIP60: @nogc attribute

2014-04-16 Thread monarch_dodra via Digitalmars-d
On Wednesday, 16 April 2014 at 19:44:19 UTC, Peter Alexander 
wrote:

On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:

http://wiki.dlang.org/DIP60

Start on implementation:

https://github.com/D-Programming-Language/dmd/pull/3455


Some initial thoughts:

* Is it perhaps too early to introduce this? We don't have 
allocators yet, so it can be quite hard to avoid the GC in some 
situations.


* Many Phobos functions use 'text' and 'format' in asserts. 
What should be done about those?


As a rule of thumb, format and text should *already* be avoided 
altogether in (non-static) asserts, as they can throw exceptions, 
preventing a function from being nothrow. For example, sort:

https://github.com/D-Programming-Language/phobos/pull/2075/files#diff-ff74a46362b5953e8c88120e2490f839R9344

That said, the issue remains relevant for @nogc. Not only the 
exception itself, but also the ".msg" field. How do we 
allocate it? Who cleans it up? Does the catcher have to do it? 
Can the catcher know he has to do it?


* Does @nogc => nothrow? If I'm not mistaken, throw must 
throw a GC-allocated Throwable.


Even then, what about asserts? Well, I guess it's OK if Errors 
leak, since you are supposed to terminate shortly afterwards.


* If the above is true, does that mean exceptions cannot be 
used at all in @nogc code?


* I worry about the number of attributes being added. Where do 
we draw the line? Are we going to add every attribute that 
someone finds a use for? @logicalconst @nonrecursive 
@nonreentrant @guaranteedtermination @neverreturns


I like the concept of having an "@everything" attribute. It 
"future proofs" code (in a way, if you are also fine with it 
potentially breaking). Also, it is often easier (I think) to not 
think in terms of "what guarantees does my function provide", but 
rather "what guarantees does my function *not* provide"? EG:


void myFun(int* p) @everything impure;

My function is safe, nothrow, nogc etc... except pure.

BUT, I think this should be the subject of another thread. Let's 
focus on @nogc.


Function Pointer Challenge

2014-04-16 Thread Jonathan Marler via Digitalmars-d

In a library I was writing I was in need of the following:

Write code that takes a function pointer/delegate and an array of 
strings, and calls the function by parsing each string into the 
given function's arguments. If the function has a return 
value, the code will also return the function's return value after 
the call.


I had written this functionality before in .NET (C# specifically) 
using .NET's runtime reflection.  The code looks nice but runtime 
reflection has poor performance.  Using D's compile-time features 
I was able to write a D template function that implemented this 
in about 10 lines of code.


import std.conv : to;
import std.stdio : writeln, writefln;
import std.string : format;
import std.traits : ReturnType, ParameterTypeTuple, isCallable;

ReturnType!Function call(Function)(Function func, const char[][] args...)
    if (isCallable!Function)
{
    alias Args = ParameterTypeTuple!Function;

    if (args.length != Args.length)
        throw new Exception(format("Expected %d arguments but got %d",
                Args.length, args.length));

    Args argsTuple;

    foreach (i, Arg; Args)
        argsTuple[i] = to!Arg(args[i]);

    return func(argsTuple);
}

Here's a unit test to demonstrate its usage:

unittest
{
    void voidFunction()
    {
        writeln("[Test] Called voidFunction()");
    }
    void randomFunction(int i, uint u, string s, char c)
    {
        writefln("[Test] Called randomFunction(%s, %s, \"%s\", '%s')", i, u, s, c);
    }
    ulong echoUlong(ulong value)
    {
        writefln("[Test] Called echoUlong(%s)", value);
        return value;
    }

    (&voidFunction).call();
    (&randomFunction).call("-1000", "567", "HelloWorld!", "?");

    string passedValue = "123456789";
    ulong returnValue = (&echoUlong).call(passedValue);
    writefln("[Test] echoUlong(%s) = %s", passedValue, returnValue);

    try {
        (&randomFunction).call("wrong number of args");
        assert(0);
    } catch (Exception e) {
        writefln("[Test] Caught %s: '%s'", typeid(e), e.msg);
    }

    writeln("[Test] Success");
}

I think this challenge does a great job of demonstrating D's 
compile-time power. I couldn't think of a way to do this in C 
without doing some type of code generation. The reason I needed 
this functionality was that I was writing a remote-procedure-call 
type of library, where the functions being called were 
known at compile time, but the arguments (passed over a socket) 
had to be processed at runtime. I was wondering if anyone had 
good solutions to this problem in other languages. I was very 
pleased with the D solution, but I predict that solutions in other 
languages are going to be much uglier.


Re: About the coolest tech thing I've ever seen...

2014-04-16 Thread Joakim via Digitalmars-d

On Wednesday, 16 April 2014 at 20:04:38 UTC, Nordlöw wrote:

This makes me proud of being an engineer:

http://www.wimp.com/powerquadcopters/

I wonder what type system they used when modelling the 
algorithms ;)


Excuse me for posting this a bit off topic... I just had to share 
this experience with you all brilliant people.


This was a very popular video from last year, with millions of 
views, here's a better link:


http://www.youtube.com/watch?v=w2itwFJCgFQ

Here's another one I liked, lit quad-copters moving in formation 
against a night sky:


http://www.youtube.com/watch?v=ShGl5rQK3ew


Re: DIP60: @nogc attribute

2014-04-16 Thread Peter Alexander via Digitalmars-d

On Wednesday, 16 April 2014 at 19:53:01 UTC, bearophile wrote:

Peter Alexander:

* Does @nogc => nothrow? If I'm not mistaken, throw must 
throw a GC-allocated Throwable.


* If the above is true, does that mean exceptions cannot be 
used at all in @nogc code?


This should work:

void foo() @nogc nothrow {
    static const err = new Error("error");
    throw err;
}

Bye,
bearophile


(I assume that nothrow isn't meant to be there?)

What if the exception needs information about the error?

You could do something like this:

void foo() @nogc
{
    static err = new Error();
    if (badthing)
    {
        err.setError("badthing happened");
        throw err;
    }
}

However, that raises a second question: since err is allocated 
when a new thread is created, does that mean @nogc functions 
cannot create threads in the presence of such static 
initialisation?


Re: XCB Bindings?

2014-04-16 Thread Xavier Bigand via Digitalmars-d

On 16/04/2014 00:38, Jeroen Bollen wrote:

Does anyone know of any (preferably complete) XCB bindings for D?


You can take a look at my bindings:
https://github.com/D-Quick/XCB

As I don't use them at the moment I can't be sure they are free of 
mistakes, but they do build without errors.

Sadly my friend and I have taken a break from the DQuick project, so many 
things are stopped at an intermediate stage.

Since I made a dedicated repo for the XCB bindings, you can fork it easily 
and, if everything goes well, request integration into deimos.


About the coolest tech thing I've ever seen...

2014-04-16 Thread Nordlöw

This makes me proud of being an engineer:

http://www.wimp.com/powerquadcopters/

I wonder what type system they used when modelling the algorithms 
;)


Excuse me for posting this a bit off topic... I just had to share 
this experience with you all brilliant people.


Re: Finally full multidimensional arrays support in D

2014-04-16 Thread bearophile via Digitalmars-d

Stefan Frijters:

First of all, thank you very much for making such nice 
additions to D available for general use. I finally got around 
to giving this a spin.


Recently I've shown a possible usage example of the 
multidimensional arrays indexing and slicing syntax:

http://forum.dlang.org/thread/cizugfrkaunlkzyjp...@forum.dlang.org

Bye,
bearophile


Re: DIP60: @nogc attribute

2014-04-16 Thread bearophile via Digitalmars-d

Peter Alexander:

* Does @nogc => nothrow? If I'm not mistaken, throw must 
through a GC-allocated Throwable.


* If the above is true, does that mean exceptions cannot be 
used at all in @nogc code?


This should work:

void foo() @nogc nothrow {
    static const err = new Error("error");
    throw err;
}

Bye,
bearophile


Re: DIP60: @nogc attribute

2014-04-16 Thread Peter Alexander via Digitalmars-d

On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:

http://wiki.dlang.org/DIP60

Start on implementation:

https://github.com/D-Programming-Language/dmd/pull/3455


Some initial thoughts:

* Is it perhaps too early to introduce this? We don't have 
allocators yet, so it can be quite hard to avoid the GC in some 
situations.


* Many Phobos functions use 'text' and 'format' in asserts. What 
should be done about those?


* Does @nogc => nothrow? If I'm not mistaken, throw must throw 
a GC-allocated Throwable.


* If the above is true, does that mean exceptions cannot be used 
at all in @nogc code?


* I worry about the number of attributes being added. Where do we 
draw the line? Are we going to add every attribute that someone 
finds a use for? @logicalconst @nonrecursive @nonreentrant 
@guaranteedtermination @neverreturns


Re: Finally full multidimensional arrays support in D

2014-04-16 Thread Stefan Frijters via Digitalmars-d
On Monday, 17 March 2014 at 17:39:41 UTC, Denis Shelomovskij 
wrote:
Multidimensional arrays indexing and slicing syntax is finally 
added [1] (thanks to Kenji Hara). So it was a good cause to 
update my multidimensional arrays library implementation and 
add support for the new syntax. So here we are: [2].


Also should we add it to the standard library?

[1] https://github.com/D-Programming-Language/dmd/pull/443
[2] 
http://denis-sh.bitbucket.org/unstandard/unstd.multidimarray.html


First of all, thank you very much for making such nice additions 
to D available for general use. I finally got around to giving 
this a spin. I'm using it for a proof-of-concept HPC simulation 
code written in D (currently mostly experimenting with D's 
features), and as such I'm interfacing with the C MPI library to 
communicate between processes. The basis of the simulation is a 
3D lattice, so I was eagerly awaiting a nice solution in D. So 
far I've run into two things while using your library. The first 
is that I need to provide void pointers to the data to the MPI 
functions, so I currently hacked your code to make the _data 
storage array publicly accessible and that seems to work. To give 
an idea, I currently have code like this (just a snippet):


arr = multidimArray!T(nxH, nyH, nzH);
// [...] fill the array with data
// Prepare a buffer to receive a slice from another process.
rbuffer = multidimArray!T(haloSize, nyH, nzH);
// Prepare a buffer to send a slice to another process.
sbuffer = arr[$-2*haloSize-1 .. $-haloSize-1, 0..$, 0..$].dup;
// Here I now use the pointer of the storage arrays to send the buffer around.
MPI_Sendrecv(sbuffer._data.ptr, nyH * nzH, MPI_INT, M.nbx[1], 0,
    rbuffer._data.ptr, nyH * nzH, mpiType, M.nbx[0], 0, M.comm,
    &mpiStatus);

// Put the buffer in the correct spot in the main array.
arr[0..haloSize, 0..$, 0..$] = rbuffer;

Am I missing a nicer way to accomplish this? I like the 
compactness of the code (compared to what I'm currently used to 
with our F90 simulation code). Secondly, the property that 
returns the dimensions of the array is called 'dimentions' (with 
a t), this should be fixed.


Regards,

Stefan


A crazy idea for accurately tracking source position

2014-04-16 Thread Alix Pexton via Digitalmars-d

TL;DR

Here is some under-documented, incomplete and untested code.
CAVEAT IMPLEMENTOR: some details have been omitted to keep things brief!

struct someRange
{
    ulong seq;
    bool fresh = true;
    long line;
    dchar front;
    // and let's just pretend that there is
    // somewhere for more characters to come from!

    void popFront()
    {
        // advance by whatever means to update front.
        if (front.isNewline)
        {
            ++line;
            fresh = true;
            return;
        }
        if (fresh)
        {
            if (front.isTab)
            {
                seq = 0xffff_ffff_ffff_fffeL; // all 1s except the lsb
            }
            else
            {
                seq = 0x1L;
            }
            fresh = false;
        }
        else
        {
            seq <<= 1;
            if (!front.isTab)
            {
                seq |= 0x1L;
            }
        }
    }

    // and the rest...
}


A long time ago I wrote a very rudimentary XML lexer/parser in Pascal. 
At the time I thought it was a good idea to point to the exact character 
where an error was detected. Knowing that tabs could be involved and that 
they can have different widths, I stored the line position as a 
tabs/spaces tuple, because no one would ever put a space anywhere but at 
the beginning of the line, right!


Jump forward a decade or so and I know better. I.e. just knowing the 
number of tabs and spaces isn't enough, because when tabs can be 
anywhere, sometimes the spaces are swallowed up. What is needed is a 
string of tabs and spaces that matches the sequence of tabs and non-tabs 
in the source. Such could be built while lexing for immediate use if an 
invalid character is encountered and then thrown away at each newline, 
but it would not be practical to store that much information in every 
token. The sequence could be split between tokens from a single line, 
with each token having just the pattern since the last token, but 
reversing the reading of the tokens in order to reconstruct the sequence 
or building it while parsing just in case it is needed are at best 
impractical.


What would help would be a way of fitting that sequence of tabs and 
spaces into a smaller format.


Here is the crazy part...

Using a ulong (or longer if possible) to store our tab sequence...

Upon starting to lex a fresh line we check the first character: if it's a 
tab we set all but the lsb to 1. If that first char is anything other 
than a tab, we set all bits but the lsb to 0.


On each subsequent character in the line we shift the sequence left 1 
bit and set the new lsb to 0 if it's a tab and 1 if it is anything else.


If the line is longer than the number of bits in our ulong[er] we throw 
our toys out of the pram and go home in tears.


Any time a token is emitted the current (or most relevant) value of the 
ulong can be stored in it.


To decode the sequence if it is needed, we check the msb. If it is 1 
then the first character is a tab, and we shift the whole value left until 
the first 0 reaches the msb (keeping track of how many shifts we do so as 
not to reach Apple headquarters), then one more shift to account for 
the first character. If the msb is 0 then the first character is a space 
and we shift left until one past the first 1. For each remaining bit we 
add a tab when the msb is 0 and a space when it is 1.
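
A sketch of that decoding step (illustrative only; assumes the packed value
was built as described above: 0-bit = tab, 1-bit = non-tab, fill bits are the
complement of the first character's bit):

string decodeSeq(ulong seq)
{
    import std.array : appender;
    auto res = appender!string();

    // msb == 1 means the fill is 1s, i.e. the first character was a tab.
    immutable bool firstIsTab = ((seq >> 63) & 1) != 0;
    res.put(firstIsTab ? '\t' : ' ');

    // Skip the fill bits; the first flip is the first character's own bit.
    immutable ulong fillBit = firstIsTab ? 1 : 0;
    int used = 1; // bits consumed, counting the first character's bit
    while (((seq >> 63) & 1) == fillBit)
    {
        seq <<= 1;
        ++used;
    }
    seq <<= 1; // discard the first character's own bit

    // Remaining characters, msb first: 0 => tab, 1 => space.
    foreach (i; 0 .. 64 - used)
    {
        res.put((((seq >> 63) & 1) != 0) ? ' ' : '\t');
        seq <<= 1;
    }
    return res.data;
}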


Thus we have reconstructed a string that when displayed above or below 
the line that generated it, will end at the correct character, 
regardless of the number of tabs or spaces used to represent them. Hurrah!


For any type of source where lexed lines regularly contain more 
characters than there are bits in our longest integer, this technique 
will fail. However, I reason that in most cases the lines that are all 
non-tabs and full width are often not parsed (i.e. they are comments 
etc). Lines that start hard to the left are often short, and lines that 
reach the right are often the ones with many tabs in them. In other 
words, many lines that are too wide are not too long.


Am I on to something or should I be on something?

A...


Re: "Spawn as many thousand threads as you like" and D

2014-04-16 Thread Russel Winder via Digitalmars-d
On Wed, 2014-04-16 at 16:06 +0200, Sönke Ludwig via Digitalmars-d wrote:
[…]
> 
> I agree, but I also wonder why you still keep ignoring vibe.d. It 
> achieves exactly that - right now! Integration with std.concurrency 
> would be great, but at least for now it has an API compatible 
> replacement that can be merged later when Sean's pull request is done.

Vibe.d is a single-thread event system, which is great (*) for the sort
of problems Node.js, Vert.x, Tornado, Flask, Sinatra, Ratpack are used
for. The point here is that CSP and dataflow are concurrency and
parallelism models that D has not got.

std.concurrency is a heavyweight thread system so not really useful
except to build thread pools and fork-join infrastructure. (OK that is a
gross oversimplification.) std.parallelism is a great beginning of data
parallelism on a thread pool. It needs more work. The thread pool needs
to be brought front and centre, as a separate thing usable by other
modules. On this CSP, dataflow, actors, etc. can be built.

Due to other commitments, not least leading a massive update of GPars, I
cannot lead on working on D things. If however someone can drive, I will
certainly contribute, along the lines as I did when David Simcha wrote
std.parallelism – mostly as a tester and reviewer.

This also raises the issue of the D infrastructure having an obvious and
documented way for people to contribute to things like std.parallelism.
Whatever the truth, the perception is that to work on something like
std.parallelism, you have to fork the whole of Phobos. In fact,
std.parallelism is a single file 4,500 lines long (**).


(*) It would be even better if it supported mocking for unit tests ;-)

(**) I am still not a fan of single files this big.
-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder



Re: "Spawn as many thousand threads as you like" and D

2014-04-16 Thread Russel Winder via Digitalmars-d
On Wed, 2014-04-16 at 13:59 +, Bienlein via Digitalmars-d wrote:
> When looking at the success of Go it seems to me that it is 
> caused to a large extend by the kind of multi-threading Go offers 
> which is something like "spawn as many thousand threads as you 
> like".

A critically important thing here is the separation of goroutine and
thread, i.e. the concurrency and parallelism is about abstraction of the
programmers' expression and the underlying implementation — thread pool.
Go is not about multi-threading, it is about using goroutines and
programmers don't care about threads at all (to a third approximation). 

> Being able to spawn as many thousand threads as needed without 
> caring about it seems to be an important aspect for being an 
> interesting offering for developing server-side software. It 
> would be nice if D could also play in that niche. This could be 
> some killer domain for D beyond being a better C++.

Go does not spawn thousands of threads, see above :-)

C++11, and increasingly C++17, are making C++ into a totally different
language than 1980s C++ and C++98. It even has proposals for a
reasonable concurrency and parallelism layer over the now standardized
threads. Sadly though there are some really bad proposals being made to
the standards committee. C++ is suffering from the fact that people with
the right ideas are not proposing them for C++. Anthony Williams, Roger
Orr, Jonathan Wakeley and others are doing as good a job as they can
trying to make good stuff so there is some hope it will turn out well.
It is a great shame that the same effort is not going into improving D's
offerings here: D is in a far better position to do so much better that
C++ and what it has.

> While Go uses channels and goroutines D takes some actor-style 
> approach when spawning threads. This is also fine, but the 
> problems remains that you cannot create as many D kernel threads 
> just as you like. Maybe this could be something to improve in D 
> and promote in order to give D a further boost. I don't mean to 
> be pushy, it's just about exchanging ideas ;-). The 
> FiberScheduler by Sean Kelly could achieve something in that 
> direction. What do you think?

Go doesn't spawn threads, see above :-)

D would be significantly improved by a CSP implementation (which is
what goroutines and channels realize). Also a fork-join framework would
be a useful addition.
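
To make the contrast concrete, here is roughly what the message-passing
style looks like with today's std.concurrency (a sketch; note that each
spawn here is a real kernel thread, i.e. exactly the heavyweight model
under discussion, not a cheap goroutine):

import std.concurrency;
import std.stdio;

void worker(Tid owner)
{
    foreach (i; 0 .. 3)
        owner.send(i);      // roughly: ch <- i
    owner.send("done");
}

void main()
{
    auto tid = spawn(&worker, thisTid);
    for (bool running = true; running;)
    {
        receive(
            (int i)    { writeln("got ", i); },  // roughly: <-ch
            (string s) { running = false; }
        );
    }
}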

The problem is resources. Vibe.d, std.parallelism, std.concurrency
provide some tools but for CSP and dataflow, no-one has scratched the
itch.  I had been intending to do one project with D and GtkD, but ended
up switching to Go + QML because it was easier to do that than write a
CSP system for D.  For another C++ and Gtk project, it is easier to wait
for early C++17 implementations than it is to port the code to D (*).


(*) There is an element of "how many programmers" risk here, not just a
technical one. There are many more C++ programmers around who can use
new C++ style and features than there are D programmers.

-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Roadm: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder



Re: DIP60: @nogc attribute

2014-04-16 Thread bearophile via Digitalmars-d

Walter Bright:

Not practical. malloc() is only one way of allocating memory - 
user defined custom allocators are commonplace.


OK, then I'll have to close my ER about @noheap.

Bye,
bearophile


Re: DIP60: @nogc attribute

2014-04-16 Thread Adam D. Ruppe via Digitalmars-d

On Wednesday, 16 April 2014 at 17:39:32 UTC, Walter Bright wrote:
Not practical. malloc() is only one way of allocating memory - 
user defined custom allocators are commonplace.


What I want is a __trait that scans for all call expressions in a 
particular function and returns all those functions.


Then, we can check them for UDAs using the regular way and start 
to implement library defined things like @safe, @nogc, etc. (safe 
and gc are a bit different because they also are affected by 
built-in language features, not just functions, but the same idea 
of recursively scanning for an annotation in the function body).


Of course, this wouldn't always be perfect; separate compilation 
could be used to lie about or hide annotations in a function 
prototype, but meh, I don't really care about that. The main use 
for me would be static asserts right under the function 
definition anyway.
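
The UDA-recognition half of this already works today; only the
call-expression-scanning trait is missing. A sketch (nasaverified borrowed
from the earlier example; hasUda is written out here rather than taken from
a library):

struct nasaverified {}

@nasaverified void test() {}
void boe() {}

template hasUda(alias fn, alias uda)
{
    import std.typetuple : anySatisfy;
    enum isMatch(alias a) = __traits(isSame, a, uda);
    // getAttributes yields the function's UDAs as a tuple; scan it.
    enum hasUda = anySatisfy!(isMatch, __traits(getAttributes, fn));
}

static assert( hasUda!(test, nasaverified));
static assert(!hasUda!(boe, nasaverified));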


Re: DIP60: @nogc attribute

2014-04-16 Thread Walter Bright via Digitalmars-d
On 4/16/2014 1:49 AM, "Ola Fosheim Grøstad" wrote:

Btw, I think you should add @noalloc also which prevents both new and malloc. It
would be useful for real time callbacks, interrupt handlers etc.


Not practical. malloc() is only one way of allocating memory - user defined 
custom allocators are commonplace.


Re: DIP60: @nogc attribute

2014-04-16 Thread Walter Bright via Digitalmars-d

On 4/16/2014 4:50 AM, Manu via Digitalmars-d wrote:

I am convinced that ARC would be acceptable,


ARC has very serious problems with bloat and performance.

Every time a copy is made of a pointer, the ref count must be dealt with, 
engendering bloat and slowdown. C++ deals with this by providing all kinds of 
ways to bypass doing this, but the trouble is such is totally unsafe.


Further problems with ARC are inability to mix ARC references with non-ARC 
references, seriously hampering generic code.



and I've never heard anyone suggest
any proposal/fantasy/imaginary GC implementation that would be acceptable...


Exactly.



In complete absence of a path towards an acceptable GC implementation, I'd
prefer to see people that know what they're talking about explore how
refcounting could be used instead.
GC backed ARC sounds like it would acceptably automate the circular reference
catching that people fuss about, while still providing a workable solution for
embedded/realtime users; disable(/don't link) the backing GC, make sure you mark
weak references properly.


I have, and I've worked with a couple others here on it, and have completely 
failed at coming up with a workable, safe, non-bloated, performant way of doing 
pervasive ARC.




Re: DIP60: @nogc attribute

2014-04-16 Thread Walter Bright via Digitalmars-d

On 4/16/2014 8:01 AM, qznc wrote:

However, what is still an open issue is that @nogc can be stopped by allocations
in another thread. We need threads which are not affected by stop-the-world. As
far as I know, creating threads via pthreads C API directly achieves that, but
integration with @nogc could provide more type safety. Stuff for another DIP?


That's a completely separate issue.


Re: DIP60: @nogc attribute

2014-04-16 Thread Walter Bright via Digitalmars-d

On 4/16/2014 2:03 AM, JN wrote:

I'd have to agree. I doubt @nogc will change anything, people will just start
complaining about limitations of @nogc (no array concat, having to use own
libraries which may be incompatible with phobos). The complaints mostly come
from the fact that D wants to offer a choice, in other languages people just
accept what they have. You don't see C# developers complaining much about having
to use GC, or C++ programmers all over the world asking for GC. Well, most of
the new games (Unity3D) are done in C# nowadays and people live with it even
though game development is one of the biggest C++ loving and GC hating crowd
there is.


We have to try. Especially since @nogc is a low risk thing - it doesn't break 
anything, and is a fairly simple addition to the compiler.




Another issue is the quality of D garbage collector, but adding alternative
memory management ways doesn't help, fragmenting the codebase.


No improvement to the GC is acceptable to people who want to manually manage 
memory. That much is quite clear.


Re: DIP60: @nogc attribute

2014-04-16 Thread Gary Willoughby via Digitalmars-d

On Wednesday, 16 April 2014 at 17:22:02 UTC, Gary Willoughby
wrote:

On Tuesday, 15 April 2014 at 21:41:37 UTC, Brad Anderson wrote:

Yes, please. Too few of the attributes have inverse attributes.

Being able to stick your defaults up at the top of your module 
and then overriding them only when needed would be very nice 
and make the code a lot more tidy.


I actually think this will make code harder to read. e.g.:

@nogc:

void foo()
{
   ...
}

void bar() @gc
{
   ...
}

@gc
{
   void baz() @nogc
   {
      ...
   }
}

@gc:

void quxx() @nogc
{
   ...
}

Ewww... nasty stuff.


My point was that opposite attributes complicate the code hugely.


Re: D gc on local objects

2014-04-16 Thread Paulo Pinto via Digitalmars-d

On 16.04.2014 18:51, Adam D. Ruppe wrote:

This is one of the things the `scope` storage class on local variables
can do, but since it isn't implemented properly, it is not memory safe
and thus its usage is deprecated.

I really really really want to see scope be fully implemented, including
not allowing a reference to the variable to escape the scope, but this
is easier said than done.


Not allowing a variable to escape its scope should be doable with 
dataflow analysis, if I recall correctly.


Now, I don't have any idea how easy/simple it is to implement it in the 
existing code base, in a compatible way across all three compilers.


So, just speaking from my soapbox.

--
Paulo


Re: on interfacing w/C++

2014-04-16 Thread John Colvin via Digitalmars-d

On Wednesday, 16 April 2014 at 17:16:07 UTC, Daniel Murphy wrote:
If you are using 'new' in C++ it will not use D's GC heap, 
unless you overrode the global 'new' operator or something.


Which, if you did, would enable you to use C++ classes from D 
somewhat transparently, no?


Re: DIP60: @nogc attribute

2014-04-16 Thread Gary Willoughby via Digitalmars-d

On Tuesday, 15 April 2014 at 21:41:37 UTC, Brad Anderson wrote:

Yes, please. Too few of the attributes have inverse attributes.

Being able to stick your defaults up at the top of your module 
and then overriding them only when needed would be very nice 
and make the code a lot more tidy.


I actually think this will make code harder to read. e.g.:

@nogc:

void foo()
{
   ...
}

void bar() @gc
{
   ...
}

@gc
{
   void baz() @nogc
   {
      ...
   }
}

@gc:

void quxx() @nogc
{
   ...
}

Ewww... nasty stuff.


Re: on interfacing w/C++

2014-04-16 Thread Daniel Murphy via Digitalmars-d
"monnoroch"  wrote in message news:kqjrnqecnfejmiwnk...@forum.dlang.org... 


What about namespaces?


Zero support currently.


Re: D gc on local objects

2014-04-16 Thread Adam D. Ruppe via Digitalmars-d

On Wednesday, 16 April 2014 at 17:14:55 UTC, John Colvin wrote:
I would love to have a "scope" that works properly, with or 
without blade-guards to stop me chopping off my hands when the 
function returns.


The blade guards are the important part though: if you just want 
the allocation pattern, you can do that fairly easily yourself 
with plain library code and stuff like scope(exit) or RAII 
destructors.
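
For instance, a minimal sketch of the scope(exit) version:

import core.stdc.stdlib : malloc, free;

void useScratch()
{
    auto p = cast(ubyte*) malloc(1024);
    scope(exit) free(p);       // released when the scope ends, no GC
    auto buf = p[0 .. 1024];
    buf[0] = 42;               // ... use the buffer ...
}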


Re: on interfacing w/C++

2014-04-16 Thread Daniel Murphy via Digitalmars-d
"Moritz Maxeiner"  wrote in message 
news:nadswyordzxwa...@forum.dlang.org...


That sounds very cool, I've had a look at [1] and [2], which seem to be 
the two files with the new C++ class interfacing. As far as I could tell, 
you need to create any instances of C++ classes with C++ code / you don't 
bind to the constructors directly from D and the new instance will not be 
managed by D's GC? Because if I used this new interfacing for e.g. llvm-d, 
I need to be sure that D's GC won't touch any of the instances under any 
circumstances, since they are freed by LLVM's internal logic, which the 
GC cannot track.


This is correct, if you want to construct a class implemented in C++ you 
will need to call a factory function also implemented in C++, and the same 
for the other direction.


If you are using 'new' in C++ it will not use D's GC heap, unless you 
overrode the global 'new' operator or something.


The simplest model is to do all lifetime management in the original language 
and ensure that objects stay alive while there are live references in the 
other language. 
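
To make that concrete, a hedged sketch of the factory model; Widget, 
createWidget and destroyWidget are invented names, not an existing API:

// C++ side, compiled separately:
//     class Widget { public: virtual int size(); };
//     Widget* createWidget();         // factory
//     void destroyWidget(Widget*);    // disposal

// D side:
extern (C++) interface Widget
{
    int size();                 // maps to the C++ virtual function
}
extern (C++) Widget createWidget();
extern (C++) void destroyWidget(Widget);

void use()
{
    auto w = createWidget();        // allocated by C++; D's GC never
    scope(exit) destroyWidget(w);   // sees it, so C++ must free it
    auto n = w.size();
}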



Ranges again

2014-04-16 Thread John Colvin via Digitalmars-d

Construction and Initialisation
===============================

As previously discussed, it's annoying having the conflation of 
construction and initialisation. It leads to two options:


1) construction is initialisation. This is nasty for some ranges 
where the range is truly destructive, amongst a whole load of 
other problems.


2) lazy initialisation. This is annoying to implement (hello 
bugs) and potentially slow.



How about a convention to overcome this:


A new, optional, range primitive named initialize or prime is 
introduced that does the initialisation.


Constructors are not guaranteed to initialise a range if an 
initialize/prime member is present.


factory functions (e.g. std.range.iota for std.range.Iota) call 
the constructor, followed by initialise/prime if present, unless 
explicitly documented otherwise.


All entities that accept a range expect that range to be 
pre-initialised unless explicitly documented otherwise.
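
Sketched as code, the convention might look like this (FileLines, 
fileLines and prime are illustrative names, not an actual Phobos 
proposal):

struct FileLines
{
    private string path;
    private string current;
    private bool   done;

    void prime()          // the optional primitive: the destructive
    {                     // first read happens here, not in the ctor
        popFront();
    }

    @property bool empty()   { return done; }
    @property string front() { return current; }
    void popFront()
    {
        done = true;      // real code: read the next line into current
    }
}

FileLines fileLines(string path)   // factory: construct, then prime
{
    auto r = FileLines(path);
    r.prime();
    return r;
}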



What we do:

Document it. Publicise it.

Change all phobos ranges that are implemented as private 
aggregates (struct/class) with public factory functions to the 
new convention. Note that many will not need any change at all as 
they have no nasty initialisation needs. This is a non-breaking 
change.


Consider changing some phobos ranges with public aggregates, 
where the improvement is sufficient to justify the potential 
breakage for people using the raw constructors.


Nothing else. All current code with its various initialisation 
conventions will function exactly as before; any breakage is 
strictly controlled on a case-by-case basis for each individual 
range type.



Open Questions:
Does save always return an initialised range?


Iteration Protocol
==================

Proposal:

A) The !empty -> front (as many times as you like*) -> popFront 
-> repeat sequence is used for all ranges.


B) empty and front must appear to do what they say they do** when 
used according to the above, but may internally do whatever they 
want. One exception: Consecutive calls to front are *not* 
guaranteed to return the same value, as this doesn't play well 
with random access ranges, in particular std.algorithm.map.***


C) as a consequence of B: empty and front are both independently 
optional. Of course it is still an error to call front or 
popFront on an empty range.


D) Ranges are not guaranteed to be internally buffered (e.g. see 
caveat about front in B), but many will be for performance and/or 
correctness (see WRT streams below). Inevitably some sort of 
caching wrapper and a byEfficientChunk primitive are likely to 
emerge, whether standardised or not.


* not deliberately by design, more just as an accidental 
side-effect of B.
** non-destructive, does not advance the range, does what it says 
on the tin.
*** Also, a range might work with data that is being mutated 
while the range is iterating/being indexed/sliced.


WRT streams:

empty for streams requires (attempting to) read from the stream, 
which is in turn destructive. Thus, implementations of streams as 
ranges *must* be buffered, to preserve the illusion of empty 
being a simple non-destructive check. My understanding is that 
they will always be buffered on some level anyway for performance.
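
A sketch of the resulting shape, loosely modelled on std.stdio.byChunk; 
empty has to read ahead, and whatever it read is cached for front:

struct Buffered(alias readSome)   // readSome() returns ubyte[], empty at EOF
{
    private ubyte[] buf;
    private bool    primed;

    private void fill() { buf = readSome(); primed = true; }

    @property bool empty()    { if (!primed) fill(); return buf.length == 0; }
    @property ubyte[] front() { if (!primed) fill(); return buf; }
    void popFront()           { fill(); }
}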



General notes on everything above
=================================
Ranges are not the solution to every possible iteration problem. 
They are not a panacea. I believe that attempts to overly 
generalise them will cause more harm than good. With some 
sensible conventions we can have a really great tool, but there 
will always be some situations where a different approach will be 
more performant, intuitive or safer. Yes, that means you 
sometimes can't use std.algorithm and std.range, which is sad, 
but you can't have everything.


Those who want to really squeeze every last bit of performance 
out of ranges or make use of non-conforming range-like objects 
can use UDAs to tag various properties of their types, and then 
have their generic algorithms special-case on those attributes 
for performance and/or correctness. This enables the concept of 
ranges to be extended without overly straining the core mechanics.
There might even be a place for some standardised UDAs in 
std.range and some special-casing in std.algorithm/range.


However we proceed, we need an "I don't satisfy this range 
requirement even though it looks like I do" indicator. I suggest 
using a UDA, introduced as standard (in std.range, not the 
language). E.g. @(std.range.notRandomAccessRange) struct MyRange { ... } 
would fail isRandomAccessRange even if it contains the necessary 
primitives.
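
Sketched below; the marker and the check are assumptions, not an 
existing std.range feature:

enum notRandomAccessRange;    // would live in std.range

@(notRandomAccessRange)
struct MyRange
{
    // empty/front/popFront/opIndex/length all present, but opIndex
    // is O(n), so the type opts out of random-access treatment
}

// isRandomAccessRange would then gain roughly one extra clause:
//     ... existing primitive checks ...
//     && !hasUDA!(R, std.range.notRandomAccessRange)
// (hasUDA, or a hand-written equivalent over __traits(getAttributes))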




P.S.
Most of the parts of this have probably been proposed by other 
people before, which I've probably read and forgotten about. No 
originality claimed here :)


Re: D gc on local objects

2014-04-16 Thread John Colvin via Digitalmars-d

On Wednesday, 16 April 2014 at 16:51:57 UTC, Adam D. Ruppe wrote:
This is one of the things the `scope` storage class on local 
variables can do, but since it isn't implemented properly, it 
is not memory safe and thus its usage is deprecated.


I really really really want to see scope be fully implemented, 
including not allowing a reference to the variable to escape 
the scope, but this is easier said than done.


I would love to have a "scope" that works properly, with or 
without blade-guards to stop me chopping off my hands when the 
function returns.


Re: D gc on local objects

2014-04-16 Thread Adam D. Ruppe via Digitalmars-d
This is one of the things the `scope` storage class on local 
variables can do, but since it isn't implemented properly, it is 
not memory safe and thus its usage is deprecated.


I really really really want to see scope be fully implemented, 
including not allowing a reference to the variable to escape the 
scope, but this is easier said than done.


D gc on local objects

2014-04-16 Thread monnoroch via Digitalmars-d

I often see, that D developers say something like "remove
allocations from std lib", and it seems, that the main reason to
do it is eliminate gc calls.
What about the idea, that local objects do not use gc at all?
Maybe all temporary variables could be destroyed just like in C++,
when out of scope without stop-the-world?
Here's a silly example:

bool hasDot(string s1, string s2) {
 import std.algorithm : canFind;
 auto tmp = s1 ~ s2;       // concatenation is ~, not +
 return tmp.canFind(".");
}

Clearly, the tmp variable allocates, but there is no point to do
it via gc, since all memory is allocated and used in specific
scope.

What if dmd could find those variables and switch gc allocation to
just malloc+constructor call, and destructor+free call at the end
of a scope?
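
That lowering can already be written by hand today; a sketch 
(std.typecons.scoped packages the same pattern for classes):

import core.stdc.stdlib : malloc, free;
import std.conv : emplace;

class C { int x; }

void demo()
{
    enum size = __traits(classInstanceSize, C);
    void[] mem = malloc(size)[0 .. size];
    auto c = emplace!C(mem);                     // ctor call, no GC
    scope(exit) { destroy(c); free(mem.ptr); }   // dtor + free at scope end
    c.x = 42;
}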


Re: on interfacing w/C++

2014-04-16 Thread monnoroch via Digitalmars-d

What about namespaces?


Re: std.stream replacement

2014-04-16 Thread sclytrack via Digitalmars-d
On Saturday, 14 December 2013 at 15:16:50 UTC, Jacob Carlborg 
wrote:

On 2013-12-14 15:53, Steven Schveighoffer wrote:

I realize this is really old, and I sort of dropped off the D cliff 
because all of a sudden I had 0 extra time.

But I am going to get back into working on this (if it's still an issue, 
I still need to peruse the NG completely to see what has happened in the 
last few months).


Yeah, it still needs to be replaced. In this case you can have a 
look at the review queue to see what's being worked on:


http://wiki.dlang.org/Review_Queue



SINK, TAP
---------


https://github.com/schveiguy/phobos/blob/new-io/std/io.d

What about adding a single property named sink or tap depending
on how you want the chain to be. That could be either a struct or
a class. Each sink would provide another interface.


struct/class ArchiveWriter(SINK)
{
@property sink  //pointer to sink
}



writer.sink.sink.sink
arch.sink.sink.sink.open("filename");


ArchiveReader!(InputStream) * reader;


"Warning: As usual I don't know what I'm talking about."


Re: DIP60: @nogc attribute

2014-04-16 Thread sclytrack via Digitalmars-d

On Wednesday, 16 April 2014 at 10:13:06 UTC, bearophile wrote:

JN:

I doubt @nogc will change anything, people will just start 
complaining about limitations of @nogc


Having a way to say "this piece of program doesn't cause heap 
activity" is quite useful for certain pieces of code. It makes a 
difference in both performance and safety.
But not being able to call core.stdc.stdlib.alloca in a "@nogc 
pure" function subtree is not good.


Bye,
bearophile


What about adding custom annotations that don't do any checking by
themselves? Like a @nogc that doesn't actually verify that
~ is not used for strings.

void hello() require(@nogc)
{

}

Just a verification by the compiler that you use only routines
that are marked with certain annotations.

void boe()
{
}

@(nasaverified)
void test()
{
}

//

void hello() require(@(nasaverified))
{
  test(); // ok
  boe();  // not ok.
}
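
A rough approximation that compiles today, without new syntax; the 
check is crude in that it only tests that some annotation is present:

enum nasaverified;

@(nasaverified) void test() {}
void boe() {}

void hello()
{
    // the compiler won't walk the body for us, so every call site
    // carries its own assert; exactly the boilerplate that
    // require(@(nasaverified)) would remove
    static assert(__traits(getAttributes, test).length > 0);
    test();
    // the same assert over boe would fail:
    // static assert(__traits(getAttributes, boe).length > 0);
}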


Re: "Spawn as many thousand threads as you like" and D

2014-04-16 Thread Sönke Ludwig via Digitalmars-d

On 16.04.2014 16:43, Bienlein wrote:

On Wednesday, 16 April 2014 at 14:21:03 UTC, Sönke Ludwig wrote:


I still don't understand what you mean by "distributed". Spawning
50.000 tasks:

import vibe.core.core;
import std.stdio;

void main()
{
foreach (i; 0 .. 50_000)
runTask({
writefln("Hello, World!");
});
}

Alternatively, runWorkerTask will also distribute the tasks among a
set of worker threads, which would be more in line with Go AFAIK.


All right, I see. I spent some time looking at the vibe.d homepage and I
never saw any other code than something like this:

shared static this()
{
 auto settings = new HTTPServerSettings;
 settings.port = 8080;

 listenHTTP(settings, &handleRequest);
}

void handleRequest(HTTPServerRequest req,
HTTPServerResponse res)
{
 res.writeBody("Hello, World!", "text/plain");
}

Not wanting just to be right, but things like that should still be in
some base library of the language and not in some 3rd party library. The
vibe.d homepage says "As soon as a running fiber calls a special yield()
function, it returns control to the function that started the fiber.".
Yielding in the FiberScheduler by Sean Kelly is transparent. That's an
important point to be easy to use, I think. Also the use of libevent is
mentioned. I don't understand what the implications of that exactly are.


It *is* transparent. Once a blocking operation, such as I/O or waiting 
for a message, is triggered, it will implicitly yield. But the text 
indeed doesn't make it very clear. The explicit yield() function is 
meant for (rare) cases where more control is needed, for example during 
lengthy computations.


Libevent is just an abstraction layer above the various asynchronous I/O 
APIs. It provides a platform independent way to get notified about 
finished operations. There is also a native WinAPI based implementation 
that enables integration with GUI applications.




What I mean is that some nice transparent solution in a base library for
the "some ten thousand threads thing" would be nice.


That would indeed be nice to have in the standard library, but it also 
needs to be carefully planned out, as it has a lot of implications on 
the existing code, such as making all blocking functions compatible with 
the fiber based model. Without a full work over it will most likely do 
more harm than good. And having it in a third party library allows for 
evolution until a stable state is reached before starting to introduce 
breaking changes in the standard library.


Re: DIP60: @nogc attribute

2014-04-16 Thread qznc via Digitalmars-d

On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:

http://wiki.dlang.org/DIP60

Start on implementation:

https://github.com/D-Programming-Language/dmd/pull/3455


Good start.

However, what is still an open issue is that @nogc can be stopped 
by allocations in another thread. We need threads which are not 
affected by stop-the-world. As far as I know, creating threads 
via pthreads C API directly achieves that, but integration with 
@nogc could provide more type safety. Stuff for another DIP?


Re: on interfacing w/C++

2014-04-16 Thread Moritz Maxeiner via Digitalmars-d

On Wednesday, 16 April 2014 at 14:00:24 UTC, Daniel Murphy wrote:
"Moritz Maxeiner"  wrote in message 
news:kvzwlecwougswrqka...@forum.dlang.org...


Is this[1] then out of date and I can interface with 
non-virtual methods? Because that's what your post seems to 
imply (unless I misunderstood).


[1] http://dlang.org/cpp_interface.html


Yes.  The best place to look for concrete examples of what is 
supported is probably the C++ tests in the test suite.  (ie 
files containing "EXTRA_CPP_SOURCES")


That sounds very cool, I've had a look at [1] and [2], which seem 
to be the two files with the new C++ class interfacing. As far as 
I could tell, you need to create any instances of C++ classes 
with C++ code / you don't bind to the constructors directly from 
D and the new instance will not be managed by D's GC? Because if 
I used this new interfacing for e.g. llvm-d, I need to be sure 
that D's GC won't touch any of the instances under any 
circumstances, since they are freed by LLVM's internal logic, which 
the GC cannot track.


[1] 
https://github.com/D-Programming-Language/dmd/blob/master/test/runnable/externmangle.d
[2] 
https://github.com/D-Programming-Language/dmd/blob/master/test/runnable/extra-files/externmangle.cpp


Re: "Spawn as many thousand threads as you like" and D

2014-04-16 Thread Dicebot via Digitalmars-d

On Wednesday, 16 April 2014 at 14:16:30 UTC, Bienlein wrote:

On Wednesday, 16 April 2014 at 14:06:13 UTC, Sönke Ludwig wrote:

I agree, but I also wonder why you still keep ignoring vibe.d. 
It achieves exactly that - right now! Integration with 
std.concurrency would be great, but at least for now it has an 
API compatible replacement that can be merged later when 
Sean's pull request is done.


The point is that vibe.d is a distributed solution. Nothing 
wrong about that. But in a single instance of some Go program 
you can >locally< spawn "as many threads as you like" with the 
number of threads easily being 50.000 and more. It seems that 
in the Go community people often do that without going 
distributed. Looks like this makes things a lot simpler when 
writing server-side applications.


Goroutines are not threads, and by calling them that you only 
confuse yourself. Their D counterpart is the fiber, and you can 
definitely spawn 50 000 fibers for a single local thread.
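
For instance, a minimal sketch with core.thread.Fiber; fiber stacks 
are kilobytes rather than megabytes, which is why 50 000 of them fit 
comfortably in one thread:

import core.thread;

void main()
{
    Fiber[] fibers;
    foreach (i; 0 .. 50_000)
        fibers ~= new Fiber({
            // do a slice of work, then hand control back
            Fiber.yield();
        });

    foreach (f; fibers)   // one pass of a trivial round-robin scheduler
        f.call();
}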


It is not the first time you try to advocate some Go features 
without first actually exploring the relevant domain. All you 
describe is already implemented in vibe.d.


Re: "Spawn as many thousand threads as you like" and D

2014-04-16 Thread Bienlein via Digitalmars-d

On Wednesday, 16 April 2014 at 14:21:03 UTC, Sönke Ludwig wrote:

I still don't understand what you mean by "distributed". 
Spawning 50.000 tasks:


import vibe.core.core;
import std.stdio;

void main()
{
foreach (i; 0 .. 50_000)
runTask({
writefln("Hello, World!");
});
}

Alternatively, runWorkerTask will also distribute the tasks 
among a set of worker threads, which would be more in line with 
Go AFAIK.


All right, I see. I spent some time looking at the vibe.d 
homepage and I never saw any other code than something like this:


shared static this()
{
auto settings = new HTTPServerSettings;
settings.port = 8080;

listenHTTP(settings, &handleRequest);
}

void handleRequest(HTTPServerRequest req,
   HTTPServerResponse res)
{
res.writeBody("Hello, World!", "text/plain");
}

Not wanting just to be right, but things like that should still 
be in some base library of the language and not in some 3rd party 
library. The vibe.d homepage says "As soon as a running fiber 
calls a special yield() function, it returns control to the 
function that started the fiber.". Yielding in the FiberScheduler 
by Sean Kelly is transparent. That's an important point to be 
easy to use, I think. Also the use of libevent is mentioned. I 
don't understand what the implications of that exactly are.


What I mean is that some nice transparent solution in a base 
library for the "some ten thousand threads thing" would be nice.


Re: DMD coding style rules?

2014-04-16 Thread Daniel Murphy via Digitalmars-d
"DanielKozákvia Digitalmars-d"  wrote in message 
news:mailman.99.1397656987.2763.digitalmar...@puremagic.com...



Sorry, I posted the coding style for D, not for dmd.


They are similar. 



Re: "Spawn as many thousand threads as you like" and D

2014-04-16 Thread Sönke Ludwig via Digitalmars-d

On 16.04.2014 16:16, Bienlein wrote:

On Wednesday, 16 April 2014 at 14:06:13 UTC, Sönke Ludwig wrote:


I agree, but I also wonder why you still keep ignoring vibe.d. It
achieves exactly that - right now! Integration with std.concurrency
would be great, but at least for now it has an API compatible
replacement that can be merged later when Sean's pull request is done.


The point is that vibe.d is a distributed solution. Nothing wrong about
that. But in a single instance of some Go program you can >locally<
spawn "as many threads as you like" with the number of threads easily
being 50.000 and more. It seems that in the Go community people often do
that without going distributed. Looks like this makes things a lot
simpler when writing server-side applications.


I still don't understand what you mean by "distributed". Spawning 50.000 
tasks:


import vibe.core.core;
import std.stdio;

void main()
{
foreach (i; 0 .. 50_000)
runTask({
writefln("Hello, World!");
});
}

Alternatively, runWorkerTask will also distribute the tasks among a set 
of worker threads, which would be more in line with Go AFAIK.


Re: "Spawn as many thousand threads as you like" and D

2014-04-16 Thread Bienlein via Digitalmars-d

On Wednesday, 16 April 2014 at 14:06:13 UTC, Sönke Ludwig wrote:

I agree, but I also wonder why you still keep ignoring vibe.d. 
It achieves exactly that - right now! Integration with 
std.concurrency would be great, but at least for now it has an 
API compatible replacement that can be merged later when Sean's 
pull request is done.


The point is that vibe.d is a distributed solution. Nothing wrong 
about that. But in a single instance of some Go program you can 
>locally< spawn "as many threads as you like" with the number of 
threads easily being 50.000 and more. It seems that in the Go 
community people often do that without going distributed. Looks 
like this makes things a lot simpler when writing server-side 
applications.


Re: "Spawn as many thousand threads as you like" and D

2014-04-16 Thread Sönke Ludwig via Digitalmars-d

On 16.04.2014 15:59, Bienlein wrote:

When looking at the success of Go, it seems to me that it is caused to a
large extent by the kind of multi-threading Go offers, which is something
like "spawn as many thousand threads as you like".

Being able to spawn as many thousand threads as needed without caring
about it seems to be an important aspect for being an interesting
offering for developing server-side software. It would be nice if D
could also play in that niche. This could be some killer domain for D
beyond being a better C++.

While Go uses channels and goroutines, D takes some actor-style approach
when spawning threads. This is also fine, but the problem remains that
you cannot create as many D kernel threads as you like. Maybe this
could be something to improve in D and promote in order to give D a
further boost. I don't mean to be pushy, it's just about exchanging
ideas ;-). The FiberScheduler by Sean Kelly could achieve something in
that direction. What do you think?

Regards, Bienlein


I agree, but I also wonder why you still keep ignoring vibe.d. It 
achieves exactly that - right now! Integration with std.concurrency 
would be great, but at least for now it has an API compatible 
replacement that can be merged later when Sean's pull request is done.


Re: DIP60: @nogc attribute

2014-04-16 Thread Andrei Alexandrescu via Digitalmars-d

On 4/16/14, 2:03 AM, JN wrote:

On Wednesday, 16 April 2014 at 01:57:29 UTC, Mike wrote:

I don't believe users hesitant to use D will suddenly come to D now
that there is a @nogc attribute.  I also don't believe they want to
avoid the GC, even if they say they do.  I believe what they really
want is to have an alternative to the GC.


I'd have to agree. I doubt @nogc will change anything, people will just
start complaining about limitations of @nogc (no array concat, having to
use own libraries which may be incompatible with phobos). The complaints
mostly come from the fact that D wants to offer a choice, in other
languages people just accept what they have. You don't see C# developers
complaining much about having to use GC, or C++ programmers all over the
world asking for GC. Well, most of the new games (Unity3D) are done in
C# nowadays and people live with it even though game development is one
of the biggest C++ loving and GC hating crowd there is.

Another issue is the quality of D garbage collector, but adding
alternative memory management ways doesn't help, fragmenting the codebase.


My perception is the opposite. Time will tell. -- Andrei


Re: on interfacing w/C++

2014-04-16 Thread Daniel Murphy via Digitalmars-d
"Moritz Maxeiner"  wrote in message 
news:kvzwlecwougswrqka...@forum.dlang.org...


Is this[1] then out of date and I can interface with non-virtual methods? 
Because that's what your post seems to imply (unless I misunderstood).


[1] http://dlang.org/cpp_interface.html


Yes.  The best place to look for concrete examples of what is supported is 
probably the C++ tests in the test suite.  (ie files containing 
"EXTRA_CPP_SOURCES") 



Re: DMD coding style rules?

2014-04-16 Thread Daniel Kozák via Digitalmars-d
On Wed, 16 Apr 2014 13:32:45 +
asman via Digitalmars-d wrote:

> Is there? If so, where is it?

Sorry, I posted the coding style for D, not for dmd.



Re: DMD coding style rules?

2014-04-16 Thread Daniel Kozák via Digitalmars-d
On Wed, 16 Apr 2014 13:32:45 +
asman via Digitalmars-d wrote:

> Is there? If so, where is it?

http://dlang.org/dstyle.html



"Spawn as many thousand threads as you like" and D

2014-04-16 Thread Bienlein via Digitalmars-d
When looking at the success of Go, it seems to me that it is 
caused to a large extent by the kind of multi-threading Go offers, 
which is something like "spawn as many thousand threads as you 
like".


Being able to spawn as many thousand threads as needed without 
caring about it seems to be an important aspect for being an 
interesting offering for developing server-side software. It 
would be nice if D could also play in that niche. This could be 
some killer domain for D beyond being a better C++.


While Go uses channels and goroutines, D takes some actor-style 
approach when spawning threads. This is also fine, but the 
problem remains that you cannot create as many D kernel threads 
as you like. Maybe this could be something to improve in D 
and promote in order to give D a further boost. I don't mean to 
be pushy, it's just about exchanging ideas ;-). The 
FiberScheduler by Sean Kelly could achieve something in that 
direction. What do you think?


Regards, Bienlein


Re: DMD coding style rules?

2014-04-16 Thread Daniel Murphy via Digitalmars-d

"asman"  wrote in message news:maojdlxhbwuhxqrmv...@forum.dlang.org...

Is there? If so, where is it? Also, I see dmd is written in C++ but still 
uses C style to do stuff, e.g. printf() instead of cout. Is this why C++ 
libraries can increase the size (and hurt the performance?) of the 
executable in a way the current style doesn't?


There are no formal rules that I'm aware of (unless somebody added them to 
the wiki) but the (frontend) code is fairly consistent.


The C-ish style is mostly due to age and Walter's preferences.  This has 
turned out to be a huge advantage as it makes conversion to D much easier 
than C++-style code with heavy stl usage etc. 



Re: DIP60: @nogc attribute

2014-04-16 Thread Paulo Pinto via Digitalmars-d
On Wednesday, 16 April 2014 at 11:51:07 UTC, Manu via 
Digitalmars-d wrote:
On 16 April 2014 19:03, JN via Digitalmars-d 
wrote:



On Wednesday, 16 April 2014 at 01:57:29 UTC, Mike wrote:

I don't believe users hesitant to use D will suddenly come to 
D now that
there is a @nogc attribute.  I also don't believe they want 
to avoid the
GC, even if they say they do.  I believe what they really 
want is to have

an alternative to the GC.



I'd have to agree. I doubt @nogc will change anything, people 
will just
start complaining about limitations of @nogc (no array concat, 
having to
use own libraries which may be incompatible with phobos). The 
complaints
mostly come from the fact that D wants to offer a choice, in 
other
languages people just accept what they have. You don't see C# 
developers
complaining much about having to use GC, or C++ programmers 
all over the
world asking for GC. Well, most of the new games (Unity3D) are 
done in C#
nowadays and people live with it even though game development 
is one of the

biggest C++ loving and GC hating crowd there is.

Another issue is the quality of D garbage collector, but adding
alternative memory management ways doesn't help, fragmenting 
the codebase.




I don't really have an opinion on @nogc, but I feel like I'm 
one of the

people that definitely should.
I agree with these comments somewhat though.

I have as big a GC-phobia as anyone, but I have never said the 
proper
strategy is to get rid of it, and I'm not sure how helpful 
@nogc is.
I don't *mind* the idea of a @nogc attribute; I do like the 
idea that I may
have confidence some call tree doesn't invoke the GC, but I 
can't say I'm

wildly excited about this change. I'm not sure about the larger
implications for the language, or what the result of this will 
do to code
at large. I'm not yet sure how annoying I'll find typing it 
everywhere, and

whether that's a worthwhile tradeoff.
I have a short list of things I'm dying for in D for years, and 
this is not
on it. Nowhere near. (rvalue temp -> ref args please! Linear 
algebra in

D really sucks!!)

The thing is, this doesn't address the problem. I *want* to 
like the GC...

I want a GC that is acceptable.

I am convinced that ARC would be acceptable, and I've never 
heard anyone
suggest any proposal/fantasy/imaginary GC implementation that 
would be

acceptable...
In complete absence of a path towards an acceptable GC 
implementation, I'd
prefer to see people that know what they're talking about 
explore how

refcounting could be used instead.
GC backed ARC sounds like it would acceptably automate the 
circular
reference catching that people fuss about, while still 
providing a workable
solution for embedded/realtime users; disable(/don't link) the 
backing GC,

make sure you mark weak references properly.

That would make this whole effort redundant because there would 
be no fear

of call trees causing a surprise collect under that environment.

Most importantly, it maintains compatibility with phobos and 
all other
libs. It doesn't force realtime/embedded users into their own 
little
underground world where they have @nogc everywhere and totally 
different
allocation APIs than the rest of the D universe, producing 
endless
problems interacting with libraries. These are antiquated 
problems we've
suffered in C++ for decades that I _really_ don't want to see 
transfer into

D.

I'd like to suggest experts either imagine/invent/design a GC 
that is
acceptable to the realtime/embedded crowd (seriously, can 
anyone even
_imagine_ a feasible solution in D? I can't, but I'm not an 
expert by any
measure), or take ARC seriously and work out how it can be 
implemented;
what are the hurdles, are they surmountable? Is there room for 
an

experimental fork?



Especially when C# is already blessed on the PS4, although not for 
AAA games of course.


http://tirania.org/blog/archive/2014/Apr-14.html

--
Paulo


DMD coding style rules?

2014-04-16 Thread asman via Digitalmars-d
Is there? If so, where is it? Also, I see dmd is written in C++ 
but still uses C style to do stuff, e.g. printf() instead of cout. 
Is this why C++ libraries can increase the size (and hurt the 
performance?) of the executable in a way the current style doesn't?


Re: on interfacing w/C++

2014-04-16 Thread Moritz Maxeiner via Digitalmars-d

On Tuesday, 15 April 2014 at 11:04:42 UTC, Daniel Murphy wrote:
"Manu via Digitalmars-d"  wrote in 
message 
news:mailman.9.1397553786.2763.digitalmar...@puremagic.com...



Huh? Do methods work now? Since when?


Since I needed them for DDMD.



Is this[1] then out of date and I can interface with non-virtual 
methods? Because that's what your post seems to imply (unless I 
misunderstood).


[1] http://dlang.org/cpp_interface.html



Re:

2014-04-16 Thread Andrej Mitrovic via Digitalmars-d
On 4/16/14, Jonathan M Davis via Digitalmars-d
 wrote:
> Yikes. This is making it much harder to read what comes from who what with
> "via Digitalmars-d" tacked onto the end of everyone's name.

Also, it's broken for some emails. For example this dforum post by
Ola: http://forum.dlang.org/post/twzyvsbjphimphihb...@forum.dlang.org

In gmail it's displayed as coming from " via Digitalmars-d
 " (literally begins with "via").


Re: XCB Bindings?

2014-04-16 Thread Adam D. Ruppe via Digitalmars-d

On Wednesday, 16 April 2014 at 12:24:18 UTC, Jeroen Bollen wrote:
Surely people must have communicated with X in other ways than 
the xlib in D?


I thought about doing XCB a few times but I keep going back to 
xlib because it really isn't that bad.


Re: XCB Bindings?

2014-04-16 Thread Jeroen Bollen via Digitalmars-d

On Tuesday, 15 April 2014 at 23:26:18 UTC, Marco Leise wrote:

Am Tue, 15 Apr 2014 22:38:48 +
schrieb "Jeroen Bollen" :

Does anyone know of any (preferably complete) XCB bindings for 
D?


2 of the 2 people I know who looked into this decided that D
bindings for C bindings for X is a silly exercise, since these
C bindings are 95% generated automatically from XML files and
there is no reason why that generator couldn't be adapted to
directly output D code to create a "XDB".
The remaining 5% (mostly login and X server connection
handling) that have to be written manually never got
implemented though :p


Surely people must have communicated with X in other ways than 
the xlib in D?


Re: DIP60: @nogc attribute

2014-04-16 Thread Manu via Digitalmars-d
On 16 April 2014 19:03, JN via Digitalmars-d wrote:

> On Wednesday, 16 April 2014 at 01:57:29 UTC, Mike wrote:
>
>> I don't believe users hesitant to use D will suddenly come to D now that
>> there is a @nogc attribute.  I also don't believe they want to avoid the
>> GC, even if they say they do.  I believe what they really want is to have
>> an alternative to the GC.
>>
>
> I'd have to agree. I doubt @nogc will change anything, people will just
> start complaining about limitations of @nogc (no array concat, having to
> use own libraries which may be incompatible with phobos). The complaints
> mostly come from the fact that D wants to offer a choice, in other
> languages people just accept what they have. You don't see C# developers
> complaining much about having to use GC, or C++ programmers all over the
> world asking for GC. Well, most of the new games (Unity3D) are done in C#
> nowadays and people live with it even though game development is one of the
> biggest C++ loving and GC hating crowd there is.
>
> Another issue is the quality of D garbage collector, but adding
> alternative memory management ways doesn't help, fragmenting the codebase.
>

I don't really have an opinion on @nogc, but I feel like I'm one of the
people that definitely should.
I agree with these comments somewhat though.

I have as big a GC-phobia as anyone, but I have never said the proper
strategy is to get rid of it, and I'm not sure how helpful @nogc is.
I don't *mind* the idea of a @nogc attribute; I do like the idea that I may
have confidence some call tree doesn't invoke the GC, but I can't say I'm
wildly excited about this change. I'm not sure about the larger
implications for the language, or what the result of this will do to code
at large. I'm not yet sure how annoying I'll find typing it everywhere, and
whether that's a worthwhile tradeoff.
I have a short list of things I'm dying for in D for years, and this is not
on it. Nowhere near. (rvalue temp -> ref args please! Linear algebra in
D really sucks!!)

The thing is, this doesn't address the problem. I *want* to like the GC...
I want a GC that is acceptable.

I am convinced that ARC would be acceptable, and I've never heard anyone
suggest any proposal/fantasy/imaginary GC implementation that would be
acceptable...
In complete absence of a path towards an acceptable GC implementation, I'd
prefer to see people that know what they're talking about explore how
refcounting could be used instead.
GC backed ARC sounds like it would acceptably automate the circular
reference catching that people fuss about, while still providing a workable
solution for embedded/realtime users; disable(/don't link) the backing GC,
make sure you mark weak references properly.

That would make this whole effort redundant because there would be no fear
of call trees causing a surprise collect under that environment.

Most importantly, it maintains compatibility with phobos and all other
libs. It doesn't force realtime/embedded users into their own little
underground world where they have @nogc everywhere and totally different
allocation APIs than the rest of the D universe, producing endless
problems interacting with libraries. These are antiquated problems we've
suffered in C++ for decades that I _really_ don't want to see transfer into
D.

I'd like to suggest experts either imagine/invent/design a GC that is
acceptable to the realtime/embedded crowd (seriously, can anyone even
_imagine_ a feasible solution in D? I can't, but I'm not an expert by any
measure), or take ARC seriously and work out how it can be implemented;
what are the hurdles, are they surmountable? Is there room for an
experimental fork?


Re: What's the deal with "Warning: explicit element-wise assignment..."

2014-04-16 Thread Rene Zwanenburg via Digitalmars-d

On Wednesday, 16 April 2014 at 06:59:30 UTC, Steve Teale wrote:
On Tuesday, 15 April 2014 at 16:02:33 UTC, Steven Schveighoffer 
wrote:


Sorry, I had this wrong. The [] on the left hand side is 
actually part of the []= operator. But on the right hand side, 
it simply is a [] operator, not tied to the =. I erroneously 
thought the arr[] = ... syntax was special for arrays, but I 
forgot that it's simply another operator.


Steve, where do I find the []= operator in the documentation? 
It does not seem to be under Expressions like the other 
operators. Has it just not got there yet?


Steve


It's under op assignment operator overloading:
http://dlang.org/operatoroverloading.html#OpAssign
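
In short, a sketch of the overload in question (currently the 
opSliceAssign route; opIndexAssign-based forms exist as well):

struct Buf
{
    int[4] data;

    void opSliceAssign(int v)   // handles b[] = v
    {
        data[] = v;
    }
}

void demo()
{
    Buf b;
    b[] = 5;    // rewritten to b.opSliceAssign(5)
}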


Re: DIP60: @nogc attribute

2014-04-16 Thread Paulo Pinto via Digitalmars-d

On Wednesday, 16 April 2014 at 09:17:48 UTC, Ola Fosheim Grøstad
wrote:

On Wednesday, 16 April 2014 at 09:03:22 UTC, JN wrote:
I'd have to agree. I doubt @nogc will change anything, people 
will just start complaining about limitations of @nogc (no 
array concat, having to use own libraries which may be 
incompatible with phobos). The complaints mostly come from the 
fact that D wants to offer a choice, in other languages people 
just accept what they have.


The complaints mostly come from the fact that D claims to be a 
system programming language capable of competing with C/C++.


Stuff like @nogc, @noalloc, @nosyscalls etc will make system 
level programming with reuse of code more manageable.


I find it troublesome that D is compared to Java, C# and Go, 
because those languages are not system level programming 
languages.


A system level programming language is a language that can be
used to write a full-stack OS, excluding the required
assembly parts.

There are a few examples of research OSes written in said
languages.

--
Paulo


Re:

2014-04-16 Thread H. S. Teoh via Digitalmars-d
On Wed, Apr 16, 2014 at 01:58:32AM -0700, Jonathan M Davis via Digitalmars-d 
wrote:
> On Monday, April 14, 2014 20:47:06 Brad Roberts via Digitalmars-d wrote:
> > Another flurry of bounces floated through today (which I handled by
> > removing the suspensions, again).  The only practical choice is a
> > fairly intrusive one.  I've enabled the from_is_list option, meaning
> > that the 'from' address from mail originating through the list will
> > be different.  I have no idea how well or badly this will work out,
> > but it's that or stop the mail/news gateway which is an even worse
> > option.  I've set the mailman from_is_list option to 'wrap_message'.
> > This will likely be the first message through the list with that
> > option set, so we'll see how it works out.
[...]
> Yikes. This is making it much harder to read what comes from who what
> with "via Digitalmars-d" tacked onto the end of everyone's name. Bleh.
> The guys at Yahoo are definitely making life harder for the rest of
> us.
[...]

It also has the interesting side-effect of masking the email address of
the sender, which may or may not be a good thing. It's a bummer that
subscribers no longer have the option of sending a private email to
another subscriber, though, since now you don't know the email address
of that person (unless you knew it beforehand).

Is it possible to configure mailman to add a custom header for that?
(X-Original-Sender maybe?)


T

-- 
Why are you blatanly misspelling "blatant"? -- Branden Robinson


Re: DIP60: @nogc attribute

2014-04-16 Thread justme via Digitalmars-d

On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:

http://wiki.dlang.org/DIP60


Walter, the DIP has a funny creation date.


Re: DIP60: @nogc attribute

2014-04-16 Thread bearophile via Digitalmars-d

JN:

I doubt @nogc will change anything, people will just start 
complaining about limitations of @nogc


Having a way to say "this piece of program doesn't cause heap 
activity" is quite useful for certain pieces of code. It makes a 
difference in both performance and safety.
But not being able to call core.stdc.stdlib.alloca in a "@nogc 
pure" function subtree is not good.


Bye,
bearophile


Re: Not receiving emails from issues.dlang.org

2014-04-16 Thread Kevin Lamonte via Digitalmars-d

Thank you, I'm all set now.

On Wednesday, 16 April 2014 at 00:51:37 UTC, Brad Roberts via 
Digitalmars-d wrote:
I've kicked things a little, but need to figure out better why 
it didn't go out on its own.


On 4/15/14, 5:26 PM, Kevin Lamonte via Digitalmars-d wrote:
I am trying to reset my password on the bug tracker in order 
to file a new bug, but the reset emails
appear to be disappearing in the ether.  Anyone else have this 
problem?




Re:

2014-04-16 Thread Jonathan M Davis via Digitalmars-d
On Monday, April 14, 2014 20:47:06 Brad Roberts via Digitalmars-d wrote:
> Another flurry of bounces floated through today (which I handled by removing
> the suspensions,  again).  The only practical choice is a fairly intrusive
> one.  I've enabled the from_is_list option, meaning that the 'from' address
> from mail originating through the list will be different.  I have no idea
> how well or badly this will work out, but it's that or stop the mail/news
> gateway which is an even worse option.  I've set the mailman from_is_list
> option to 'wrap_message'.  This will likely be the first message through
> the list with that option set, so we'll see how it works out.
> 
> I've done this for only the digitalmars.d list so far, but if it works well
> enough, I'll make the  same change to every list.
> 
> If any of you work at yahoo, would you please visit whatever team is
> responsible for deciding to  cause this world of pain and thank them for
> me?

Yikes. This is making it much harder to read what comes from who what with 
"via Digitalmars-d" tacked onto the end of everyone's name. Bleh. The guys at 
Yahoo are definitely making life harder for the rest of us.

- Jonathan M Davis


Re: DIP60: @nogc attribute

2014-04-16 Thread via Digitalmars-d

On Wednesday, 16 April 2014 at 09:03:22 UTC, JN wrote:
I'd have to agree. I doubt @nogc will change anything, people 
will just start complaining about limitations of @nogc (no 
array concat, having to use own libraries which may be 
incompatible with phobos). The complaints mostly come from the 
fact that D wants to offer a choice, in other languages people 
just accept what they have.


The complaints mostly come from the fact that D claims to be a 
system programming language capable of competing with C/C++.


Stuff like @nogc, @noalloc, @nosyscalls etc will make system 
level programming with reuse of code more manageable.


I find it troublesome that D is compared to Java, C# and Go, 
because those languages are not system level programming 
languages.


Re: DIP60: @nogc attribute

2014-04-16 Thread JN via Digitalmars-d

On Wednesday, 16 April 2014 at 01:57:29 UTC, Mike wrote:
I don't believe users hesitant to use D will suddenly come to D 
now that there is a @nogc attribute.  I also don't believe they 
want to avoid the GC, even if they say they do.  I believe what 
they really want is to have an alternative to the GC.


I'd have to agree. I doubt @nogc will change anything, people 
will just start complaining about limitations of @nogc (no array 
concat, having to use own libraries which may be incompatible 
with phobos). The complaints mostly come from the fact that D 
wants to offer a choice, in other languages people just accept 
what they have. You don't see C# developers complaining much 
about having to use GC, or C++ programmers all over the world 
asking for GC. Well, most of the new games (Unity3D) are done in 
C# nowadays and people live with it even though game development 
is one of the biggest C++ loving and GC hating crowd there is.


Another issue is the quality of D garbage collector, but adding 
alternative memory management ways doesn't help, fragmenting the 
codebase.


Re: DIP60: @nogc attribute

2014-04-16 Thread via Digitalmars-d
On Wednesday, 16 April 2014 at 08:46:56 UTC, Ola Fosheim Grøstad 
wrote:
On Tuesday, 15 April 2014 at 23:54:24 UTC, Matej Nanut via 
Digitalmars-d wrote:
This shouldn't be a problem if you plonk @nogc: at the top of 
your own file, as it won't compile anymore if you try to call 
@gc functions.


It is a problem if you are allowed to override @nogc with @gc, 
which is what the post I responded to suggested.


Btw, I think you should add @noalloc also which prevents both new 
and malloc. It would be useful for real time callbacks, interrupt 
handlers etc.


Re: DIP60: @nogc attribute

2014-04-16 Thread via Digitalmars-d
On Tuesday, 15 April 2014 at 23:54:24 UTC, Matej Nanut via 
Digitalmars-d wrote:
This shouldn't be a problem if you plonk @nogc: at the top of 
your own file, as it won't compile anymore if you try to call 
@gc functions.


It is a problem if you are allowed to override @nogc with @gc, 
which is what the post I responded to suggested.


Re: What's the deal with "Warning: explicit element-wise assignment..."

2014-04-16 Thread Kagamin via Digitalmars-d
On Tuesday, 15 April 2014 at 15:59:31 UTC, Steven Schveighoffer 
wrote:
Requiring it simply adds unneeded hoops through which you must 
jump, the left hand side denotes the operation, the right hand 
side does not


Unfortunately, this particular operation is denoted by both sides.

Note -- it would be nice (and more consistent IMO) if arr[] = 
range worked identically to arr[] = arr.


Range or array, there are still two ways it can work. The 
idea is to give the choice to the programmer instead of the compiler.


Sorry, I had this wrong. The [] on the left hand side is 
actually part of the []= operator.


There's no such operator. You can assign a fixed-size array without 
slice syntax.


Re: DIP60: @nogc attribute

2014-04-16 Thread w0rp via Digitalmars-d

On Wednesday, 16 April 2014 at 03:26:24 UTC, Meta wrote:
This would go fairly well with Andrei's idea of passing true or 
false to an attribute to enable or disable it.


@gc(false) void fun() {}



I don't like this because it's hard to read. It's a bad idea. 
Never use booleans in interfaces like that. @gc and @nogc are 
better.


Re: What's the deal with "Warning: explicit element-wise assignment..."

2014-04-16 Thread Steve Teale via Digitalmars-d
On Tuesday, 15 April 2014 at 16:02:33 UTC, Steven Schveighoffer 
wrote:


Sorry, I had this wrong. The [] on the left hand side is 
actually part of the []= operator. But on the right hand side, 
it simply is a [] operator, not tied to the =. I erroneously 
thought the arr[] = ... syntax was special for arrays, but I 
forgot that it's simply another operator.


Steve, where do I find the []= operator in the documentation? 
It does not seem to be under Expressions like the other 
operators. Has it just not got there yet?


Steve