Re: DIP60: @nogc attribute

2014-04-17 Thread Paulo Pinto via Digitalmars-d

On Thursday, 17 April 2014 at 04:19:00 UTC, Manu via
Digitalmars-d wrote:

On 17 April 2014 09:20, Walter Bright via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:


On 4/16/2014 3:42 PM, Adam Wilson wrote:

ARC may in fact be the most advantageous for a specific use
case, but that in no way means that all use cases will see a
performance improvement, and in all likelihood, may see a
decrease in performance.


Right on. Pervasive ARC is very costly, meaning that one will
have to define alongside it all kinds of schemes to mitigate
those costs, all of which are expensive for the programmer to
get right.



GC is _very_ costly. From my experience comparing iOS and
Android, it's clear that GC is vastly more costly and
troublesome than ARC. What measure do you use to make that
assertion?
You're also making a hidden assertion that the D GC will never
improve, since most GC implementations require some sort of work
similar to ref fiddling anyway...


Except Dalvik's GC sucks, because it has hardly been improved 
since Android 2.3 and is very simple compared to any other 
commercial JVM for embedded scenarios, for example Jamaica JVM, 
https://www.aicas.com/cms/.

Even the Windows Phone .NET GC is better, and additionally .NET 
is compiled to native code on the store.

There is a reason why Dalvik is being replaced by ART.

--
Paulo





Re: DIP60: @nogc attribute

2014-04-17 Thread via Digitalmars-d

On Thursday, 17 April 2014 at 06:56:11 UTC, Paulo Pinto wrote:

There is a reason why Dalvik is being replaced by ART.


AoT compilation?

Btw, AFAIK the GC is deprecated for Objective-C from OS X 10.8. 
The App Store requires apps to be GC-free... Presumably for good 
reasons.




Re: DIP60: @nogc attribute

2014-04-17 Thread Paulo Pinto via Digitalmars-d
On Thursday, 17 April 2014 at 08:05:42 UTC, Ola Fosheim Grøstad 
wrote:

On Thursday, 17 April 2014 at 06:56:11 UTC, Paulo Pinto wrote:

There is a reason why Dalvik is being replaced by ART.


AoT compilation?


Not only. Dalvik was left to bit rot and has hardly seen any 
updates since 2.3.




Btw, AFAIK the GC is deprecated for Objective-C from OS X 10.8. 
The App Store requires apps to be GC-free... Presumably for good 
reasons.


Because Apple sucks at implementing GCs.

It was not possible to mix binary libraries compiled with GC 
enabled and ones compiled with it disabled.


I have already mentioned this multiple times here and can hunt 
down the posts with the respective links if you wish.


The forums were full of crash descriptions.

Their ARC solution is based on Cocoa patterns and only applies to 
Cocoa and other Objective-C frameworks with the same lifetime 
semantics.


Basically the compiler inserts the appropriate [... retain] / 
[... release] calls in the places where an Objective-C programmer 
is expected to write them by hand. Additionally a second pass 
removes extra invocation pairs.


This way there are no interoperability issues between compiled 
libraries, as from the point of view of the generated code there 
is no difference other than the optimized calls.


Of course it was sold at WWDC as "ARC is better than GC" and not 
as "ARC is better than the crappy GC implementation we have done".


--
Paulo


Re: Knowledge of managed memory pointers

2014-04-17 Thread Kagamin via Digitalmars-d
You can do anything that fits your task; see RefCounted and 
Unique for examples of how to write smart pointers.


Re: DIP60: @nogc attribute

2014-04-17 Thread via Digitalmars-d

On Thursday, 17 April 2014 at 08:22:32 UTC, Paulo Pinto wrote:
Of course it was sold at WWDC as "ARC is better than GC" and 
not as "ARC is better than the crappy GC implementation we have 
done".


I have never seen a single instance of a GC based system doing 
anything smooth in the realm of audio/visual real time 
performance without being backed by a non-GC engine.


You can get decent performance from GC backed languages on the 
higher level constructs on top of a low level engine. IMHO the 
same goes for ARC. ARC is a bit more predictable than GC. GC is a 
bit more convenient and less predictable.


I think D has something to learn from this:

1. Support for manual memory management is important for low 
level engines.


2. Support for automatic memory management is important for high 
level code on top of that.


The D community is torn because there is some idea that libraries 
should assume point 2 above and then be retrofitted to point 1. I 
am not sure if that will work out.


Maybe it is better to just say that structs are bound to manual 
memory management and classes are bound to automatic memory 
management.


Use structs for low level stuff with manual memory management.
Use classes for high level stuff with automatic memory management.

Then add language support for "union-based inheritance" in 
structs with a special construct for programmer-specified subtype 
identification.


That is at least conceptually easy to grasp and the type system 
can more easily safeguard code than in a mixed model.


Most successful frameworks that allow high-level programming have 
two layers:

- Python/heavy duty c libraries
- Javascript/browser engine
- Objective-C/C and Cocoa / Core Foundation
- ActionScript / c engine

etc

I personally favour the more integrated approach that D appears 
to be aiming for, but I am somehow starting to feel that for most 
programmers that model is going to be difficult to grasp in real 
projects, conceptually. Because they don't really want the low 
level stuff. And they don't want to have their high level code 
bastardized by low level requirements.


As far as I am concerned D could just focus on the structs and 
the low level stuff, and then later try to work in the high level 
stuff. There is no efficient GC in sight and the language has not 
been designed for it either.


ARC with whole-program optimization fits better into the 
low-level paradigm than GC. So if you start from low-level 
programming and work your way up to high-level programming then 
ARC is a better fit.


Ola.


Re: DIP60: @nogc attribute

2014-04-17 Thread Dejan Lekic via Digitalmars-d

On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:

http://wiki.dlang.org/DIP60

Start on implementation:

https://github.com/D-Programming-Language/dmd/pull/3455


This is a good start, but I am sure I am not the only person who 
thought "maybe we should have this on a module level". This would 
allow people to nicely group pieces of the application that 
should not use GC.


Re: DIP60: @nogc attribute

2014-04-17 Thread Paulo Pinto via Digitalmars-d
On Thursday, 17 April 2014 at 08:52:28 UTC, Ola Fosheim Grøstad 
wrote:

On Thursday, 17 April 2014 at 08:22:32 UTC, Paulo Pinto wrote:
Of course it was sold at WWDC as "ARC is better than GC" and 
not as "ARC is better than the crappy GC implementation we 
have done".


I have never seen a single instance of a GC based system doing 
anything smooth in the realm of audio/visual real time 
performance without being backed by a non-GC engine.


You can get decent performance from GC backed languages on the 
higher level constructs on top of a low level engine. IMHO the 
same goes for ARC. ARC is a bit more predictable than GC. GC is 
a bit more convenient and less predictable.


I think D has something to learn from this:

1. Support for manual memory management is important for low 
level engines.


2. Support for automatic memory management is important for 
high level code on top of that.


The D community is torn because there is some idea that 
libraries should assume point 2 above and then be retrofitted 
to point 1. I am not sure if that will work out.


Maybe it is better to just say that structs are bound to manual 
memory management and classes are bound to automatic memory 
management.


Use structs for low level stuff with manual memory management.
Use classes for high level stuff with automatic memory 
management.


Then add language support for "union-based inheritance" in 
structs with a special construct for programmer-specified 
subtype identification.


That is at least conceptually easy to grasp and the type system 
can more easily safeguard code than in a mixed model.


Most successful frameworks that allow high-level programming 
have two layers:

- Python/heavy duty c libraries
- Javascript/browser engine
- Objective-C/C and Cocoa / Core Foundation
- ActionScript / c engine

etc

I personally favour the more integrated approach that D appears 
to be aiming for, but I am somehow starting to feel that for 
most programmers that model is going to be difficult to grasp 
in real projects, conceptually. Because they don't really want 
the low level stuff. And they don't want to have their high 
level code bastardized by low level requirements.


As far as I am concerned D could just focus on the structs and 
the low level stuff, and then later try to work in the high 
level stuff. There is no efficient GC in sight and the language 
has not been designed for it either.


ARC with whole-program optimization fits better into the 
low-level paradigm than GC. So if you start from low-level 
programming and work your way up to high-level programming then 
ARC is a better fit.


Ola.


Looking at the hardware specifications of usable desktop OSs 
built with automatic memory managed system programming languages, 
we have:


Interlisp, Mesa/Cedar, ARC with GC for cycle collection, running 
on Xerox 1132 (Dorado) and Xerox 1108 (Dandelion).


http://archive.computerhistory.org/resources/access/text/2010/06/102660634-05-05-acc.pdf

Oberon running on Ceres,

ftp://ftp.inf.ethz.ch/pub/publications/tech-reports/1xx/070.pdf

Bluebottle, Oberon's successor, has a primitive video editor,
http://www.ocp.inf.ethz.ch/wiki/Documentation/WindowManager?action=download&upname=AosScreenshot1.jpg

Spin running on DEC Alpha, http://en.wikipedia.org/wiki/DEC_Alpha

Any iOS device runs circles around those systems, which is why I 
always like to make clear that it was Apple's failure to make a 
workable GC in a C based language, and not the virtues of pure 
ARC over pure GC.


Their solution has its merits, and as I mentioned, it has the 
benefit of generating the same code while relieving the 
developer of the pain of writing those retain/release calls 
themselves.


A similar approach was taken by Microsoft with their C++/CX and 
COM integration.


So any pure-GC basher now uses Apple's example, with a high 
probability of not knowing the technical issues behind why it 
came to be that way.


--
Paulo


Re: DIP60: @nogc attribute

2014-04-17 Thread Rikki Cattermole via Digitalmars-d

On Thursday, 17 April 2014 at 09:22:55 UTC, Dejan Lekic wrote:

On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:

http://wiki.dlang.org/DIP60

Start on implementation:

https://github.com/D-Programming-Language/dmd/pull/3455


This is a good start, but I am sure I am not the only person 
who thought "maybe we should have this on a module level". This 
would allow people to nicely group pieces of the application 
that should not use GC.


Sure it does.

module mymodule;
@nogc:

 void myfunc(){}

 class MyClass {
 void mymethod() {}
 }


Everything in the above code has @nogc applied to it.
Nothing special about it; you can do this for most attributes, 
like static, final, and UDAs.
Unless of course you can think of another way it could be done? 
Or I've missed something.


Re: DIP60: @nogc attribute

2014-04-17 Thread bearophile via Digitalmars-d

Walter Bright:


http://wiki.dlang.org/DIP60

Start on implementation:

https://github.com/D-Programming-Language/dmd/pull/3455


If I have this program:

__gshared int x = 5;
int main() {
int[] a = [x, x + 10, x * x];
return a[0] + a[1] + a[2];
}


If I compile with all optimizations, DMD produces this x86 asm, 
which contains a call to __d_arrayliteralTX, so main can't be 
@nogc:


__Dmain:
L0:     push EAX
        push EAX
        mov EAX,offset FLAT:_D11TypeInfo_Ai6__initZ
        push EBX
        push ESI
        push EDI
        push 3
        push EAX
        call near ptr __d_arrayliteralTX
        mov EBX,EAX
        mov ECX,_D4test1xi
        mov [EBX],ECX
        mov EDX,_D4test1xi
        add EDX,0Ah
        mov 4[EBX],EDX
        mov ESI,_D4test1xi
        imul ESI,ESI
        mov 8[EBX],ESI
        mov EAX,3
        mov ECX,EBX
        mov 014h[ESP],EAX
        mov 018h[ESP],ECX
        add ESP,8
        mov EDI,010h[ESP]
        mov EAX,[EDI]
        add EAX,4[EDI]
        add EAX,8[EDI]
        pop EDI
        pop ESI
        pop EBX
        add ESP,8
        ret


If I compile that code with ldc2 without optimizations the 
result is similar; there is a call to __d_newarrayvT:


__Dmain:
        pushl   %ebp
        movl    %esp, %ebp
        pushl   %esi
        andl    $-8, %esp
        subl    $32, %esp
        leal    __D11TypeInfo_Ai6__initZ, %eax
        movl    $3, %ecx
        movl    %eax, (%esp)
        movl    $3, 4(%esp)
        movl    %ecx, 12(%esp)
        calll   __d_newarrayvT
        movl    %edx, %ecx
        movl    __D4test1xi, %esi
        movl    %esi, (%edx)
        movl    __D4test1xi, %esi
        addl    $10, %esi
        movl    %esi, 4(%edx)
        movl    __D4test1xi, %esi
        imull   __D4test1xi, %esi
        movl    %esi, 8(%edx)
        movl    %eax, 16(%esp)
        movl    %ecx, 20(%esp)
        movl    20(%esp), %eax
        movl    20(%esp), %ecx
        movl    (%eax), %eax
        addl    4(%ecx), %eax
        movl    20(%esp), %ecx
        addl    8(%ecx), %eax
        leal    -4(%ebp), %esp
        popl    %esi
        popl    %ebp
        ret



But if I compile the code with ldc2 with full optimizations, the 
compiler is able to perform a bit of escape analysis, see that 
the array doesn't need to be allocated, and produce this asm:


__Dmain:
movl__D4test1xi, %eax
movl%eax, %ecx
imull   %ecx, %ecx
addl%eax, %ecx
leal10(%eax,%ecx), %eax
ret

Now there are no memory allocations.

So what's the right behaviour of @nogc? Is it possible to 
compile this main with a future version of ldc2 if I compile the 
code with full optimizations?


Bye,
bearophile


Re: DIP60: @nogc attribute

2014-04-17 Thread bearophile via Digitalmars-d

Adam D. Ruppe:

What I want is a __trait that scans for all call expressions in 
a particular function and returns all those functions.


Then, we can check them for UDAs using the regular way and 
start to implement library defined things like @safe, @nogc, 
etc.


This is the start of a nice idea to extend the D type system a 
little in user defined code. But I think it still needs some 
refinement.


I also think there can be a more automatic way to test them than 
"the regular way" of putting a static assert outside the function.


Bye,
bearophile


Re: DIP60: @nogc attribute

2014-04-17 Thread via Digitalmars-d

On Thursday, 17 April 2014 at 09:32:52 UTC, Paulo Pinto wrote:
Any iOS device runs circles around those systems, hence why I 
always like to make clear it was Apple's failure to make a 
workable GC in a C based language and not the virtues of pure 
ARC over pure GC.


I am not making an argument for pure ARC. Objective-C allows you 
to mix, and OS X is most certainly not purely ARC based.


If we go back in time to the period you point to, even C was 
considered way too slow for real time graphics.


On the C64 and the Amiga you wrote in assembly and optimized for 
the hardware. E.g. using hardware scroll register on the C64 and 
the copperlist (a specialized scanline triggered processor 
writing to hardware registers) on the Amiga. No way you could do 
real time graphics in a GC backed language back then without a 
dedicated engine with HW support. Real time audio was done with 
DSPs until the mid 90s.


Re: DIP60: @nogc attribute

2014-04-17 Thread bearophile via Digitalmars-d
Is it possible to compile this main with a future version of 
ldc2 if I compile the code with full optimizations?


Sorry, I meant to ask if it's possible to compile this main with 
a @nogc applied to it if I compile it with ldc2 with full 
optimizations.


Bye,
bearophile


re-open of Issue 2757

2014-04-17 Thread Nick B via Digitalmars-d
I have noticed that Walter re-opened this enhancement (re 
Resource Management) quite recently (Feb 2014). I originally 
filed it in 2009. Is anyone able to say why?


Nick


Re: re-open of Issue 2757

2014-04-17 Thread Brad Roberts via Digitalmars-d
According to the modification history for that bug, you reopened it back on May 4, 2009.  Walter 
merely changed the version id recently from 1.041 to D1.


   https://issues.dlang.org/show_activity.cgi?id=2757

On 4/17/14, 2:55 AM, Nick B via Digitalmars-d wrote:

I have noticed that Walter re-opened this enhancement (re Resource
Management) quite recently (Feb 2014). I originally filed it in 2009.
Is anyone able to say why?

Nick


Re: DIP60: @nogc attribute

2014-04-17 Thread Artur Skawina via Digitalmars-d
On 04/17/14 11:33, Rikki Cattermole via Digitalmars-d wrote:
> On Thursday, 17 April 2014 at 09:22:55 UTC, Dejan Lekic wrote:
>> On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:
>>> http://wiki.dlang.org/DIP60
>>>
>>> Start on implementation:
>>>
>>> https://github.com/D-Programming-Language/dmd/pull/3455
>>
>> This is a good start, but I am sure I am not the only person who thought 
>> "maybe we should have this on a module level". This would allow people to 
>> nicely group pieces of the application that should not use GC.
> 
> Sure it does.
> 
> module mymodule;
> @nogc:
> 
>  void myfunc(){}
> 
>  class MyClass {
>  void mymethod() {}
>  }
> 
> 
> Everything in above code has @nogc applied to it.
> Nothing special about it, can do it for most attributes like
> static, final and UDA's.

It does not work like that. User defined attributes only apply to
the current scope, i.e. your MyClass.mymethod() would *not* have the
attribute. With built-in attributes it becomes more "interesting" -
for example '@safe' will include child scopes, but 'nothrow' won't.

Yes, the current attribute situation in D is a mess. No, attribute
inference isn't the answer.

artur


Re: DIP60: @nogc attribute

2014-04-17 Thread Paulo Pinto via Digitalmars-d
On Thursday, 17 April 2014 at 09:55:38 UTC, Ola Fosheim Grøstad 
wrote:

On Thursday, 17 April 2014 at 09:32:52 UTC, Paulo Pinto wrote:
Any iOS device runs circles around those systems, hence why I 
always like to make clear it was Apple's failure to make a 
workable GC in a C based language and not the virtues of pure 
ARC over pure GC.


I am not making an argument for pure ARC. Objective-C allows 
you to mix, and OS X is most certainly not purely ARC based.


If we go back in time to the period you point to, even C was 
considered way too slow for real time graphics.


On the C64 and the Amiga you wrote in assembly and optimized 
for the hardware. E.g. using hardware scroll register on the 
C64 and the copperlist (a specialized scanline triggered 
processor writing to hardware registers) on the Amiga. No way 
you could do real time graphics in a GC backed language back 
then without a dedicated engine with HW support. Real time 
audio was done with DSPs until the mid 90s.


Sure, old demoscener here.


Re: DIP60: @nogc attribute

2014-04-17 Thread via Digitalmars-d
On Thursday, 17 April 2014 at 10:38:54 UTC, Artur Skawina via 
Digitalmars-d wrote:

Yes, the current attribute situation in D is a mess.


A more coherent D syntax would make the language more 
approachable. I find the current syntax to be somewhat annoying.


I'd also like to see coherent naming conventions for attributes 
etc, e.g.


@nogc// assert/prove no gc (for compiled code)
@is_nogc // assume/guarantee no gc (for linked code, or 
"unprovable" code)


Re: DIP60: @nogc attribute

2014-04-17 Thread Rikki Cattermole via Digitalmars-d

On Thursday, 17 April 2014 at 10:38:54 UTC, Artur Skawina via
Digitalmars-d wrote:

On 04/17/14 11:33, Rikki Cattermole via Digitalmars-d wrote:

On Thursday, 17 April 2014 at 09:22:55 UTC, Dejan Lekic wrote:
On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright 
wrote:

http://wiki.dlang.org/DIP60

Start on implementation:

https://github.com/D-Programming-Language/dmd/pull/3455


This is a good start, but I am sure I am not the only person 
who thought "maybe we should have this on a module level". 
This would allow people to nicely group pieces of the 
application that should not use GC.


Sure it does.

module mymodule;
@nogc:

 void myfunc(){}

 class MyClass {
 void mymethod() {}
 }


Everything in above code has @nogc applied to it.
Nothing special about it, can do it for most attributes like
static, final and UDA's.


It does not work like that. User defined attributes only apply 
to the current scope, i.e. your MyClass.mymethod() would *not* 
have the attribute. With built-in attributes it becomes more 
"interesting" - for example '@safe' will include child scopes, 
but 'nothrow' won't.


Yes, the current attribute situation in D is a mess. No, 
attribute inference isn't the answer.

artur


Good point, yes: in the case of a class/struct, its methods won't
have it applied to them.
I don't know if anything beyond manually adding it to the start of
those declarations can be done. Either that or we need language changes.

@nogc
module mymodule;

@("something")
module mymodule;

Well it is a possible option for improvement. Either way, I'm not 
gonna advocate this.


Re: DIP60: @nogc attribute

2014-04-17 Thread Manu via Digitalmars-d
On 17 April 2014 18:22, Paulo Pinto via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:

> Of course it was sold at WWDC as "ARC is better than GC" and not as "ARC
> is better than the crappy GC implementation we have done".
>

The argument is, GC is not appropriate for various classes of software. It
is unacceptable. No GC that anyone has yet imagined/proposed will address
this fact.
ARC offers a solution that is usable by all parties. We're not making
comparisons between contestants or their implementation quality here, GC is
not in the race.


Re: DIP60: @nogc attribute

2014-04-17 Thread w0rp via Digitalmars-d
I'm not convinced that any automatic memory management scheme 
will buy you much in real-time applications. Generally, with 
real-time processes, you need to pre-allocate. I think GC could 
be feasible for a real-time application if the GC is precise and 
collections are scheduled instead of run randomly. Scoped memory 
also helps.


Re: DIP60: @nogc attribute

2014-04-17 Thread John Colvin via Digitalmars-d
On Thursday, 17 April 2014 at 11:31:52 UTC, Manu via 
Digitalmars-d wrote:

ARC offers a solution that is usable by all parties.


Is this a proven statement?

If that paper is right then ARC with cycle management is in fact 
equivalent to Garbage Collection.

Do we have evidence to the contrary?


My very vague reasoning on the topic:

Sophisticated GCs use various methods to avoid scanning the whole 
heap, and by doing so they in fact implement something equivalent 
to ARC, even if it doesn't appear that way on the surface. In the 
other direction, ARC ends up implementing a GC to deal with 
cycles. I.e.


Easy work (normal data): A clever GC effectively implements ARC. 
ARC does what it says on the tin.


Hard Work (i.e. cycles): Even a clever GC must be somewhat 
conservative*. ARC effectively implements a GC.


*in the normal sense, not GC-jargon.

Ergo they aren't really any different?


Re: DIP60: @nogc attribute

2014-04-17 Thread Steven Schveighoffer via Digitalmars-d
On Wed, 16 Apr 2014 13:39:36 -0400, Walter Bright wrote:


On 4/16/2014 1:49 AM, "Ola Fosheim Grøstad" wrote:
Btw, I think you should add @noalloc also, which prevents both
new and malloc. It would be useful for real time callbacks,
interrupt handlers etc.


Not practical. malloc() is only one way of allocating memory - user  
defined custom allocators are commonplace.


More practical:

Mechanism for the compiler to apply arbitrary "transitive" attributes to  
functions.


In other words, some mechanism that you can tell the compiler "all the  
functions this @someattribute function calls must have @someattribute  
attached to it," that also applies the attribute automatically for  
templates.


Then, you can come up with whatever restrictive schemes you want.

Essentially, this is the same as @nogc, except the compiler has special  
hooks to the GC (e.g. new) that need to be handled. The compiler has no  
such hooks for C malloc, or whatever allocation scheme you use, so it's  
all entirely up to the library and user code.


-Steve


Re: DIP60: @nogc attribute

2014-04-17 Thread Manu via Digitalmars-d
On 17 April 2014 18:52, via Digitalmars-d wrote:

> On Thursday, 17 April 2014 at 08:22:32 UTC, Paulo Pinto wrote:
>
>> Of course it was sold at WWDC as "ARC is better than GC" and not as "ARC
>> is better than the crappy GC implementation we have done".
>>
>
> I have never seen a single instance of a GC based system doing anything
> smooth in the realm of audio/visual real time performance without being
> backed by a non-GC engine.
>
> You can get decent performance from GC backed languages on the higher
> level constructs on top of a low level engine. IMHO the same goes for ARC.
> ARC is a bit more predictable than GC. GC is a bit more convenient and less
> predictable.
>
> I think D has something to learn from this:
>
> 1. Support for manual memory management is important for low level engines.
>
> 2. Support for automatic memory management is important for high level
> code on top of that.
>
> The D community is torn because there is some idea that libraries should
> assume point 2 above and then be retrofitted to point 1. I am not sure if
> that will work out.
>

See, I just don't find managed memory incompatible with 'low level'
realtime or embedded code, even on tiny microcontrollers in principle.
ARC would be fine in low level code, assuming the language supported it to
the fullest of its abilities. I'm confident that programmers would learn
its performance characteristics and be able to work effectively with it in
very little time.
It's well understood, and predictable. You know exactly how it works, and
precisely what the costs are. There are plenty of techniques to move any
ref fiddling out of your function if you identify it to be the source of
a bottleneck.

I think with some care and experience, you could use ARC just as
effectively as full manual memory management in the inner loops, but also
gain the conveniences it offers on the periphery where the performance
isn't critical.
_Most_ code exists in this periphery, and therefore the importance of that
convenience shouldn't be underestimated.


Maybe it is better to just say that structs are bound to manual memory
> management and classes are bound to automatic memory management.
>
Use structs for low level stuff with manual memory management.
> Use classes for high level stuff with automatic memory management.
>
> Then add language support for "union-based inheritance" in structs with a
> special construct for programmer-specified subtype identification.
>
> That is at least conceptually easy to grasp and the type system can more
> easily safeguard code than in a mixed model.
>

No. It misses basically everything that compels the change. Strings, '~',
closures. D largely depends on its memory management. That's the entire
reason why library solutions aren't particularly useful.
I don't want to see D evolve into another C++, where libraries/frameworks
are separated or excluded by allocation practice.

Auto memory management in D is a reality. Unless you want to build yourself
into a fully custom box (I don't!), then you have to deal with it. Any
library that wasn't written by a gamedev will almost certainly rely on it,
and games are huge complex things that typically incorporate lots of
libraries. I've spent my entire adult lifetime dealing with these sorts of
problems.


Most successful frameworks that allow high-level programming have two
> layers:
> - Python/heavy duty c libraries
> - Javascript/browser engine
> - Objective-C/C and Cocoa / Core Foundation
> - ActionScript / c engine
>
> etc
>
> I personally favour the more integrated approach that D appears to be
> aiming for, but I am somehow starting to feel that for most programmers
> that model is going to be difficult to grasp in real projects,
> conceptually. Because they don't really want the low level stuff. And they
> don't want to have their high level code bastardized by low level
> requirements.
>
> As far as I am concerned D could just focus on the structs and the low
> level stuff, and then later try to work in the high level stuff. There is
> no efficient GC in sight and the language has not been designed for it
> either.
>
> ARC with whole-program optimization fits better into the low-level
> paradigm than GC. So if you start from low-level programming and work your
> way up to high-level programming then ARC is a better fit.
>

The thing is, D is not particularly new, it's pretty much 'done', so there
will be no radical change in direction like you seem to suggest.
But I generally agree with your final points.

The future is not manual memory management. But D seems to be pushing us
back into that box without a real solution to this problem.
Indeed, it is agreed that there is no fantasy solution via GC on the
horizon... so what?

Take this seriously. I want to see ARC absolutely killed dead rather than
dismissed.


Re: std.stream replacement

2014-04-17 Thread Steven Schveighoffer via Digitalmars-d

On Wed, 16 Apr 2014 12:09:49 -0400, sclytrack  wrote:


On Saturday, 14 December 2013 at 15:16:50 UTC, Jacob Carlborg wrote:

On 2013-12-14 15:53, Steven Schveighoffer wrote:


I realize this is really old, and I sort of dropped off the D cliff
because all of a sudden I had 0 extra time.

But I am going to get back into working on this (if it's still an
issue, I still need to peruse the NG completely to see what has
happened in the last few months).


Yeah, it still needs to be replaced. In this case you can have a look at
the review queue to see what's being worked on:


http://wiki.dlang.org/Review_Queue



SINK, TAP
---------


https://github.com/schveiguy/phobos/blob/new-io/std/io.d

What about adding a single property named sink or tap depending
on how you want the chain to be. That could be either a struct or
a class. Each sink would provide another interface.


Chaining i/o objects is something I have yet to tackle. I have ideas, but  
I'll wait until I have posted some updated code (hopefully soon). I want  
it to work like ranges/unix pipes.


The single most difficult thing is making it a drop-in replacement for  
std.stdio.File. But I'm close...


-Steve


Re: DIP60: @nogc attribute

2014-04-17 Thread Michel Fortin via Digitalmars-d
On 2014-04-17 03:13:48 +, Manu via Digitalmars-d said:



Obviously, a critical part of ARC is the compilers ability to reduce
redundant inc/dec sequences. At which point your 'every time' assertion is
false. C++ can't do ARC, so it's not comparable.
With proper elimination, transferring ownership results in no cost, only
duplication/destruction, and those are moments where I've deliberately
committed to creation/destruction of an instance of something, at which
point I'm happy to pay for an inc/dec; creation/destruction are rarely
high-frequency operations.


You're right that transferring ownership does not cost with ARC. What 
costs you is return values and temporary local variables.


While it's nice to have a compiler that'll elide redundant 
retain/release pairs, function boundaries can often makes this 
difficult. Take this first example:


Object globalObject;

Object getObject()
{
return globalObject; // implicit: retain(globalObject)
}

void main()
{
auto object = getObject();
writeln(object);
// implicit: release(object)
}

It might not be obvious, but here the getObject function *has to* 
increment the reference count by one before returning. There's no other 
convention that'll work because another implementation of getObject 
might return a temporary object. Then, at the end of main, 
globalObject's reference counter is decremented. Only if getObject gets 
inlined can the compiler detect the increment/decrement cycle is 
unnecessary.


But wait! If writeln isn't pure (and surely it isn't), then it might 
change the value of globalObject (you never know what's in 
Object.toString, right?), which will in turn release object. So main 
*has to* increment the reference counter if it wants to make sure its 
local variable object is valid until the end of the writeln call. Can't 
elide here.


Let's take this other example:

Object globalObject;
Object otherGlobalObject;

void main()
{
    auto object = globalObject; // implicit: retain(globalObject)
    foo(object);
    // implicit: release(object)
}

Here you can elide the increment/decrement cycle *only if* foo is pure. 
If foo is not pure, then it might set another value to globalObject 
(you never know, right?), which will decrement the reference count and 
leave the "object" variable in main the sole owner of the object. 
Alternatively, if foo is not pure but instead gets inlined it might be 
provable that it does not touch globalObject, and elision might become 
a possibility.


I think ARC needs to be practical without eliding of redundant calls. 
It's a good optimization, but a difficult one unless everything is 
inlined. Many such elisions that would appear to be safe at first 
glance aren't provably safe for the compiler because of function calls.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: DIP60: @nogc attribute

2014-04-17 Thread Manu via Digitalmars-d
On 17 April 2014 21:57, John Colvin via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:

> On Thursday, 17 April 2014 at 11:31:52 UTC, Manu via Digitalmars-d wrote:
>
>> ARC offers a solution that is usable by all parties.
>>
>
> Is this a proven statement?
>
> If that paper is right then ARC with cycle management is in fact
> equivalent to Garbage Collection.
> Do we have evidence to the contrary?
>

People who care would go to the effort of manually marking weak references.
If you make a commitment to that in your software, you can eliminate the
backing GC. Turn it off, or don't even link it.
The backing GC is so that 'everyone else' would be unaffected by the shift.
They'd likely see an advantage too, in that the GC would have a lot less
work to do, since the ARC would clean up most of the memory (falling
generally into the realm you refer to below).


My very vague reasoning on the topic:
>
> Sophisticated GCs use various methods to avoid scanning the whole heap,
> and by doing so they in fact implement something equivalent to ARC, even if
> it doesn't appear that way on the surface. In the other direction, ARC ends
> up implementing a GC to deal with cycles. I.e.
>
> Easy work (normal data): A clever GC effectively implements ARC. ARC does
> what it says on the tin.
>
> Hard Work (i.e. cycles): Even a clever GC must be somewhat conservative*.
> ARC effectively implements a GC.
>
> *in the normal sense, not GC-jargon.
>
> Ergo they aren't really any different?


Nobody has proposed a 'sophisticated' GC for D. As far as I can tell, it's
considered impossible by the experts.
It also doesn't address the fundamental issue with the nature of a GC,
which is that it expects plenty of free memory. You can't use a GC in a
low-memory environment, no matter how it's designed. It allocates until it
can't, then spends a large amount of time re-capturing unreferenced memory.
As free memory decreases, this becomes more and more frequent.


Re: What's the deal with "Warning: explicit element-wise assignment..."

2014-04-17 Thread Steven Schveighoffer via Digitalmars-d
On Wed, 16 Apr 2014 02:59:29 -0400, Steve Teale  
 wrote:



On Tuesday, 15 April 2014 at 16:02:33 UTC, Steven Schveighoffer wrote:

Sorry, I had this wrong. The [] on the left hand side is actually part  
of the []= operator. But on the right hand side, it simply is a []  
operator, not tied to the =. I erroneously thought the arr[] = ...  
syntax was special for arrays, but I forgot that it's simply another  
operator.


Steve, where do I find the []= operator in the documentation? It does  
not seem to be under Expressions like the other operators. Has it just  
not got there yet?


dlang.org/operatoroverloading.html

Search for opSliceAssign
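For a rough illustration (a hypothetical type, not from Phobos), this is how a custom type distinguishes the two forms of slice assignment via opSliceAssign:

```d
struct Buffer
{
    int[] data;

    // b[] = value: assign every element
    void opSliceAssign(int value)
    {
        data[] = value;
    }

    // b[] = src: element-wise copy (lengths must match)
    void opSliceAssign(int[] src)
    {
        data[] = src[];
    }
}

void main()
{
    auto b = Buffer(new int[](4));
    b[] = 7;            // calls opSliceAssign(int)
    b[] = [1, 2, 3, 4]; // calls opSliceAssign(int[])
    assert(b.data == [1, 2, 3, 4]);
}
```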

-Steve


Re: Knowledge of managed memory pointers

2014-04-17 Thread Manu via Digitalmars-d
On 17 April 2014 18:20, Kagamin via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:

> You can do anything, what fits your task, see RefCounted and Unique for an
> example on how to write smart pointers.
>

... what?

I don't think you understood my post.

void f(void* ptr)
{
  // was ptr allocated with malloc, or new?
}

If we knew this, we may be able to address some problems with designing
better GC's, or cross-language API's.


Re: What's the deal with "Warning: explicit element-wise assignment..."

2014-04-17 Thread Steven Schveighoffer via Digitalmars-d

On Wed, 16 Apr 2014 04:05:57 -0400, Kagamin  wrote:


On Tuesday, 15 April 2014 at 15:59:31 UTC, Steven Schveighoffer wrote:
Requiring it simply adds unneeded hoops through which you must jump,  
the left hand side denotes the operation, the right hand side does not


Unfortunately, this particular operation is denoted by both sides.


Not really. The = operator (opAssign) is different from the [] = operator  
(opSliceAssign).


I actually am ignorant of how this works under the hood for slices, what  
triggers element-wise copy vs. assign. But for custom types, this is how  
you would have to do it I think.


Note -- it would be nice (and more consistent IMO) if arr[] = range  
worked identically to arr[] = arr.


Range or array, there are still two ways it can work. The idea is to  
give the choice to the programmer instead of the compiler.


But the programmer cannot define new operators on slices.



Sorry, I had this wrong. The [] on the left hand side is actually part  
of the []= operator.


There's no such operator. You can assign fixed-size array without slice  
syntax.


A fixed-size array is a different type with different semantics from slices.  
You cannot assign the pointer/length of a fixed-size array, so opAssign  
devolves to opSliceAssign.


-Steve


[OT] from YourNameHere via Digitalmars-d

2014-04-17 Thread Steven Schveighoffer via Digitalmars-d
This is very very annoying. Every time I open one of these messages I get  
a huge pregnant 5-second pause, along with the Mac Beach Ball (hourglass)  
while this message is opened in my news reader.


Whatever this is, can we get rid of it?

-Steve


Re: DIP60: @nogc attribute

2014-04-17 Thread Orvid King via Digitalmars-d
I'm just going to put my 2 cents into this discussion. It's my
personal opinion that while _allocations_ should be removed from
Phobos wherever possible, replacing GC usage with manual calls to
malloc/free has no place in the standard library; it's quite simply
a mess that is not needed, and one should be figuring out how to
simply not allocate at all rather than trying to do manual management.

It is possible to implement a much better GC than what D currently
has, and I intend to do exactly that when I have the time needed (in
roughly a month). Firstly by making it heap precise, maybe even adding
a stack precise mode (unlikely). Secondly by making it optionally use
an allocation strategy similar to tcmalloc, which is able to avoid
using a global lock for most allocations, as an interim measure until
DMD gets full escape analysis, which, due to the nature of D, would be
required before I could implement an effective compacting GC.
Depending on if I can grasp the underlying theory behind it, I *may*
also create an async collection mode, but it will be interesting to
see how I am able to tie in the extensible scanning system (Andrei's
allocators) into it. Lastly, I'll add support for stack allocated
classes, however that will likely have to be disabled until DMD gets
full escape analysis. As a final note, this will be the 3rd GC I've
written, although it will be the most complex by far. The first was
just heap precise, the second a generational compacting version of it.

On 4/17/14, Manu via Digitalmars-d  wrote:
> On 17 April 2014 21:57, John Colvin via Digitalmars-d <
> digitalmars-d@puremagic.com> wrote:
>
>> On Thursday, 17 April 2014 at 11:31:52 UTC, Manu via Digitalmars-d wrote:
>>
>>> ARC offers a solution that is usable by all parties.
>>>
>>
>> Is this a proven statement?
>>
>> If that paper is right then ARC with cycle management is in fact
>> equivalent to Garbage Collection.
>> Do we have evidence to the contrary?
>>
>
> People who care would go to the effort of manually marking weak references.
> If you make a commitment to that in your software, you can eliminate the
> backing GC. Turn it off, or don't even link it.
> The backing GC is so that 'everyone else' would be unaffected by the shift.
> They'd likely see an advantage too, in that the GC would have a lot less
> work to do, since the ARC would clean up most of the memory (fall generally
> in the realm you refer to below).
>
>
> My very vague reasoning on the topic:
>>
>> Sophisticated GCs use various methods to avoid scanning the whole heap,
>> and by doing so they in fact implement something equivalent to ARC, even
>> if
>> it doesn't appear that way on the surface. In the other direction, ARC
>> ends
>> up implementing a GC to deal with cycles. I.e.
>>
>> Easy work (normal data): A clever GC effectively implements ARC. ARC does
>> what it says on the tin.
>>
>> Hard Work (i.e. cycles): Even a clever GC must be somewhat conservative*.
>> ARC effectively implements a GC.
>>
>> *in the normal sense, not GC-jargon.
>>
>> Ergo they aren't really any different?
>
>
> Nobody has proposed a 'sophisticated' GC for D. As far as I can tell, it's
> considered impossible by the experts.
> It also doesn't address the fundamental issue with the nature of a GC,
> which is that it expects plenty of free memory. You can't use a GC in a
> low-memory environment, no matter how it's designed. It allocates until it
> can't, then spends a large amount of time re-capturing unreferenced memory.
> As free memory decreases, this becomes more and more frequent.
>


Re: DIP60: @nogc attribute

2014-04-17 Thread Manu via Digitalmars-d
On 17 April 2014 22:28, Michel Fortin via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:

> On 2014-04-17 03:13:48 +, Manu via Digitalmars-d <
> digitalmars-d@puremagic.com> said:
>
>  Obviously, a critical part of ARC is the compilers ability to reduce
>> redundant inc/dec sequences. At which point your 'every time' assertion is
>> false. C++ can't do ARC, so it's not comparable.
>> With proper elimination, transferring ownership results in no cost, only
>> duplication/destruction, and those are moments where I've deliberately
>> committed to creation/destruction of an instance of something, at which
>> point I'm happy to pay for an inc/dec; creation/destruction are rarely
>> high-frequency operations.
>>
>
> You're right that transferring ownership does not cost with ARC. What
> costs you is return values and temporary local variables.
>

Why would they cost? If a function receives a reference, it will equally
release it on return. I don't see why a ref should be bumped to pass it to
a function?
Return values I can see, because return values are effectively copying
assignments. But if the assignment is to a local, then the close of scope
implies a dec, which would again cancel out.


While it's nice to have a compiler that'll elide redundant retain/release
> pairs, function boundaries can often make this difficult. Take this first
> example:
>
> Object globalObject;
>
> Object getObject()
> {
>     return globalObject; // implicit: retain(globalObject)
> }
>
> void main()
> {
>     auto object = getObject();
>     writeln(object);
>     // implicit: release(object)
> }
>
> It might not be obvious, but here the getObject function *has to*
> increment the reference count by one before returning. There's no other
> convention that'll work because another implementation of getObject might
> return a temporary object. Then, at the end of main, globalObject's
> reference counter is decremented. Only if getObject gets inlined can the
> compiler detect the increment/decrement cycle is unnecessary.
>

Well in most cases of accessors like this, it would inline properly. It's a
fairly reliable rule that, if a function is not an inline candidate, it is
probably also highly unlikely to appear in a hot loop.

I don't follow why it needs to retain before returning though. It would
seem that it should retain upon assignment after returning (making it
similar to the situation below). Nothing can interfere with the refcount
before and after the function returns.


But wait! If writeln isn't pure (and surely it isn't), then it might change
> the value of globalObject (you never know what's in Object.toString,
> right?), which will in turn release object. So main *has to* increment the
> reference counter if it wants to make sure its local variable object is
> valid until the end of the writeln call. Can't elide here.
>
> Let's take this other example:
>
> Object globalObject;
> Object otherGlobalObject;
>
> void main()
> {
>     auto object = globalObject; // implicit: retain(globalObject)
>     foo(object);
>     // implicit: release(object)
> }
>
> Here you can elide the increment/decrement cycle *only if* foo is pure. If
> foo is not pure, then it might set another value to globalObject (you never
> know, right?), which will decrement the reference count and leave the
> "object" variable in main the sole owner of the object. Alternatively, if
> foo is not pure but instead gets inlined it might be provable that it does
> not touch globalObject, and elision might become a possibility.
>

Sure, there is potential that certain bits of code between the
retain/release can break the ability to eliminate the pair, but that's why
I think D has an advantage here over other languages, like Obj-C for
instance. D has so much more richness in the type system which can assist
here. I'm pretty confident that D would offer much better results than
existing implementations.

I think ARC needs to be practical without eliding of redundant calls. It's
> a good optimization, but a difficult one unless everything is inlined. Many
> such elisions that would appear to be safe at first glance aren't provably
> safe for the compiler because of function calls.


I'm very familiar with this class of problem. I have spent much of my
career dealing with precisely this class of problem.
__restrict addresses the exact same problem with raw pointers in C, and
programmers understand the issue, and know how to work around it when it
appears in hot loops.

D has some significant advantages that other ARC languages don't have
though. D's module system makes inlining much more reliable than C/C++ for
instance, pure is an important part of D, and people do use it liberally.


Re: Knowledge of managed memory pointers

2014-04-17 Thread Orvid King via Digitalmars-d
I think the biggest advantage to this distinction would really be the
cross-language API's, the GC can determine which pointers it owns,
although I don't believe it currently exposes this capability.

On 4/17/14, Manu via Digitalmars-d  wrote:
> On 17 April 2014 18:20, Kagamin via Digitalmars-d <
> digitalmars-d@puremagic.com> wrote:
>
>> You can do anything, what fits your task, see RefCounted and Unique for
>> an
>> example on how to write smart pointers.
>>
>
> ... what?
>
> I don't think you understood my post.
>
> void f(void* ptr)
> {
>   // was ptr allocated with malloc, or new?
> }
>
> If we knew this, we may be able to address some problems with designing
> better GC's, or cross-language API's.
>


Re: DIP60: @nogc attribute

2014-04-17 Thread via Digitalmars-d
On Thursday, 17 April 2014 at 12:20:06 UTC, Manu via 
Digitalmars-d wrote:
See, I just don't find managed memory incompatible with 'low 
level' realtime or embedded code, even on tiny microcontrollers 
in principle.


RC isn't incompatible with realtime, since the overhead is O(1).

But it is slower than the alternatives where you want maximum 
performance. E.g. raytracing.


And it is slower and less "safe" than GC for long running 
servers that have uneven loads. E.g. web services.


I think it would be useful to discuss real scenarios when 
discussing performance:


1. Web server request that can be handled instantly (no database 
lookup): small memory requirements and everything is released 
immediately.


Best strategy might be to use a release pool (allocate 
incrementally and free all upon return in one go).
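A minimal sketch of what such a release pool could look like in D (hypothetical names; a real pool would chain blocks instead of failing when full):

```d
import core.stdc.stdlib : malloc, free;

// A release pool: bump-allocate from one block, free everything at once.
struct ReleasePool
{
    ubyte* block;
    size_t used, capacity;

    this(size_t cap)
    {
        block = cast(ubyte*) malloc(cap);
        capacity = cap;
    }

    void* alloc(size_t n)
    {
        if (used + n > capacity)
            return null; // a real pool would chain in another block here
        auto p = block + used;
        used += n;
        return p;
    }

    ~this() { free(block); } // one call releases every allocation
}

void handleRequest()
{
    auto pool = ReleasePool(64 * 1024);
    auto line = cast(char*) pool.alloc(256);
    // ... build the response entirely from pool-backed buffers ...
}   // pool's destructor frees it all in one go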


2. Web server, cached content-objects: lots of cycles, shared 
across threads.


Best strategy is global GC.

3. Non-maskable interrupt: can cut into any running code at any 
time. No deallocation may happen, and it can only touch state that is 
consistent after single-instruction atomic CPU operations.


Best strategy is preallocation and single instruction atomic 
communication.


ARC would be fine in low level code, assuming the language 
supported it to the fullest of its abilities.


Yes, but that requires whole program optimization, since function 
calls cross compilation unit boundaries frequently.


No. It misses basically everything that compels the change. 
Strings, '~', closures. D largely depends on its memory management.


And that is the problem. Strings can usually be owned objects.

What benefits most from GC are the big complex objects that have 
lots of links to other objects, so you get many circular 
references.


You usually have fewer of those.

If you somehow can limit GC to precise collection of those big 
objects, and forbid foreign references to those, then the 
collection cycle could complete quickly and you could use GC for 
soft real time. Which most application code is.


I don't know how to do it, but global-GC-everything only works 
for batch programming or servers with downtime.


Take this seriously. I want to see ARC absolutely killed dead 
rather than dismissed.


Why is that? I can see ARC in D3 with whole program optimization. 
I cannot see how D2 could be extended with ARC given all the 
other challenges.


Ola.



Re: DIP60: @nogc attribute

2014-04-17 Thread Regan Heath via Digitalmars-d
On Thu, 17 Apr 2014 14:08:29 +0100, Orvid King via Digitalmars-d  
 wrote:



I'm just going to put my 2-cents into this discussion, it's my
personal opinion that while _allocations_ should be removed from
phobos wherever possible, replacing GC usage with manual calls to
malloc/free has no place in the standard library, as it's quite simply
a mess that is really not needed, and quite simply, one should be
figuring out how to simply not allocate at all rather than trying to  
do manual management.


The standard library is a better place to put manual memory management  
than user space because it should be done by experts, peer reviewed and  
then would benefit everyone at no extra cost.


There are likely a number of smaller GC allocations which could be  
replaced by calls to alloca, simultaneously improving performance and  
avoiding GC interaction.
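As a sketch of the idea (hypothetical helper; alloca memory lives on the stack, so the buffer must never escape the function):

```d
import core.stdc.stdlib : alloca;
import core.stdc.stdio : snprintf, printf;

// Format a short message in stack memory instead of a GC-allocated
// string. No GC interaction, so this pattern is @nogc-friendly.
void logValue(int x)
{
    enum len = 64;
    auto buf = cast(char*) alloca(len);
    snprintf(buf, len, "value = %d", x);
    printf("%s\n", buf);
}
```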


These calls could then be marked @nogc and used in the realtime sections  
of applications without fear of collections stopping the world.


Neither ARC nor a super amazing GC would be able to improve upon the  
efficiency of this sort of change.


Seems like win-win-win to me.


It is possible to implement a much better GC than what D currently
has, and I intend to do exactly that when I have the time needed (in
roughly a month).


Excellent :)

R


Re: DIP60: @nogc attribute

2014-04-17 Thread Regan Heath via Digitalmars-d
On Wed, 16 Apr 2014 18:38:23 +0100, Walter Bright  
 wrote:



On 4/16/2014 8:01 AM, qznc wrote:
However, what is still an open issue is that @nogc can be stopped by  
allocations
in another thread. We need threads which are not affected by  
stop-the-world. As
far as I know, creating threads via pthreads C API directly achieves  
that, but
integration with @nogc could provide more type safety. Stuff for  
another DIP?


That's a completely separate issue.


Yep.  I was thinking an attribute like @rt (realtime) would be super cool  
(but, perhaps impossible).  It would be a super-set of things like @nogc,  
and imply those things.  Adding @nogc does not prevent such a thing being  
done in the future.


R

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: DIP60: @nogc attribute

2014-04-17 Thread Manu via Digitalmars-d
On 17 April 2014 23:17, via Digitalmars-d wrote:

> On Thursday, 17 April 2014 at 12:20:06 UTC, Manu via Digitalmars-d wrote:
>
>> See, I just don't find managed memory incompatible with 'low level'
>> realtime or embedded code, even on tiny microcontrollers in principle.
>>
>
> RC isn't incompatible with realtime, since the overhead is O(1).
>
> But it is slower than the alternatives where you want maximum performance.
> E.g. raytracing.
>

You would never allocate in a ray tracing loop. If you need a temp, you
would use some pre-allocation strategy. This is a tiny, self-contained, and
highly specialised loop, that will always have a highly specialised
allocation strategy.
You also don't make library calls inside a raytrace loop.


And it is slower and less "safe" than GC for long running servers that
> have uneven loads. E.g. web services.
>

Hey? I don't know what you mean.


I think it would be useful to discuss real scenarios when discussing
> performance:
>
> 1. Web server request that can be handled instantly (no database lookup):
> small memory requirements and everything is released immediately.
>
> Best strategy might be to use a release pool (allocate incrementally and
> free all upon return in one go).
>

Strings are the likely source of allocation. I don't think this suggests a
preference for GC or ARC either way. A high-frequency webserver would use
something more specialised in this case I imagine.

2. Web server, cached content-objects: lots of cycles, shared across
> threads.
>
> Best strategy is global GC.
>

You can't have web servers locking up for 10s-100s of ms at random
intervals... that's completely unacceptable.
Or if there is no realtime allocation, then management strategy is
irrelevant.

3. Non-maskable interrupt: can cut into any running code at any time. No
> deallocation must happen and can only touch code that is consistent after
> atomic single instruction CPU operations.
>
> Best strategy is preallocation and single instruction atomic communication.


Right, interrupts wouldn't go allocating from the master heap.


I don't think these scenarios are particularly relevant.

 ARC would be fine in low level code, assuming the language supported it to
>> the fullest of it's abilities.
>>
>
> Yes, but that requires whole program optimization, since function calls
> cross compilation unit boundaries frequently.


D doesn't usually have compilation unit boundaries. And even if you do,
assuming the source is available, it can still inline if it wants to, since
the source of imported modules is available while compiling a single unit.
I don't think WPO is as critical as you say.

 No. It misses basically everything that compels the change. Strings, '~',
>> closures. D largely depends on it's memory management.
>>
>
> And that is the problem. Strings can usually be owned objects.
>

I find strings are often highly shared objects.

What benefits most from GC are the big complex objects that have lots of
> links to other objects, so you get many circular references.
>
> You usually have fewer of those.
>

These tend not to change much at runtime.
Transient/temporary allocations on the other hand are very unlikely to
contain circular references.

Also, I would mark weak references explicitly.


> Take this seriously. I want to see ARC absolutely killed dead rather than
>> dismissed.
>>
>
> Why is that? I can see ARC in D3 with whole program optimization. I cannot
> see how D2 could be extended with ARC given all the other challenges.


Well it's still not clear to me what all the challenges are... that's my
point. If it's not possible, I want to know WHY.


Re: Knowledge of managed memory pointers

2014-04-17 Thread Manu via Digitalmars-d
On 17 April 2014 23:14, Orvid King via Digitalmars-d <
digitalmars-d@puremagic.com> wrote:

> I think the biggest advantage to this distinction would really be the
> cross-language API's, the GC can determine which pointers it owns,
> although I don't believe it currently exposes this capability.
>

But in a lightning fast way? Let's imagine ARC refcounts were stored at
ptr[-1]; how would we know whether this is a managed pointer or not?
I think the major hurdle in an ARC implementation is distinguishing a
managed pointer from a malloc pointer without making (breaking?) changes to
the type system.

I would also find this useful in language boundary interaction though. I
have had numerous issues identifying proper handling of cross-language
memory.
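For illustration, the ptr[-1] layout under discussion might look like this (purely hypothetical sketch; it deliberately shows the detection problem rather than solving it):

```d
import core.stdc.stdlib : malloc, free;

// Hide a refcount word in front of the payload.
void* arcAlloc(size_t n)
{
    auto p = cast(size_t*) malloc(size_t.sizeof + n);
    p[0] = 1;      // initial refcount
    return p + 1;  // caller sees only the payload
}

void retain(void* p)
{
    (cast(size_t*) p)[-1]++;
}

void release(void* p)
{
    auto rc = cast(size_t*) p;
    if (--rc[-1] == 0)
        free(rc - 1);
}

void main()
{
    auto p = arcAlloc(16);
    retain(p);   // refcount: 2
    release(p);  // refcount: 1
    release(p);  // refcount: 0 -> freed
    // Given only a raw void*, nothing distinguishes a pointer with a
    // refcount at ptr[-1] from a plain malloc pointer -- the problem
    // raised above.
}
```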


Re: DIP60: @nogc attribute

2014-04-17 Thread Orvid King via Digitalmars-d
I should probably have said heap allocation rather than just
allocation, because the alloca calls are the ones that would have the
real benefit, those realtime applications are the reason I hope to be
able to implement an async collection mode. If I were able to
implement even a moderately compacting GC, I would be able to use a
bump-the-pointer allocation strategy, which would be significantly
faster than manual calls to malloc/free.

On 4/17/14, Regan Heath via Digitalmars-d  wrote:
> On Thu, 17 Apr 2014 14:08:29 +0100, Orvid King via Digitalmars-d
>  wrote:
>
>> I'm just going to put my 2-cents into this discussion, it's my
>> personal opinion that while _allocations_ should be removed from
>> phobos wherever possible, replacing GC usage with manual calls to
>> malloc/free has no place in the standard library, as it's quite simply
>> a mess that is really not needed, and quite simply, one should be
>> figuring out how to simply not allocate at all rather than trying to
>> do manual management.
>
> The standard library is a better place to put manual memory management
> than user space because it should be done by experts, peer reviewed and
> then would benefit everyone at no extra cost.
>
> There are likely a number of smaller GC allocations which could be
> replaced by calls to alloca, simultaneously improving performance and
> avoiding GC interaction.
>
> These calls could then be marked @nogc and used in the realtime sections
> of applications without fear of collections stopping the world.
>
> Neither ARC nor a super amazing GC would be able to improve upon the
> efficiency of this sort of change.
>
> Seems like win-win-win to me.
>
>> It is possible to implement a much better GC than what D currently
>> has, and I intend to do exactly that when I have the time needed (in
>> roughly a month).
>
> Excellent :)
>
> R
>


Re: re-open of Issue 2757

2014-04-17 Thread Andrej Mitrovic via Digitalmars-d
On 4/17/14, Brad Roberts via Digitalmars-d  wrote:
> According to the modification history for that bug

Btw, that's the first time I saw that page, and I always wanted this
feature. But, where is it linked from? (how did you find it?)


Re: DIP60: @nogc attribute

2014-04-17 Thread Timon Gehr via Digitalmars-d

On 04/17/2014 02:34 PM, Manu via Digitalmars-d wrote:

On 17 April 2014 21:57, John Colvin via Digitalmars-d
mailto:digitalmars-d@puremagic.com>> wrote:

On Thursday, 17 April 2014 at 11:31:52 UTC, Manu via Digitalmars-d
wrote:

ARC offers a solution that is usable by all parties.
...
You can't use a GC in a
low-memory environment, no matter how it's designed. It allocates until
it can't, then spends a large amount of time re-capturing unreferenced
memory. As free memory decreases, this becomes more and more frequent.


What John was trying to get at is that the two quoted statements above 
are in contradiction with each other. A GC is a subsystem that 
automatically frees dead memory. (Dead as in it will not be accessed 
again, which is a weaker notion than it being unreferenced.)


Maybe the distinction you want to make is between ARC and tracing 
garbage collectors.




Re: DIP60: @nogc attribute

2014-04-17 Thread via Digitalmars-d
On Thursday, 17 April 2014 at 13:43:17 UTC, Manu via 
Digitalmars-d wrote:
You would never allocate in a ray tracing loop. If you need a 
temp, you

would use some pre-allocation strategy.


Path-tracing is predictable, but regular ray tracing may spawn 
many rays per hit. So you pre-allocate a buffer, but might need 
to extend it.


The point was: RC-per-object is unacceptable.

And it is slower and less "safe" than GC for long running 
servers that have uneven loads. E.g. web services.


Hey? I don't know what you mean.


1. You can get memory leaks by not collecting cycles with RC.

2. You spend time RC accounting when you need speed and run idle 
when you could run GC collection. GC is faster than ARC.
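The cycle leak in point 1 is easy to reproduce with naive refcounting (hypothetical hand-rolled Node/release, shown only to make the leak concrete):

```d
import core.stdc.stdlib : malloc, free;

// Naive refcounting leaks cycles: each node keeps the other alive.
struct Node
{
    Node* other;
    size_t rc = 1;
}

void release(Node* n)
{
    if (n !is null && --n.rc == 0)
    {
        release(n.other);
        free(n);
    }
}

void main()
{
    auto a = cast(Node*) malloc(Node.sizeof); *a = Node();
    auto b = cast(Node*) malloc(Node.sizeof); *b = Node();
    a.other = b; b.rc++;  // a -> b
    b.other = a; a.rc++;  // b -> a (the cycle)
    release(a);           // a.rc: 2 -> 1
    release(b);           // b.rc: 2 -> 1
    // Both counts are stuck at 1 via each other: neither is freed.
    assert(a.rc == 1 && b.rc == 1);
}
```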



Best strategy is global GC.


You can't have web servers locking up for 10s-100s of ms at 
random intervals... that's completely unacceptable.


The kind of servers I write can live with occasional 100-200ms 
lockups. That is no worse than the time it takes to get a 
response from a database node with a transaction running on it.


For a game server that is too much, so you would need to get down 
to under ~50ms, but then you also tend to run with an in-memory 
database that you cannot run full GC frequently on because of all 
the pointers in it.



D doesn't usually have compilation unit boundaries.


It does if you need multiple entry/return paths. E.g. need to 
compile 2 different versions of a function depending on the 
context. You don't want a copy in each object file.



I find strings are often highly shared objects.


Depends on how you use them. You can usually tie them to the 
object "they describe".


What benefits most from GC are the big complex objects that 
have lots of

links to other objects, so you get many circular references.

You usually have fewer of those.



These tend not to change much at runtime.


On the contrary, content objects are the ones that do change, both 
during evolutionary development (which makes it easy to miss a 
cycle) and at runtime.


This is especially true for caching web-servers, I think.


Also, I would mark weak references explicitly.


Which can be difficult to figure out, and then you also have to 
deal with "unexpected null references" and the exceptions they 
might cause.


Weak references can be useful with GC too if the semantics are 
right for the situation, but it is a crutch if it is only used 
for killing cycles.


Well it's still not clear to me what all the challenges are... 
that's my

point. If it's not possible, I want to know WHY.


I think it is possible, but I also think shipping D2 as a 
maintained stable product should have first priority. ARC would 
probably set D back one year? I think that would be a bad thing.


Ola.


Re: re-open of Issue 2757

2014-04-17 Thread Steven Schveighoffer via Digitalmars-d
On Thu, 17 Apr 2014 10:12:20 -0400, Andrej Mitrovic via Digitalmars-d  
 wrote:


On 4/17/14, Brad Roberts via Digitalmars-d   
wrote:

According to the modification history for that bug


Btw, that's the first time I saw that page, and I always wanted this
feature. But, where is it linked from? (how did you find it?)


It's on the bug page. Look after the "Modified" field at the top.

-Steve


Re: Knowledge of managed memory pointers

2014-04-17 Thread Timon Gehr via Digitalmars-d

On 04/17/2014 08:55 AM, Manu via Digitalmars-d wrote:

It occurs to me that a central issue regarding the memory management
debate, and a major limiting factor with respect to options, is the fact
that, currently, it's impossible to tell a raw pointer apart from a gc
pointer.

Is this is a problem worth solving? And would it be as big an enabler to
address some tricky problems as it seems to be at face value?

What are some options? Without turning to fat pointers or convoluted
changes in the type system, are there any clever mechanisms that could
be applied to distinguish managed from unmanaged pointers.


It does not matter if changes to the type system are 'convoluted'. (They 
don't need to be.)



If an API
could be provided in druntime, it may be used by GC's, ARC, allocators,
or systems that operate at the barrier between languages.



There already is.

bool isGCPointer(void* ptr){
    import core.memory;
    return !!GC.addrOf(ptr);
}

void main(){
    import std.c.stdlib;
    auto x = cast(int*)malloc(int.sizeof);
    auto y = new int;
    assert(!x.isGCPointer() && y.isGCPointer());
}



Re: Knowledge of managed memory pointers

2014-04-17 Thread Steven Schveighoffer via Digitalmars-d

On Thu, 17 Apr 2014 10:52:19 -0400, Timon Gehr  wrote:


On 04/17/2014 08:55 AM, Manu via Digitalmars-d wrote:



If an API
could be provided in druntime, it may be used by GC's, ARC, allocators,
or systems that operate at the barrier between languages.



There already is.

bool isGCPointer(void* ptr){
 import core.memory;
 return !!GC.addrOf(ptr);
}


I don't think this is a viable mechanism to check pointers. It's too slow.

-Steve


Re: DIP60: @nogc attribute

2014-04-17 Thread Dicebot via Digitalmars-d

On Tuesday, 15 April 2014 at 17:01:38 UTC, Walter Bright wrote:

http://wiki.dlang.org/DIP60

Start on implementation:

https://github.com/D-Programming-Language/dmd/pull/3455


OK, a bit late to the thread, seeing how it has already drifted 
into ARC off-topic territory :( An attempt to get back to the 
original point.


I was asking for @nogc earlier and I find proposed implementation 
too naive to be practically useful, to the point where I will 
likely be forced to ignore it in general.


=== Problem #1 ===

The first problem is that, by analogy with `pure`, there is no 
such thing as "weakly @nogc". A common pattern for 
performance-intensive code is to use output buffers of some sort:


void foo(OutputRange buffer)
{
buffer.put(42);
}

`foo` can't be @nogc here if OutputRange uses the GC as its 
backing allocator. However, I'd really like to use it to verify 
that no hidden allocations happen other than those explicitly 
coming from user-supplied arguments. In fact, if such a "weakly 
@nogc" thing were available, it could be used to clean up Phobos 
reliably.


With the current limitations, @nogc is only useful to verify that 
embedded code which has no GC at all does not use any 
GC-triggering language features before running into weird linker 
errors / rt-asserts. But that does not work well either, because 
of the next problem:


=== Problem #2 ===

The point where an "I told ya" statement is extremely tempting :) 
bearophile has already pointed this out - for some language 
features, like array literals, you can't be sure about possible 
usage of the GC at compile time, as it depends on optimizations 
in the backend. And making @nogc conservative in that regard, 
marking all literals as @nogc-prohibited, would cripple the 
language beyond reason.


I can see only one fix for that - defining a clear set of array 
literal use cases where optimizing the GC away is guaranteed by 
the spec, and relying on it.
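For illustration, one allocation distinction that is already guaranteed today: initializing a static array from a literal uses stack storage, so only the dynamic-array form can reach the GC heap. A minimal sketch (not from the thread), assuming the current allocation rules:

```d
void fill()
{
    int[3] a = [1, 2, 3]; // static array: stack storage, no GC allocation
    int[] b = a[];        // slicing the stack data still allocates nothing
    int[] c = [1, 2, 3];  // dynamic array literal: may allocate on the GC heap
    assert(a[1] == 2 && b[2] == 3 && c.length == 3);
}

void main()
{
    fill();
}
```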


Re: Finally full multidimensional arrays support in D

2014-04-17 Thread CJS via Digitalmars-d

On Monday, 17 March 2014 at 21:25:34 UTC, bearophile wrote:

Jared Miller:

And yes, I think that a matrix / linear algebra library, as 
well as NumPy-style ND-Arrays are great candidates for future 
Phobos modules.


I suggest to not put such library in Phobos before few years of 
usage in the wild.




+1

Good matrix support would be awesome. But getting it wrong would 
be a catastrophe. I don't really support ever putting it in 
Phobos, but if it is, then it should only be added after lots of 
experience.


Re: DIP60: @nogc attribute

2014-04-17 Thread via Digitalmars-d

On Thursday, 17 April 2014 at 15:02:27 UTC, Dicebot wrote:

void foo(OutputRange buffer)
{
buffer.put(42);
}

`foo` can't be @nogc here if OutputRange uses GC as backing 
allocator. However I'd really like to use it to verify that no


Can't you write foo as a template? Then if "buffer" is a ring 
buffer, the memory might be allocated by the GC, which is OK if 
put() does not call the GC and is marked as such.


Where this falls apart is when you introduce a compacting GC and 
the @nogc code is run in a real time priority thread. Then you 
need both @nogc_function_calls and @nogc_memory .


Of course, resorting to templates requires some thinking-ahead, 
and makes reuse more difficult.


You'll probably end up with the @nogc crowd creating their own 
NoGCOutputRange… :-P


Ola.


Re: re-open of Issue 2757

2014-04-17 Thread Andrej Mitrovic via Digitalmars-d
On 4/17/14, Steven Schveighoffer via Digitalmars-d
 wrote:
> It's on the bug page. Look after the "Modified" field at the top.

Ah the "History" link. Thanks.


Re: DIP60: @nogc attribute

2014-04-17 Thread bearophile via Digitalmars-d

Ola Fosheim Grøstad:

Where this falls apart is when you introduce a compacting GC 
and the @nogc code is run in a real time priority thread. Then 
you need both @nogc_function_calls and @nogc_memory .


Perhaps the @nogc proposal is not flexible enough. So probably 
the problem needs to be looked from a higher distance to find a 
smarter and more flexible solution. Koka and other ideas appeared 
in this thread can be seeds for ideas.


Bye,
bearophile


Re: DIP60: @nogc attribute

2014-04-17 Thread Dicebot via Digitalmars-d
On Thursday, 17 April 2014 at 15:39:38 UTC, Ola Fosheim Grøstad 
wrote:

On Thursday, 17 April 2014 at 15:02:27 UTC, Dicebot wrote:

void foo(OutputRange buffer)
{
   buffer.put(42);
}

`foo` can't be @nogc here if OutputRange uses GC as backing 
allocator. However I'd really like to use it to verify that no


Can't you write foo as a template? Then if "buffer" is a ring 
buffer the memory might be allocated by GC, which is ok if 
put() does not call the GC and is marked as such.


put() may call GC to grow the buffer, this is the very point. 
What is desired is to check if anything _else_ does call GC, thus 
the "weak @nogc" parallel.


Where this falls apart is when you introduce a compacting GC 
and the @nogc code is run in a real time priority thread. Then 
you need both @nogc_function_calls and @nogc_memory .


True hard real-time is always special, I am speaking about 
"softer" but still performance-demanding code (like one that is 
used in Sociomantic).


Of course, resorting to templates requires some thinking-ahead, 
and makes reuse more difficult.


I don't see how templates can help here right now.

You'll probably end up with the @nogc crowd creating their own 
NoGCOutputRange… :-P


Ola.




Re: DIP60: @nogc attribute

2014-04-17 Thread Dicebot via Digitalmars-d

On Thursday, 17 April 2014 at 15:48:29 UTC, bearophile wrote:

Ola Fosheim Grøstad:

Where this falls apart is when you introduce a compacting GC 
and the @nogc code is run in a real time priority thread. Then 
you need both @nogc_function_calls and @nogc_memory .


Perhaps the @nogc proposal is not flexible enough. So probably 
the problem needs to be looked from a higher distance to find a 
smarter and more flexible solution. Koka and other ideas 
appeared in this thread can be seeds for ideas.


Bye,
bearophile


The reason @nogc is desired in general is that it is relatively 
simple and can be done right now. That alone puts it above all 
the ideas involving alternate GC implementations and/or major 
type system tweaks.


It only needs some tweaks to make it actually useful for 
common-enough practical cases.


Re: A crazy idea for accurately tracking source position

2014-04-17 Thread Alix Pexton via Digitalmars-d

Just fixing an obvious typo in my code (that is still incomplete).


struct someRange
{
    ulong seq;
    bool fresh = true;
    long line;
    dchar front;
    // and let's just pretend that there is
    // somewhere for more characters to come from!

    void popFront()
    {
        // advance by whatever means to update front.
        if (front.isNewline)
        {
            ++line;
            fresh = true;
            return;
        }
        if (fresh)
        {
            if (front.isTab)
            {
                seq = 0xffff_ffff_ffff_fffeL;
            }
            else
            {
                seq = 0x1L;
            }
            fresh = false;
        }
        else
        {
            seq <<= 1;
            if (!front.isTab)
            {
                seq |= 0x1L;
            }
        }
    }

    // and the rest...
}


Re: "Spawn as many thousand threads as you like" and D

2014-04-17 Thread Kagamin via Digitalmars-d

On Wednesday, 16 April 2014 at 13:59:15 UTC, Bienlein wrote:
Being able to spawn as many thousand threads as needed without 
caring about it seems to be an important aspect for being an 
interesting offering for developing server-side software. It 
would be nice if D could also play in that niche. This could be 
some killer domain for D beyond being a better C++.


I believe there was a benchmark comparing vibe.d to Go with 
respect to processing thousands of trivial requests, which showed 
that vibe.d is up to the task. And a server doesn't really need 
local concurrency: client requests are isolated and have nothing 
to communicate to each other.


Re: Table lookups - this is pretty definitive

2014-04-17 Thread Alix Pexton via Digitalmars-d
I added a lookup scheme of my own; it's not as fast as Walter's (in 
fact it's the slowest without -inline -release -O) but it uses 1 bit 
per entry in the table instead of a whole byte, so you can have lots 
and lots of different tables. I'm even reasonably sure that it works 
correctly!



===
import core.stdc.stdlib;
import core.stdc.string;

import std.algorithm;
import std.array;
import std.ascii;
import std.datetime;
import std.range;
import std.stdio;
import std.traits;

bool isIdentifierChar0(ubyte c)
{
    return isAlphaNum(c) || c == '_' || c == '$';
}

bool isIdentifierChar1(ubyte c)
{
    return ((c >= '0' || c == '$') &&
            (c <= '9' || c >= 'A') &&
            (c <= 'Z' || c >= 'a' || c == '_') &&
            (c <= 'z'));
}

immutable bool[256] tab2;
static this()
{
    for (size_t u = 0; u < 0x100; ++u)
    {
        tab2[u] = isIdentifierChar0(cast(ubyte)u);
    }
}

bool isIdentifierChar2(ubyte c)
{
    return tab2[c];
}

immutable ulong[4] tab3;
static this()
{
    for (size_t u = 0; u < 0x100; ++u)
    {
        if (isIdentifierChar0(cast(ubyte)u))
        {
            auto sub = u >>> 6;
            auto b = u & 0x3f;
            auto mask = 0x01L << b;
            tab3[sub] |= mask;
        }
    }
}

bool isIdentifierChar3(ubyte c)
{
    auto sub = c >>> 6;
    c &= 0x3f;
    auto mask = 0x01L << c;
    return (tab3[sub] & mask) > 0;
}

int f0()
{
    int x;
    for (uint u = 0; u < 0x100; ++u)
    {
        x += isIdentifierChar0(cast(ubyte)u);
    }
    return x;
}

int f1()
{
    int x;
    for (uint u = 0; u < 0x100; ++u)
    {
        x += isIdentifierChar1(cast(ubyte)u);
    }
    return x;
}

int f2()
{
    int x;
    for (uint u = 0; u < 0x100; ++u)
    {
        x += isIdentifierChar2(cast(ubyte)u);
    }
    return x;
}

int f3()
{
    int x;
    for (uint u = 0; u < 0x100; ++u)
    {
        x += isIdentifierChar3(cast(ubyte)u);
    }
    return x;
}

void main()
{
    auto r = benchmark!(f0, f1, f2, f3)(10_000);
    writefln("Milliseconds %s %s %s %s", r[0].msecs, r[1].msecs,
             r[2].msecs, r[3].msecs);
}


Size_t on x86 is uint,on x64 is ulong,it's a good thing?

2014-04-17 Thread FrankLike via Digitalmars-d

 size_t on x86 is uint, and on x64 it is ulong - is that a good 
thing?

  I don't think it is.
  It creates a lot of conversions: for example, a length is ulong 
and must be cast to int or to uint. That is a waste of time, I think.





Re: DIP60: @nogc attribute

2014-04-17 Thread Dejan Lekic via Digitalmars-d


@nogc
module mymodule;



This is precisely what I had in mind.


Re: DIP60: @nogc attribute

2014-04-17 Thread monarch_dodra via Digitalmars-d

On Thursday, 17 April 2014 at 15:02:27 UTC, Dicebot wrote:

=== Problem #1 ===

First problem is that, by analogy with `pure`, there is no 
such thing as "weakly @nogc". A common pattern for performance 
intensive code is to use output buffers of some sort:


void foo(OutputRange buffer)
{
buffer.put(42);
}

`foo` can't be @nogc here if OutputRange uses GC as backing 
allocator. However I'd really like to use it to verify that no 
hidden allocations happen other than those explicitly coming 
from user-supplied arguments. In fact, if such "weakly @nogc" 
thing would have been available, it could be used to clean up 
Phobos reliably.


I don't really see how this is any different from the @safe, 
nothrow or pure attributes.


Either your code is templated, and the attributes get inferred.

Or it's not templated, and you have to rely on `put`'s base-class 
signature. If it's not marked @nogc (or safe, pure, or nothrow), 
then that's that.
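As a sketch of the templated case with today's attributes (pure/nothrow/@safe are already inferred for templates; @nogc would join them under DIP60). The FixedBuffer type here is invented for illustration:

```d
// Attribute inference for templates: foo's effective attributes
// depend on what R.put actually does - exactly the "weak" behaviour
// being asked for.
void foo(R)(ref R buffer)
{
    buffer.put(42);
}

struct FixedBuffer
{
    int[16] data;
    size_t n;
    void put(int v) pure nothrow @safe { data[n++] = v; } // no allocation
}

void main() pure nothrow @safe
{
    FixedBuffer b;
    foo(b); // inferred pure/nothrow/@safe, so callable from attributed code
    assert(b.data[0] == 42 && b.n == 1);
}
```

With a GC-backed output range the same `foo` would simply lose the stricter inferred attributes, without any change to its source.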




That said, your proposal could be applied to all attributes in 
general, not just @nogc in particular. In practice though, a 
simple unittest should cover all your needs: simply create a 
@nogc (pure, nothrow, safe, ctfe-able) unittest and call it with 
a trivial argument. If it doesn't pass, then it probably means 
you made a GC-related (or impure, throwing, unsafe) call that's 
unrelated to the passed parameters.


In any case, that's how we've been doing it in phobos since we've 
started actually caring about attributes.
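A minimal sketch of that Phobos idiom; note the @nogc on the unittest assumes DIP60's attribute is implemented (at this point it is only a pull request):

```d
size_t count(const(int)[] a, int needle) pure nothrow @safe @nogc
{
    size_t n;
    foreach (x; a)
        if (x == needle)
            ++n;
    return n;
}

// The attributes on the unittest force the whole call chain to comply;
// a hidden GC allocation inside count() would become a compile error.
pure nothrow @safe @nogc unittest
{
    static immutable int[5] a = [1, 2, 2, 3, 2];
    assert(count(a[], 2) == 3);
}
```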


Re: Size_t on x86 is uint,on x64 is ulong,it's a good thing?

2014-04-17 Thread John Colvin via Digitalmars-d

On Thursday, 17 April 2014 at 16:36:29 UTC, FrankLike wrote:

 Size_t  on x86 is uint,on x64 is ulong,it's a good thing?

  I don't think it is.
  It creates a lot of conversions: for example, a length is ulong 
and must be cast to int or to uint. That is a waste of time, I think.


It's the same in C and it reflects what the hardware is doing 
underneath with regard to memory addresses. That's the point of 
size_t. If it didn't change size then we'd all just use ulong or 
uint for all our array lengths etc.
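A short sketch of the point: size_t always matches the pointer width, and only interop with a fixed 32-bit API forces a narrowing cast:

```d
void main()
{
    auto arr = new int[](10);

    // size_t tracks the pointer width: 32-bit uint on x86,
    // 64-bit ulong on x86_64.
    size_t len = arr.length;
    static assert(size_t.sizeof == (void*).sizeof);

    // Only an API that insists on a 32-bit integer forces a
    // narrowing cast:
    uint len32 = cast(uint) arr.length;
    assert(len == 10 && len32 == 10);
}
```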


Re: DIP60: @nogc attribute

2014-04-17 Thread Walter Bright via Digitalmars-d

On 4/16/2014 8:13 PM, Manu via Digitalmars-d wrote:

On 17 April 2014 03:37, Walter Bright via Digitalmars-d wrote:
ARC has very serious problems with bloat and performance.
This is the first I've heard of it, and I've been going on about it for ages.


Consider two points:

1. I can't think of any performant ARC systems.

2. Java would be a relatively easy language to implement ARC in. There's 
probably a billion dollars invested in Java's GC. Why not ARC?




Obviously, a critical part of ARC is the compilers ability to reduce redundant
inc/dec sequences. At which point your 'every time' assertion is false. C++
can't do ARC, so it's not comparable.


C++ has shared_ptr, with all kinds of escapes.



With proper elimination, transferring ownership results in no cost, only
duplication/destruction, and those are moments where I've deliberately committed
to creation/destruction of an instance of something, at which point I'm happy to
pay for an inc/dec; creation/destruction are rarely high-frequency operations.


inc/dec isn't as cheap as you imply. The dec usually requires the creation of an 
exception handling unwinder to do it.




Have you measured the impact?


No. I don't really know how I could, as I haven't seen an ARC system.



I've never heard of Obj-C users complaining about the inc/dec costs.


Obj-C only uses ARC for a minority of the objects.



How often does ref fiddling occur in reality? My guess is that with redundancy
elimination, it would be surprisingly rare, and insignificant.


Yes, I would be surprised.


Further problems with ARC are inability to mix ARC references with non-ARC
references, seriously hampering generic code.
That's why the only workable solution is that all references are ARC references.
The obvious complication is reconciling malloc pointers, but I'm sure this can
be addressed with some creativity.

I imagine it would look something like:
By default, pointers are fat: struct ref { void* ptr, ref_t* rc; }


First off, pointers are now 16 bytes in size instead of 8. Secondly, every 
pointer dereference becomes two dereferences (not so good for cache performance).




malloc pointers could conceivably just have a null entry for 'rc' and therefore
interact comfortably with rc pointers.
I imagine that a 'raw-pointer' type would be required to refer to a thin
pointer. Raw pointers would implicitly cast to fat pointers, and a fat->thin
casts may throw if the fat pointer's rc is non-null, or compile error if it can
be known at compile time.


Now we throw in a null check and branch for pointer operations.



Perhaps a solution is possible where an explicit rc record is not required (such
that all pointers remain 'thin' pointers)...
A clever hash of the pointer itself can look up the rc?
Perhaps the rc can be found at ptr[-1]? But then how do you know if the pointer
is rc allocated or not? An unlikely sentinel value at ptr[-1]? Perhaps the
virtual memory page can imply whether pointers allocated in that region are ref
counted or not? Some clever method of assigning the virtual address space so
that recognition of rc memory can amount to testing a couple of bits in 
pointers?

I'm just making things up,


Yes.


but my point is, there are lots of creative
possibilities, and I have never seen any work to properly explore the options.


ARC has been known about for many decades. If you haven't seen it "properly 
explored", perhaps it isn't as simple and cost-effective as it may appear at 
first blush.




So then consider ARC seriously. If it can't work, articulate why. I still don't
know, nobody has told me.
It works well in other languages, and as far as I can tell, it has the potential
to produce acceptable results for _all_ D users.


What other languages?



iOS is a competent realtime platform, Apple are well known for their commitment
to silky-smooth, jitter-free UI and general feel.


A UI is a good use case for ARC. A UI doesn't require high performance.



Okay. Where can I read about that? It doesn't seem to have surfaced, at least,
it was never presented in response to my many instances of raising the topic.
What are the impasses?


I'd have to go look to find the thread. The impasses were as I pointed out here.



I'm very worried about this. ARC is the only imaginary solution I have left. In
lieu of that, we make a long-term commitment to a total fracturing of memory
allocation techniques, just like C++ today where interaction between libraries
is always a massive pain in the arse. It's one of the most painful things about
C/C++, and perhaps one of the primary causes of incompatibility between
libraries and frameworks. This will transfer into D, but it's much worse in D
because of the relatively high number of implicit allocations ('~', closures, 
etc).


There are only about 3 cases of implicit allocation in D, all easily avoided, 
and with @nogc they'll be trivial to avoid. It is not "much worse".




Frameworks and libraries 

Re: DIP60: @nogc attribute

2014-04-17 Thread Walter Bright via Digitalmars-d

On 4/17/2014 9:42 AM, monarch_dodra wrote:

That said, your proposal could be applied for all attributes in general. Not
just @nogc in particular. In practice though, a simple unittest should cover all
your needs. simply create a @nogc (pure, nothrow, safe, ctfe-able) unitest, and
call it with a trivial argument. If it doesn't pass, then it probably means you
made a gc-related (or impure, throwing, unsafe) call that's unrelated to the
passed parameters.


Yup, that should work fine.



Re: DIP60: @nogc attribute

2014-04-17 Thread Walter Bright via Digitalmars-d

On 4/17/2014 8:02 AM, Dicebot wrote:

=== Problem #1 ===

First problem is that, by analogy with `pure`, there is no such thing as
"weakly @nogc". A common pattern for performance intensive code is to use
output buffers of some sort:

void foo(OutputRange buffer)
{
 buffer.put(42);
}

`foo` can't be @nogc here if OutputRange uses GC as backing allocator. However
I'd really like to use it to verify that no hidden allocations happen other than
those explicitly coming from user-supplied arguments. In fact, if such "weakly
@nogc" thing would have been available, it could be used to clean up Phobos
reliably.

With current limitations @nogc is only useful to verify that embedded code which
does not have GC at all does not use any GC-triggering language features before
it comes to weird linker errors / rt-asserts. But that does not work well either,
because of the next problem:


Remember that @nogc will be inferred for template functions. That means that 
whether it is @nogc or not will depend on its arguments being @nogc, which is 
just what is needed.




=== Problem #2 ===

The point where "I told ya" statement is extremely tempting :) bearophile has
already pointed this out - for some of language features like array literals you
can't be sure about possible usage of GC at compile-time as it depends on
optimizations in backend. And making @nogc conservative in that regard and
marking all literals as @nogc-prohibited will cripple the language beyond 
reason.

I can see only one fix for that - defining clear set of array literal use cases
where optimizing GC away is guaranteed by spec and relying on it.


I know that you bring up the array literal issue and gc a lot, but this is 
simply not a major issue with @nogc. The @nogc will tell you if it will allocate 
on the gc or not, on a case by case basis, and you can use easy workarounds as 
necessary.


Re: DIP60: @nogc attribute

2014-04-17 Thread Steven Schveighoffer via Digitalmars-d
On Thu, 17 Apr 2014 04:35:34 -0400, Walter Bright  
 wrote:



On 4/16/2014 8:13 PM, Manu via Digitalmars-d wrote:





I've never heard of Obj-C users complaining about the inc/dec costs.


Obj-C only uses ARC for a minority of the objects.


Really? Every Obj-C API I've seen uses Objective-C objects, which all use  
RC.


iOS is a competent realtime platform, Apple are well known for their  
commitment

to silky-smooth, jitter-free UI and general feel.


A UI is a good use case for ARC. A UI doesn't require high performance.


I've written video processing/players on iOS, they all use blocks and  
reference counting, including to do date/time processing per frame. All  
while using RC network buffers. And it works quite smoothly.


-Steve


Re: DIP60: @nogc attribute

2014-04-17 Thread Walter Bright via Digitalmars-d

On 4/17/2014 2:32 AM, Paulo Pinto wrote:

Similar approach was taken by Microsoft with their C++/CX and COM integration.

So any pure GC basher now uses Apple's example, with a high probability of not
knowing the technical issues why it came to be like that.


I also wish to reiterate that C++'s use of COM with ref counting contains many, 
many escapes where the user "knows" that he can just use a pointer directly 
without dealing with the ref count. This is critical to making ref counting perform.


But the escapes come with a huge risk for memory corruption, i.e. user mistakes.

Also, in C++ COM, relatively few of the data structures a C++ program uses will 
be in COM. But ARC would mean using ref counting for EVERYTHING.


Using ARC for *everything* means slowdown and bloat, unless Manu's assumption 
that a sufficiently smart compiler could eliminate nearly all of that bloat holds.


Which I am not nearly as confident of.



Re: DIP60: @nogc attribute

2014-04-17 Thread Walter Bright via Digitalmars-d

On 4/17/2014 5:34 AM, Manu via Digitalmars-d wrote:

People who care would go to the effort of manually marking weak references.


And that's not compatible with having a guarantee of memory safety.


Re: DIP60: @nogc attribute

2014-04-17 Thread bearophile via Digitalmars-d

Walter Bright:

I know that you bring up the array literal issue and gc a lot, 
but this is simply not a major issue with @nogc. The @nogc will 
tell you if it will allocate on the gc or not, on a case by 
case basis, and you can use easy workarounds as necessary.


Assuming you have seen my examples with dmd/ldc, are you saying 
that, depending on the optimization level, the compiler will 
accept or not accept the @nogc attribute on a function?


Bye,
bearophile


Re: DIP60: @nogc attribute

2014-04-17 Thread Dicebot via Digitalmars-d

On Thursday, 17 April 2014 at 16:57:32 UTC, Walter Bright wrote:
With current limitations @nogc is only useful to verify that 
embedded code which does not have GC at all does not use any 
GC-triggering language features before it comes to weird linker 
errors / rt-asserts. But that does not work well either, because 
of the next problem:


Remember that @nogc will be inferred for template functions. 
That means that whether it is @nogc or not will depend on its 
arguments being @nogc, which is just what is needed.


No, it looks like I stated that very wrongly, because everyone 
understood it in completely the opposite way. What I mean is that 
`put()` is NOT @nogc and it still should work. Just as weakly 
pure code is a kind of pure that is still allowed to mutate its 
arguments, the proposed "weakly @nogc" could only call the GC via 
functions directly accessible from its arguments.
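The weak-purity analogy in today's D, for reference - `bump` is accepted as pure even though it mutates state reachable through its argument, which is the behaviour the proposed weak @nogc would mirror:

```d
// Weakly pure: touches no globals, but may mutate what its
// argument refers to.
pure void bump(ref int x) { ++x; }

// Strongly pure: only value data in and out, so calls to it
// can in principle be cached.
pure int twice(int x) { return 2 * x; }

void main()
{
    int v = 1;
    bump(v);
    assert(v == 2);
    assert(twice(v) == 4);
}
```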



=== Problem #2 ===

The point where "I told ya" statement is extremely tempting :) 
bearophile has
already pointed this out - for some of language features like 
array literals you
can't be sure about possible usage of GC at compile-time as it 
depends on
optimizations in backend. And making @nogc conservative in 
that regard and
marking all literals as @nogc-prohibited will cripple the 
language beyond reason.


I can see only one fix for that - defining clear set of array 
literal use cases
where optimizing GC away is guaranteed by spec and relying on 
it.


I know that you bring up the array literal issue and gc a lot, 
but this is simply not a major issue with @nogc. The @nogc will 
tell you if it will allocate on the gc or not, on a case by 
case basis, and you can use easy workarounds as necessary.


I beg your pardon, I did overstate that one, but the temptation 
was just too high :) On the actual topic - what "case by case" 
basis do you have in mind? There are no cases mentioned in the 
spec where literals are guaranteed not to allocate, AFAIK. 
Probably compiler developers know them, but definitely not me.


Re: DIP60: @nogc attribute

2014-04-17 Thread via Digitalmars-d

On Thursday, 17 April 2014 at 15:49:44 UTC, Dicebot wrote:
put() may call GC to grow the buffer, this is the very point. 
What is desired is to check if anything _else_ does call GC, 
thus the "weak @nogc" parallel.


What do you need that for?

Of course, resorting to templates requires some 
thinking-ahead, and makes reuse more difficult.


I don't see how templates can help here right now.


Wasn't the problem that the type-interface was less constrained 
than the type-interface allowed by a @nogc constrained function?


I perceive the problem as being this: you cannot fully specify 
all types because of the combinatorial explosion. In which case 
templates tend to be the easy-hack solution where the type system 
falls short?


Re: What's the deal with "Warning: explicit element-wise assignment..."

2014-04-17 Thread Kagamin via Digitalmars-d
On Thursday, 17 April 2014 at 12:38:24 UTC, Steven Schveighoffer 
wrote:
I actually am ignorant of how this works under the hood for 
slices, what triggers element-wise copy vs. assign.


The compiler compiles whatever compiles. Currently only one 
mistake (type) is required to compile the wrong thing. With the 
fix it would require two mistakes (type and syntax), so the 
probability of a mistake would be the square of the current 
probability. If the second mistake (syntax) is ruled out 
(template), the probability is zero.


Range or array, there are still two ways how it can work. The 
idea is to give the choice to programmer instead of the 
compiler.


But programmer cannot define new operators on slices.


Cannot define new ones, but could choose from predefined ones.


Re: DIP60: @nogc attribute

2014-04-17 Thread Dicebot via Digitalmars-d
On Thursday, 17 April 2014 at 17:48:39 UTC, Ola Fosheim Grøstad 
wrote:

On Thursday, 17 April 2014 at 15:49:44 UTC, Dicebot wrote:
put() may call GC to grow the buffer, this is the very point. 
What is desired is to check if anything _else_ does call GC, 
thus the "weak @nogc" parallel.


What do you need that for?


As a middle ground between hard-core low-level real-time code 
and applications that don't care about garbage at all. As soon as 
you keep your buffers growing and shrink them only occasionally, 
"GC vs malloc" issues become less important. But it is important 
not to generate any actual garbage, as it may trigger collection 
cycles.


Such a weak @nogc could help to avoid triggering allocations by 
accident and would encourage usage of output ranges / buffers. 
Code at Sociomantic currently uses similar idioms, but having the 
compiler verify them would help, in my opinion.


Of course, resorting to templates requires some 
thinking-ahead, and makes reuse more difficult.


I don't see how templates can help here right now.


Wasn't the problem that the type-interface was less constrained 
than the type-interface allowed by a @nogc constrained function?


No, this is something completely different, see my answer before.


Re: Table lookups - this is pretty definitive

2014-04-17 Thread ixid via Digitalmars-d

On Thursday, 17 April 2014 at 16:27:26 UTC, Alix Pexton wrote:
I added a lookup scheme of my own; it's not as fast as Walter's 
(in fact it's the slowest without -inline -release -O) but it 
uses 1 bit per entry in the table instead of a whole byte, so 
you can have lots and lots of different tables. I'm even 
reasonably sure that it works correctly!



===
import core.stdc.stdlib;
import core.stdc.string;

import std.algorithm;
import std.array;
import std.ascii;
import std.datetime;
import std.range;
import std.stdio;
import std.traits;

bool isIdentifierChar0(ubyte c)
{
return isAlphaNum(c) || c == '_' || c == '$';
}

bool isIdentifierChar1(ubyte c)
{
return ((c >= '0' || c == '$') &&
(c <= '9' || c >= 'A')  &&
(c <= 'Z' || c >= 'a' || c == '_') &&
(c <= 'z'));
}

immutable bool[256] tab2;
static this()
{
for (size_t u = 0; u < 0x100; ++u)
{
tab2[u] = isIdentifierChar0(cast(ubyte)u);
}
}

bool isIdentifierChar2(ubyte c)
{
return tab2[c];
}

immutable ulong[4] tab3;
static this()
{
for (size_t u = 0; u < 0x100; ++u)
{
if (isIdentifierChar0(cast(ubyte)u))
{
auto sub = u >>> 6;
auto b = u & 0x3f;
auto mask = 0x01L << b;
tab3[sub] |= mask;
}
}
}

bool isIdentifierChar3(ubyte c)
{
auto sub = c >>> 6;
c &= 0x3f;
auto mask = 0x01L << c;
return (tab3[sub] & mask) > 0;
}

int f0()
{
int x;
for (uint u = 0; u < 0x100; ++u)
{
x += isIdentifierChar0(cast(ubyte)u);
}
return x;
}

int f1()
{
int x;
for (uint u = 0; u < 0x100; ++u)
{
x += isIdentifierChar1(cast(ubyte)u);
}
return x;
}

int f2()
{
int x;
for (uint u = 0; u < 0x100; ++u)
{
x += isIdentifierChar2(cast(ubyte)u);
}
return x;
}

int f3()
{
int x;
for (uint u = 0; u < 0x100; ++u)
{
x += isIdentifierChar3(cast(ubyte)u);
}
return x;
}

void main()
{
auto r = benchmark!(f0, f1, f2, f3)(10_000);
writefln("Milliseconds %s %s %s %s", r[0].msecs, 
r[1].msecs, r[2].msecs, r[3].msecs);

}


I feel like there must be a way of making a fast bit lookup, but 
my version is only moderate in speed. You can get all the bits 
you need into two 64-bit registers or one SSE register. I haven't 
tried bt; does that work with a 64-bit register?
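On the bt question: core.bitop.bt addresses bits within an array of size_t words, so on a 64-bit target it works on 64-bit words directly (typically compiling down to the x86 bt instruction). A hedged sketch, with a small table built only for the digits '0'-'9' and laid out like tab3 above:

```d
import core.bitop : bt;

enum wordBits = 8 * size_t.sizeof;

// 256-bit membership table stored as size_t words, like tab3 above,
// but covering only '0'-'9' to keep the sketch small.
immutable size_t[256 / wordBits] digitTab;

shared static this()
{
    foreach (size_t c; '0' .. '9' + 1)
        digitTab[c / wordBits] |= cast(size_t) 1 << (c % wordBits);
}

bool isDigitBt(ubyte c) @trusted
{
    // bt() treats the pointer as the base of a bit array and indexes
    // whole size_t words - hence 64-bit registers on 64-bit targets.
    return bt(digitTab.ptr, c) != 0;
}

void main()
{
    assert(isDigitBt('5'));
    assert(!isDigitBt('a'));
}
```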


Re: Finally full multidimensional arrays support in D

2014-04-17 Thread H. S. Teoh via Digitalmars-d
On Thu, Apr 17, 2014 at 03:16:20PM +, CJS via Digitalmars-d wrote:
> On Monday, 17 March 2014 at 21:25:34 UTC, bearophile wrote:
> >Jared Miller:
> >
> >>And yes, I think that a matrix / linear algebra library, as well as
> >>NumPy-style ND-Arrays are great candidates for future Phobos
> >>modules.
> >
> >I suggest to not put such library in Phobos before few years of usage
> >in the wild.
> >
> 
> +1
> 
> Good matrix support would be awesome. But getting it wrong would be a
> catastrophe. I don't really support ever putting it in phobos, but if
> it is, then it should only be added after lots of experience.

I've been longing for a high-quality, flexible, generic linear algebra
library in D. I don't have the time / resources to implement it myself,
otherwise I would.

But I agree that any such candidate library needs to be put in real-life
use for a while before being considered for Phobos.

I think the first step would be to refine Denis' n-dimensional array
library until it's Phobos-quality, then linear algebra specific
adaptations can be built on top. I think the two should be separated,
even if they are still related. Conflating 2D arrays with matrices at a
fundamental level is a mistake IMO. 2D arrays are just one of the
possible representations of a matrix, and any linear algebra library
should be flexible enough to use other representations (e.g., sparse
matrices).


T

-- 
He who laughs last thinks slowest.


Re: Knowledge of managed memory pointers

2014-04-17 Thread Kagamin via Digitalmars-d
On Thursday, 17 April 2014 at 14:59:14 UTC, Steven Schveighoffer 
wrote:
I don't think this is a viable mechanism to check pointers. 
It's too slow.


I suggested writing a smart pointer. It could provide 
compile-time checks and whatever else the developer feels like.


Re: Knowledge of managed memory pointers

2014-04-17 Thread Kagamin via Digitalmars-d
On Thursday, 17 April 2014 at 12:39:59 UTC, Manu via 
Digitalmars-d wrote:

void f(void* ptr)
{
  // was ptr allocated with malloc, or new?


Then what?


Re: DIP60: @nogc attribute

2014-04-17 Thread via Digitalmars-d

On Thursday, 17 April 2014 at 18:00:25 UTC, Dicebot wrote:
Such weak @nogc could help to avoid triggering allocations by 
an accident and encourage usage of output ranges / buffers.


Ok, more like a "lintish" feature of the "remind me if I use too 
much of feature X in these sections" variety.


I view @nogc as a safeguard against crashes when I let threads 
run while the garbage collector is in a collection phase. A means 
to bypass "stop-the-world" collection by having pure @nogc 
threads.


No, this is something completely different, see my answer 
before.


Got it, I didn't see that answer until after I wrote my reply.

Ola.


Re: DIP60: @nogc attribute

2014-04-17 Thread Dicebot via Digitalmars-d
On Thursday, 17 April 2014 at 18:18:49 UTC, Ola Fosheim Grøstad 
wrote:

On Thursday, 17 April 2014 at 18:00:25 UTC, Dicebot wrote:
Such weak @nogc could help to avoid triggering allocations by 
an accident and encourage usage of output ranges / buffers.


Ok, more like a "lintish" feature of the "remind me if I use 
too much of feature X in these sections" variety.


I view @nogc as a safeguard against crashes when I let threads 
run while the garbage collector is in a collection phase. A 
means to bypass "stop-the-world" collection by having pure 
@nogc threads.


Yeah for me @nogc is more of a lint thing in general. But it 
can't be done by lint because @nogc needs to affect mangling to 
work with separate compilation reliably.


I think for your scenario having dedicated @nogc threads makes 
more sense, this can be built on top of plain function attribute 
@nogc.


Re: DIP60: @nogc attribute

2014-04-17 Thread via Digitalmars-d

On Thursday, 17 April 2014 at 18:26:25 UTC, Dicebot wrote:
I think for your scenario having dedicated @nogc threads makes 
more sense, this can be built on top of plain function 
attribute @nogc.


Yes, that could be a life saver. Nothing is more annoying than 
random crashes due to concurrency issues because something 
"slipped in".


But I think both you and Bearophile are right in pointing out 
that it needs more thinking through. Especially the distinction 
between calling into GC code and dealing with GC memory.


For instance, maybe it is possible to have a memory pool split in 
two, so that the no-GC thread can allocate during a collection 
cycle, but be required to have a lock-free book-keeping system 
for all GC memory referenced from the no-GC thread. That way you 
might be able to use GC allocation from the no-GC thread.


Maybe that is a reasonable trade-off.

(I haven't thought this through, it just occurred to me)

Ola.


Re: DIP60: @nogc attribute

2014-04-17 Thread Walter Bright via Digitalmars-d

On 4/17/2014 10:05 AM, Steven Schveighoffer wrote:

Obj-C only uses ARC for a minority of the objects.

Really? Every Obj-C API I've seen uses Objective-C objects, which all use RC.


And what about all allocated items?



A UI is a good use case for ARC. A UI doesn't require high performance.

I've written video processing/players on iOS, they all use blocks and reference
counting, including to do date/time processing per frame. All while using RC
network buffers. And it works quite smoothly.


And did you use ref counting for all allocations and all pointers?

There's no doubt that ref counting can be used successfully here and there, with 
a competent programmer knowing when he can just convert it to a raw pointer and 
use that.


It's another thing entirely to use ref counting for ALL pointers.

And remember that if you have exceptions, then all the dec code needs to be in 
exception unwind handlers.




Re: DIP60: @nogc attribute

2014-04-17 Thread Francesco Cattoglio via Digitalmars-d

On Tuesday, 15 April 2014 at 19:57:59 UTC, monarch_dodra wrote:
I have an issue related to adding an extra attribute: 
attributes of non-template functions. Currently, you already 
have to mark most functions as pure, nothrow and @safe. If we 
add another attribute, code will start looking like this:


int someTrivialFunction(int i) @safe pure nothrow @nogc;


don't forget final ;)


Re: A crazy idea for accurately tracking source position

2014-04-17 Thread matovitch via Digitalmars-d

You are doing it all wrong. The easiest way to compute
the column position is the following:

col_pos = 0;

if (non_tab_character_encountered)
    col_pos++;
else
    col_pos += tab_length - col_pos % tab_length;

That's it.
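The same rule as one self-contained C helper (the function name is mine, not
from the post), assuming a 0-based column and tab stops every tab_length
columns:

```c
/* Advance a 0-based column position past one character,
   with tab stops every tab_length columns. */
static int advance_col(int col_pos, char c, int tab_length)
{
    if (c != '\t')
        return col_pos + 1;                              /* ordinary character */
    return col_pos + (tab_length - col_pos % tab_length); /* jump to next tab stop */
}
```

Note the nice property of the formula: a tab at an exact tab stop still
advances a full tab_length, because col_pos % tab_length is 0 there.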


Re: DIP60: @nogc attribute

2014-04-17 Thread Steven Schveighoffer via Digitalmars-d
On Thu, 17 Apr 2014 14:47:00 -0400, Walter Bright  
 wrote:



On 4/17/2014 10:05 AM, Steven Schveighoffer wrote:

Obj-C only uses ARC for a minority of the objects.
Really? Every Obj-C API I've seen uses Objective-C objects, which all  
use RC.


And what about all allocated items?


What do you mean?


A UI is a good use case for ARC. A UI doesn't require high performance.
I've written video processing/players on iOS, they all use blocks and  
reference
counting, including to do date/time processing per frame. All while  
using RC

network buffers. And it works quite smoothly.


And did you use ref counting for all allocations and all pointers?


Yes.

There's no doubt that ref counting can be used successfully here and  
there, with a competent programmer knowing when he can just convert it  
to a raw pointer and use that.


The compiler treats pointers to NSObject-derived differently than pointers  
to structs and raw bytes. There is no need to know, you just use them like  
normal pointers, and the compiler inserts the retain/release calls for you.


But I did not use structs. I only used structs for network packet  
overlays. I still created an object that contained the struct to enjoy the  
benefits of the memory management system.


And remember that if you have exceptions, then all the dec code needs to  
be in exception unwind handlers.


I haven't really used exceptions, but they automatically handle the  
reference counting.


-Steve


Re: Table lookups - this is pretty definitive

2014-04-17 Thread monarch_dodra via Digitalmars-d

On Thursday, 17 April 2014 at 18:07:24 UTC, ixid wrote:
I feel like there must be a way of making a fast bit lookup, 
but my version is only moderately fast. You can fit all the 
bits you need in two 64-bit registers or one SSE register. I 
haven't tried bt; does that work with a 64-bit register?


http://dlang.org/phobos/core_bitop.html#.bt

?

Note it can be applied to the table as a whole, rather than to 
the bytes themselves. E.g.:


ubyte[256] buf;
auto b = bt(cast(size_t*)buf.ptr, 428);
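For readers unfamiliar with core.bitop.bt: it tests bit `bitnum` of the bit
array starting at `p`. A C sketch of the same operation (my function name,
assuming word-addressed, little-endian bit order as on x86):

```c
#include <stddef.h>

/* Test bit `bitnum` of the bit array starting at `p`,
   analogous to D's core.bitop.bt. */
static int bit_test(const size_t *p, size_t bitnum)
{
    const size_t bits = sizeof(size_t) * 8;   /* word width in bits */
    return (int)((p[bitnum / bits] >> (bitnum % bits)) & 1);
}
```

So bt over a 256-byte table addresses any of its 2048 bits; bit 428 lands in
byte 53 of the buffer.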


Re: DIP60: @nogc attribute

2014-04-17 Thread Walter Bright via Digitalmars-d

On 4/17/2014 12:41 PM, Steven Schveighoffer wrote:

On Thu, 17 Apr 2014 14:47:00 -0400, Walter Bright 
wrote:


On 4/17/2014 10:05 AM, Steven Schveighoffer wrote:

Obj-C only uses ARC for a minority of the objects.

Really? Every Obj-C API I've seen uses Objective-C objects, which all use RC.


And what about all allocated items?


What do you mean?


Can you call malloc() ?


A UI is a good use case for ARC. A UI doesn't require high performance.

I've written video processing/players on iOS, they all use blocks and reference
counting, including to do date/time processing per frame. All while using RC
network buffers. And it works quite smoothly.


And did you use ref counting for all allocations and all pointers?


Yes.


You never used malloc? for anything? or stack allocated anything? or had any 
pointers to anything that weren't ref counted?


How did that work for printf?



There's no doubt that ref counting can be used successfully here and there,
with a competent programmer knowing when he can just convert it to a raw
pointer and use that.


The compiler treats pointers to NSObject-derived differently than pointers to
structs and raw bytes.


So there *are* regular pointers.


There is no need to know, you just use them like normal
pointers, and the compiler inserts the retain/release calls for you.


I know that with ARC the compiler inserts the code for you. That doesn't make it 
costless.




But I did not use structs. I only used structs for network packet overlays. I
still created an object that contained the struct to enjoy the benefits of the
memory management system.


And remember that if you have exceptions, then all the dec code needs to be in
exception unwind handlers.


I haven't really used exceptions, but they automatically handle the reference
counting.


I know it's done automatically. But you might be horrified at what the generated 
code looks like.




Re: DIP60: @nogc attribute

2014-04-17 Thread Walter Bright via Digitalmars-d

On 4/17/2014 10:41 AM, Dicebot wrote:

On Thursday, 17 April 2014 at 16:57:32 UTC, Walter Bright wrote:

With current limitations @nogc is only useful to verify that embedded code which
does not have GC at all does not use any GC-triggering language features before
it comes to weird linker errors / rt-asserts. But that does not work well either
because of the next problem:


Remember that @nogc will be inferred for template functions. That means that
whether it is @nogc or not will depend on its arguments being @nogc, which is
just what is needed.


No, it looks like I stated that very poorly, because everyone understood it
in the completely opposite way. What I mean is that `put()` is NOT @nogc and it
still should work. Same as weakly pure is kind of pure but allowed to mutate its
arguments, proposed "weakly @nogc" can only call GC via functions directly
accessible from its arguments.


I don't see value for this behavior.



I know that you bring up the array literal issue and gc a lot, but this is
simply not a major issue with @nogc. The @nogc will tell you if it will
allocate on the gc or not, on a case by case basis, and you can use easy
workarounds as necessary.


I beg your pardon, I have indeed overstated that one, but the temptation was just
too high :) On the actual topic - what "case by case" basis do you have in mind?
There are no cases mentioned in the spec where literals are guaranteed not to
allocate, AFAIK. Probably compiler developers know them, but definitely not me.


That's why the compiler will tell you if it will allocate or not.



Re: DIP60: @nogc attribute

2014-04-17 Thread via Digitalmars-d
1986 - Brad Cox and Tom Love create Objective-C, announcing "this 
language has all the memory safety of C combined with all the 
blazing speed of Smalltalk." Modern historians suspect the two 
were dyslexic.


( 
http://james-iry.blogspot.no/2009/05/brief-incomplete-and-mostly-wrong.html 
)




Re: DIP60: @nogc attribute

2014-04-17 Thread John Colvin via Digitalmars-d

On Thursday, 17 April 2014 at 19:51:38 UTC, Walter Bright wrote:

On 4/17/2014 10:41 AM, Dicebot wrote:
On Thursday, 17 April 2014 at 16:57:32 UTC, Walter Bright 
wrote:
With current limitations @nogc is only useful to verify that 
embedded code which
does not have GC at all does not use any GC-triggering 
language features before
it comes to weird linker errors / rt-asserts. But that does 
not work well either because of the next problem:


Remember that @nogc will be inferred for template functions. 
That means that
whether it is @nogc or not will depend on its arguments being 
@nogc, which is

just what is needed.


No, it looks like I stated that very poorly, because 
everyone understood it
in the completely opposite way. What I mean is that `put()` is NOT 
@nogc and it
still should work. Same as weakly pure is kind of pure but 
allowed to mutate its
arguments, proposed "weakly @nogc" can only call GC via 
functions directly

accessible from its arguments.


I don't see value for this behavior.


It's a formal promise that the function won't do any GC work 
*itself*, only indirectly if you pass it something that 
implicitly does heap allocation.


E.g. you can implement some complicated function foo that writes 
to a user-provided output range and guarantee that all GC usage 
is in the control of the caller and his output range.


The advantage of having this as language instead of documentation 
is the turtles-all-the-way-down principle: if some function deep 
inside the call chain under foo decides to use a GC buffer then 
it's a compile-time-error.


Re: DIP60: @nogc attribute

2014-04-17 Thread Steven Schveighoffer via Digitalmars-d
On Thu, 17 Apr 2014 15:55:10 -0400, Walter Bright  
 wrote:



On 4/17/2014 12:41 PM, Steven Schveighoffer wrote:
On Thu, 17 Apr 2014 14:47:00 -0400, Walter Bright  


wrote:


On 4/17/2014 10:05 AM, Steven Schveighoffer wrote:

Obj-C only uses ARC for a minority of the objects.
Really? Every Obj-C API I've seen uses Objective-C objects, which all  
use RC.


And what about all allocated items?


What do you mean?


Can you call malloc() ?


Of course. And then I can wrap it in NSData or NSMutableData.

A UI is a good use case for ARC. A UI doesn't require high  
performance.
I've written video processing/players on iOS, they all use blocks and  
reference
counting, including to do date/time processing per frame. All while  
using RC

network buffers. And it works quite smoothly.


And did you use ref counting for all allocations and all pointers?


Yes.


You never used malloc? for anything? or stack allocated anything? or had  
any pointers to anything that weren't ref counted?


How did that work for printf?


I didn't exactly use printf; iOS has no console. NSLog logs to the Xcode  
console, and that works great.


But we used FILE * plenty. And I've had no problems.

There's no doubt that ref counting can be used successfully here and  
there,
with a competent programmer knowing when he can just convert it to a  
raw

pointer and use that.


The compiler treats pointers to NSObject-derived differently than  
pointers to

structs and raw bytes.


So there *are* regular pointers.


Of course, all C code is valid Objective-C code.


There is no need to know, you just use them like normal
pointers, and the compiler inserts the retain/release calls for you.


I know that with ARC the compiler inserts the code for you. That doesn't  
make it costless.


I'm not saying it's costless. I'm saying the cost is something I didn't  
notice performance-wise.


But my point is, pointers are pointers. I use them the same whether they  
are ARC pointers or normal pointers (they are even declared the same way),  
but the compiler treats them differently.


And remember that if you have exceptions, then all the dec code needs  
to be in

exception unwind handlers.


I haven't really used exceptions, but they automatically handle the  
reference

counting.


I know it's done automatically. But you might be horrified at what the  
generated code looks like.




Perhaps a reason to avoid exceptions :) I generally do anyway, even in D.

-Steve


Re: DIP60: @nogc attribute

2014-04-17 Thread Walter Bright via Digitalmars-d

On 4/17/2014 1:30 PM, Steven Schveighoffer wrote:

I'm not saying it's costless. I'm saying the cost is something I didn't notice
performance-wise.


You won't with FILE*, as it is overwhelmed by file I/O times. Same with UI 
objects.



Re: DIP60: @nogc attribute

2014-04-17 Thread via Digitalmars-d

On Thursday, 17 April 2014 at 19:55:08 UTC, Walter Bright wrote:
I know that with ARC the compiler inserts the code for you. 
That doesn't make it costless.


No, but Objective-C has some overhead to begin with, so it 
matters less. Cocoa is a very powerful framework that will do 
most of the weight-lifting for you, kinda like a swiss army 
knife. In the same league as Python. Slow high level, a variety 
of highly optimized C functions under the hood. IMHO Python and 
Objective-C wouldn't stand a chance without their libraries.


I know it's done automatically. But you might be horrified at 
what the generated code looks like.


Apple has put a lot of resources into ARC. How much slower than 
manual RC varies, some claim as little as 10%, others 30%, 50%, 
100%. In that sense it is proof-of-concept. It is worse, but not 
a lot worse than manual ref counting if you have a compiler that 
does a very good job of it.


But compiled Objective-C code looks "horrible" to begin with… so 
I am not sure how well that translates to D.


Re: DIP60: @nogc attribute

2014-04-17 Thread Steven Schveighoffer via Digitalmars-d
On Thu, 17 Apr 2014 16:47:04 -0400, Walter Bright  
 wrote:



On 4/17/2014 1:30 PM, Steven Schveighoffer wrote:
I'm not saying it's costless. I'm saying the cost is something I didn't  
notice

performance-wise.


You won't with FILE*, as it is overwhelmed by file I/O times. Same with  
UI objects.


OK, you beat it out of me. I admit, when I said "Video processing/players  
with network capability" I meant all FILE * I/O, and really nothing to do  
with video processing or networking.


-Steve


Re: DIP60: @nogc attribute

2014-04-17 Thread via Digitalmars-d
On Thursday, 17 April 2014 at 20:46:57 UTC, Ola Fosheim Grøstad 
wrote:
But compiled Objective-C code looks "horrible" to begin with… 
so I am not sure how well that translates to D.


Just to make it clear: ARC can make more assumptions than manual 
Objective-C calls to retain/release. So ARC being "surprisingly 
fast" relative to manual RC might be due to getting rid of 
Objective-C inefficiencies caused by explicit calls to 
retain/release rather than ARC being an excellent solution. YMMV.




Re: Size_t on x86 is uint,on x64 is ulong,it's a good thing?

2014-04-17 Thread Nick Sabalausky via Digitalmars-d

On 4/17/2014 12:36 PM, FrankLike wrote:

  size_t on x86 is uint, on x64 is ulong - is it a good thing?

  I don't think it's OK. It creates a lot of conversions: for example,
length is ulong and must be cast to int or to uint. I think that's a
waste of effort.




If you want fixed-length, you use uint/ulong/etc. The whole point of 
size_t is for when you need the hardware's native data size.




Re: [OT] from YourNameHere via Digitalmars-d

2014-04-17 Thread Steven Schveighoffer via Digitalmars-d
On Thu, 17 Apr 2014 17:29:47 -0400, Nick Sabalausky  
 wrote:



On 4/17/2014 8:51 AM, Steven Schveighoffer wrote:

Every time I open one of these messages I
get a huge pregnant 5-second pause, along with the Mac Beach Ball
(hourglass) while this message is opened in my news reader.



Sounds like something's wrong with your news reader.


But it only happens on these messages that come "via Digitalmars-d," and  
consistently so. What could be the difference?


I've used this newsreader for years (opera), never had this problem.

-Steve


Re: [OT] from YourNameHere via Digitalmars-d

2014-04-17 Thread Nick Sabalausky via Digitalmars-d

On 4/17/2014 8:51 AM, Steven Schveighoffer wrote:

Every time I open one of these messages I
get a huge pregnant 5-second pause, along with the Mac Beach Ball
(hourglass) while this message is opened in my news reader.



Sounds like something's wrong with your news reader.



Re: DIP60: @nogc attribute

2014-04-17 Thread Walter Bright via Digitalmars-d

On 4/17/2014 1:03 PM, John Colvin wrote:

E.g. you can implement some complicated function foo that writes to a
user-provided output range and guarantee that all GC usage is in the control of
the caller and his output range.


As mentioned elsewhere here, it's easy enough to do a unit test for this.



The advantage of having this as language instead of documentation is the
turtles-all-the-way-down principle: if some function deep inside the call chain
under foo decides to use a GC buffer then it's a compile-time-error.


And that's how @nogc works.


Re: DIP60: @nogc attribute

2014-04-17 Thread Walter Bright via Digitalmars-d

On 4/17/2014 1:53 PM, Steven Schveighoffer wrote:

OK, you beat it out of me. I admit, when I said "Video processing/players with
network capability" I meant all FILE * I/O, and really nothing to do with video
processing or networking.



I would expect that with a video processor, you aren't dealing with ARC 
references inside the routine actually doing the work.




Re: DIP60: @nogc attribute

2014-04-17 Thread Walter Bright via Digitalmars-d
On 4/17/2014 1:46 PM, "Ola Fosheim Grøstad" wrote:

Apple has put a lot of resources into ARC. How much slower than manual RC
varies, some claim as little as 10%, others 30%, 50%, 100%.


That pretty much kills it, even at 10%.


Re: DIP60: @nogc attribute

2014-04-17 Thread Steven Schveighoffer via Digitalmars-d
On Thu, 17 Apr 2014 18:08:43 -0400, Walter Bright  
 wrote:



On 4/17/2014 1:53 PM, Steven Schveighoffer wrote:
OK, you beat it out of me. I admit, when I said "Video  
processing/players with
network capability" I meant all FILE * I/O, and really nothing to do  
with video

processing or networking.



I would expect that with a video processor, you aren't dealing with ARC  
references inside the routine actually doing the work.


Obviously, if you are dealing with raw data, you are not using ARC while  
accessing the data. But you are using ARC to get a reference to that data.


For instance, you might see:

-(void)processVideoData:(NSData *)data
{
   unsigned char *vdata = (unsigned char *)data.bytes;
   // process vdata
   ...
}

During the entire processing, you never increment/decrement a reference  
count, because the caller will have passed data to you with an incremented  
count.


Just because ARC protects the data, doesn't mean you need to constantly  
and needlessly increment/decrement references. If you know the data won't  
go away while you are using it, you can just ignore the reference counting  
aspect.
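The pattern being described can be sketched in plain C with a manual refcount
(all names are mine; Cocoa's actual mechanism is ARC, not this): the caller
holds the reference, and the hot loop merely borrows the raw pointer without
ever touching the count.

```c
#include <stdlib.h>
#include <string.h>

/* Minimal refcounted buffer (illustrative only). */
typedef struct {
    int refs;
    size_t len;
    unsigned char data[];   /* flexible array member, C99 */
} Buffer;

static Buffer *buf_new(size_t len)
{
    Buffer *b = malloc(sizeof(Buffer) + len);
    b->refs = 1;
    b->len = len;
    memset(b->data, 0, len);
    return b;
}

static void buf_release(Buffer *b)
{
    if (--b->refs == 0)
        free(b);
}

/* The hot loop borrows the raw pointer: no inc/dec per element,
   because the caller's reference keeps the buffer alive. */
static unsigned long process(const unsigned char *vdata, size_t len)
{
    unsigned long sum = 0;
    for (size_t i = 0; i < len; ++i)
        sum += vdata[i];
    return sum;
}
```

The cost of reference counting here is one retain/release pair per buffer, not
per byte, which is why it doesn't show up in the per-frame processing time.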


-Steve


Re: DIP60: @nogc attribute

2014-04-17 Thread Walter Bright via Digitalmars-d

On 4/17/2014 3:18 PM, Steven Schveighoffer wrote:

During the entire processing, you never increment/decrement a reference count,
because the caller will have passed data to you with an incremented count.

Just because ARC protects the data, doesn't mean you need to constantly and
needlessly increment/decrement references. If you know the data won't go away
while you are using it, you can just ignore the reference counting aspect.


The salient point there is "if you know". If you are doing it, it is not 
guaranteed memory safe by the compiler. If the compiler is doing it, how does it 
know?


You really are doing *manual*, not automatic, ARC here, because you are making 
decisions about when ARC can be skipped, and you must make those decisions in 
order to have it run at a reasonable speed.


Re: DIP60: @nogc attribute

2014-04-17 Thread Kapps via Digitalmars-d

On Thursday, 17 April 2014 at 09:46:23 UTC, bearophile wrote:

Walter Bright:


http://wiki.dlang.org/DIP60

Start on implementation:

https://github.com/D-Programming-Language/dmd/pull/3455


If I have this program:

__gshared int x = 5;
int main() {
int[] a = [x, x + 10, x * x];
return a[0] + a[1] + a[2];
}


If I compile with all optimizations DMD produces this X86 asm, 
that contains the call to __d_arrayliteralTX, so that main 
can't be @nogc:


But if I compile the code with ldc2 with full optimizations the 
compiler is able to perform a bit of escape analysis, and to 
see the array doesn't need to be allocated, and produces the 
asm:


Now there are no memory allocations.

So what's the right behavour of @nogc? Is it possible to 
compile this main with a future version of ldc2 if I compile 
the code with full optimizations?


Bye,
bearophile


That code is not @nogc safe, as you're creating a dynamic array 
within it. The fact that LDC2 at full optimizations doesn't 
actually allocate is simply an optimization and does not affect 
the design of the code.


If you wanted it to be @nogc, you could use:
int main() @nogc {
int[3] a = [x, x + 10, x * x];
return a[0] + a[1] + a[2];
}

