Re: Memory management design

2013-07-10 Thread Namespace

Do you know about this pull request?
https://github.com/D-Programming-Language/dmd/pull/1903


Re: Memory management design

2013-07-10 Thread Dicebot

On Wednesday, 10 July 2013 at 07:50:17 UTC, JS wrote:

...


I am pretty sure stuff like @nogc (or probably @noheap, or both) 
will have no problems being accepted into the mainstream once 
properly implemented. It is mostly a matter of a volunteer wanting 
to get their hands dirty with the compiler.


Re: Memory management design

2013-07-10 Thread JS

On Tuesday, 9 July 2013 at 23:32:13 UTC, BLM768 wrote:
Given all of this talk about memory management, it would seem 
that it's time for people to start putting forward some ideas 
for improved memory management designs. I've got an idea or two 
of my own, but I'd like to discuss my ideas before I draft a 
DIP so I can try to get everything fleshed out and polished.


Anyway the core idea behind my design is that object lifetimes 
tend to be managed in one of three ways:


1. Tied to a stack frame
2. Tied to an owner object
3. Not tied to any one object (managed by the GC)

To distinguish between these types of objects, one could use a 
set of three storage classes:


1. scope: refers to stack-allocated memory (which seems to be 
the original design behind scope). scope references may not 
be stashed anywhere where they might become invalid. Since this 
is the safest type of reference, any object may be passed by 
scope ref.


2. owned: refers to an object that is heap-allocated but 
manually managed by another object or by a stack frame. owned 
references may only be stashed in other owned references. Any 
non-scope object may be passed by owned ref. This storage 
class might not be usable in @safe code without further 
restrictions.


3. GC-managed: the default storage class. Fairly 
self-explanatory. GC-managed references may not refer to 
scope or owned objects.


Besides helping with the memory management issue, this design 
could also help tame the hairy mess of auto ref; scope ref 
can safely take any stack-allocated object, including 
temporaries, so a function could have scope auto ref 
parameters without needing to be a template function and with 
greater safety than auto ref currently provides.


One can already choose their own memory model in their own code. 
The issue is with the core library and pre-existing code that 
forces you to use the GC model.


@nogc was proposed several years ago but never gained any 
footing. By having the ability to mark stuff as @nogc, Phobos 
could be migrated slowly and, at least, some libraries would be 
weaned off the GC and made available.


I think the use of custom allocators would be better. Plug your 
own memory management model into D.


IMHO nothing will be done, because this kind of talk has been going 
on for years (nearly a decade, as some posts go back to 2006).


Re: Memory management design

2013-07-10 Thread Dicebot

On Tuesday, 9 July 2013 at 23:32:13 UTC, BLM768 wrote:
1. scope: refers to stack-allocated memory (which seems to be 
the original design behind scope). scope references may not 
be stashed anywhere where they might become invalid. Since this 
is the safest type of reference, any object may be passed by 
scope ref.


2. owned: refers to an object that is heap-allocated but 
manually managed by another object or by a stack frame. owned 
references may only be stashed in other owned references. Any 
non-scope object may be passed by owned ref. This storage 
class might not be usable in @safe code without further 
restrictions.


I think merging scope and owned can be usable enough to be 
interesting without introducing any new concepts. Simply make it 
so that scope in a variable declaration means the entity is 
stack-allocated with unique ownership, and scope as a function 
parameter attribute is required to accept scope data, with the 
compiler verifying that no references to it are taken / stored. 
Expecting the mandatory deadalnix comment about lifetime 
definition ;)
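A minimal sketch of how that merged meaning could look (hypothetical: 
the escape checking described here is the proposed behavior, not 
something the current compiler enforces):

class Widget { int x; }

Widget global;

void use(scope Widget w)     // parameter attribute: accepts scope data,
{                            // no reference to w may be taken or stored
    w.x = 42;
    // global = w;           // would be rejected: reference escapes
}

void caller()
{
    scope w = new Widget;    // variable declaration: stack-allocated,
    use(w);                  // unique ownership, destroyed at end of scope
}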


The only thing I have no idea about is whether the scope attribute 
should be shallow or transitive. The former is dangerous, the 
latter severely harms usability.


Re: Memory management design

2013-07-10 Thread Manu
On 10 July 2013 17:53, Dicebot pub...@dicebot.lv wrote:

 On Wednesday, 10 July 2013 at 07:50:17 UTC, JS wrote:

 ...


 I am pretty sure stuff like @nogc (or probably @noheap. or both) will have
 no problems in being accepted into the mainstream once properly
 implemented. It is mostly a matter of volunteer wanting to get dirty with
 the compiler.


I'd push for an ARC implementation. I've become convinced that's what I
actually want, and that GC will never completely satisfy my requirements.

Additionally, while I can see some value in @nogc, I'm not actually sold on
that personally... it feels like explicit attribution is a backwards way of
going about it. I.e., most functions may actually be @nogc, but only the ones
that are explicitly attributed will enjoy that recognition... seems kinda
backwards.


Re: Memory management design

2013-07-10 Thread Dicebot

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:
I'd push for an ARC implementation. I've become convinced 
that's what I
actually want, and that GC will never completely satisfy my 
requirements.


I think those issues are actually orthogonal. I'd love to have a 
verified @noheap attribute even in my old C code. Sometimes the 
very fact that an allocation happens is more important than the 
algorithm by which it is later collected.


Additionally, while I can see some value in @nogc, I'm not 
actually sold on
that personally... it feels explicit attribution is a backwards 
way of
going about it. ie, most functions may actually be @nogc, but 
only the ones
that are explicitly attributed will enjoy that recognition... 
seems kinda

backwards.


Yes, this is a common issue, not unique to @nogc. I am personally 
much in favor of having restrictive attributes enabled by default 
and then adding mutable, @system and @allowheap where those 
are actually needed. But unfortunately there is no way to add 
something that backwards-incompatible, and attribute inference 
seems the only practical way (though I hate it).
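Purely as an illustration of that inverted default, assuming a made-up 
@allowheap attribute (nothing like it exists today):

int sum(const int[] data)    // no annotation: allocation-free by default
{
    int total;
    foreach (x; data) total += x;
    return total;
}

@allowheap int[] duplicate(const int[] data)   // explicitly opts back into the GC
{
    return data.dup;
}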


Re: Memory management design

2013-07-10 Thread Mr. Anonymous

On Wednesday, 10 July 2013 at 08:09:46 UTC, Dicebot wrote:
Yes, this is a common issue not unique to @nogc. I am 
personally much in favor of having restrictive attributes 
enabled by default and then adding mutable @system and 
@allowheap where those are actually needed. But unfortunately 
there is no way to add something that backwards-incompatible 
and attribute inference seems the only practical way (though I 
hate it).


I thought about allowing attributes to be applied to a whole 
module, such as:

@safe @nogc module foo_bar;

Then, @system, @allowheap and friends could be used where 
needed.


Re: Memory management design

2013-07-10 Thread Dicebot

On Wednesday, 10 July 2013 at 08:16:55 UTC, Mr. Anonymous wrote:
I thought about allowing attributes to be applied to a whole 
module, such as:

@safe @nogc module foo_bar;

Then, @system, @allowheap and friends could be used where 
needed.


You can do it, but it is not always possible to disable an 
attribute/qualifier:


@safe:
@system void foo() {} // ok

immutable:
int a; // oh, where is my mutable keyword?

pure:
void foo(); // oops, no impure

If the general notion that even default behavior should have its 
own attribute becomes accepted, this will become less of an issue.


Re: Memory management design

2013-07-10 Thread Paulo Pinto

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:

On 10 July 2013 17:53, Dicebot pub...@dicebot.lv wrote:


On Wednesday, 10 July 2013 at 07:50:17 UTC, JS wrote:


...



I am pretty sure stuff like @nogc (or probably @noheap. or 
both) will have

no problems in being accepted into the mainstream once properly
implemented. It is mostly a matter of volunteer wanting to get 
dirty with

the compiler.



I'd push for an ARC implementation. I've become convinced 
that's what I
actually want, and that GC will never completely satisfy my 
requirements.


Additionally, while I can see some value in @nogc, I'm not 
actually sold on
that personally... it feels explicit attribution is a backwards 
way of
going about it. ie, most functions may actually be @nogc, but 
only the ones
that are explicitly attributed will enjoy that recognition... 
seems kinda

backwards.


That is the approach taken by other languages with untraced 
pointers.


Actually I prefer to have GC by default with something like @nogc 
where it really makes a difference.


Unless D wants to cater to the micro-optimization folks before 
anything else, which is so common in the C and C++ communities.


--
Paulo


Re: Memory management design

2013-07-10 Thread JS

On Wednesday, 10 July 2013 at 09:06:10 UTC, Paulo Pinto wrote:

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:

On 10 July 2013 17:53, Dicebot pub...@dicebot.lv wrote:


On Wednesday, 10 July 2013 at 07:50:17 UTC, JS wrote:


...



I am pretty sure stuff like @nogc (or probably @noheap. or 
both) will have
no problems in being accepted into the mainstream once 
properly
implemented. It is mostly a matter of volunteer wanting to 
get dirty with

the compiler.



I'd push for an ARC implementation. I've become convinced 
that's what I
actually want, and that GC will never completely satisfy my 
requirements.


Additionally, while I can see some value in @nogc, I'm not 
actually sold on
that personally... it feels explicit attribution is a 
backwards way of
going about it. ie, most functions may actually be @nogc, but 
only the ones
that are explicitly attributed will enjoy that recognition... 
seems kinda

backwards.


That is the approach taken by other languages with untraced 
pointers.


Actually I prefer to have GC by default with something like 
@nogc where it really makes a difference.


Unless D wants to cater for the micro-optimizations folks 
before anything else, that is so common in the C and C++ 
communities.




It's not about micro-optimizations. Many real-time applications 
simply can't use D because of its stop-the-world GC (at least not 
without a great amount of work or severe limitations).

By having a @nogc attribute, people can start marking their code, 
the sooner the better (else, at some point, it becomes useless 
because there is too much old code to mark). @nogc respects 
function composition... so if two functions do not rely on the GC, 
then one calling the other will not break anything.

So, as libraries are updated, more and more functions become 
available to those that can't use GC code, making D more useful 
for real-time applications. If custom allocation methods ever come 
about, then @nogc may be either obsolete or extremely useful, 
depending on how the alternate memory models are implemented.

Code that only uses stack allocation or static heap allocation has 
no business being lumped in with code that is GC dependent.
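A sketch of the composition point above, written with the proposed 
@nogc attribute (not part of the language at this point):

@nogc int sumSquares(const int[] xs)
{
    int s;
    foreach (x; xs) s += x * x;
    return s;
}

@nogc int lengthSquared(const int[] v)
{
    return sumSquares(v);    // fine: a @nogc function calling a @nogc function
}

int[] squares(const int[] xs)    // not @nogc: allocates with the GC
{
    auto r = new int[](xs.length);
    foreach (i, x; xs) r[i] = x * x;
    return r;
}

// @nogc int bad(const int[] xs) { return squares(xs)[0]; }
//    would be rejected: a @nogc function may not call a GC-allocating one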




Re: Memory management design

2013-07-10 Thread Dicebot

On Wednesday, 10 July 2013 at 10:40:10 UTC, JS wrote:

...


@nogc itself does not help here as this code will still be 
affected by stop-the-world. Those issues are related, but not 
directly.


It will help to avoid memory leaks when you switch the GC off 
though.


Re: Memory management design

2013-07-10 Thread Paulo Pinto

On Wednesday, 10 July 2013 at 10:40:10 UTC, JS wrote:

On Wednesday, 10 July 2013 at 09:06:10 UTC, Paulo Pinto wrote:

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:

On 10 July 2013 17:53, Dicebot pub...@dicebot.lv wrote:


On Wednesday, 10 July 2013 at 07:50:17 UTC, JS wrote:


...



I am pretty sure stuff like @nogc (or probably @noheap. or 
both) will have
no problems in being accepted into the mainstream once 
properly
implemented. It is mostly a matter of volunteer wanting to 
get dirty with

the compiler.



I'd push for an ARC implementation. I've become convinced 
that's what I
actually want, and that GC will never completely satisfy my 
requirements.


Additionally, while I can see some value in @nogc, I'm not 
actually sold on
that personally... it feels explicit attribution is a 
backwards way of
going about it. ie, most functions may actually be @nogc, but 
only the ones
that are explicitly attributed will enjoy that recognition... 
seems kinda

backwards.


That is the approach taken by other languages with untraced 
pointers.


Actually I prefer to have GC by default with something like 
@nogc where it really makes a difference.


Unless D wants to cater for the micro-optimizations folks 
before anything else, that is so common in the C and C++ 
communities.




It's not about micro-optimizations. Many real-time applications 
simply can't use D because of its stop-the-world GC (at least not 
without a great amount of work or severe limitations).

By having a @nogc attribute, people can start marking their code, 
the sooner the better (else, at some point, it becomes useless 
because there is too much old code to mark). @nogc respects 
function composition... so if two functions do not rely on the GC, 
then one calling the other will not break anything.

So, as libraries are updated, more and more functions become 
available to those that can't use GC code, making D more useful 
for real-time applications. If custom allocation methods ever come 
about, then @nogc may be either obsolete or extremely useful, 
depending on how the alternate memory models are implemented.

Code that only uses stack allocation or static heap allocation has 
no business being lumped in with code that is GC dependent.


I do agree D needs something like @nogc, something like the 
untraced pointers I mentioned.

What I am speaking against is making the GC an opt-in instead of 
the default allocation mode.

In that case it looks more like a workaround than a fix for the 
real problem, which is having a better GC.


Note that by GC, I also mean some form of reference counting with 
compiler support to minimize increment/decrement operations.


--
Paulo


Re: Memory management design

2013-07-10 Thread Michel Fortin

On 2013-07-10 08:00:42 +, Manu turkey...@gmail.com said:


I'd push for an ARC implementation. I've become convinced that's what I
actually want, and that GC will never completely satisfy my requirements.


There are three ways to implement ARC. You can implement it instead of the 
GC, but things with cycles in them will leak. You can implement it as a 
supplement to the GC, where the GC is used to collect the cycles which ARC 
cannot release. Or you can implement it only for a subset of the 
language, by having a base reference-counted class where things derived 
from it are reference counted.


The first two ideas, which implement ARC globally, would have to call a 
function at each and every pointer assignment. Implementing this would 
require a different codegen from standard D, and libraries compiled 
with and without those calls on pointer assignment would be 
incompatible with each other.
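Conceptually, every assignment between traced references would have to 
be lowered to something like the sketch below; rt_arcRetain and 
rt_arcRelease are made-up names standing in for runtime hooks that do 
not exist:

void rt_arcRetain(void* p) nothrow  { /* ++refcount in the object's header */ }
void rt_arcRelease(void* p) nothrow { /* --refcount; destroy and free at zero */ }

// What the compiler would conceptually emit for an assignment of
// two traced pointers.
void assignTraced(T)(ref T* dst, T* src)
{
    if (src !is null) rt_arcRetain(src);
    if (dst !is null) rt_arcRelease(dst);
    dst = src;
}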


On the other hand, having a reference-counted base class is of more 
limited utility, because you can't reuse D code that relies on the GC if 
your requirements do not allow the GC to run when it needs to. But it 
does not create codegen fragmentation.


--
Michel Fortin
michel.for...@michelf.ca
http://michelf.ca



Re: Memory management design

2013-07-10 Thread JS

On Wednesday, 10 July 2013 at 10:49:04 UTC, Dicebot wrote:

On Wednesday, 10 July 2013 at 10:40:10 UTC, JS wrote:

...


@nogc itself does not help here as this code will still be 
affected by stop-the-world. Those issues are related, but not 
directly.


It will help to avoid memory leaks when you switch the GC off 
though.


Of course, I never said stop-the-world only affected certain 
parts of the program. Having @nogc allows one to use only those 
functions, disable the GC, and not worry about running out of 
memory.


E.g., import @nogc std.string; would only import @nogc functions. 
One could then disable the GC, use reference counting, and write a 
real-time app. (It would be better to specify the module as @nogc; 
then all imports could only pull in @nogc functions.)





Re: Memory management design

2013-07-10 Thread JS

On Wednesday, 10 July 2013 at 10:56:48 UTC, Paulo Pinto wrote:

On Wednesday, 10 July 2013 at 10:40:10 UTC, JS wrote:

On Wednesday, 10 July 2013 at 09:06:10 UTC, Paulo Pinto wrote:

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:

On 10 July 2013 17:53, Dicebot pub...@dicebot.lv wrote:


On Wednesday, 10 July 2013 at 07:50:17 UTC, JS wrote:


...



I am pretty sure stuff like @nogc (or probably @noheap. or 
both) will have
no problems in being accepted into the mainstream once 
properly
implemented. It is mostly a matter of volunteer wanting to 
get dirty with

the compiler.



I'd push for an ARC implementation. I've become convinced 
that's what I
actually want, and that GC will never completely satisfy my 
requirements.


Additionally, while I can see some value in @nogc, I'm not 
actually sold on
that personally... it feels explicit attribution is a 
backwards way of
going about it. ie, most functions may actually be @nogc, 
but only the ones
that are explicitly attributed will enjoy that 
recognition... seems kinda

backwards.


That is the approach taken by other languages with untraced 
pointers.


Actually I prefer to have GC by default with something like 
@nogc where it really makes a difference.


Unless D wants to cater for the micro-optimizations folks 
before anything else, that is so common in the C and C++ 
communities.




It's not about micro-optimizations. Many real-time applications 
simply can't use D because of its stop-the-world GC (at least not 
without a great amount of work or severe limitations).

By having a @nogc attribute, people can start marking their code, 
the sooner the better (else, at some point, it becomes useless 
because there is too much old code to mark). @nogc respects 
function composition... so if two functions do not rely on the GC, 
then one calling the other will not break anything.

So, as libraries are updated, more and more functions become 
available to those that can't use GC code, making D more useful 
for real-time applications. If custom allocation methods ever come 
about, then @nogc may be either obsolete or extremely useful, 
depending on how the alternate memory models are implemented.

Code that only uses stack allocation or static heap allocation has 
no business being lumped in with code that is GC dependent.


I do agree D needs something like @nogc, something like the 
untraced pointers I mentioned.

What I am speaking against is making the GC an opt-in instead of 
the default allocation mode.




I agree but it's not going to happen ;/

In such case it looks more as a workaround instead of fixing 
the real problem, which is having a better GC.


Note that by GC, I also mean some form of reference counting 
with compiler support to minimize increment/decrement 
operations.


I don't know if that is a solid statement. ARC is pretty 
different from AGC.


I personally think memory management should be up to the 
programmer with some sort of GC as a fallback, ideally 
optional... maybe even selectable at run-time.




Re: Memory management design

2013-07-10 Thread Paulo Pinto

On Wednesday, 10 July 2013 at 11:38:35 UTC, JS wrote:

On Wednesday, 10 July 2013 at 10:56:48 UTC, Paulo Pinto wrote:

On Wednesday, 10 July 2013 at 10:40:10 UTC, JS wrote:

On Wednesday, 10 July 2013 at 09:06:10 UTC, Paulo Pinto wrote:

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:

On 10 July 2013 17:53, Dicebot pub...@dicebot.lv wrote:


On Wednesday, 10 July 2013 at 07:50:17 UTC, JS wrote:


...



I am pretty sure stuff like @nogc (or probably @noheap. or 
both) will have
no problems in being accepted into the mainstream once 
properly
implemented. It is mostly a matter of volunteer wanting to 
get dirty with

the compiler.



I'd push for an ARC implementation. I've become convinced 
that's what I
actually want, and that GC will never completely satisfy my 
requirements.


Additionally, while I can see some value in @nogc, I'm not 
actually sold on
that personally... it feels explicit attribution is a 
backwards way of
going about it. ie, most functions may actually be @nogc, 
but only the ones
that are explicitly attributed will enjoy that 
recognition... seems kinda

backwards.


That is the approach taken by other languages with untraced 
pointers.


Actually I prefer to have GC by default with something like 
@nogc where it really makes a difference.


Unless D wants to cater for the micro-optimizations folks 
before anything else, that is so common in the C and C++ 
communities.




It's not about micro-optimizations. Many real-time applications 
simply can't use D because of its stop-the-world GC (at least not 
without a great amount of work or severe limitations).

By having a @nogc attribute, people can start marking their code, 
the sooner the better (else, at some point, it becomes useless 
because there is too much old code to mark). @nogc respects 
function composition... so if two functions do not rely on the GC, 
then one calling the other will not break anything.

So, as libraries are updated, more and more functions become 
available to those that can't use GC code, making D more useful 
for real-time applications. If custom allocation methods ever come 
about, then @nogc may be either obsolete or extremely useful, 
depending on how the alternate memory models are implemented.

Code that only uses stack allocation or static heap allocation has 
no business being lumped in with code that is GC dependent.


I do agree D needs something like @nogc, something like the 
untraced pointers I mentioned.

What I am speaking against is making the GC an opt-in instead of 
the default allocation mode.




I agree but it's not going to happen ;/

In such case it looks more as a workaround instead of fixing 
the real problem, which is having a better GC.


Note that by GC, I also mean some form of reference counting 
with compiler support to minimize increment/decrement 
operations.


I don't know if that is a solid statement. ARC is pretty 
different from AGC.


Reference counting is pretty much seen as a primitive form of 
garbage collection in the CS literature.


In some books it is covered in the very first chapter, hence the 
way I phrased my comment.



--
Paulo


Re: Memory management design

2013-07-10 Thread Kagamin

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:

most functions may actually be @nogc


Most functions can't be @nogc because they throw exceptions.


Re: Memory management design

2013-07-10 Thread bearophile

Kagamin:


Most functions can't be @nogc because they throw exceptions.


Probably about half of my functions/methods are tagged with 
nothrow. And as .dup becomes nothrow and a few more functions 
become nothrow (iota, etc.), that percentage will increase. I have 
also proposed adding to Phobos some non-throwing functions, like a 
maybeTo, that would help increase the percentage of nothrow functions:


http://d.puremagic.com/issues/show_bug.cgi?id=6840
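A rough sketch of the kind of non-throwing helper being proposed there 
(the actual signature in the issue may differ):

import std.conv : to;
import std.typecons : Nullable;

Nullable!T maybeTo(T, S)(S value) nothrow
{
    try
    {
        return Nullable!T(value.to!T);
    }
    catch (Exception)
    {
        return Nullable!T.init;   // "no value" instead of a thrown, GC-allocated exception
    }
}

unittest
{
    assert(maybeTo!int("42").get == 42);
    assert(maybeTo!int("oops").isNull);
}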

Bye,
bearophile


Re: Memory management design

2013-07-10 Thread John Colvin

On Wednesday, 10 July 2013 at 13:00:53 UTC, Kagamin wrote:

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:

most functions may actually be @nogc


Most functions can't be @nogc because they throw exceptions.


I think I mentioned before, elsewhere, that @nogc could allow 
exceptions. No one who is sensitive to memory usage is going to 
use exceptions for anything other than exceptional circumstances, 
which perhaps don't need the same stringent memory control and 
high performance as the normal code path.



How much of the exception model would have to change in order to 
free them from the GC? I don't see high performance as a concern 
for exceptions, so even an inefficient solution would be fine.


Re: Memory management design

2013-07-10 Thread Dicebot

On Wednesday, 10 July 2013 at 13:57:50 UTC, John Colvin wrote:
How much of the exception model would have to change in order 
to free them from the GC? I don't see high performance as a 
concern for exceptions so even an inefficient situation would 
be fine.


Well, you can just throw malloc'ed exceptions. The problem is that 
druntime and Phobos use new for exceptions, and that is hard-wired 
into the GC. It has generally the same issues as a global 
customized new-allocator hook.
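For instance, a malloc'ed exception can already be thrown by hand; the 
sketch below uses a made-up mallocNew helper, and the catch site becomes 
responsible for freeing the object, which is exactly the manual protocol 
druntime and Phobos do not follow:

import core.stdc.stdlib : malloc, free;
import std.conv : emplace;

// Hypothetical helper: construct a throwable in malloc'ed memory.
T mallocNew(T : Throwable, Args...)(Args args)
{
    enum size = __traits(classInstanceSize, T);
    void[] mem = malloc(size)[0 .. size];
    return emplace!T(mem, args);
}

void fail()
{
    throw mallocNew!Exception("no GC involved");
}

void caller()
{
    try
        fail();
    catch (Exception e)
    {
        // handle it, then give the memory back manually
        free(cast(void*) e);
    }
}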


Re: Memory management design

2013-07-10 Thread sclytrack

On Tuesday, 9 July 2013 at 23:32:13 UTC, BLM768 wrote:
Given all of this talk about memory management, it would seem 
that it's time for people to start putting forward some ideas 
for improved memory management designs. I've got an idea or two 
of my own, but I'd like to discuss my ideas before I draft a 
DIP so I can try to get everything fleshed out and polished.


Anyway the core idea behind my design is that object lifetimes 
tend to be managed in one of three ways:


1. Tied to a stack frame
2. Tied to an owner object
3. Not tied to any one object (managed by the GC)

To distinguish between these types of objects, one could use a 
set of three storage classes:


1. scope: refers to stack-allocated memory (which seems to be 
the original design behind scope). scope references may not 
be stashed anywhere where they might become invalid. Since this 
is the safest type of reference, any object may be passed by 
scope ref.


2. owned: refers to an object that is heap-allocated but 
manually managed by another object or by a stack frame. owned 
references may only be stashed in other owned references. Any 
non-scope object may be passed by owned ref. This storage 
class might not be usable in @safe code without further 
restrictions.


3. GC-managed: the default storage class. Fairly 
self-explanatory. GC-managed references may not refer to 
scope or owned objects.


Besides helping with the memory management issue, this design 
could also help tame the hairy mess of auto ref; scope ref 
can safely take any stack-allocated object, including 
temporaries, so a function could have scope auto ref 
parameters without needing to be a template function and with 
greater safety than auto ref currently provides.


--


2. Tied to an owner object

Why not just go with manual memory management? Just store everything
in a tree-like structure.

SuperOwner
--Child1
--Child2
SubChild1
SubChild2
--Container1
--Container2
--TStringList

Freeing Child2 disposes of everything below it.
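A small sketch of that owner-tree pattern (names are made up; the child 
array itself still comes from the GC here, just to keep the sketch short):

class Owner
{
    private Owner[] children;

    this(Owner parent = null)
    {
        if (parent !is null)
            parent.children ~= this;   // attach below the owner
    }

    void dispose()
    {
        foreach (c; children)
            c.dispose();               // disposing a node disposes everything below it
        children = null;
        release();                     // this node's own resources (buffers, handles, ...)
    }

    protected void release() {}        // overridden by SuperOwner, TStringList, etc.
}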






Re: Memory management design

2013-07-10 Thread bearophile

sclytrack:


Why not just go manual memory. Just store everything
in a tree-like structure.

SuperOwner
--Child1
--Child2
SubChild1
SubChild2
--Container1
--Container2
--TStringList

Freeing a Child2 disposes of everything below.


Something like this?
http://swapped.cc/?_escaped_fragment_=/halloc#!/halloc

A manual hierarchical allocator is possibly handy to have in 
Phobos. But it doesn't replace an owning scheme for automatic 
memory management.


Bye,
bearophile


Re: Memory management design

2013-07-10 Thread Paulo Pinto

Am 10.07.2013 15:57, schrieb John Colvin:

On Wednesday, 10 July 2013 at 13:00:53 UTC, Kagamin wrote:

On Wednesday, 10 July 2013 at 08:00:55 UTC, Manu wrote:

most functions may actually be @nogc


Most functions can't be @nogc because they throw exceptions.


I think I mentioned before, elsewhere, that @nogc could allow
exceptions. No one who is sensitive to memory usage is going to use
exceptions for anything other than exceptional circumstances, which
perhaps don't need the same stringent memory control and high
performance as the normal code path.


How much of the exception model would have to change in order to free
them from the GC? I don't see high performance as a concern for
exceptions so even an inefficient situation would be fine.


Who is going to write two versions of the library then?

Throwing exceptions with @nogc pointers floating around would just lead 
to the same headache as in C++.


--
Paulo


Re: Memory management design

2013-07-10 Thread Johannes Pfau
Am Wed, 10 Jul 2013 18:12:42 +0200
schrieb Paulo Pinto pj...@progtools.org:

 Who is going to write two versions of the library then?
 
 Throwing exceptions with @nogc pointers floating around would just
 lead to the same headache as in C++.

This will really be an issue if/once we support systems which just
can't run a GC. (Because of really limited memory or because of code
size limitations...)

Once we have ARC we might check whether switching all exceptions to ARC
would work. If we manage to combine ARC with different custom
allocators it can be integrated perfectly with the GC as well, although
for an ARC-object allocated from the GC the reference count code would
just be no-ops.


Re: Memory management design

2013-07-10 Thread BLM768

On Wednesday, 10 July 2013 at 07:50:17 UTC, JS wrote:


One can already choose their own memory model in their own 
code. The issue is with the core library and pre-existing code 
that forces you to use the GC model.


It's possible to use your own memory model, but that doesn't mean 
it's necessarily convenient or safe, and there's no standardized 
method of going about it. If it becomes standardized, there's a 
much higher chance of the core library using it.


@nogc has been proposed several years ago but not gotten any 
footing. By having the ability to mark stuff has @nogc phobos 
could be migrated slowly and, at least, some libraries would be 
weaned off the GC and available.


I think the use of custom allocators would be better. Plug your 
own memory management model into D.


Memory management and memory allocation are not the same issue; 
from a purely theoretical standpoint, they're nearly orthogonal, 
at least without a compacting collector. If both the GC and 
the allocators are designed in a sufficiently flexible and 
modular manner, it would be possible to tie several 
general-purpose allocators to the GC at once. There are some 
allocators that can't be shoehorned into the GC model, but those 
would just return non-GC references.


On Wednesday, 10 July 2013 at 07:59:41 UTC, Dicebot wrote:
I think merging scope and owned can be usable enough to be 
interesting without introducing any new concepts. Simply make 
it that scope in a variable declaration means it is a 
stack-allocated entity with unique ownership and scope as a 
function parameter attribute is required to accept scope data, 
verifying no references to it are taken / stored. Expecting 
mandatory deadalnix comment about lifetime definition ;)


Most of the functionality of owned is redundant, but there are 
still some corner cases where it could be useful. The idea behind 
it is to have it function very much like a pointer in C++ code. 
For non-reference types, you could just use a pointer, but using 
a pointer with reference types introduces an extra dereference 
operation to get to the real data.


This is something that could be implemented as a library type 
rather than an intrinsic part of the language, and that would 
probably be better because it's really sort of a low-level tool.
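One possible shape for such a library type, sketched with made-up 
details: unique ownership, non-GC allocation, deterministic destruction.

import core.stdc.stdlib : malloc, free;
import std.conv : emplace;

struct Owned(T) if (is(T == class))
{
    private T _ref;

    static Owned create(Args...)(Args args)
    {
        enum size = __traits(classInstanceSize, T);
        Owned o;
        o._ref = emplace!T(malloc(size)[0 .. size], args);
        return o;
    }

    @disable this(this);    // unique ownership: no implicit copies

    ~this()
    {
        if (_ref !is null)
        {
            destroy(_ref);              // run the destructor...
            free(cast(void*) _ref);     // ...and release the memory deterministically
            _ref = null;
        }
    }

    inout(T) get() inout { return _ref; }
    alias get this;         // use it like the underlying reference
}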


Only thing I have no idea about is if scope attribute should 
be shallow or transitive. Former is dangerous, latter severely 
harms usability.


I'm not sure how shallow scope would be dangerous. If a scope 
object contains non-scope references to GC-allocated data, it's 
perfectly safe to stash those references somewhere because the 
target of the reference won't be collected. If the object 
contains scope members (if the language even allows that), then 
references to those members should actually inherit the 
container's scope status, not the members' scope status, 
because scope would be overloaded in that case to mean "packed 
into a (potentially heap-allocated) object". scope is all about 
where the objects might be allocated, which is not a transitive 
property.


Memory management design

2013-07-09 Thread BLM768
Given all of this talk about memory management, it would seem 
that it's time for people to start putting forward some ideas for 
improved memory management designs. I've got an idea or two of my 
own, but I'd like to discuss my ideas before I draft a DIP so I 
can try to get everything fleshed out and polished.


Anyway the core idea behind my design is that object lifetimes 
tend to be managed in one of three ways:


1. Tied to a stack frame
2. Tied to an owner object
3. Not tied to any one object (managed by the GC)

To distinguish between these types of objects, one could use a 
set of three storage classes:


1. scope: refers to stack-allocated memory (which seems to be 
the original design behind scope). scope references may not 
be stashed anywhere where they might become invalid. Since this 
is the safest type of reference, any object may be passed by 
scope ref.


2. owned: refers to an object that is heap-allocated but 
manually managed by another object or by a stack frame. owned 
references may only be stashed in other owned references. Any 
non-scope object may be passed by owned ref. This storage class 
might not be usable in @safe code without further restrictions.


3. GC-managed: the default storage class. Fairly 
self-explanatory. GC-managed references may not refer to scope 
or owned objects.


Besides helping with the memory management issue, this design 
could also help tame the hairy mess of auto ref; scope ref 
can safely take any stack-allocated object, including 
temporaries, so a function could have scope auto ref parameters 
without needing to be a template function and with greater safety 
than auto ref currently provides.
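To make the intended rules concrete, a hypothetical example in the 
proposed syntax (neither owned nor this checking exists in the compiler 
today):

class Node { int value; }

Node globalRef;                    // 3. GC-managed: the default

void useScope(scope ref Node n)    // may not stash n anywhere it could outlive the call
{
    n.value = 1;
}

void example()
{
    scope Node s = new Node;       // 1. tied to this stack frame
    owned Node o = new Node;       // 2. heap-allocated, manually owned by this frame

    globalRef = new Node;          // fine: a GC-managed reference to GC-managed data
    // globalRef = s;              // error: GC-managed refs may not refer to scope data
    // globalRef = o;              // error: ...nor to owned data

    useScope(s);                   // any object may be passed by scope ref
    useScope(o);
    useScope(globalRef);
}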