Re: Template alias as template specialisation not recognized.

2021-01-15 Thread Basile B. via Digitalmars-d-learn

On Saturday, 16 January 2021 at 01:21:24 UTC, Paul wrote:
I'm having issues when trying to use a template alias as a 
template specialisation.

When using the following:

alias Vec(uint size, Type) = Mat!(size, 1, Type);


void setUniform(V : Vec!(L, bool), int L)(string name, V 
value) {...}



Vec!(4, bool) a;
setUniform("test", a);


I get the following error:
template `shader.Shader.setUniform` cannot deduce function 
from argument types `!()(string, Mat!(4u, 1u, bool))`, 
candidates are:
shader.d(43, 7): `setUniform(V : Vec!(L, bool), uint L)(string 
name, V value)`


Meanwhile, when using the following, I have no issues:
void setUniform(V : Mat!(L, 1, bool), int L)(string name, V 
value) {}


In this case you can use a const template parameter:

  alias Vec(uint size, Type, const uint length = 1) = Mat!(size, 
length, Type);


Although this is not a generic workaround for the issue mentioned.


Re: Template alias as template specialisation not recognized.

2021-01-15 Thread Paul via Digitalmars-d-learn

On Saturday, 16 January 2021 at 01:38:38 UTC, Paul Backus wrote:

You have encountered issue 1807:

https://issues.dlang.org/show_bug.cgi?id=1807


Ah I see, thank you. Sad to see several DIPs I'd be interested 
in are postponed :(

Thanks for the workaround hint, I'll probably be using that.


Re: Template alias as template specialisation not recognized.

2021-01-15 Thread Paul Backus via Digitalmars-d-learn

On Saturday, 16 January 2021 at 01:21:24 UTC, Paul wrote:
I'm having issues when trying to use a template alias as a 
template specialisation.

When using the following:

alias Vec(uint size, Type) = Mat!(size, 1, Type);


void setUniform(V : Vec!(L, bool), int L)(string name, V 
value) {...}



Vec!(4, bool) a;
setUniform("test", a);


I get the following error:
template `shader.Shader.setUniform` cannot deduce function 
from argument types `!()(string, Mat!(4u, 1u, bool))`, 
candidates are:
shader.d(43, 7): `setUniform(V : Vec!(L, bool), uint L)(string 
name, V value)`


Meanwhile, when using the following, I have no issues:
void setUniform(V : Mat!(L, 1, bool), int L)(string name, V 
value) {}


You have encountered issue 1807:

https://issues.dlang.org/show_bug.cgi?id=1807

The easiest way to work around it that I know of is to change 
`Vec` from an alias into a struct:


struct Vec(uint size_, Type)
{
Mat!(size_, 1, Type) payload;
alias payload this;
}
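To make the struct-based workaround concrete, here is a minimal self-contained sketch; `Mat` below is a hypothetical stand-in for the poster's matrix type, not the real one:

```d
// Hypothetical stand-in for the poster's Mat template.
struct Mat(uint rows, uint cols, Type) { Type[rows * cols] data; }

// The workaround: Vec is now a real struct template, with the
// wrapped Mat exposed through alias this.
struct Vec(uint size, Type)
{
    Mat!(size, 1, Type) payload;
    alias payload this;
}

// The specialisation now matches, because Vec!(4, bool) is an
// actual instantiation rather than an alias that has already
// been rewritten to Mat!(4, 1, bool).
void setUniform(V : Vec!(L, bool), uint L)(string name, V value) { }

void main()
{
    Vec!(4, bool) a;
    setUniform("test", a); // L is deduced as 4
}
```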


Template alias as template specialisation not recognized.

2021-01-15 Thread Paul via Digitalmars-d-learn
I'm having issues when trying to use a template alias as a 
template specialisation.

When using the following:

alias Vec(uint size, Type) = Mat!(size, 1, Type);


void setUniform(V : Vec!(L, bool), int L)(string name, V value) 
{...}



Vec!(4, bool) a;
setUniform("test", a);


I get the following error:
template `shader.Shader.setUniform` cannot deduce function from 
argument types `!()(string, Mat!(4u, 1u, bool))`, candidates 
are:
shader.d(43, 7): `setUniform(V : Vec!(L, bool), uint L)(string 
name, V value)`


Meanwhile, when using the following, I have no issues:
void setUniform(V : Mat!(L, 1, bool), int L)(string name, V 
value) {}


Re: Why many programmers don't like GC?

2021-01-15 Thread Guillaume Piolat via Digitalmars-d-learn
On Friday, 15 January 2021 at 19:49:34 UTC, Ola Fosheim Grøstad 
wrote:


Many open source projects (and also some commercial ones) work 
ok for small datasets, but tank when you increase the dataset. 
So "match and mix" basically means use it for prototyping, but 
do-not-rely-on-it-if-you-can-avoid-it.


It's certainly true that in team dynamics, without any reward, 
efficiency can fall victim to a tragedy of the commons.


Well, any software invariant is harder to hold if the 
stakeholders don't care.

(be it "being fast", or "being correct", or other invariants).




Re: Open question: what code pattern you use usually for null safety problem

2021-01-15 Thread Basile B. via Digitalmars-d-learn

On Thursday, 14 January 2021 at 18:24:44 UTC, ddcovery wrote:
I know there is other threads about null safety and the 
"possible" ways to support this in D and so on.

[...]
If it's not a bother, I'd like to know how you usually approach 
it


[...]

Thanks!!!


I have an opDispatch solution here [1], probably very similar to 
the other opDispatch solution mentioned. It has been used in 
D-Scanner for several years, e.g. here [2]. I'd like to have this 
as a first-class operator because, as usual in D, you can do 
great things with templates, but then completion is totally 
unable to deal with them. Also, there's a great difference 
between using the template to refactor existing code and using it 
to write new code. Very frustrating to write `safeAccess(stuff).` 
and have no completion popup appear.


[1]: 
https://gitlab.com/basile.b/iz/-/blob/master/import/iz/sugar.d#L1655
[2]: 
https://github.com/dlang-community/D-Scanner/blob/2963358eb4a24064b0893493684d4075361297eb/src/dscanner/analysis/assert_without_msg.d#L42
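For readers who don't want to dig through the links, here is a minimal sketch of the opDispatch idea in the spirit of iz.sugar's safeAccess; this is illustrative code, not the real library implementation, and it only chains through class-typed members:

```d
// Wrapper that turns a.b.c chains into null-propagating accesses.
struct SafeAccess(T) if (is(T == class))
{
    T m;

    // Only members that are themselves class references can be chained.
    auto opDispatch(string member)()
        if (is(typeof(__traits(getMember, T, member)) == class))
    {
        alias M = typeof(__traits(getMember, T, member));
        // If the current link is null, propagate null instead of crashing.
        return SafeAccess!M(m is null ? null : __traits(getMember, m, member));
    }

    bool opCast(B : bool)() const { return m !is null; }
}

auto safeAccess(T)(T obj) if (is(T == class)) { return SafeAccess!T(obj); }

class Person { Person father; string name; }

void main()
{
    auto p = new Person;                   // p.father is null
    auto g = safeAccess(p).father.father;  // no null dereference
    assert(!g);                            // chain broke at the first null
}
```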





Re: Why many programmers don't like GC?

2021-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 15 January 2021 at 22:13:01 UTC, Max Haughton wrote:
I think the way forward is some robust move semantics and 
analysis like Rust. I suppose ideally we would have some kind 
of hidden ARC behind the scenes but I don't know how that would 
play with structs.


If they are heap allocated then you just put the reference count 
at a negative offset (common strategy).


You need pointer types for it, but that is not a big issue if the 
strategy is to support both the old GC and ARC.  You basically 
just need to get library authors that support ARC to mark their 
library code in some way.
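The negative-offset layout mentioned above can be sketched in a few lines; this is illustrative ARC plumbing under the stated assumption (a size_t header directly before the payload, alignment issues ignored), not anything D's runtime does today:

```d
import core.stdc.stdlib : free, malloc;

// Allocate a payload with its reference count stored just before it.
T* rcAlloc(T)()
{
    auto raw = cast(size_t*) malloc(size_t.sizeof + T.sizeof);
    raw[0] = 1;                 // initial refcount lives in the header
    auto p = cast(T*) (raw + 1);
    *p = T.init;                // user code only ever sees the payload
    return p;
}

// The count is reachable from the payload pointer at a negative offset.
ref size_t rcCount(T)(T* p) { return *((cast(size_t*) p) - 1); }

void rcRetain(T)(T* p) { ++rcCount(p); }

void rcRelease(T)(T* p)
{
    if (--rcCount(p) == 0)
        free((cast(size_t*) p) - 1);   // free from the real block start
}

void main()
{
    struct S { int x; }
    auto s = rcAlloc!S();
    s.x = 42;
    rcRetain(s);    // count: 2
    rcRelease(s);   // count: 1
    rcRelease(s);   // count: 0, block freed
}
```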





Re: Why many programmers don't like GC?

2021-01-15 Thread Max Haughton via Digitalmars-d-learn

On Friday, 15 January 2021 at 21:49:07 UTC, H. S. Teoh wrote:
On Fri, Jan 15, 2021 at 09:04:13PM +, welkam via 
Digitalmars-d-learn wrote:

[...]


As the joke goes, "you can write assembly code in any 
language". :-D  If you code in a sloppy way, it doesn't matter 
what language you write in, your program will still suck.  No 
amount of compiler magic will be able to help you.  The 
solution is not to blame this or that, it's to learn how to use 
what the language offers you effectively.




[...]


I agree that the GC is useful, but it is a serious hindrance on 
the language that there is no alternative other than really bad 
smart pointers (well written, but hard to know their overhead) 
and malloc and free. I don't mind using the GC for my own stuff, 
but it's too difficult to avoid it at the moment for the times 
when it gets in the way.


I think the way forward is some robust move semantics and 
analysis like Rust. I suppose ideally we would have some kind of 
hidden ARC behind the scenes but I don't know how that would play 
with structs.


One more cynical argument for having a modern alternative is that 
it's a huge hindrance to the language's coolness among the next 
generation of programmers, and awareness is everything (most 
people won't have heard of D).


Re: Why many programmers don't like GC?

2021-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 15 January 2021 at 21:18:55 UTC, aberba wrote:

TL;DR:

In summation, the garbage collection system is a robust part 
of Unreal Engine that affords C++ programmers a lot of safety 
from memory leaks, as well as convenience. With this 
high-level discussion, I was aiming to introduce the system at 
a conceptual level, and I hope I have achieved that.


What is your conceptual level? You haven't described what it 
does and does not do. But yes, frameworks that allow "scripting" 
in some shape or form (compiled or not) have to hide internal 
structures and intricacies and provide some convenience.


However, if you write your own from scratch, you can often 
build the marking into an existing pass, so you get it for free. 
Not uncommon for people who write code that modifies graphs.


There is a big difference between writing a dedicated collector 
for a dedicated graph, and a general ownership mechanism for the 
whole program.






Re: Why many programmers don't like GC?

2021-01-15 Thread H. S. Teoh via Digitalmars-d-learn
On Fri, Jan 15, 2021 at 09:04:13PM +, welkam via Digitalmars-d-learn wrote:
> On Friday, 15 January 2021 at 07:35:00 UTC, H. S. Teoh wrote:
> > (1) Refactored one function called from an inner loop to reuse a
> > buffer instead of allocating a new one each time, thus eliminating a
> > large amount of garbage from small allocations;
> > <...>
> > The result was about 40-50% reduction in runtime, which is close to
> > about a 2x speedup.
> 
> I think this message needs to be signal boosted. Most of the time GC
> is not the problem. The problem is sloppy memory usage. If you
> allocate a lot of temporary objects your performance will suffer even
> if you use malloc and free.

As the joke goes, "you can write assembly code in any language". :-D  If
you code in a sloppy way, it doesn't matter what language you write in,
your program will still suck.  No amount of compiler magic will be able
to help you.  The solution is not to blame this or that, it's to learn
how to use what the language offers you effectively.


> If you write code that tries to use stack allocation as much as
> possible, doesn't copy data around, reuses buffers then it will be
> faster than manual memory management that doesn't do that. And thats
> with a "slow" GC.

And with D, it's actually easy to do this, because D gives you tools
like slices and by-value structs.  Having slices backed by the GC is
actually a very powerful combination that people seem to overlook: it
means you can freely refer to data by slicing the buffer.  Strings being
slices, as opposed to null-terminated, is a big part of this.  In C, you
cannot assume anything about how the memory of a buffer is managed
(unless you allocated it yourself); as a result, in typical C code
strcpy's, strdup's are everywhere.  Want a substring?  You can't
null-terminate the parent string without affecting code that still
depends on it; solution? strdup.  Want to store a string in some
persistent data structure?  You can't be sure the pointer will still be
valid (or that the contents pointed to won't change); solution? strdup,
or strcpy.  Want to parse a string into words?  Either you modify it
in-place (e.g. strtok), invalidating any other references to it, or you
have to make new allocations of every segment.  GC or no GC, this will
not lead to a good place, performance-wise.

I could not have written fastcsv if I had to work under the constraints
of C's null-terminated strings under manual memory management.  Well, I
*could*, but it would have taken 10x the amount of effort, and the API
would be 5x uglier due to the memory management paraphernalia required
to do this correctly in C.  And to support lazy range-based iteration
would require a whole new set of APIs in C just for that purpose.  In
D, I can simply take slices of the input -- eliminating a whole bunch of
copying.  And backed by the GC -- so the code doesn't have to be
cluttered with memory management paraphernalia, but can have a simple,
easy-to-use API compatible across a large range of use cases. Lazy
iteration comes "for free", no need to introduce an entire new API.
It's a win-win.
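The slice-instead-of-strdup point can be shown in a few lines; a minimal sketch (the key/value splitting is just an example, not fastcsv code):

```d
import std.string : indexOf;

void main()
{
    string line = "name=value";
    auto eq = line.indexOf('=');

    // Both halves are views into the same GC-backed buffer:
    // no allocation, no copying, no null terminators to manage.
    string key = line[0 .. eq];
    string val = line[eq + 1 .. $];

    assert(key == "name" && val == "value");
    // The GC keeps the underlying buffer alive as long as any slice
    // refers to it; the C equivalent would need a strdup (or manual
    // length bookkeeping) for each part.
}
```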

All that's really needed is for people to be willing to drop their
C/C++/Java coding habits, and write D the way it's meant to be written:
with preference for stack-allocated structs and by-value semantics,
using class objects only for more persistent data. Use slices for
maximum buffer reuse, avoid needless copying. Use compile-time
introspection to generate code statically where possible instead of
needlessly recomputing stuff at runtime.  Don't fear the GC; embrace it
and use it to your advantage.  If it becomes a bottleneck, refactor that
part of the code.  No need to rewrite the entire project the painful
way; most of the time GC performance issues are localised and have
relatively simple fixes.


T

-- 
Once the bikeshed is up for painting, the rainbow won't suffice. -- Andrei 
Alexandrescu


Re: Why many programmers don't like GC?

2021-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 15 January 2021 at 21:15:29 UTC, aberba wrote:
Isn't it more theoretical/imaginary/hypothetical than something 
really measured from a real-world use case? Almost all large 
software use cases I've seen used mix and match.


No?! Chrome has a garbage collector because JavaScript acquire 
resources in a somewhat chaotic manner, but they have fine tuned 
it and only call it when the call stack is short.


High quality game engines have similarly fine tuned collection, 
and not really a big sweeping conservative scan that lock down 
threads.



(BTW ARC is also another form of GC)


By GC in this thread we speak of tracing GC. Generally, in 
informal contexts GC always means tracing GC, even among 
academics.


Legends have it that almost every major software project in ANY 
system language ends up writing custom allocators and 
containers.


Containers certainly; allocators, sometimes. But that is not 
necessarily related to handling ownership. You can write your own 
allocator and still rely on a standard ownership mechanism.







Re: Why many programmers don't like GC?

2021-01-15 Thread aberba via Digitalmars-d-learn
On Friday, 15 January 2021 at 19:49:34 UTC, Ola Fosheim Grøstad 
wrote:
On Friday, 15 January 2021 at 19:37:12 UTC, Guillaume Piolat 
wrote:

A small GC heap is sufficient.
There is this blog post where there was a quantitative measure 
of the sub-1ms D GC heap size.


That's ok for a small game, but not for applications that grow 
over time or projects where the requirement spec is written 
(and continually added to) by customers. But for enthusiast 
projects, that can work.


Many open source projects (and also some commercial ones) work 
ok for small datasets, but tank when you increase the dataset. 
So "match and mix" basically means use it for prototyping, but 
do-not-rely-on-it-if-you-can-avoid-it.


Switching to ARC looks more attractive, scales better and the 
overhead is more evenly distributed. But it probably won't 
happen.


Isn't it more theoretical/imaginary/hypothetical than something 
really measured from a real-world use case? Almost all large 
software use cases I've seen used mix and match.


(BTW ARC is also another form of GC)

Unreal game engine 
https://mikelis.net/garbage-collection-in-ue4-a-high-level-overview/


Unity (of course) 
https://docs.unity3d.com/Manual/UnderstandingAutomaticMemoryManagement.html


Legends have it that almost every major software project in ANY 
system language ends up writing custom allocators and 
containers.





Re: Why many programmers don't like GC?

2021-01-15 Thread aberba via Digitalmars-d-learn

On Friday, 15 January 2021 at 21:15:29 UTC, aberba wrote:
On Friday, 15 January 2021 at 19:49:34 UTC, Ola Fosheim Grøstad 
wrote:

[...]


Isn't it more theoretical/imaginary/hypothetical than something 
really measured from a real-world use case? Almost all large 
software use cases I've seen used mix and match.


(BTW ARC is also another form of GC)

Unreal game engine 
https://mikelis.net/garbage-collection-in-ue4-a-high-level-overview/


Unity (of course) 
https://docs.unity3d.com/Manual/UnderstandingAutomaticMemoryManagement.html



[...]



TL;DR:

In summation, the garbage collection system is a robust part of 
Unreal Engine that affords C++ programmers a lot of safety from 
memory leaks, as well as convenience. With this high-level 
discussion, I was aiming to introduce the system at a 
conceptual level, and I hope I have achieved that.


Re: Why many programmers don't like GC?

2021-01-15 Thread welkam via Digitalmars-d-learn

On Friday, 15 January 2021 at 07:35:00 UTC, H. S. Teoh wrote:
(1) Refactored one function called from an inner loop to reuse 
a buffer instead of allocating a new one each time, thus 
eliminating a large amount of garbage from small allocations;

<...>
The result was about 40-50% reduction in runtime, which is 
close to about a 2x speedup.


I think this message needs to be signal-boosted. Most of the time 
the GC is not the problem. The problem is sloppy memory usage. If 
you allocate a lot of temporary objects, your performance will 
suffer even if you use malloc and free. If you write code that 
uses stack allocation as much as possible, doesn't copy data 
around, and reuses buffers, then it will be faster than manual 
memory management that doesn't do that. And that's with a "slow" GC.
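The buffer-reuse refactoring described above (and in the quoted post) looks roughly like this; a sketch using Phobos' Appender, with the loop body standing in for whatever per-iteration string building the real code does:

```d
import std.array : appender;
import std.format : formattedWrite;

void main()
{
    // One growable buffer, allocated once and reused every iteration,
    // instead of a fresh array per loop body.
    auto buf = appender!(char[])();

    foreach (i; 0 .. 1_000)
    {
        buf.clear();                       // keep capacity, drop contents
        buf.formattedWrite("item %s", i);  // build into the reused storage
        auto view = buf.data;              // slice of the shared buffer
        assert(view.length >= 6);          // "item 0" at minimum
    }
}
```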


Re: To switch GC from FIFO to LIFO paradigm.

2021-01-15 Thread H. S. Teoh via Digitalmars-d-learn
On Fri, Jan 15, 2021 at 08:19:18PM +, tsbockman via Digitalmars-d-learn 
wrote:
[...]
> However, generational GCs are somewhat closer to LIFO than what we
> have now, which does provide some performance gains under common usage
> patterns.  People have discussed adding a generational GC to D in the
> past, and I think the conclusion was that it requires pervasive write
> barriers (not the atomics kind), which leadership considers
> inappropriate for D for other reasons.

Also note that generational GCs are designed to cater to languages like
Java or C#, where almost everything is heap-allocated, so you tend to
get a lot of short-term allocations that go away after a function call
or two and become garbage.  In that context, a generational GC makes a
lot of sense: most of the garbage is in "young" objects, so putting them
in a separate generation from "older" objects helps reduce the number
of objects you need to scan.

In idiomatic D, however, by-value, stack-allocated types like structs
are generally preferred over heap-allocated classes where possible, with
the latter tending to be used more for longer-term, more persistent
objects.  So there's less short-term garbage, and it's unclear how much
improvement one might see with a generational GC.  It may not make as
big of a difference as one might expect because usage patterns differ
across languages.

(Of course, this assumes idiomatic D... if you write D in Java style
with lots of short-lived class objects, a generational GC might indeed
make a bigger difference. But you'd lose out on the speed of
stack-allocated objects.  It's unclear how this compares with modern
JVMs with JIT that optimizes away some heap allocations, though.)


T

-- 
"I'm running Windows '98." "Yes." "My computer isn't working now." "Yes, you 
already said that." -- User-Friendly


Re: To switch GC from FIFO to LIFO paradigm.

2021-01-15 Thread tsbockman via Digitalmars-d-learn

On Friday, 15 January 2021 at 12:39:30 UTC, MGW wrote:
GC cleans memory using the FIFO paradigm. Is it possible to 
switch GC to work using the LIFO paradigm?


As others already said, the current GC isn't FIFO; it just scans 
everything once in a while and frees whatever it can, new or old.


However, generational GCs are somewhat closer to LIFO than what 
we have now, which does provide some performance gains under 
common usage patterns. People have discussed adding a 
generational GC to D in the past, and I think the conclusion was 
that it requires pervasive write barriers (not the atomics kind), 
which leadership considers inappropriate for D for other reasons.


Re: Why many programmers don't like GC?

2021-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Friday, 15 January 2021 at 19:37:12 UTC, Guillaume Piolat 
wrote:

A small GC heap is sufficient.
There is this blog post where there was a quantitative measure 
of the sub-1ms D GC heap size.


That's ok for a small game, but not for applications that grow 
over time or projects where the requirement spec is written (and 
continually added to) by customers. But for enthusiast projects, 
that can work.


Many open source projects (and also some commercial ones) work ok 
for small datasets, but tank when you increase the dataset. So 
"match and mix" basically means use it for prototyping, but 
do-not-rely-on-it-if-you-can-avoid-it.


Switching to ARC looks more attractive, scales better and the 
overhead is more evenly distributed. But it probably won't happen.




Re: Why many programmers don't like GC?

2021-01-15 Thread Guillaume Piolat via Digitalmars-d-learn
On Friday, 15 January 2021 at 18:55:27 UTC, Ola Fosheim Grøstad 
wrote:
On Friday, 15 January 2021 at 18:43:44 UTC, Guillaume Piolat 
wrote:
Calling collect() isn't very good, it's way better to ensure 
the GC heap is relatively small, hence easy to traverse.
You can use -gc=profile for this (noting that things that 
can't contain pointer, such as ubyte[], scan way faster than 
void[])


Ok, so what you basically say is that the number of pointers to 
trace was small, and perhaps also the render thread was not 
under GC control?


A small GC heap is sufficient.
There is this blog post where there was a quantitative measure of 
the sub-1ms D GC heap size.

http://www.infognition.com/blog/2014/the_real_problem_with_gc_in_d.html


200 KB can be scanned/collected in 1 ms.


Since then the D GC has improved in many ways (multicore, 
precise, faster...) that surprisingly have not been publicized 
that much; but the suggested realtime heap size is probably in 
the same order of magnitude.


In the 200 KB figure above, things that can't contain pointers 
don't count.


Re: Why many programmers don't like GC?

2021-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Friday, 15 January 2021 at 18:43:44 UTC, Guillaume Piolat 
wrote:
Calling collect() isn't very good, it's way better to ensure 
the GC heap is relatively small, hence easy to traverse.
You can use -gc=profile for this (noting that things that can't 
contain pointer, such as ubyte[], scan way faster than void[])


Ok, so what you basically say is that the number of pointers to 
trace was small, and perhaps also the render thread was not under 
GC control?


I think it is better with something simpler like saying one GC 
per thread


But then ownership doesn't cross threads, so it can be tricky 
to keep objects alive when they cross threads. I think that was 
a problem in Nim.


What I have proposed before is to pin down objects with a ref 
count when you temporarily hand them to other threads. Then the 
other thread will handle it with a smart pointer which releases 
the "borrow ref count" on return.


But yes, you "need" some way for other threads to borrow 
thread-local memory, in order to implement async services etc. 
Then again, I think people who write such service frameworks will 
be more advanced programmers than those who use them. So I 
wouldn't say it is a big downside.


But sometimes that ownership is just not interesting. If you 
are writing a hello world program, no one cares who "hello 
world" string belongs to. So the GC is that global owner.


I get your viewpoint, but simple types like strings can be 
handled equally well with RC... If we take the view, which you 
also stressed, that it is desirable to keep the traceable pointer 
count down, then maybe making only class objects GC-managed is 
the better approach.


Re: Why many programmers don't like GC?

2021-01-15 Thread Guillaume Piolat via Digitalmars-d-learn
On Friday, 15 January 2021 at 16:37:46 UTC, Ola Fosheim Grøstad 
wrote:


But when do you call collect? Do you not create more and more 
long-lived objects?


Calling collect() isn't very good, it's way better to ensure the 
GC heap is relatively small, hence easy to traverse.
You can use -gc=profile for this (noting that things that can't 
contain pointers, such as ubyte[], scan way faster than void[])




How do you structure this? Limit GC to one main thread? But an 
audio plugin GUI is not used frequently, so... hickups are less 
noticable. For a 3D or animation editor hickups would be very 
annoying.


Yes, but when a hiccup happens you can often trace it back to 
garbage generation and target it. It's an optimization task.



I think it is better with something simpler like saying one GC 
per thread


But then ownership doesn't cross threads, so it can be tricky to 
keep objects alive when they cross threads. I think that was a 
problem in Nim.



It really is quite easy to do: build your app normally, 
eventually optimize later by using manual memory management.


I understand what you are saying, but it isn't all that much 
more work to use explicit ownership if all the libraries have 
support for it.


But sometimes that ownership is just not interesting. If you are 
writing a hello-world program, no one cares whom the "hello 
world" string belongs to. So the GC is that global owner.


Re: Open question: what code pattern you use usually for null safety problem

2021-01-15 Thread Dukc via Digitalmars-d-learn

On Thursday, 14 January 2021 at 18:24:44 UTC, ddcovery wrote:
This is only an open question to know what code patterns you 
usually use to solve this situation in D:


  if(person.father.father.name == "Peter") doSomething();
  if(person.father.age > 80 ) doSomething();

knowing that *person*, or its *father* property can be null



Probably the incremental check solution. A helper function if I 
find myself doing that more than two or three times.


On the other hand, I don't have to do this that often. I usually 
design the functions to either expect non-null values, or to 
return early in case of null.
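The incremental-check pattern from the question wraps naturally in a helper; a sketch, where `Person` and `hasGrandfatherNamed` are hypothetical names for illustration:

```d
class Person { Person father; string name; this(string n) { name = n; } }

// && short-circuits left to right, so each dereference is guarded
// by the null test immediately before it.
bool hasGrandfatherNamed(Person p, string name)
{
    return p !is null
        && p.father !is null
        && p.father.father !is null
        && p.father.father.name == name;
}

void main()
{
    auto p = new Person("Paul");
    assert(!p.hasGrandfatherNamed("Peter")); // father is null: no crash

    p.father = new Person("John");
    p.father.father = new Person("Peter");
    assert(p.hasGrandfatherNamed("Peter"));
}
```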





Re: Open question: what code pattern you use usually for null safety problem

2021-01-15 Thread ddcovery via Digitalmars-d-learn
On Friday, 15 January 2021 at 14:25:09 UTC, Steven Schveighoffer 
wrote:

On 1/15/21 9:19 AM, Steven Schveighoffer wrote:

Something similar to BlackHole or WhiteHole. Essentially 
there's a default action for null for all 
types/fields/methods, and everything else is passed through.


And now reading the other thread about this above, it looks 
like this type is already written:


https://code.dlang.org/packages/optional

I'd say use that.

-Steve


Yes, the Optional/Some/None pattern is the "functional" 
orientation for avoiding the use of "null".


Swift uses a similar pattern (and scala too) and supports the 
"null safety operators"  ?. and ??  (it doesn't work on "null" 
but on optional/nil).


The more I think about it, the more fervent a defender of the 
use of ?. and ?? I become.


The misinterpretation about "null safety" is that we talk about 
"null" reference safety, but this pattern can be used with 
"optional" too...


D has no native optional/none/some implementation, and this is 
the reason we think of "?." as a "bad pattern": we imagine it is 
exclusively for "null" values.


But like other operators, they could be overloaded and adapted to 
each library.
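To make the Some/None idea concrete in D, here is a minimal hand-rolled Optional; this is a sketch for illustration, not the API of the dub "optional" package mentioned earlier:

```d
struct Optional(T)
{
    private T _value;
    private bool _present;

    static Optional some(T v) { return Optional(v, true); }
    static Optional none() { return Optional.init; }

    // Apply f only when a value is present; None short-circuits,
    // which is exactly what ?. does for references.
    auto map(alias f)()
    {
        alias R = typeof(f(_value));
        return _present ? Optional!R(f(_value), true) : Optional!R.init;
    }

    // The ?? analogue: fall back to a default when empty.
    T orElse(T fallback) { return _present ? _value : fallback; }
}

void main()
{
    auto a = Optional!int.some(41);
    assert(a.map!(x => x + 1).orElse(0) == 42);
    assert(Optional!int.none.map!(x => x + 1).orElse(0) == 0);
}
```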


Well, I'm digressing:  good night!!!






Re: How can I specify flags for the compiler when --build=release in dub?

2021-01-15 Thread evilrat via Digitalmars-d-learn

On Friday, 15 January 2021 at 17:02:32 UTC, Jack wrote:

is this possible? if so, how?


You can add extra options per platform and compiler, and IIRC 
per build type too.

For example, like this for lflags; it might complain about the 
order, so just follow the instructions.


"lflags-debug"
"lflags-windows-debug"
"lflags-linux-ldc2-release"
"lflags-windows-x86_64-dmd-debug"
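dub can also define or override whole build types in the package recipe; a dub.json sketch (the "buildTypes" key and "buildOptions" values come from the dub package format, while the package name and the specific flags below are placeholder examples):

```json
{
    "name": "myapp",
    "dflags-ldc2": ["-mcpu=native"],
    "buildTypes": {
        "release": {
            "buildOptions": ["releaseMode", "optimize", "inline"],
            "dflags": ["-boundscheck=off"]
        }
    }
}
```

With this in place, `dub build --build=release` picks up the extra dflags only for that build type.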


How can I specify flags for the compiler when --build=release in dub?

2021-01-15 Thread Jack via Digitalmars-d-learn

is this possible? if so, how?


Re: Why many programmers don't like GC?

2021-01-15 Thread jmh530 via Digitalmars-d-learn

On Friday, 15 January 2021 at 16:22:59 UTC, IGotD- wrote:

[snip]

Are we talking about the same things here? You mentioned DMD 
but I was talking about programs compiled with DMD (or GDC, 
LDC), not the nature of the DMD compiler in particular.


Bump the pointer and never return any memory might acceptable 
for short lived programs but totally unacceptable for long 
running programs, like a browser you are using right now.


Just to clarify, in a program that is made in D with the 
default options, will there be absolutely no memory reclamation?


You are talking about different things.

DMD, as a program, uses the bump the pointer allocation strategy.

If you compile a D program with DMD that uses new or appends to a 
dynamic array (or whatever else), then it is using the GC to do 
that. You can also use malloc or your own custom strategy. The GC 
will reclaim memory, but there is no guarantee that malloc or a 
custom allocation strategy will.


Re: Why many programmers don't like GC?

2021-01-15 Thread welkam via Digitalmars-d-learn

On Wednesday, 13 January 2021 at 18:58:56 UTC, Marcone wrote:
I've always heard programmers complain about Garbage Collector 
GC. But I never understood why they complain. What's bad about 
GC?


Most people get to know GC through Java or C#. Those languages 
promote the use of OOP and they say that you don't need to worry 
about memory management. The result is that people write code 
that doesn't utilize CPU caches effectively and makes a lot of 
temporary allocations. For example, people at Microsoft 
implemented a JSON parser in C# and it would allocate 8GB of memory 
for every 1GB parsed. Add to that virtual machines and you find 
that programs written in those languages run like they are coded 
in molasses.


People with experience of those programs conclude that it is all 
because of the GC. And it's a simple explanation for simple people.


Re: Why many programmers don't like GC?

2021-01-15 Thread H. S. Teoh via Digitalmars-d-learn
On Fri, Jan 15, 2021 at 04:22:59PM +, IGotD- via Digitalmars-d-learn wrote:
[...]
> Are we talking about the same things here? You mentioned DMD but I was
> talking about programs compiled with DMD (or GDC, LDC), not the nature
> of the DMD compiler in particular.
> 
> Bump the pointer and never return any memory might be acceptable for
> short-lived programs, but it is totally unacceptable for long-running
> programs, like the browser you are using right now.
> 
> Just to clarify, in a program that is made in D with the default
> options, will there be absolutely no memory reclamation?

We're apparently cross-talking here.  A default D program uses the GC,
as should be obvious by now.  DMD itself, however, uses bump-the-pointer
(*not* programs it compiles, though!).  The two are completely
unrelated.


T

-- 
Let X be the set not defined by this sentence...


Re: Open question: what code pattern you use usually for null safety problem

2021-01-15 Thread ddcovery via Digitalmars-d-learn
On Friday, 15 January 2021 at 14:19:35 UTC, Steven Schveighoffer 
wrote:

On 1/14/21 7:27 PM, ddcovery wrote:
On Thursday, 14 January 2021 at 20:23:08 UTC, Steven 
Schveighoffer wrote:


You could kinda automate it like:

struct NullCheck(T)
{
   private T* _val;
   auto opDispatch(string mem)() if (__traits(hasMember, T, 
mem)) {
   alias Ret = typeof(() { return __traits(getMember, 
*_val, mem); }());

   if(_val is null) return NullCheck!(Ret)(null);
   else return NullCheck!(Ret)(__traits(getMember, *_val, 
mem));

   }

   bool opCast(V: bool)() { return _val !is null; }
}

auto nullCheck(T)(T *val) { return NullCheck!T(val); }

// usage
if(nullCheck(person).father.father && 
person.father.father.name == "Peter")


Probably doesn't work for many circumstances, and I'm sure I 
messed something up.


-Steve


I'm seeing "opDispatch" everywhere last days :-). It's really 
powerful!!!


If we define a special T _(){ return _val; } method, then you 
can write


   if( nullCheck(person).father.father.name._ == "Peter")

And renaming

   if( ns(person).father.father.name._ == "Peter" )


This doesn't work, if person, person.father, or 
person.father.father is null, because now you are dereferencing 
null again.


But something like this might work:

struct NullCheck(T)
{
   ... // opdispatch and stuff
   bool opEquals(auto ref T other) {
  return _val is null ? false : *_val == other;
   }
}

Something similar to BlackHole or WhiteHole. Essentially 
there's a default action for null for all types/fields/methods, 
and everything else is passed through.


Swift has stuff like this built-in. But D might look better 
because you wouldn't need a chain of question marks.


-Steve


I don't know if I can add this to the Dlang IDE and then share a 
link... the links that I generate don't work...


* I have adapted the "opDispatch" and the factory method to 
manage nullable and non-nullable values
* The unwrapper "T _()" method returns Nullable!T for nullable 
value types instead of T (similar to C#)


* I removed the T* when testing changes (I discovered after 1000 
changes that template errors are not well reported by the 
compiler... I lost a lot of time to a missing import)... I 
will try to restore it.



import std.typecons;
import std.traits;

void main()
{
Person person = new Person("Andres", 10, new Person("Peter", 
40, null));

// null reference
assert(ns(person).father.father._ is null);
// null reference
assert(ns(person).father.father.name._ is null);
// reference value
assert(ns(person).father.name._ == "Peter");
// Nullable!int
assert(ns(person).father.father.age._.isNull);
assert(ns(person).father.father.age._.get(0) == 0);
assert(ns(11)._.get == 11);
}

struct NullSafety(T)
{
private T _val;
private bool _isEmpty;

auto opDispatch(string name)() if (__traits(hasMember, T, 
name))

{
alias Ret = typeof((() => __traits(getMember, _val, 
name))());

if (_val is null)
{
static if (isAssignable!(Ret, typeof(null)))
return NullSafety!(Ret)(null, true);
else
return NullSafety!(Ret)(Ret.init, true);
}
else
{
return NullSafety!(Ret)(__traits(getMember, _val, 
name), false);

}
}

static if (isAssignable!(T, typeof(null))) // Reference types 
unwrapper

T _()
{
return _val;
}
else // value types unwrapper
Nullable!T _()
{
return _isEmpty ? Nullable!T() : Nullable!T(_val);
}

}

auto ns(T)(T val)
{
static if (isAssignable!(T, typeof(null)))
return NullSafety!T(val, val is null);
else
return NullSafety!T(val, false);
}

class Person
{
public string name;
public Person father;
public int age;
this(string name, int age, Person father)
{
this.name = name;
this.father = father;
this.age = age;
}
}


Re: Why many programmers don't like GC?

2021-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Friday, 15 January 2021 at 16:26:59 UTC, Guillaume Piolat 
wrote:
Until someone can describe a strategy that works for a full 
application, e.g. an animation-editor or something like that, 
it is really difficult to understand what is meant by it.


Personal examples:
 - The game Vibrant uses GC for some long-lived objects.
   Memory pools for most game entities.
   Audio thread has disabled GC.


But when do you call collect? Do you not create more and more 
long-lived objects?


- Dplug plugins before runtime removal used GC in the UI, but 
no GC in whatever was called repeatedly, leading to no GC pause 
in practice. In case an error was made, it would be a GC pause, 
but not a leak.


How do you structure this? Limit GC to one main thread? But an 
audio plugin GUI is not used frequently, so... hiccups are less 
noticeable. For a 3D or animation editor, hiccups would be very 
annoying.


The pain point with the mixed approach is adding GC roots when 
needed. You need a mental model of traceability.


Yes. I tend to regret "clever" solutions when getting back to the 
code months later because the mental model is no longer easily 
available.


I think it is better with something simpler, like saying one GC 
per thread, or ARC across the board unless you use non-ARC 
pointers, or that only class objects are GC. Basically something 
that creates a simple mental model.


It really is quite easy to do: build your app normally, 
eventually optimize later by using manual memory management.


I understand what you are saying, but it isn't all that much more 
work to use explicit ownership if all the libraries have support 
for it.


It is a lot more work to add manual memory management if the 
available libraries don't help you out.





Re: Why many programmers don't like GC?

2021-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 15 January 2021 at 16:21:43 UTC, jmh530 wrote:
On Friday, 15 January 2021 at 15:36:37 UTC, Ola Fosheim Grøstad 
wrote:
The library smart pointers would make it difficult to interact 
with existing D GC code though.


Yes. So it would be better to do it automatically in the compiler 
for designated GC objects.


ARC is also a good alternative.

Probably less work to get a high quality ARC implementation, than 
a high quality GC implementation.




Re: Why many programmers don't like GC?

2021-01-15 Thread Guillaume Piolat via Digitalmars-d-learn
On Friday, 15 January 2021 at 16:21:18 UTC, Ola Fosheim Grøstad 
wrote:


What do you mean by "mix and match"? If it means shutting down 
the GC after initialization then it can easily backfire for 
more complicated software that accidentally calls code that 
relies on the GC.


I mean: "using GC, unless where it creates problems". Examples 
below.


Until someone can describe a strategy that works for a full 
application, e.g. an animation-editor or something like that, 
it is really difficult to understand what is meant by it.


Personal examples:
 - The game Vibrant uses GC for some long-lived objects.
   Memory pools for most game entities.
   Audio thread has disabled GC.

- Dplug plugins before runtime removal used GC in the UI, but no 
GC in whatever was called repeatedly, leading to no GC pause in 
practice. In case an error was made, it would be a GC pause, but 
not a leak.


The pain point with the mixed approach is adding GC roots when 
needed. You need a mental model of traceability.


It really is quite easy to do: build your app normally, eventually 
optimize later by using manual memory management.


Re: Static constructor

2021-01-15 Thread Steven Schveighoffer via Digitalmars-d-learn

On 1/6/21 12:05 PM, ludo wrote:

I read in the documentation
"Static constructors are used to initialize static class members with 
values that cannot be computed at compile time"


I try to understand the design of the following code:

---
class OpenAL
{
 static string[int] ALErrorLookup;
 static Object mutex;

 // Initialize static variables
 static this ()
 {    ALErrorLookup = [
     0xA001: "AL_INVALID_NAME"[],
     0xA002: "AL_ILLEGAL_ENUM",
     0xA002: "AL_INVALID_ENUM",
     0xA003: "AL_INVALID_VALUE",
     0xA004: "AL_ILLEGAL_COMMAND",
     0xA004: "AL_INVALID_OPERATION",
     0xA005: "AL_OUT_OF_MEMORY"
     ];
     mutex = new Object();
 }

    static anotherfunc()
    {}

 static Object getMutex()
 {    return mutex;
     }
}
---

At this point, I have not looked up Object; I guess it must be a class. It 
seems to me that ALErrorLookup can be computed at compile time... So the 
constructor is static because mutex "cannot be computed at compile time"?


Associative arrays can be computed at compile time, and used at compile 
time. BUT they cannot be transferred to runtime AAs (yet).


So likely that is the reason the AA initialization is in there.

You can (now) create classes at compile-time, and then assign them to 
variables as static initializers. I don't know if that means you can 
create the mutex at compile-time. I know OS mutex primitives can be 
initialized at compile time. I don't know if that works properly here.


Also note that what is happening here is not correct in D2 (I see that 
it's D1 you noted in a subsequent message), as `static this()` is run 
once per thread (and mutex is going to get one instance per thread). So 
most likely you need to change both of these to shared.
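A rough sketch of the change Steve describes (illustrative only; the AA could equally be made shared with appropriate synchronization instead of staying thread-local):

```d
class OpenAL
{
    // Thread-local by default: `static this()` re-initializes it per thread.
    static string[int] ALErrorLookup;

    // One instance for the whole process.
    __gshared Object mutex;

    static this() // runs once in *every* thread
    {
        ALErrorLookup = [0xA001: "AL_INVALID_NAME" /* ... */];
    }

    shared static this() // runs exactly once, before main()
    {
        mutex = new Object();
    }
}
```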




The idea of the dev (from context) is that this class will just be a 
wrapper, no instance necessary. So anotherfunc(), which does the main 
work, is static and everything goes this way.


Then getMutex returns the static mutex when necessary... Have not looked 
that up yet either.


It doesn't make a whole lot of sense, since mutex is publicly accessible.

But, I don't know, I have a feeling that this is overcomplicated. For 
example, couldn't we have ALErrorLookup initialized another way in D, and a 
static mutex in the getMutex function directly (with if (mutex is null) 
{ mutex = new Object(); })?


No on the AA (as noted above). The mutex *is* created on demand. Every 
Object can have a mutex, and it's only created when you synchronize it 
for the first time.


I don't know, is it the proper D way? And also, when we have these 
classes with everything static, does it even make sense to have a class? 
This module actually contains only this class 
(https://tinyurl.com/yxt2xw23). Shouldn't we have one module with normal 
functions?


ANY input is learning material for me. Thanks.


I would say the AA initialization is standard D. Using the class as a 
namespace isn't standard or necessary. If anything, it should be a 
struct, or you can use a template. But I can't see why you can't just 
use a module.


-Steve


Re: Why many programmers don't like GC?

2021-01-15 Thread IGotD- via Digitalmars-d-learn

On Friday, 15 January 2021 at 15:50:50 UTC, H. S. Teoh wrote:


DMD *never* frees anything.  *That's* part of why it's so fast; 
it completely drops the complexity of tracking free lists and 
all of that jazz.


That's also why it's a gigantic memory hog that can be a big 
embarrassment when run on a low-memory system. :-D


This strategy only works for DMD because a compiler is, by its 
very nature, a transient process: you read in source files, 
process them, spit out object files and executables, then you 
exit.  Add to that the assumption that most PCs these days have 
gobs of memory to spare, and this allocation scheme completely 
eliminates memory management overhead. It doesn't matter that 
memory is never freed, because once the process exits, the OS 
reclaims everything anyway.


But such an allocation strategy would not work on anything that 
has to be long-running, or that recycles a lot of memory such 
that you wouldn't be able to fit it all in memory if you didn't 
free any of it.



T


Are we talking about the same things here? You mentioned DMD but 
I was talking about programs compiled with DMD (or GDC, LDC), not 
the nature of the DMD compiler in particular.


Bump the pointer and never return any memory might be acceptable for 
short-lived programs, but it is totally unacceptable for long-running 
programs, like the browser you are using right now.


Just to clarify, in a program that is made in D with the default 
options, will there be absolutely no memory reclamation?




Re: Why many programmers don't like GC?

2021-01-15 Thread jmh530 via Digitalmars-d-learn
On Friday, 15 January 2021 at 15:36:37 UTC, Ola Fosheim Grøstad 
wrote:

On Friday, 15 January 2021 at 15:20:05 UTC, jmh530 wrote:
Hypothetically, would it be possible for users to supply their 
own garbage collector that uses write barriers?


Yes. You could translate Google Chrome's Oilpan to D. It uses 
library smart pointers for dirty-marking. But it requires you 
to write a virtual function that points out what should be 
traced (actually does the tracing for the outgoing pointers 
from that object):


The library smart pointers would make it difficult to interact 
with existing D GC code though.


Re: Why many programmers don't like GC?

2021-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Friday, 15 January 2021 at 15:50:59 UTC, Guillaume Piolat 
wrote:

On Friday, 15 January 2021 at 11:11:14 UTC, Mike Parker wrote:


That's the whole point of being able to mix and match. Anyone 
avoiding the GC completely is missing it (unless they really, 
really, must be GC-less).


+1
mix and match is a different style versus only having a GC, or 
only having lifetimes for everything. And it's quite awesome as 
a style, since half of things don't need a well-identified 
owner.



What do you mean by "mix and match"? If it means shutting down 
the GC after initialization then it can easily backfire for more 
complicated software that accidentally calls code that relies on 
the GC.


Until someone can describe a strategy that works for a full 
application, e.g. an animation-editor or something like that, it 
is really difficult to understand what is meant by it.





Re: To switch GC from FIFO to LIFO paradigm.

2021-01-15 Thread Imperatorn via Digitalmars-d-learn

On Friday, 15 January 2021 at 12:39:30 UTC, MGW wrote:
GC cleans memory using the FIFO paradigm. Is it possible to 
switch GC to work using the LIFO paradigm?


AFAIK the GC just sweeps, and the only queue is for destructors 
(unreachable memory) iirc


Re: Why many programmers don't like GC?

2021-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 15 January 2021 at 15:48:07 UTC, welkam wrote:
On Friday, 15 January 2021 at 14:35:55 UTC, Ola Fosheim Grøstad 
wrote:

improved with precise collection

Precise GC is slower than default GC.


D does not have a fully precise GC. The "precise" collector still 
scans things conservatively when it cannot be certain.


If you combine fully precise collection it with static analysis, 
then you can reduce the number of paths you follow, but it is a 
lot of work to implement. So it would take a very motivated 
individual.


-lowmem flag replaces all* allocations with GC allocations so 
you can benchmark that


Interesting idea. There are compilers that use GC written in 
other languages. It is a nice baseline test, especially since 
there are not many large commonly known programs for D to do 
realistic benchmarks with.


A write barrier is a piece of code that is inserted before a 
write to an [object].


Not a write to the object, but a modified pointer. The write 
barrier is invoked when you switch a pointer from one object to 
another one. Then you mark the object, so you need 2 free bits in 
each object to use for marking.


But my uncertainty was related to how to optimize away barrier 
that has no impact on the final collection. It is easy to make 
mistakes when doing such optimizations. The goal should be to 
invoke as few barriers as possible by static analysis.


Reference counting needs mutation. How do you define an immutable 
RC slice that needs to mutate its reference count? That's an 
unsolved problem in D.


D needs more fine grained immutable, for sure.



Re: Static constructor

2021-01-15 Thread Imperatorn via Digitalmars-d-learn

On Friday, 15 January 2021 at 16:04:02 UTC, ludo wrote:
I believe so. I've never used OpenAL so it may have additional 
restrictions with multithreading, but from a simple "This 
function is only ever executed on one thread at a time", your 
above suggestions should work.


Apologies for the late reply.


No worries, and thank you. I found the low-lock pattern on my own, 
digging for more info. Smart pattern!
I put the synchronized attribute in front of the function. The 
problem with multithreading is the difficulty of verifying that 
it works fine :) But now the topic strays away from static 
constructors :) Cheers


Isn't that kind of the de facto standard? Well, saw this was 7 
years ago now 😁


Re: Static constructor

2021-01-15 Thread ludo via Digitalmars-d-learn
I believe so. I've never used OpenAL so it may have additional 
restrictions with multithreading, but from a simple "This 
function is only ever executed on one thread at a time", your 
above suggestions should work.


Apologies for the late reply.


No worries, and thank you. I found the low-lock pattern on my own, 
digging for more info. Smart pattern!
I put the synchronized attribute in front of the function. The 
problem with multithreading is the difficulty of verifying that it 
works fine :) But now the topic strays away from static 
constructors :) Cheers
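The "low-lock pattern" mentioned here is usually double-checked locking: check a flag without a lock, and only take the lock on the first call(s). A rough sketch (names are illustrative; a fully correct version would also use atomic loads/stores on the flag):

```d
class Audio
{
    private __gshared Object lock;
    private __gshared bool initialized;

    shared static this() { lock = new Object(); } // created once, before main()

    static void ensureInit()
    {
        if (!initialized)           // fast path: no lock once set up
        {
            synchronized (lock)     // slow path: only until init completes
            {
                if (!initialized)   // re-check under the lock
                {
                    // ... one-time OpenAL-style setup would go here ...
                    initialized = true;
                }
            }
        }
    }
}
```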


Re: Why many programmers don't like GC?

2021-01-15 Thread welkam via Digitalmars-d-learn

On Friday, 15 January 2021 at 15:18:31 UTC, IGotD- wrote:
I have a feeling that bump the pointer is not the complete 
algorithm that D uses, because if that was the only one, D would 
waste a lot of memory.


Freeing memory is for losers :D
https://issues.dlang.org/show_bug.cgi?id=21248

DMD allocates and never frees.


Re: Why many programmers don't like GC?

2021-01-15 Thread Guillaume Piolat via Digitalmars-d-learn

On Friday, 15 January 2021 at 11:11:14 UTC, Mike Parker wrote:


That's the whole point of being able to mix and match. Anyone 
avoiding the GC completely is missing it (unless they really, 
really, must be GC-less).


+1
mix and match is a different style versus only having a GC, or 
only having lifetimes for everything. And it's quite awesome as a 
style, since half of things don't need a well-identified owner.


Re: Why many programmers don't like GC?

2021-01-15 Thread H. S. Teoh via Digitalmars-d-learn
On Fri, Jan 15, 2021 at 03:18:31PM +, IGotD- via Digitalmars-d-learn wrote:
[...]
> Bump the pointer is a very fast way to allocate memory but what is
> more interesting is what happens when you return the memory. What does
> the allocator do with chunks of free memory? Does it put it in a free
> list, does it merge chunks? I have a feeling that bump the pointer is
> not the complete algorithm that D uses, because if that was the only
> one, D would waste a lot of memory.

DMD *never* frees anything.  *That's* part of why it's so fast; it
completely drops the complexity of tracking free lists and all of that
jazz.

That's also why it's a gigantic memory hog that can be a big
embarrassment when run on a low-memory system. :-D

This strategy only works for DMD because a compiler is, by its very
nature, a transient process: you read in source files, process them,
spit out object files and executables, then you exit.  Add to that the
assumption that most PCs these days have gobs of memory to spare, and
this allocation scheme completely eliminates memory management overhead.
It doesn't matter that memory is never freed, because once the process
exits, the OS reclaims everything anyway.

But such an allocation strategy would not work on anything that has to
be long-running, or that recycles a lot of memory such that you wouldn't
be able to fit it all in memory if you didn't free any of it.


T

-- 
Don't throw out the baby with the bathwater. Use your hands...


Re: Why many programmers don't like GC?

2021-01-15 Thread welkam via Digitalmars-d-learn
On Friday, 15 January 2021 at 14:35:55 UTC, Ola Fosheim Grøstad 
wrote:

On Friday, 15 January 2021 at 14:24:40 UTC, welkam wrote:
You can use the GC with the D compiler by passing the -lowmem 
flag. I didn't measure, but I heard it can increase compilation 
time by 3x.


Thanks for the info. 3x is a lot
Take it with a grain of salt. I heard it a long time ago, so I might 
not remember correctly, and I didn't measure it myself.



improved with precise collection

Precise GC is slower than default GC.

Making it use automatic garbage collection (of some form) would 
be an interesting benchmark.


-lowmem flag replaces all* allocations with GC allocations so you 
can benchmark that


On Friday, 15 January 2021 at 14:59:18 UTC, Ola Fosheim Grøstad 
wrote:

I think? Or maybe I am missing something?


A write barrier is a piece of code that is inserted before a 
write to an [object]. Imagine you have a class that has a pointer 
to another class. If you want to change that pointer, you need to 
tell the GC that you changed it so the GC can do its magic.

https://en.wikipedia.org/wiki/Write_barrier#In_Garbage_collection
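As a conceptual sketch only (D's GC does not do this today, and gcWriteBarrier is a hypothetical runtime hook): a compiler with write barriers would turn each pointer store into something like

```d
// Hypothetical: what a barrier-inserting compiler would emit
// for a pointer store such as `obj.next = other;`.
void storePointer(T)(ref T* slot, T* newValue)
{
    gcWriteBarrier(cast(void**) &slot, newValue); // hypothetical hook: records
                                                  // the mutated slot so the GC
                                                  // can re-mark/re-scan it
    slot = newValue;                              // the actual store
}
```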


3. Make slices and dynamic arrays RC.


Reference counting needs mutation. How do you define an immutable RC 
slice that needs to mutate its reference count? That's an unsolved 
problem in D.






Re: Why many programmers don't like GC?

2021-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 15 January 2021 at 15:20:05 UTC, jmh530 wrote:
Hypothetically, would it be possible for users to supply their 
own garbage collector that uses write barriers?


Yes. You could translate Google Chrome's Oilpan to D. It uses 
library smart pointers for dirty-marking. But it requires you to 
write a virtual function that points out what should be traced 
(actually does the tracing for the outgoing pointers from that 
object):






Re: Why many programmers don't like GC?

2021-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 15 January 2021 at 15:18:31 UTC, IGotD- wrote:
Bump the pointer is a very fast way to allocate memory but what 
is more interesting is what happens when you return the memory. 
What does the allocator do with chunks of free memory? Does it 
put it in a free list, does it merge chunks? I have a feeling 
that bump the pointer is not the complete algorithm that D uses, 
because if that was the only one, D would waste a lot of memory.


I don't know what DMD does exactly, but I guess this is called an 
"arena" or something like that? Objective-C does something 
similar with its autorelease pool.


Basically, you have a point in the call-tree where you know that 
all work has been done and then you just reclaim everything that 
is not marked as in-long-term-use. So you don't do the mark 
phase, you put the burden of marking the object as in use on the 
object/reference and just sweep. (Or assume that everything can 
be freed, which fits well with a compiler that is working in 
discrete stages).


Side note: I incidentally wrote a little allocator cache 
yesterday that at compile time takes a list of types, takes the 
size of those types, sorts them, and builds an array of free 
lists for those specific sizes. It caches objects that are freed 
if they match one of those sizes (there is a threshold for the 
length of the free list; when that is hit, C free() is called). 
It should be crazy fast too, since I require the free call to 
provide the type, so the correct free list is found at compile 
time, not at run time.
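A sketch of that idea, reconstructed from the description (not the author's actual code; a single-size-per-type simplification):

```d
import core.stdc.stdlib : malloc, free;
import std.meta : staticIndexOf;

// Per-type free lists, selected at compile time from the type list.
struct FreeListCache(Types...)
{
    enum maxLen = 64; // threshold: beyond this, fall back to free()
    private void*[maxLen][Types.length] lists;
    private size_t[Types.length] lens;

    T* alloc(T)() if (staticIndexOf!(T, Types) >= 0)
    {
        enum i = staticIndexOf!(T, Types); // list chosen at compile time
        if (lens[i] > 0)
            return cast(T*) lists[i][--lens[i]]; // reuse a cached block
        return cast(T*) malloc(T.sizeof);
    }

    void release(T)(T* p) if (staticIndexOf!(T, Types) >= 0)
    {
        enum i = staticIndexOf!(T, Types);
        if (lens[i] < maxLen)
            lists[i][lens[i]++] = p; // cache it for reuse
        else
            free(p);                 // list full: return to the C heap
    }
}
```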


Re: Why many programmers don't like GC?

2021-01-15 Thread jmh530 via Digitalmars-d-learn

On Friday, 15 January 2021 at 14:50:00 UTC, welkam wrote:
On Thursday, 14 January 2021 at 18:51:16 UTC, Ola Fosheim 
Grøstad wrote:
One can follow the same kind of reasoning for D. It makes no 
sense for people who want to stay high level and do batch 
programming. Which is why this disconnect exists in the 
community... I think.


The reasoning for why we do not implement write barriers is that 
it will hurt low-level programming. But I feel like if we drew 
a Venn diagram of people who rely on the GC and those who do a lot 
of writes through a pointer, we would get almost no overlap. In 
other words, if the D compiler had a switch that turned on write 
barriers and a better GC, I think many people would use it and 
find the trade-offs acceptable.


Hypothetically, would it be possible for users to supply their 
own garbage collector that uses write barriers?


Re: Why many programmers don't like GC?

2021-01-15 Thread IGotD- via Digitalmars-d-learn

On Friday, 15 January 2021 at 14:24:40 UTC, welkam wrote:


No. And it never will. Currently DMD uses a custom allocator 
for almost everything. It works as follows. Allocate a big 
chunk (1MB) of memory using malloc. Have an internal pointer that 
points to the beginning of unallocated memory. When someone asks 
for memory, return that pointer and increment the internal pointer 
by the 16-byte-aligned size of the allocation. Meaning the new 
pointer is pointing to unused memory and everything behind the 
pointer has been allocated. This simple allocation strategy is 
called bump the pointer and it improved DMD performance by ~70%.
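The strategy described above, as a simplified sketch (the real code linked below also handles allocation failure, oversized requests, and more):

```d
import core.stdc.stdlib : malloc;

enum CHUNK = 1 << 20; // 1 MB chunks

__gshared void* p;    // bump pointer into the current chunk
__gshared void* end;  // end of the current chunk

// Bump-the-pointer allocation: an add and a compare; nothing is ever freed.
void* allocate(size_t size)
{
    size = (size + 15) & ~cast(size_t) 15; // round up to 16-byte alignment
    if (p + size > end)                    // chunk exhausted: grab a fresh one
    {
        p = malloc(CHUNK);
        end = p + CHUNK;
    }
    void* result = p;
    p += size;
    return result;
}
```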


You can use the GC with the D compiler by passing the -lowmem flag. 
I didn't measure, but I heard it can increase compilation time by 3x.


https://github.com/dlang/dmd/blob/master/src/dmd/root/rmem.d#L153


Actually, druntime uses mmap (Linux) and VirtualAlloc (Windows) to 
break out more memory. C-lib malloc is an option, but it is not used 
on most platforms, and it is also very inefficient in terms of 
wasted memory because of alignment requirements.


Bump the pointer is a very fast way to allocate memory but what 
is more interesting is what happens when you return the memory. 
What does the allocator do with chunks of free memory? Does it 
put it in a free list, does it merge chunks? I have a feeling 
that bump the pointer is not the complete algorithm that D uses, 
because if that was the only one, D would waste a lot of memory.


As far as I can see, it is simply very difficult to create a 
completely lockless allocator. Somewhere down the line there will 
be a lock, even if you don't add one in druntime (the lock will 
be in the kernel instead, when breaking out memory). Also, merging 
chunks can be difficult without locks.




Re: Why many programmers don't like GC?

2021-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn
On Friday, 15 January 2021 at 14:59:18 UTC, Ola Fosheim Grøstad 
wrote:

On Friday, 15 January 2021 at 14:50:00 UTC, welkam wrote:
avoid redundant pointers. For instance, a type for telling the 
compiler that a pointer is non-owning.


I guess "non-owning" is the wrong term. I mean pointers that are 
redundant. Not all "non-owning" pointers are redundant.




Re: Why many programmers don't like GC?

2021-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 15 January 2021 at 14:50:00 UTC, welkam wrote:
The reasoning for why we do not implement write barriers is that 
it will hurt low-level programming. But I feel like if we drew 
a Venn diagram of people who rely on the GC and those who do a lot 
of writes through a pointer, we would get almost no overlap. In 
other words, if the D compiler had a switch that turned on write 
barriers and a better GC, I think many people would use it and 
find the trade-offs acceptable.



Yes, I think this is what we need: some way of making the compiler 
know which pointers have to be traced so that it can avoid 
redundant pointers. For instance, a type for telling the compiler 
that a pointer is non-owning. Then we don't have to use a write 
barrier for that non-owning pointer, I think? Or maybe I am 
missing something?


Then we can also have a switch.

But I also think that we could do this:

1. Make all class objects GC allocated and use write barriers for 
those.

2. Allow non-owning annotations for class object pointers.
3. Make slices and dynamic arrays RC.
4. Let structs be held by unique_ptr style (Rust/C++ default).

Then we need a way to improve precise tracing:
1. make use of LLVM precise stack/register information
2. introduce tagged unions and only allow redundant pointers in 
untagged unions

3. Each compile phase emits information for GC.
4. Before linking the compiler generates code to narrowly trace 
the correct pointers.


Then we don't have to deal with real time type information lookup 
and don't have to do expensive lookup to figure out if a pointer 
points to GC memory or not. The compiler can then just assume 
that the generated collection code is exact.




Re: How to get call stack for InvalidMemoryOperationError while doing unittest?

2021-01-15 Thread Steven Schveighoffer via Digitalmars-d-learn

On 1/13/21 1:22 PM, apz28 wrote:
core.exception.InvalidMemoryOperationError@src\core\exception.d(647): 
Invalid memory operation


I've struggled with this as well. It doesn't even tell you the original 
usage point that causes the exception.


I believe stack traces are disabled from printing on this because of the 
fact that it needs some memory to print the trace or walk the trace 
(this is fuzzy, I think it might not need memory, but I can't remember 
exactly).


You can override the handling of memory errors by defining it yourself:

extern(C) void onInvalidMemoryOperationError(void *pretend_sideeffect = 
null) @trusted pure nothrow @nogc

{
   // try to print stack trace here yourself...
}

A very *very* common reason this is triggered is because a GC destructor 
is trying to allocate memory (this is not allowed during GC cleanup). 
But without knowing the trace, it's really hard to find it.
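A minimal reproduction of that case (hedged: whether the error actually fires depends on when the GC decides to finalize the objects):

```d
import core.memory : GC;

class Leaky
{
    ~this()
    {
        // Allocating from a destructor that runs during a collection
        // is what raises InvalidMemoryOperationError.
        auto s = new int[](10);
    }
}

void main()
{
    foreach (i; 0 .. 100)
        new Leaky;
    GC.collect(); // finalizers run here; may throw InvalidMemoryOperationError
}
```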


-Steve


Re: Why many programmers don't like GC?

2021-01-15 Thread welkam via Digitalmars-d-learn
On Thursday, 14 January 2021 at 18:51:16 UTC, Ola Fosheim Grøstad 
wrote:
One can follow the same kind of reasoning for D. It makes no 
sense for people who want to stay high level and do batch 
programming. Which is why this disconnect exists in the 
community... I think.


The reasoning for why we do not implement write barriers is that 
it will hurt low-level programming. But I feel like if we drew a 
Venn diagram of people who rely on the GC and those who do a lot of 
writes through a pointer, we would get almost no overlap. In other 
words, if the D compiler had a switch that turned on write barriers 
and a better GC, I think many people would use it and find the 
trade-offs acceptable.


Re: Why many programmers don't like GC?

2021-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 15 January 2021 at 14:24:40 UTC, welkam wrote:
You can use the GC with the D compiler by passing the -lowmem flag. 
I didn't measure, but I heard it can increase compilation time by 3x.


Thanks for the info. 3x is a lot though, maybe it could be 
improved with precise collection, but I assume that would require 
a rewrite.


Making it use automatic garbage collection (of some form) would 
be an interesting benchmark.




Re: Open question: what code pattern you use usually for null safety problem

2021-01-15 Thread Imperatorn via Digitalmars-d-learn
On Friday, 15 January 2021 at 14:25:09 UTC, Steven Schveighoffer 
wrote:

On 1/15/21 9:19 AM, Steven Schveighoffer wrote:

Something similar to BlackHole or WhiteHole. Essentially 
there's a default action for null for all 
types/fields/methods, and everything else is passed through.


And now reading the other thread about this above, it looks 
like this type is already written:


https://code.dlang.org/packages/optional

I'd say use that.

-Steve


That could be useful actually


Re: Open question: what code pattern you use usually for null safety problem

2021-01-15 Thread Steven Schveighoffer via Digitalmars-d-learn

On 1/15/21 9:19 AM, Steven Schveighoffer wrote:

Something similar to BlackHole or WhiteHole. Essentially there's a 
default action for null for all types/fields/methods, and everything 
else is passed through.


And now reading the other thread about this above, it looks like this 
type is already written:


https://code.dlang.org/packages/optional

I'd say use that.

-Steve


Re: Why many programmers don't like GC?

2021-01-15 Thread welkam via Digitalmars-d-learn
On Friday, 15 January 2021 at 11:28:55 UTC, Ola Fosheim Grøstad 
wrote:

On Friday, 15 January 2021 at 11:11:14 UTC, Mike Parker wrote:
That's the whole point of being able to mix and match. Anyone 
avoiding the GC completely is missing it (unless they really, 
really, must be GC-less).


Has DMD switched to using the GC as the default?


No. And it never will. Currently DMD uses a custom allocator for almost 
everything. It works as follows: allocate a big chunk (1 MB) of memory 
using malloc, and keep an internal pointer to the beginning of 
unallocated memory. When someone asks for memory, return that pointer 
and bump the internal pointer by the 16-byte-aligned size of the 
allocation, so the new pointer points to unused memory and everything 
behind it has been allocated. This simple allocation strategy is called 
bump-the-pointer, and it improved DMD performance by ~70%.
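For illustration, the bump-the-pointer strategy can be sketched like this (a toy version; names, sizes, and the missing chunk-chaining are illustrative, not DMD's actual implementation):

```d
import core.stdc.stdlib : malloc, free;

struct BumpAllocator
{
    ubyte* base;     // start of the big chunk
    size_t used;     // offset of the first unallocated byte
    size_t capacity;

    static BumpAllocator create(size_t capacity = 1024 * 1024)
    {
        return BumpAllocator(cast(ubyte*) malloc(capacity), 0, capacity);
    }

    // Hand out the next slot and bump the internal pointer by the
    // 16-byte-aligned size; nothing is ever freed individually.
    void* allocate(size_t size)
    {
        size_t aligned = (size + 15) & ~cast(size_t) 15;
        if (used + aligned > capacity)
            return null; // a real allocator would malloc another chunk here
        void* p = base + used;
        used += aligned;
        return p;
    }

    // The whole chunk is released at once, e.g. at program exit.
    void release() { free(base); base = null; used = capacity = 0; }
}

void main()
{
    auto a = BumpAllocator.create();
    auto p1 = a.allocate(10);
    auto p2 = a.allocate(1);
    // Consecutive allocations land 16 bytes apart due to alignment.
    assert(cast(ubyte*) p2 - cast(ubyte*) p1 == 16);
    a.release();
}
```

The speedup comes from allocation being a single add-and-compare, and from never tracking individual frees.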


You can use the GC with the D compiler by passing the -lowmem flag. I 
didn't measure it, but I heard it can increase compilation time by 3x.


https://github.com/dlang/dmd/blob/master/src/dmd/root/rmem.d#L153


Re: To switch GC from FIFO to LIFO paradigm.

2021-01-15 Thread Steven Schveighoffer via Digitalmars-d-learn

On 1/15/21 7:39 AM, MGW wrote:
GC cleans memory using the FIFO paradigm. Is it possible to switch GC to 
work using the LIFO paradigm?


I'm not sure what you mean. I don't think there's any guaranteed order 
for GC cleanup.


-Steve


Re: Open question: what code pattern you use usually for null safety problem

2021-01-15 Thread Steven Schveighoffer via Digitalmars-d-learn

On 1/14/21 7:27 PM, ddcovery wrote:

On Thursday, 14 January 2021 at 20:23:08 UTC, Steven Schveighoffer wrote:


You could kinda automate it like:

struct NullCheck(T)
{
   private T* _val;
   auto opDispatch(string mem)() if (__traits(hasMember, T, mem)) {
   alias Ret = typeof(() { return __traits(getMember, *_val, mem); }());

   if(_val is null) return NullCheck!(Ret)(null);
   else return NullCheck!(Ret)(__traits(getMember, *_val, mem));
   }

   bool opCast(V: bool)() { return _val !is null; }
}

auto nullCheck(T)(T *val) { return NullCheck!T(val); }

// usage
if(nullCheck(person).father.father && person.father.father.name == 
"Peter")


Probably doesn't work for many circumstances, and I'm sure I messed 
something up.


-Steve


I'm seeing "opDispatch" everywhere these last days :-). It's really powerful!!!

If we define a special T _(){ return _val; } method, then you can write

   if( nullCheck(person).father.father.name._ == "Peter")

And renaming

   if( ns(person).father.father.name._ == "Peter" )


This doesn't work if person, person.father, or person.father.father is 
null, because then you are dereferencing null again.


But something like this might work:

struct NullCheck(T)
{
   ... // opDispatch and stuff
   bool opEquals(auto ref T other) {
      return _val is null ? false : *_val == other;
   }
}

Something similar to BlackHole or WhiteHole. Essentially there's a 
default action for null for all types/fields/methods, and everything 
else is passed through.


Swift has stuff like this built-in. But D might look better because you 
wouldn't need a chain of question marks.
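Putting the pieces above together, a complete, runnable sketch might look like this (a hypothetical helper, not a library API): pointer members keep the chain going and propagate null, while value members come back wrapped so `==` is null-safe.

```d
// A null anywhere in the chain compares unequal to everything.
struct Maybe(T)
{
    bool present;
    T value;

    bool opEquals(const T other) const
    {
        return present && value == other;
    }
}

struct NullCheck(T)
{
    private T* _val;

    auto opDispatch(string mem)() if (__traits(hasMember, T, mem))
    {
        alias M = typeof(__traits(getMember, T, mem));
        static if (is(M == U*, U))
        {
            // Pointer member: continue the chain, propagating null.
            return NullCheck!U(_val is null ? null
                                            : __traits(getMember, *_val, mem));
        }
        else
        {
            // Value member: absent if any link upstream was null.
            return _val is null ? Maybe!M(false, M.init)
                                : Maybe!M(true, __traits(getMember, *_val, mem));
        }
    }

    bool opCast(V : bool)() const { return _val !is null; }
}

auto nullCheck(T)(T* val) { return NullCheck!T(val); }

struct Person
{
    string name;
    Person* father;
}

void main()
{
    auto grandpa = Person("Peter", null);
    auto dad     = Person("John", &grandpa);
    auto me      = Person("Paul", &dad);

    assert(nullCheck(&me).father.father.name == "Peter");

    auto orphan = Person("Ann", null);
    // No segfault: the null father short-circuits to "absent".
    assert(!(nullCheck(&orphan).father.father.name == "Peter"));
}
```

The `static if` split is the key design choice: chaining stops wrapping pointers only when it reaches a value member, which is where the null-safe comparison happens.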


-Steve


To switch GC from FIFO to LIFO paradigm.

2021-01-15 Thread MGW via Digitalmars-d-learn
GC cleans memory using the FIFO paradigm. Is it possible to 
switch GC to work using the LIFO paradigm?


Re: Directory recursive walking

2021-01-15 Thread dog2002 via Digitalmars-d-learn

On Friday, 15 January 2021 at 11:05:56 UTC, Daniel Kozak wrote:
On Fri, Jan 15, 2021 at 10:30 AM dog2002 via 
Digitalmars-d-learn < digitalmars-d-learn@puremagic.com> wrote:



...
Okay, the reason is incredibly stupid: using WinMain instead of
main causes high memory usage. I don't know why, I use the same
code. If I replace WinMain with main, the memory consumption is
about 6 MB.



https://wiki.dlang.org/D_for_Win32


Thank you! Now the application works properly.

And sorry for the dumb questions.


Re: Open question: what code pattern you use usually for null safety problem

2021-01-15 Thread ddcovery via Digitalmars-d-learn

On Thursday, 14 January 2021 at 18:24:44 UTC, ddcovery wrote:
I know there is other threads about null safety and the 
"possible" ways to support this in D and so on.


This is only an open question to know what code patterns you 
usually use to solve this situation in D




I'm writing a "personal" article/study about the "null safety" 
anti-pattern in the form of a GitHub project (to include some examples).


I really thank you for your answers here that I will use (and 
mention with your permission) in this small article.


The actual version can be found here 
https://github.com/ddcovery/d_null_safety/blob/main/README.md


It is under construction :-).





Re: Why many programmers don't like GC?

2021-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 15 January 2021 at 11:11:14 UTC, Mike Parker wrote:
That's the whole point of being able to mix and match. Anyone 
avoiding the GC completely is missing it (unless they really, 
really, must be GC-less).


Has DMD switched to using the GC as the default?


Re: Why many programmers don't like GC?

2021-01-15 Thread Mike Parker via Digitalmars-d-learn

On Friday, 15 January 2021 at 08:49:21 UTC, Imperatorn wrote:



Nice strategy, using GC and optimizing where you need it.


That's the whole point of being able to mix and match. Anyone 
avoiding the GC completely is missing it (unless they really, 
really, must be GC-less).


Re: Directory recursive walking

2021-01-15 Thread Daniel Kozak via Digitalmars-d-learn
On Fri, Jan 15, 2021 at 10:30 AM dog2002 via Digitalmars-d-learn <
digitalmars-d-learn@puremagic.com> wrote:

> ...
> Okay, the reason is incredibly stupid: using WinMain instead of
> main causes high memory usage. I don't know why, I use the same
> code. If I replace WinMain with main, the memory consumption is
> about 6 MB.
>

https://wiki.dlang.org/D_for_Win32


Re: Directory recursive walking

2021-01-15 Thread dog2002 via Digitalmars-d-learn

On Friday, 15 January 2021 at 06:15:06 UTC, dog2002 wrote:

On Thursday, 14 January 2021 at 22:28:19 UTC, Paul Backus wrote:

On Thursday, 14 January 2021 at 20:23:37 UTC, dog2002 wrote:

[...]


What code are you using to copy the bytes? If you're reading 
the whole file into memory at once, that will consume a lot of 
memory.


void func(string inputFile, string outFile, uint chunk_size) {
    try {
        File _inputFile = File(inputFile, "r");
        File _outputFile = File(outFile, "w");

        ubyte[] tempBuffer = _inputFile.rawRead(new ubyte[](512));

        // doing some operations with the tempBuffer

        _outputFile.rawWrite(tempBuffer);

        _inputFile.seek(tempBuffer.length, SEEK_SET);

        foreach(_buffer; _inputFile.byChunk(chunk_size)) {
            _outputFile.rawWrite(_buffer);
        }
        _inputFile.close();
        _outputFile.close();
    }
    catch (Throwable) {}
}


Okay, the reason is incredibly stupid: using WinMain instead of 
main causes high memory usage. I don't know why, I use the same 
code. If I replace WinMain with main, the memory consumption is 
about 6 MB.


Re: Why many programmers don't like GC?

2021-01-15 Thread Ola Fosheim Grøstad via Digitalmars-d-learn

On Friday, 15 January 2021 at 07:35:00 UTC, H. S. Teoh wrote:
To be fair, the GC *has* improved over the years.  Just not as 
quickly as people would like, but it *has* improved.


It cannot improve enough as a global collector without write 
barriers. No language has been able to do this. Therefore, D 
cannot do it.


Precise collection only helps when you have few pointers to trace.


improvement. But why would I?  It takes 5x less effort to write 
GC code, and requires only a couple more days of effort to fix


That's like saying it takes 5x more time to write code in Swift 
than D. That is not at all reasonable.


Tracing GC is primarily useful when you have many small 
long-lived objects with unclear ownership and cyclic references 
that are difficult to break with weak pointers.


In those cases it is invaluable, but most well-designed programs 
have more tree-like structures and clear ownership.



after that to debug obscure pointer bugs.  Life is too short to 
be squandered chasing down the 1000th double-free and the 
20,000th dangling pointer in my life.


That has nothing to do with a tracing GC... Cyclic references are the 
only significant problem a tracing GC addresses compared to other 
solutions.



A lot of naysayers keep repeating GC performance issues as if 
it's a black-and-white, all-or-nothing question.  It's not.  
You *can* write high-performance programs even with D's 
supposedly lousy GC -- just profile the darned thing, and


There are primarily two main problems, and they are not 
throughput, they are:


1. LATENCY: stopping the world will never be acceptable in 
interactive applications of some size, it is only acceptable in 
batch programs. In fact, even incremental collectors can cause a 
sluggish experience!


2. MEMORY CONSUMPTION: doing fewer collection cycles will 
increase the memory footprint. Ideally the collector would run 
all the time. In the cloud you pay for memory, so you want to 
keep memory consumption to a fixed level that you never exceed.



System level programming is primarily valuable for interactive 
applications, OS level programming, or embedded. So, no, it is 
not snobbish to not want a sluggish GC. Most other tasks are 
better done in high level languages.





Re: Why many programmers don't like GC?

2021-01-15 Thread Imperatorn via Digitalmars-d-learn

On Friday, 15 January 2021 at 07:35:00 UTC, H. S. Teoh wrote:
On Thu, Jan 14, 2021 at 12:36:12PM +, claptrap via 
Digitalmars-d-learn wrote: [...]

[...]


To be fair, the GC *has* improved over the years.  Just not as 
quickly as people would like, but it *has* improved.


[...]


Nice strategy, using GC and optimizing where you need it.