Re: How to use D without the GC ?

2024-06-14 Thread Dukc via Digitalmars-d-learn

bachmeier wrote on 14.6.2024 at 16.48:
See the example I posted elsewhere in this thread: 
https://forum.dlang.org/post/mwerxaolbkuxlgfep...@forum.dlang.org


I defined

```
@nogc ~this() {
   free(ptr);
   printf("Data has been freed\n");
}
```

and that gets called when the reference count hits zero.


Oh sorry, missed that.


Re: How to use D without the GC ?

2024-06-14 Thread bachmeier via Digitalmars-d-learn

On Friday, 14 June 2024 at 07:52:35 UTC, Dukc wrote:

Lance Bachmeier wrote on 14.6.2024 at 4.23:
We must be talking about different things. You could, for 
instance, call a function in a C library to allocate memory at 
runtime. That function returns a pointer and you pass it to 
SafeRefCounted to ensure it gets freed. Nothing is known about 
the allocation at compile time. This is in fact my primary use 
case - allocating an opaque struct allocated by a C library, 
and not wanting to concern myself with freeing it when I'm 
done with it.


Using a raw pointer as the `SafeRefCounted` type like that 
isn't going to work. `SafeRefCounted` will free only the 
pointer itself at the end, not the struct it's referring to. If 
you use some sort of RAII wrapper for the pointer that `free`s 
it in its destructor, then it'll work - maybe that's what you 
meant.
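
Something like this minimal sketch is what I mean; the C side is faked here with malloc/free just so it runs, and all the names are made up:

```d
import std.typecons : SafeRefCounted;
import core.stdc.stdlib : malloc, free;
import core.stdc.stdio : printf;

// Stand-in for an opaque struct from a C library (faked with malloc here).
struct OpaqueData;

@nogc nothrow OpaqueData* c_allocate() { return cast(OpaqueData*) malloc(64); }
@nogc nothrow void c_free(OpaqueData* p) { free(p); }

// The RAII wrapper: SafeRefCounted manages *this struct*, and the struct's
// destructor in turn frees what the pointer refers to.
struct Handle
{
    OpaqueData* ptr;

    @nogc nothrow this(OpaqueData* p) { ptr = p; }

    @nogc nothrow ~this()
    {
        if (ptr !is null)
        {
            c_free(ptr);
            printf("C object freed\n");
        }
    }
}

@nogc void main()
{
    auto h = SafeRefCounted!Handle(c_allocate());
} // reference count reaches zero here, Handle.~this runs, C object is freed
```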


See the example I posted elsewhere in this thread: 
https://forum.dlang.org/post/mwerxaolbkuxlgfep...@forum.dlang.org


I defined

```
@nogc ~this() {
  free(ptr);
  printf("Data has been freed\n");
}
```

and that gets called when the reference count hits zero.


Re: How to use D without the GC ?

2024-06-14 Thread Dukc via Digitalmars-d-learn

Lance Bachmeier wrote on 14.6.2024 at 4.23:
We must be talking about different things. You could, for instance, call 
a function in a C library to allocate memory at runtime. That function 
returns a pointer and you pass it to SafeRefCounted to ensure it gets 
freed. Nothing is known about the allocation at compile time. This is in 
fact my primary use case - allocating an opaque struct allocated by a C 
library, and not wanting to concern myself with freeing it when I'm done 
with it.


Using a raw pointer as the `SafeRefCounted` type like that isn't going 
to work. `SafeRefCounted` will free only the pointer itself at the end, 
not the struct it's referring to. If you use some sort of RAII wrapper 
for the pointer that `free`s it in its destructor, then it'll work - 
maybe that's what you meant.


Re: How to use D without the GC ?

2024-06-13 Thread Lance Bachmeier via Digitalmars-d-learn

On Thursday, 13 June 2024 at 07:18:48 UTC, Dukc wrote:

Lance Bachmeier wrote on 13.6.2024 at 1.32:


Why would it be different from calling malloc and free 
manually? I guess I'm not understanding, because you put the 
same calls to malloc and free that you'd otherwise be doing 
inside this and ~this.


Because with `SafeRefCounted`, you have to decide the size of 
your allocations at compile time, meaning you need to do a 
varying number of `malloc`s and `free`s to vary the size of 
your allocation at runtime. Even if you were to use templates 
to vary the type of the `SafeRefCounted` object based on the size 
of your allocation, the spec puts an upper bound of 16MiB on the 
size of a static array.


We must be talking about different things. You could, for 
instance, call a function in a C library to allocate memory at 
runtime. That function returns a pointer and you pass it to 
SafeRefCounted to ensure it gets freed. Nothing is known about 
the allocation at compile time. This is in fact my primary use 
case - allocating an opaque struct allocated by a C library, and 
not wanting to concern myself with freeing it when I'm done with 
it.


Re: How to use D without the GC ?

2024-06-13 Thread Dukc via Digitalmars-d-learn

Dukc wrote on 13.6.2024 at 10.18:
So for example, if you have a program that sometimes needs 600MiB and 
sometimes needs 1100MiB, you can in either case allocate all of it in one go 
with one `malloc` or one `new`, but you'll need at least 38 or 69 
`SafeRefCounted` static arrays respectively, and therefore that many 
`malloc`s, to accomplish the same.


Now granted, 16MiB (or even smaller amounts, like 256 KiB) sounds big 
enough that it probably isn't making a difference since it's a long way 
into multiples of page size anyway. But I'm not sure.


Re: How to use D without the GC ?

2024-06-13 Thread Dukc via Digitalmars-d-learn

Lance Bachmeier wrote on 13.6.2024 at 1.32:


Why would it be different from calling malloc and free manually? I guess 
I'm not understanding, because you put the same calls to malloc and free 
that you'd otherwise be doing inside this and ~this.


Because with `SafeRefCounted`, you have to decide the size of your 
allocations at compile time, meaning you need to do a varying number of 
`malloc`s and `free`s to vary the size of your allocation at runtime. 
Even if you were to use templates to vary the type of the `SafeRefCounted` 
object based on the size of your allocation, the spec puts an upper bound of 
16MiB on the size of a static array.


So for example, if you have a program that sometimes needs 600MiB and 
sometimes needs 1100MiB, you can in either case allocate all of it in one go 
with one `malloc` or one `new`, but you'll need at least 38 or 69 
`SafeRefCounted` static arrays respectively, and therefore that many 
`malloc`s, to accomplish the same.
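
To make the compile-time-size point concrete, here's a rough sketch of what the templated version would have to look like; `Chunk` is just a made-up name and the element count is arbitrary:

```d
import std.typecons : SafeRefCounted;

// The payload's size is a template parameter, so it is fixed at compile time
// and every distinct size is a distinct type.
struct Chunk(size_t N)
{
    double[N] data;            // static array, capped by the spec's size limit

    @nogc this(double fill) { data[] = fill; }
}

void example()
{
    // 1 MiB worth of doubles; the count cannot be chosen at runtime.
    auto c = SafeRefCounted!(Chunk!131_072)(0.0);
    c.data[0] = 1.5;
}
```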


Re: How to use D without the GC ?

2024-06-12 Thread monkyyy via Digitalmars-d-learn
On Wednesday, 12 June 2024 at 16:50:04 UTC, Vinod K Chandran 
wrote:

On Wednesday, 12 June 2024 at 01:35:26 UTC, monkyyy wrote:


rather than worrying about the GC, just have 95% of data on the 
stack


How's that even possible? AFAIK, we need heap-allocated memory 
in order to make a GUI lib as a DLL. So creating things on the heap 
and modifying them, that's the nature of my project.


GUI is a bit harder, so maybe aim for 70%

but if you went down the rabbit hole you could have strings live in 
an "arena" whose first 5000 chars are a global-scope 
array; or, like me, just use an array that doesn't expand
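
something like this minimal sketch of a fixed, non-expanding arena (sizes and names made up):

```d
// A fixed 5000-char backing store in global scope; it never touches the GC
// and it simply refuses to grow.
__gshared char[5000] arenaStorage;
__gshared size_t arenaUsed;

// Carve a slice out of the arena, or return null when it is full.
@nogc nothrow char[] arenaAlloc(size_t n)
{
    if (arenaUsed + n > arenaStorage.length)
        return null;                 // no expansion, by design
    auto s = arenaStorage[arenaUsed .. arenaUsed + n];
    arenaUsed += n;
    return s;
}

// Throw everything away at once, e.g. per frame.
@nogc nothrow void arenaReset() { arenaUsed = 0; }
```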


Re: How to use D without the GC ?

2024-06-12 Thread Lance Bachmeier via Digitalmars-d-learn

On Wednesday, 12 June 2024 at 21:59:54 UTC, drug007 wrote:

Yes, but you get all the benefits of `double[]` for free if 
you do it that way, including the more concise foo[10] syntax.


I meant you do not need to add a `ptr` field at all


I see. You're right. I thought it would be easier for someone new 
to the language to read more explicit code rather than assuming 
knowledge about data.ptr. In practice it's better to not have a 
ptr field.


Re: How to use D without the GC ?

2024-06-12 Thread Lance Bachmeier via Digitalmars-d-learn

On Wednesday, 12 June 2024 at 21:36:30 UTC, Dukc wrote:

bachmeier wrote on 12.6.2024 at 18.21:
You're splitting things into GC-allocated memory and manually 
managed memory. There's also SafeRefCounted, which handles the 
malloc and free for you.


I suspect `SafeRefCounted` (or `RefCounted`) is not the best 
fit for this scenario. The problem with it is that it `malloc`s and 
`free`s individual objects, which doesn't sound efficient to me.


Maybe it performs well if the objects in question are big enough, or 
if they can be bundled into static arrays so there's no need to 
refcount individual objects. But even then, you can't just 
allocate and free dozens or hundreds of megabytes with one 
call, unlike with the GC or manual `malloc`/`free`. I honestly 
don't know if calling malloc/free for, say, each 64KiB would 
have performance implications over a single allocation.


Why would it be different from calling malloc and free manually? 
I guess I'm not understanding, because you put the same calls to 
malloc and free that you'd otherwise be doing inside this and 
~this.


Re: How to use D without the GC ?

2024-06-12 Thread drug007 via Digitalmars-d-learn

On 12.06.2024 23:56, bachmeier wrote:

On Wednesday, 12 June 2024 at 20:37:36 UTC, drug007 wrote:

On 12.06.2024 21:57, bachmeier wrote:

On Wednesday, 12 June 2024 at 18:36:26 UTC, Vinod K Chandran wrote:

On Wednesday, 12 June 2024 at 15:33:39 UTC, bachmeier wrote:

A SafeRefCounted example with main marked @nogc:

```
import std;
import core.stdc.stdlib;

struct Foo {
  double[] data;
  double * ptr;
  alias data this;

  @nogc this(int n) {
    ptr = cast(double*) malloc(n*double.sizeof);
    data = ptr[0..n];
    printf("Data has been allocated\n");
  }
 }

```


Why not just use `ptr`? Why did you use `data` with `ptr`?


Try `foo[10] = 1.5` and `foo.ptr[10] = 1.5`. The first correctly 
throws an out of bounds error. The second gives `Segmentation fault 
(core dumped)`.


I think you can use data only because data contains data.ptr


Yes, but you get all the benefits of `double[]` for free if you do it 
that way, including the more concise foo[10] syntax.


I meant you do not need to add a `ptr` field at all
```D
import std;
import core.stdc.stdlib;

struct Foo {
@nogc:
    double[] data;
    alias data this;

    this(int n)
    {
        auto ptr = cast(double*) malloc(n*double.sizeof);
        data = ptr[0..n];
    }
}

@nogc void main() {
    auto foo = SafeRefCounted!Foo(3);
    foo[0..3] = 1.5;
    printf("%f %f %f\n", foo[0], foo[1], foo[2]);
    foo.ptr[10] = 1.5; // no need for separate ptr field
}
```


Re: How to use D without the GC ?

2024-06-12 Thread Dukc via Digitalmars-d-learn

bachmeier wrote on 12.6.2024 at 18.21:
You're splitting things into GC-allocated memory and manually managed 
memory. There's also SafeRefCounted, which handles the malloc and free 
for you.


I suspect `SafeRefCounted` (or `RefCounted`) is not the best fit for 
this scenario. The problem with it is that it `malloc`s and `free`s 
individual objects, which doesn't sound efficient to me.


Maybe it performs well if the objects in question are big enough, or if they 
can be bundled into static arrays so there's no need to refcount 
individual objects. But even then, you can't just allocate and free 
dozens or hundreds of megabytes with one call, unlike with the GC or 
manual `malloc`/`free`. I honestly don't know if calling malloc/free 
for, say, each 64KiB would have performance implications over a single 
allocation.


Re: How to use D without the GC ?

2024-06-12 Thread bachmeier via Digitalmars-d-learn

On Wednesday, 12 June 2024 at 20:37:36 UTC, drug007 wrote:

On 12.06.2024 21:57, bachmeier wrote:
On Wednesday, 12 June 2024 at 18:36:26 UTC, Vinod K Chandran 
wrote:

On Wednesday, 12 June 2024 at 15:33:39 UTC, bachmeier wrote:

A SafeRefCounted example with main marked @nogc:

```
import std;
import core.stdc.stdlib;

struct Foo {
  double[] data;
  double * ptr;
  alias data this;

  @nogc this(int n) {
    ptr = cast(double*) malloc(n*double.sizeof);
    data = ptr[0..n];
    printf("Data has been allocated\n");
  }
 }

```


Why not just use `ptr`? Why did you use `data` with `ptr`?


Try `foo[10] = 1.5` and `foo.ptr[10] = 1.5`. The first 
correctly throws an out of bounds error. The second gives 
`Segmentation fault (core dumped)`.


I think you can use data only because data contains data.ptr


Yes, but you get all the benefits of `double[]` for free if you 
do it that way, including the more concise foo[10] syntax.


Re: How to use D without the GC ?

2024-06-12 Thread bachmeier via Digitalmars-d-learn
On Wednesday, 12 June 2024 at 20:31:34 UTC, Vinod K Chandran 
wrote:

On Wednesday, 12 June 2024 at 18:57:41 UTC, bachmeier wrote:


Try `foo[10] = 1.5` and `foo.ptr[10] = 1.5`. The first 
correctly throws an out of bounds error. The second gives 
`Segmentation fault (core dumped)`.


We can use it like this, I think.
```
struct Foo {
  double * ptr;
  uint capacity;
  uint length;
  alias data this;

}
```
And then when we use an index, we can perform a bounds check.
I am not sure, but I hope this will work.


Yes, you can do that, but then you're replicating what you get 
for free by taking a slice. You'd have to write your own opIndex, 
opSlice, etc., and I don't think there's any performance benefit 
from doing so.
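
For the record, a rough sketch of the minimum you'd have to hand-write to get bounds checking back with the ptr/length approach (it only covers `opIndex`; a slice gives you all of this for free):

```d
import core.stdc.stdlib : malloc, free;

struct Foo
{
    double* ptr;
    size_t length;

    @nogc this(size_t n)
    {
        ptr = cast(double*) malloc(n * double.sizeof);
        length = n;
    }

    @nogc ~this() { free(ptr); }

    // Hand-written bounds check: a slice gives you this for free.
    @nogc ref double opIndex(size_t i)
    {
        assert(i < length, "index out of bounds");
        return ptr[i];
    }
}
```

Used as the SafeRefCounted payload it behaves like the earlier examples, except that every operator beyond opIndex still has to be written by hand.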


Re: How to use D without the GC ?

2024-06-12 Thread drug007 via Digitalmars-d-learn

On 12.06.2024 21:57, bachmeier wrote:

On Wednesday, 12 June 2024 at 18:36:26 UTC, Vinod K Chandran wrote:

On Wednesday, 12 June 2024 at 15:33:39 UTC, bachmeier wrote:

A SafeRefCounted example with main marked @nogc:

```
import std;
import core.stdc.stdlib;

struct Foo {
  double[] data;
  double * ptr;
  alias data this;

  @nogc this(int n) {
    ptr = cast(double*) malloc(n*double.sizeof);
    data = ptr[0..n];
    printf("Data has been allocated\n");
  }
 }

```


Why not just use `ptr`? Why did you use `data` with `ptr`?


Try `foo[10] = 1.5` and `foo.ptr[10] = 1.5`. The first correctly throws 
an out of bounds error. The second gives `Segmentation fault (core 
dumped)`.


I think you can use data only because data contains data.ptr


Re: How to use D without the GC ?

2024-06-12 Thread Vinod K Chandran via Digitalmars-d-learn

On Wednesday, 12 June 2024 at 18:57:41 UTC, bachmeier wrote:


Try `foo[10] = 1.5` and `foo.ptr[10] = 1.5`. The first 
correctly throws an out of bounds error. The second gives 
`Segmentation fault (core dumped)`.


We can use it like this, I think.
```
struct Foo {
  double * ptr;
  uint capacity;
  uint length;
  alias data this;

}
```
And then when we use an index, we can perform a bounds check.
I am not sure, but I hope this will work.




Re: How to use D without the GC ?

2024-06-12 Thread Vinod K Chandran via Digitalmars-d-learn

On Wednesday, 12 June 2024 at 18:58:49 UTC, evilrat wrote:
the only problem is that it seems to leak a lot of PydObjects, so I 
have to manually free them; even `scope` doesn't help with that, 
which is sad.




Oh I see. I did some experiments with nimpy and pybind11. Both 
experiments resulted in something slower than the ctypes DLL-calling 
method. That's why I didn't take much interest in binding with the 
Python C API. Even Cython is slower compared to ctypes. But it can 
be used when we call the DLL in Cython and call the Cython code 
from Python. But then you will have to face some other obstacles. 
In my case, callback functions are the reason. When using a DLL 
in Cython, you need to pass a Cython function as the callback and 
inside that function you need to convert everything into PyObjects 
back and forth. That takes time. Imagine that you want to do 
some heavy lifting in a mouse-move event? No one will be happy 
with a snail's pace.
But yeah, Cython is a nice language and we could create an entire 
GUI lib in Cython, but the execution speed is 2.5x slower than my 
current C3 DLL.
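
For reference, the D side of the ctypes approach is just plain extern(C) exports. A minimal sketch (names invented, not my actual project) of a DLL function that registers a callback, which is what a mouse-move handler boils down to:

```d
// Callback type as Python's ctypes would declare it: CFUNCTYPE(None, c_int, c_int)
alias MouseMoveHandler = extern(C) void function(int x, int y) nothrow @nogc;

__gshared MouseMoveHandler onMouseMove;

// Exported from the DLL; Python passes a ctypes-wrapped function here once.
export extern(C) void setMouseMoveHandler(MouseMoveHandler cb) nothrow @nogc
{
    onMouseMove = cb;
}

// Called from the GUI event loop inside the DLL; no Python objects are
// created or converted on this path.
void fireMouseMove(int x, int y) nothrow @nogc
{
    if (onMouseMove !is null)
        onMouseMove(x, y);
}
```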





Re: How to use D without the GC ?

2024-06-12 Thread Ferhat Kurtulmuş via Digitalmars-d-learn

On Wednesday, 12 June 2024 at 18:58:49 UTC, evilrat wrote:
On Wednesday, 12 June 2024 at 17:00:14 UTC, Vinod K Chandran 
wrote:

[...]


It is probably not that well maintained, but it definitely 
works with Python 3.10 and maybe even 3.11. I use it to 
interface with pytorch, numpy and PIL, but my use case is 
pretty simple: I just write some wrapper Python functions to 
run inference and pass images back and forth using embedded 
py_stmts. The only problem is that it seems to leak a lot of 
PydObjects, so I have to manually free them; even `scope` doesn't 
help with that, which is sad.


[...]


You can use libonnx via importC to do inference of PyTorch models 
after converting them to *.onnx. This way you won't need Python 
at all. Please refer to etichetta. Instead of PIL for 
preprocessing, just use DCV.


https://github.com/trikko/etichetta



Re: How to use D without the GC ?

2024-06-12 Thread evilrat via Digitalmars-d-learn
On Wednesday, 12 June 2024 at 17:00:14 UTC, Vinod K Chandran 
wrote:

On Wednesday, 12 June 2024 at 10:16:26 UTC, Sergey wrote:


Btw are you going to use PyD or doing everything manually from 
scratch?


Is PyD active now? I haven't tested it. My approach is using the 
"ctypes" library with my DLL. Ctypes is the fastest FFI in my 
experience. I tested Cython, Pybind11 and CFFI, but none can 
beat the speed of ctypes. Currently the fastest experiments 
were the DLLs created in Odin & C3. Both are non-GC languages.


It is probably not that well maintained, but it definitely works 
with Python 3.10 and maybe even 3.11. I use it to interface with 
pytorch, numpy and PIL, but my use case is pretty simple: I 
just write some wrapper Python functions to run inference and 
pass images back and forth using embedded py_stmts. The only 
problem is that it seems to leak a lot of PydObjects, so I have to 
manually free them; even `scope` doesn't help with that, which is 
sad.


example classifier python
```python
def inference(image: Image):
    """ Predicts the image class and returns confidences for every class

    To get the class one can use the following code
    > conf = inference(image)
    > index = conf.argmax()
    > cls = classes[index]
    """

    # this detector doesn't work with more than 3 channels
    ch = len(image.getbands())
    has_transparency = image.info.get('transparency', None) is not None

    if ch > 3 or has_transparency:
        image = image.convert("RGB")

    image_tensor = prep_transform(image).float()
    image_tensor = image_tensor.unsqueeze_(0)

    # it is fast enough to run on CPU
    #if torch.cuda.is_available():
    #    image_tensor.cuda()

    with torch.inference_mode():
        # NOTE: read the comment on model
        output = model(image_tensor)
        index = output.data.numpy()

    return index
```

and some of the D functions

```d
ImageData aiGoesB(string path, int strength = 50) {
    try {
        if (!pymod)
            py_stmts("import sys; sys.path.append('modules/xyz')");

        initOnce!pymod(py_import("xyz.inference"));
        if (!pymod.hasattr("model"))
            pymod.model = pymod.method("load_model", "modules/xyz/pre_trained/weights.pth");

        PydObject ipath = py(path);
        scope(exit) destroy(ipath);

        auto context = new InterpContext();
        context.path = ipath;

        context.py_stmts("
from PIL import Image
image = Image.open(path)
ch = len(image.getbands())
if ch > 3:
    image = image.convert('RGB')
");

        // signature: def run(model, imagepath, alpha=45) -> numpy.Array
        PydObject output = pymod.method("run", pymod.model, context.image, 100-strength);

        context.output = output;
        scope(exit) destroy(output);

        PydObject shape = output.getattr("shape");
        scope(exit) destroy(shape);

        // int n = ...;
        int c = shape[2].to_d!int;
        int w = shape[1].to_d!int;
        int h = shape[0].to_d!int;

        // numpy array
        void* raw_ptr = output.buffer_view().item_ptr([0,0,0]);

        ubyte* d_ptr = cast(ubyte*) raw_ptr;
        ubyte[] d_img = d_ptr[0..h*w*c];

        return ImageData(d_img.dup, h, w, c);
    } catch (PythonException e) {
        // oh no...
        auto context = new InterpContext();
        context.trace = new PydObject(e.traceback);
        context.py_stmts("from traceback import format_tb; trace = format_tb(trace)");

        printerr(e.py_message, "\n", context.trace.to_d!string);
    }
    return ImageData.init;
}
```



Re: How to use D without the GC ?

2024-06-12 Thread bachmeier via Digitalmars-d-learn
On Wednesday, 12 June 2024 at 18:36:26 UTC, Vinod K Chandran 
wrote:

On Wednesday, 12 June 2024 at 15:33:39 UTC, bachmeier wrote:

A SafeRefCounted example with main marked @nogc:

```
import std;
import core.stdc.stdlib;

struct Foo {
  double[] data;
  double * ptr;
  alias data this;

  @nogc this(int n) {
ptr = cast(double*) malloc(n*double.sizeof);
data = ptr[0..n];
printf("Data has been allocated\n");
  }
 }

```


Why not just use `ptr`? Why did you use `data` with `ptr`?


Try `foo[10] = 1.5` and `foo.ptr[10] = 1.5`. The first correctly 
throws an out of bounds error. The second gives `Segmentation 
fault (core dumped)`.


Re: How to use D without the GC ?

2024-06-12 Thread Vinod K Chandran via Digitalmars-d-learn

On Wednesday, 12 June 2024 at 15:33:39 UTC, bachmeier wrote:

A SafeRefCounted example with main marked @nogc:

```
import std;
import core.stdc.stdlib;

struct Foo {
  double[] data;
  double * ptr;
  alias data this;

  @nogc this(int n) {
ptr = cast(double*) malloc(n*double.sizeof);
data = ptr[0..n];
printf("Data has been allocated\n");
  }
 }

```


Why not just use `ptr`? Why did you use `data` with `ptr`?




Re: How to use D without the GC ?

2024-06-12 Thread Vinod K Chandran via Digitalmars-d-learn

On Wednesday, 12 June 2024 at 15:33:39 UTC, bachmeier wrote:

A SafeRefCounted example with main marked @nogc:


Thanks for the sample. It looks tempting! Let me check that.


Re: How to use D without the GC ?

2024-06-12 Thread Vinod K Chandran via Digitalmars-d-learn

On Wednesday, 12 June 2024 at 15:21:22 UTC, bachmeier wrote:


You're splitting things into GC-allocated memory and manually 
managed memory. There's also SafeRefCounted, which handles the 
malloc and free for you.


Thanks, I have read about the possibilities of "using malloc and 
free from D" in some other post. I think I need to check 
that.




Re: How to use D without the GC ?

2024-06-12 Thread Vinod K Chandran via Digitalmars-d-learn

On Wednesday, 12 June 2024 at 10:16:26 UTC, Sergey wrote:


Btw are you going to use PyD or doing everything manually from 
scratch?


Is PyD active now? I haven't tested it. My approach is using the 
"ctypes" library with my DLL. Ctypes is the fastest FFI in my 
experience. I tested Cython, Pybind11 and CFFI, but none can beat 
the speed of ctypes. Currently the fastest experiments were the 
DLLs created in Odin & C3. Both are non-GC languages.





Re: How to use D without the GC ?

2024-06-12 Thread Vinod K Chandran via Digitalmars-d-learn

On Wednesday, 12 June 2024 at 09:44:05 UTC, DrDread wrote:


also just slap @nogc on your main function to avoid accidental 
allocations.



Thanks for the suggestion. Let me check that idea.




Re: How to use D without the GC ?

2024-06-12 Thread Vinod K Chandran via Digitalmars-d-learn

On Wednesday, 12 June 2024 at 01:35:26 UTC, monkyyy wrote:


rather than worrying about the GC, just have 95% of data on the 
stack


How's that even possible? AFAIK, we need heap-allocated memory 
in order to make a GUI lib as a DLL. So creating things on the heap and 
modifying them, that's the nature of my project.




Re: How to use D without the GC ?

2024-06-12 Thread bachmeier via Digitalmars-d-learn

A SafeRefCounted example with main marked @nogc:

```
import std;
import core.stdc.stdlib;

struct Foo {
  double[] data;
  double * ptr;
  alias data this;

  @nogc this(int n) {
ptr = cast(double*) malloc(n*double.sizeof);
data = ptr[0..n];
printf("Data has been allocated\n");
  }

  @nogc ~this() {
free(ptr);
printf("Data has been freed\n");
  }
}

@nogc void main() {
  auto foo = SafeRefCounted!Foo(3);
  foo[0..3] = 1.5;
  printf("%f %f %f\n", foo[0], foo[1], foo[2]);
}
```


Re: How to use D without the GC ?

2024-06-12 Thread Sergey via Digitalmars-d-learn

On Tuesday, 11 June 2024 at 17:15:07 UTC, Vinod K Chandran wrote:
On Tuesday, 11 June 2024 at 16:54:44 UTC, Steven Schveighoffer 
wrote:



Two reasons.
1. I am writing a DLL to use in Python. So I am assuming that


Btw are you going to use PyD or doing everything manually from 
scratch?


Re: How to use D without the GC ?

2024-06-12 Thread DrDread via Digitalmars-d-learn

On Tuesday, 11 June 2024 at 17:15:07 UTC, Vinod K Chandran wrote:
On Tuesday, 11 June 2024 at 16:54:44 UTC, Steven Schveighoffer 
wrote:



I would instead ask the reason for wanting to write D code 
without the GC.


-Steve


Hi Steve,
Two reasons.
1. I am writing a DLL to use in Python. So I am assuming that 
manual memory management is better for this project. It will 
give me finer control.

2. To squeeze out the last bit of performance from D.


The GC only runs on allocation. If you want to squeeze out the 
last bit of performance, you should preallocate all buffers 
anyway, and then GC vs no GC doesn't matter.
Also just slap @nogc on your main function to avoid accidental 
allocations.
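
A tiny sketch of what that looks like in practice (the buffer size is made up):

```d
import core.stdc.stdlib : malloc, free;

enum bufSize = 1024 * 1024;   // made-up size

@nogc void main()
{
    // One allocation up front...
    auto buf = (cast(ubyte*) malloc(bufSize))[0 .. bufSize];
    scope(exit) free(buf.ptr);

    // ...then the hot loop only reuses it; @nogc guarantees no accidental
    // GC allocation sneaks in here.
    foreach (i; 0 .. 1000)
        buf[0 .. 16] = cast(ubyte) i;
}
```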


Re: How to use D without the GC ?

2024-06-11 Thread monkyyy via Digitalmars-d-learn

On Tuesday, 11 June 2024 at 17:15:07 UTC, Vinod K Chandran wrote:
On Tuesday, 11 June 2024 at 16:54:44 UTC, Steven Schveighoffer 
wrote:



I would instead ask the reason for wanting to write D code 
without the GC.


-Steve


Hi Steve,
Two reasons.
1. I am writing a DLL to use in Python. So I am assuming that 
manual memory management is better for this project. It will 
give me finer control.

2. To squeeze out the last bit of performance from D.


rather than worrying about the GC, just have 95% of data on the 
stack


Re: How to use D without the GC ?

2024-06-11 Thread Vinod K Chandran via Digitalmars-d-learn
On Tuesday, 11 June 2024 at 16:54:44 UTC, Steven Schveighoffer 
wrote:



I would instead ask the reason for wanting to write D code 
without the GC.


-Steve


Hi Steve,
Two reasons.
1. I am writing a DLL to use in Python. So I am assuming that 
manual memory management is better for this project. It will give 
me finer control.

2. To squeeze out the last bit of performance from D.





Re: How to use D without the GC ?

2024-06-11 Thread Steven Schveighoffer via Digitalmars-d-learn

On Tuesday, 11 June 2024 at 13:00:50 UTC, Vinod K Chandran wrote:

Hi all,
I am planning to write some D code without the GC, but I have no 
prior experience with it. I have experience using manual memory 
management languages, but D has so far been used with the GC. So I 
want to know what pitfalls it has and what things I should 
watch out for. Also, I want to know what high-level features I 
will be missing.

Thanks in advance.


I could answer the question directly, but it seems others have 
already done so.


I would instead ask the reason for wanting to write D code 
without the GC. In many cases, you can write code without 
*regularly* using the GC (i.e. preallocate, or reuse buffers), 
but still use the GC in the sense that it is there as your 
allocator.


A great example is exceptions. Something that has the code `throw 
new Exception(...)` is going to need the GC in order to build 
that exception. But if your code is written such that this never 
(normally) happens, then you aren't using the GC for that code.
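
A small sketch of the distinction (a made-up parsing function): it compiles with the GC available and can throw, but a successful call never allocates:

```d
// Parses a non-negative integer from an existing buffer.  The GC is "used"
// only in the sense that the error path may allocate an Exception; a
// successful call allocates nothing at all.
int parseDigits(const(char)[] s)
{
    int value = 0;
    foreach (c; s)
    {
        if (c < '0' || c > '9')
            throw new Exception("not a digit");   // GC allocation, error path only
        value = value * 10 + (c - '0');
    }
    return value;
}
```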


So I would call this kind of style writing code that avoids 
creating garbage. To me, this is the most productive way to 
minimize GC usage, while still allowing one to use D as it was 
intended.


-Steve


Re: How to use D without the GC ?

2024-06-11 Thread drug007 via Digitalmars-d-learn

On 11.06.2024 17:59, Kagamin wrote:
1) arena allocator makes memory manageable with occasional cache 
invalidation problem

2) no hashtable no problem


[OT] could you elaborate what problems they cause?

3) error handling depends on your code complexity, but even in complex 
C# code I found exceptions to be boolean: you either have an exception or 
you don't

4) I occasionally use CTFE, where `@nogc` is a nuisance
5) polymorphism can be a little quirky




Re: How to use D without the GC ?

2024-06-11 Thread Vinod K Chandran via Digitalmars-d-learn

On Tuesday, 11 June 2024 at 14:59:24 UTC, Kagamin wrote:
1) arena allocator makes memory manageable with occasional 
cache invalidation problem

2) no hashtable no problem
3) error handling depends on your code complexity, but even in 
complex C# code I found exceptions to be boolean: you either have 
an exception or you don't

4) I occasionally use CTFE, where `@nogc` is a nuisance
5) polymorphism can be a little quirky


Oh thank you @Kagamin. That's some valuable comments. I will take 
special care.


Re: How to use D without the GC ?

2024-06-11 Thread Kagamin via Digitalmars-d-learn
1) arena allocator makes memory manageable with occasional cache 
invalidation problem

2) no hashtable no problem
3) error handling depends on your code complexity, but even in 
complex C# code I found exceptions to be boolean: you either have an 
exception or you don't

4) I occasionally use CTFE, where `@nogc` is a nuisance
5) polymorphism can be a little quirky


Re: How to use D without the GC ?

2024-06-11 Thread Vinod K Chandran via Digitalmars-d-learn

On Tuesday, 11 June 2024 at 13:35:19 UTC, matheus wrote:
On Tuesday, 11 June 2024 at 13:00:50 UTC, Vinod K Chandran 
wrote:

...


Similar posts that may help:

https://forum.dlang.org/thread/hryadrwplyezihwag...@forum.dlang.org

https://forum.dlang.org/thread/dblfikgnzqfmmglwd...@forum.dlang.org

Matheus.

Thank you Matheus, let me check that. :)



Re: How to use D without the GC ?

2024-06-11 Thread matheus via Digitalmars-d-learn

On Tuesday, 11 June 2024 at 13:00:50 UTC, Vinod K Chandran wrote:

...


Similar posts that may help:

https://forum.dlang.org/thread/hryadrwplyezihwag...@forum.dlang.org

https://forum.dlang.org/thread/dblfikgnzqfmmglwd...@forum.dlang.org

Matheus.


Re: importC with gc-sections not work on linux

2024-02-26 Thread Dakota via Digitalmars-d-learn
On Monday, 26 February 2024 at 12:33:02 UTC, Richard (Rikki) 
Andrew Cattermole wrote:

On 27/02/2024 1:28 AM, Dakota wrote:
When I use importC to build a C library, there are a lot of unused 
symbols missing.


I tried adding `-L--gc-sections` to dmd to work around this issue.


This removes symbols, not keeps them.

You want the linker flag: ``--no-gc-sections``

"Enable garbage collection of unused input sections. It is 
ignored on targets that do not support this option. The default 
behaviour (of not performing this garbage collection) can be 
restored by specifying ‘--no-gc-sections’ on the command line. 
Note that garbage collection for COFF and PE format targets is 
supported, but the implementation is currently considered to be 
experimental."


https://sourceware.org/binutils/docs/ld/Options.html


I need to remove symbols, since the problem is that some symbols from 
importC reference undefined symbols (which would need to be implemented 
in D, but will never be used).



After removing the unused symbols, I don't need to implement 
all of them.




Re: importC with gc-sections not work on linux

2024-02-26 Thread Richard (Rikki) Andrew Cattermole via Digitalmars-d-learn

On 27/02/2024 1:28 AM, Dakota wrote:
When I use importC to build a C library, there are a lot of unused symbols 
missing.


I tried adding `-L--gc-sections` to dmd to work around this issue.


This removes symbols, not keeps them.

You want the linker flag: ``--no-gc-sections``

"Enable garbage collection of unused input sections. It is ignored on 
targets that do not support this option. The default behaviour (of not 
performing this garbage collection) can be restored by specifying 
‘--no-gc-sections’ on the command line. Note that garbage collection for 
COFF and PE format targets is supported, but the implementation is 
currently considered to be experimental."


https://sourceware.org/binutils/docs/ld/Options.html


importC with gc-sections not work on linux

2024-02-26 Thread Dakota via Digitalmars-d-learn
When I use importC to build a C library, there are a lot of unused 
symbols missing.


I tried adding `-L--gc-sections` to dmd to work around this issue.


I also tried `-L-dead_strip` on macOS; it works as expected.


I did some googling, and someone suggested using 
`-ffunction-sections` and `-fdata-sections`, but dmd doesn't 
seem to support them.



Any tips to work around this?


Re: D is nice whats really wrong with gc??

2023-12-23 Thread IGotD- via Digitalmars-d-learn

On Monday, 18 December 2023 at 16:44:11 UTC, Bkoie wrote:
just look at this. I know this is overdesigned; I'm just trying to 
get a visual on how an API can be designed. I'm still new, though, but 
the fact that you can build an API like this and it doesn't break is 
amazing.


but what is with these people and the GC?
just don't allocate new memory or invoke it,
you can use scopes to temporarily do stuff on immutable slices 
that will auto clean up

the list goes on

and you don't need to use pointers at all...!!

I honestly see nothing wrong with the GC,



I don't think there is anything wrong with having a GC in a language either, 
and upcoming languages show that as well, as a majority of them have some 
form of GC. GC is here to stay regardless.


So what is the problem with D? The problem with D is that it is 
limited in what type of GC it can support. Right now D only 
supports a stop-the-world GC, which is quickly becoming unacceptable 
on modern systems. Sure, it was fine when we had dual-core 
CPUs, but today desktop PCs can have 32 execution units (server 
CPUs can have an insane number of them, like 128). Stopping 32 
execution units (potentially even more if you have more threads) 
is just unacceptable; it not only takes a lot of time but is a 
very clumsy approach on modern systems.


What GC should D then support? In my opinion, all of them. Memory 
management is a moving target and I don't know how it will look 
in 10 years. Will cache snooping be viable, for example? Will 
the cores be clustered so that snoops are only possible within 
them, etc.? D needs a more future-proof language design when it 
comes to memory management.


Because of this, it is important that D can support different types 
of GC as seamlessly as possible. Exposing raw pointers in the language 
for GC-allocated types was a big mistake in the D language design 
which I think should be rectified. Almost all other new languages 
have opaque pointer/reference types in order to hide the GC 
mechanism, so that other GC algorithms like reference counting can 
be used.


This is an F- in language design.



Re: D is nice whats really wrong with gc??

2023-12-23 Thread bomat via Digitalmars-d-learn

On Friday, 22 December 2023 at 22:33:35 UTC, H. S. Teoh wrote:
IMNSHO, if I had very large data files to load, I wouldn't use 
JSON. Precompile the data into a more compact binary form 
that's already ready to use, and just mmap() it at runtime.


I wondered about that decision as well, especially because this 
was internal game data that did not have to be user readable.
That's beside the point though; it was a ~10 MB JSON file that 
took them several minutes to parse. That's really just insane. 
Turns out it helps if you don't count the length of the entire 
document for every single value. It also helps if you don't 
iterate over your entire array of already written values every 
time you want to insert a new one. :)

In case you didn't know the story, here's a link:
https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times-by-70/

I think there are several great lessons in there. Rockstar must 
have noticed how slow the loading is, but apparently just 
accepted it as a given... for 7+ years. Who needs optimizations 
on today's great hardware, right? There couldn't possibly be 
algorithmic problems in something simple like a JSON parser, 
right?
Second, look at what people suspected as the root cause of the 
problem, like the P2P architecture. It's funny how speculations 
about performance problems are *always* wrong. Only measuring 
will tell you the truth.




Re: D is nice whats really wrong with gc??

2023-12-22 Thread H. S. Teoh via Digitalmars-d-learn
On Fri, Dec 22, 2023 at 09:40:03PM +, bomat via Digitalmars-d-learn wrote:
> On Friday, 22 December 2023 at 16:51:11 UTC, bachmeier wrote:
> > Given how fast computers are today, the folks that focus on memory
> > and optimizing for performance might want to apply for jobs as
> > flooring inspectors, because they're often solving problems from the
> > 1990s.
> 
> *Generally* speaking, I disagree. Think of the case of GTA V where
> several *minutes* of loading time were burned just because they
> botched the implementation of a JSON parser.

IMNSHO, if I had very large data files to load, I wouldn't use JSON.
Precompile the data into a more compact binary form that's already ready
to use, and just mmap() it at runtime.


> Of course, this was unrelated to memory management. But it goes to
> show that today's hardware being super fast doesn't absolve you from
> knowing what you're doing... or at least question your implementation
> once you notice that it's slow.

My favorite example in this area is the poor selection of algorithms, a
very common mistake being choosing an O(n²) algorithm because it's
easier to implement than the equivalent O(n) algorithm, and not very
noticeable on small inputs. But on large inputs it slows to an unusable
crawl. "But I wrote it in C, why isn't it fast?!" Because O(n²) is
O(n²), and that's independent of language. Given large enough input, an
O(n) Java program will beat the heck out of an O(n²) C program.
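
A stock illustration of the trap in D (not from any particular codebase): building a string with repeated concatenation vs. an Appender:

```d
import std.array : appender;

// O(n^2): every `~` allocates a new string and copies everything built so far.
string joinSlow(string[] parts)
{
    string result;
    foreach (p; parts)
        result = result ~ p;
    return result;
}

// Amortized O(n): Appender grows its buffer geometrically and appends in place.
string joinFast(string[] parts)
{
    auto app = appender!string();
    foreach (p; parts)
        app.put(p);
    return app[];
}
```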


> But that is true for any language, obviously.
>
> I think there is a big danger of people programming in C/C++ and
> thinking that it *must* be performing well just because it's C/C++.
> The C++ codebase I have to maintain in my day job is a really bad
> example for that as well.

"Elegant or ugly code as well as fine or rude sentences have something
in common: they don't depend on the language." -- Luca De Vitis

:-)


> > I say this as I'm in the midst of porting C code to D. The biggest
> > change by far is deleting line after line of manual memory
> > management.  Changing anything in that codebase would be miserable.
> 
> I actually hate C with a passion.

Me too. :-D


> I have to be fair though: What you describe doesn't sound like a
> problem of the codebase being C, but the codebase being crap. :)

Yeah, I've seen my fair share of crap C and C++ codebases. C code that
makes you do a double take and stare real hard at the screen to
ascertain whether it's actually C and not some jokelang or exolang
purposely designed to be unreadable/unmaintainable. (Or maybe it would
qualify as an IOCCC entry. :-D)  And C++ code that looks like ... I
dunno what.  When business logic is being executed inside of a dtor, you
*know* that your codebase has Problems(tm), real big ones at that.



> If you have to delete "line after line" of manual memory management, I
> assume you're dealing with micro-allocations on the heap - which are
> performance poison in any language.

Depends on what you're dealing with.  Some micro-allocations are totally
avoidable, but if you're manipulating a complex object graph composed of
nodes of diverse types, it's hard to avoid. At least, not without
uglifying your APIs significantly and introducing long-term
maintainability issues.  One of my favorite GC "lightbulb" moments is
when I realized that having a GC allowed me to simplify my internal APIs
significantly, resulting in much cleaner code that's easy to debug and
easy to maintain. Whereas the equivalent bit of code in the original C++
codebase would have required disproportionate amounts of effort just to
navigate the complex allocation requirements.

These days my motto is: use the GC by default, when it becomes a
problem, then use a more manual memory management scheme, but *only
where the bottleneck is* (as proven by an actual profiler, not where you
"know" (i.e., imagine) it is).  A lot of C/C++ folk (and I speak from my
own experience as one of them) spend far too much time and energy
optimizing things that don't need to be optimized, because they are
nowhere near the bottleneck, resulting in lots of sunk cost and added
maintenance burden with no meaningful benefit.


[...]
> Of course, this directly leads to the favorite argument of C
> defenders, which I absolutely hate: "Why, it's not a problem if you're
> doing it *right*."
> 
> By this logic, you have to do all these terrible mistakes while
> learning your terrible language, and then you'll be a good programmer
> and can actually be trusted with writing production software - after
> like, what, 20 years of shooting yourself in the foot and learning
> everything the hard way?  :) And even then, the slightest slipup

Re: D is nice whats really wrong with gc??

2023-12-22 Thread bomat via Digitalmars-d-learn

On Friday, 22 December 2023 at 16:51:11 UTC, bachmeier wrote:
Given how fast computers are today, the folks that focus on 
memory and optimizing for performance might want to apply for 
jobs as flooring inspectors, because they're often solving 
problems from the 1990s.


*Generally* speaking, I disagree. Think of the case of GTA V 
where several *minutes* of loading time were burned just because 
they botched the implementation of a JSON parser.
Of course, this was unrelated to memory management. But it goes 
to show that today's hardware being super fast doesn't absolve 
you from knowing what you're doing... or at least question your 
implementation once you notice that it's slow.

But that is true for any language, obviously.
I think there is a big danger of people programming in C/C++ and 
thinking that it *must* be performing well just because it's 
C/C++. The C++ codebase I have to maintain in my day job is a 
really bad example for that as well.


I say this as I'm in the midst of porting C code to D. The 
biggest change by far is deleting line after line of manual 
memory management. Changing anything in that codebase would be 
miserable.


I actually hate C with a passion.
I have to be fair though: What you describe doesn't sound like a 
problem of the codebase being C, but the codebase being crap. :)
If you have to delete "line after line" of manual memory 
management, I assume you're dealing with micro-allocations on the 
heap - which are performance poison in any language.
A decent system would allocate memory in larger blocks and manage 
access to it via handles. That way you never do micro-allocations 
and never have ownership problems.
Essentially, it's still a "memory manager" that owns all the 
memory, the only difference being that it's self-written.
Porting a codebase like that would actually be very easy because 
all the mallocs would be very localized.
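
A minimal sketch of the kind of block-plus-handles scheme I mean (names invented, fixed capacity for brevity):

```d
// One contiguous block owned by the pool; user code holds integer handles,
// never raw pointers, so ownership stays in one place.
struct Pool(T, size_t capacity)
{
    private T[capacity] items;
    private size_t count;

    // Returns a handle (an index), or size_t.max when the pool is full.
    size_t acquire(T value)
    {
        if (count == capacity)
            return size_t.max;
        items[count] = value;
        return count++;
    }

    ref T get(size_t handle)
    {
        assert(handle < count, "stale or invalid handle");
        return items[handle];
    }
}

unittest
{
    Pool!(double, 128) pool;
    auto h = pool.acquire(3.14);
    assert(pool.get(h) == 3.14);
}
```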


Of course, this directly leads to the favorite argument of C 
defenders, which I absolutely hate: "Why, it's not a problem if 
you're doing it *right*."


By this logic, you have to do all these terrible mistakes while 
learning your terrible language, and then you'll be a good 
programmer and can actually be trusted with writing production 
software - after like, what, 20 years of shooting yourself in the 
foot and learning everything the hard way? :)
And even then, the slightest slipup will give you dramatic 
vulnerabilities.

Such a great concept.



Re: D is nice whats really wrong with gc??

2023-12-22 Thread H. S. Teoh via Digitalmars-d-learn
On Fri, Dec 22, 2023 at 07:22:15PM +, Dmitry Ponyatov via 
Digitalmars-d-learn wrote:
> > It's called GC phobia, a knee-jerk reaction malady common among
> > C/C++ programmers
> 
> I'd like to use D in hard realtime apps (gaming can be thought as one
> of them, but I mostly mean realtime dynamic multimedia and digital
> signal processing).

For digital signal processing, couldn't you just preallocate beforehand?
Even if we had a top-of-the-line incremental GC I wouldn't want to
allocate wantonly in my realtime code. I'd preallocate whatever I can,
and use region allocators for the rest.
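
For instance, a sketch using Phobos' std.experimental.allocator building blocks (sizes are made up, and this only shows the shape of it, not something tuned for hard real-time guarantees):

```d
import std.experimental.allocator : makeArray;
import std.experimental.allocator.building_blocks.region : Region;
import std.experimental.allocator.mallocator : Mallocator;

void processBlock()
{
    // One up-front malloc backs the whole region; inside the hot path every
    // allocation is a cheap pointer bump and the GC is never involved.
    auto scratch = Region!Mallocator(1024 * 1024);

    auto window = makeArray!double(scratch, 4096);
    // ... fill `window` and run the DSP kernel on it ...

    // When `scratch` goes out of scope, the whole region is released at once.
}
```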


> So, GC in such applications commonly supposed unacceptable. In
> contrast, I can find some PhD theses speaking about realtime GC,
> prioritized message passing and maybe RDMA-based clustering.

I'm always skeptical of general claims like this. Until you actually
profile and identify the real hotspots, it's just speculation.


> Unfortunately, I have no hope that D lang is popular enough that
> somebody in the topic can rewrite its runtime and gc to be usable in
> more or less hard RT apps.

Popularity has nothing to do with it. The primary showstopper here is
the lack of write barriers (and Walter's reluctance to change this).
If we had write barriers a lot more GC options would open up.


T

-- 
What is Matter, what is Mind? Never Mind, it doesn't Matter.


Re: D is nice whats really wrong with gc??

2023-12-22 Thread Dmitry Ponyatov via Digitalmars-d-learn
It's called GC phobia, a knee-jerk reaction malady common among 
C/C++ programmers


I'd like to use D in hard realtime apps (gaming can be thought of as 
one of them, but I mostly mean realtime dynamic multimedia and 
digital signal processing).


So, GC in such applications is commonly supposed to be unacceptable. In 
contrast, I can find some PhD theses speaking about realtime GC, 
prioritized message passing and maybe RDMA-based clustering.


Unfortunately, I have no hope that D is popular enough that 
somebody versed in the topic can rewrite its runtime and GC to be usable 
in more or less hard RT apps.




Re: D is nice whats really wrong with gc??

2023-12-22 Thread bachmeier via Digitalmars-d-learn

On Friday, 22 December 2023 at 12:53:44 UTC, bomat wrote:

If you use (or even feel tempted to use) a GC, it means that 
you don't care about your memory. Neither about its layout nor 
its size, nor when chunks of it are allocated or deallocated, 
etc.
And if you don't care about these things, you should not call 
yourself a programmer. You are the reason why modern software 
sucks and everything gets slower and slower despite the 
processors getting faster and faster. In fact, you probably 
should get another job, like flooring inspector or something. :)


Given how fast computers are today, the folks that focus on 
memory and optimizing for performance might want to apply for 
jobs as flooring inspectors, because they're often solving 
problems from the 1990s. That's not to say it's never needed, but 
the number of cases where idiomatic D, Go, or Java will be too 
slow is shrinking rapidly. And there's a tradeoff. In return for 
solving a problem that doesn't exist, you get bugs, increased 
development time, and difficulty changing approaches.


I say this as I'm in the midst of porting C code to D. The 
biggest change by far is deleting line after line of manual 
memory management. Changing anything in that codebase would be 
miserable.


Re: D is nice whats really wrong with gc??

2023-12-22 Thread Bkoie via Digitalmars-d-learn

On Friday, 22 December 2023 at 12:53:44 UTC, bomat wrote:
I think the problem most "old school" programmers have with 
automatic garbage collection, or *any* kind of "managed" code, 
really, is not the GC itself, but that it demonstrates a wrong 
mindset.


If you use (or even feel tempted to use) a GC, it means that 
you don't care about your memory. Neither about its layout nor 
its size, nor when chunks of it are allocated or deallocated, 
etc.
And if you don't care about these things, you should not call 
yourself a programmer. You are the reason why modern software 
sucks and everything gets slower and slower despite the 
processors getting faster and faster. In fact, you probably 
should get another job, like flooring inspector or something. :)


and that's the reason why modern programs are getting bigger, 
slower, and leaking memory. No one should be manually managing 
memory. Rust is a prime example of that, but now "the borrow checker 
is the issue" or "too many unsafe blocks". And as one guy said above, 
you can avoid the GC in D, so...


Re: D is nice whats really wrong with gc??

2023-12-22 Thread bomat via Digitalmars-d-learn

On Monday, 18 December 2023 at 16:44:11 UTC, Bkoie wrote:

but what is with these ppl and the gc?
[...]


I'm a C++ programmer in my day job. Personally, I have no problem 
with a GC, but one of my colleague is a total C fanboy, so I feel 
qualified to answer your question. :)


I think the problem most "old school" programmers have with 
automatic garbage collection, or *any* kind of "managed" code, 
really, is not the GC itself, but that it demonstrates a wrong 
mindset.


If you use (or even feel tempted to use) a GC, it means that you 
don't care about your memory. Neither about its layout nor its 
size, nor when chunks of it are allocated or deallocated, etc.
And if you don't care about these things, you should not call 
yourself a programmer. You are the reason why modern software 
sucks and everything gets slower and slower despite the 
processors getting faster and faster. In fact, you probably 
should get another job, like flooring inspector or something. :)


And although this is not my opinion (otherwise I wouldn't use D), 
I have to admit that this isn't completely wrong. I like my 
abstractions because they make my life easier, but yeah, they 
detach me from the hardware, which often means things are not 
quite as fast as they could possibly be. It's a tradeoff.


Of course, people with a "purer" mindset could always use the 
"BetterC" subset of D... but then again, why should they? C is 
perfect, right? :)


Re: D is nice whats really wrong with gc??

2023-12-20 Thread Imperatorn via Digitalmars-d-learn

On Monday, 18 December 2023 at 17:22:22 UTC, H. S. Teoh wrote:
On Mon, Dec 18, 2023 at 04:44:11PM +, Bkoie via 
Digitalmars-d-learn wrote: [...]

but what is with these ppl and the gc?

[...]

It's called GC phobia, a knee-jerk reaction malady common among 
C/C++ programmers (I'm one of them, though I got cured of GC 
phobia thanks to D :-P).  95% of the time the GC helps far more 
than it hurts.  And the 5% of the time when it hurts, there are 
plenty of options for avoiding it in D.  It's not shoved down 
your throat like in Java, there's no need to get all worked up 
about it.



T


Truth


Re: D is nice whats really wrong with gc??

2023-12-18 Thread H. S. Teoh via Digitalmars-d-learn
On Mon, Dec 18, 2023 at 04:44:11PM +, Bkoie via Digitalmars-d-learn wrote:
[...]
> but what is with these ppl and the gc?
[...]

It's called GC phobia, a knee-jerk reaction malady common among C/C++
programmers (I'm one of them, though I got cured of GC phobia thanks to
D :-P).  95% of the time the GC helps far more than it hurts.  And the
5% of the time when it hurts, there are plenty of options for avoiding
it in D.  It's not shoved down your throat like in Java, there's no need
to get all worked up about it.


T

-- 
Computerese Irregular Verb Conjugation: I have preferences.  You have biases.  
He/She has prejudices. -- Gene Wirchenko


D is nice whats really wrong with gc??

2023-12-18 Thread Bkoie via Digitalmars-d-learn
just look at this. I know this is overdesigned; I'm just trying to get 
a visual on how an API can be designed. I'm still new, though, but the 
fact that you can build an API like this and it doesn't break is 
amazing.


but what is with these people and the GC?
just don't allocate new memory or invoke it,
you can use scopes to temporarily do stuff on immutable slices that 
will auto clean up

the list goes on

and you don't need to use pointers at all...!!

I honestly see nothing wrong with the GC,

Of course D has some downsides:
the docs are not very good compared to some other languages,
IDE support is not great but it works sometimes
(I use Helix and Lapce and maybe sometimes IntelliJ;
it works better in Helix though),
and D is missing some minor libraries.

```
import std.stdio: writeln, readln;

auto struct Game
{
    string title;
    private Board _board;
    private const(Player)[] _players;

    final auto load(T)(T any) {
        static if (is(T == Player)) {
            _pushPlayer(any);
        }
        return this;
    };

    final auto play() {assert(_isPlayersFull, "require players is 2 consider removing"); "playing the game".writeln;};

    final auto _end() {};
    auto _currentPlayers() const {return _players.length;}
    enum _playerLimit = 2;
    auto _isPlayersFull() const {return _currentPlayers == _playerLimit;}

    import std.format: format;
    auto _pushPlayer(T: Player)(T any) {
        if (_isPlayersFull) assert(false, "require %s players".format(_playerLimit));

        _players.reserve(_playerLimit);
        _players ~= any;
    }
}

private struct Board {}
enum symbol {none, x, o}
private struct Player {const(string) _name; symbol _hand; @disable this(); public this(in string n) {_name = n;}}

alias game = Game;
alias player = Player;
alias board = Board;

auto main()
{
    import std.string: strip;
    game()
        .load(player(readln().strip))
        // .matchmake
        .load(player(readln().strip))
        .play;
}
```


Re: GC doesn't collect where expected

2023-06-19 Thread Steven Schveighoffer via Digitalmars-d-learn

On 6/19/23 2:01 PM, axricard wrote:



Does it mean that if my function _func()_ is as follows (say I don't 
use clobber), I could keep a lot of memory for a very long time (until 
the stack is fully erased by other function calls)?



```
void func()
{
    Foo[2048] x;
    foreach(i; 0 .. 2048)
  x[i] = new Foo;
}
```



When the GC stops all threads, each of them registers their *current* 
stack as the target to scan, so most likely not.


However, the compiler/optimizer is not trying to zero out the stack 
unnecessarily, and this likely leads in some cases to false pointers. 
Like I said, even the "clobber" function might not actually zero out any 
stack, because the compiler may decide that writing zeros to the stack 
that will never be read is a "dead store" and just omit it.


This question comes up somewhat frequently "why isn't the GC collecting 
the garbage I gave it!", and the answer is mostly "don't worry about 
it". There is no real good way to guarantee an interaction between the 
compiler, the optimizer, and the runtime to make sure something happens 
one way or another. The only thing you really should care about is if 
you have a reference to an item and it's prematurely collected. Then 
there is a bug. Other than that, just don't worry about it.


-Steve


Re: GC doesn't collect where expected

2023-06-19 Thread axricard via Digitalmars-d-learn
On Monday, 19 June 2023 at 16:43:30 UTC, Steven Schveighoffer 
wrote:


In general, the language does not guarantee when the GC will 
collect your item.


In this specific case, most likely it's a stale register or 
stack reference. One way I usually use to ensure such things is 
to call a function that destroys the existing stack:


```d
void clobber()
{
   int[2048] x;
}
```

Calling this function will clear out 2048x4 bytes of data to 0 
on the stack.


-Steve


Does it mean that if my function _func()_ is as follows (say I 
don't use clobber), I could keep a lot of memory for a very long 
time (until the stack is fully erased by other function calls)?



```
void func()
{
   Foo[2048] x;
   foreach(i; 0 .. 2048)
 x[i] = new Foo;
}
```



Re: GC doesn't collect where expected

2023-06-19 Thread Steven Schveighoffer via Digitalmars-d-learn

On 6/19/23 12:51 PM, Anonymouse wrote:

On Monday, 19 June 2023 at 16:43:30 UTC, Steven Schveighoffer wrote:


In this specific case, most likely it's a stale register or stack 
reference. One way I usually use to ensure such things is to call a 
function that destroys the existing stack:


```d
void clobber()
{
   int[2048] x;
}
```

Calling this function will clear out 2048x4 bytes of data to 0 on the 
stack.


Could you elaborate on how you use this? When do you call it? Just, ever 
so often, or is there thought behind it?


Just before forcing a collect.

The stack is *always* scanned conservatively, and even though really the 
stack data should be blown away by the next function call (probably 
GC.collect), it doesn't always work out that way. Indeed, even just 
declaring `x` might not do it if the compiler decides it doesn't 
actually have to.


But I've found that seems to help.

-Steve


Re: GC doesn't collect where expected

2023-06-19 Thread axricard via Digitalmars-d-learn
On Monday, 19 June 2023 at 16:43:30 UTC, Steven Schveighoffer 
wrote:
In general, the language does not guarantee when the GC will 
collect your item.


In this specific case, most likely it's a stale register or 
stack reference. One way I usually use to ensure such things is 
to call a function that destroys the existing stack:


```d
void clobber()
{
   int[2048] x;
}
```

Calling this function will clear out 2048x4 bytes of data to 0 
on the stack.


-Steve


All clear, thank you !


Re: GC doesn't collect where expected

2023-06-19 Thread Anonymouse via Digitalmars-d-learn
On Monday, 19 June 2023 at 16:43:30 UTC, Steven Schveighoffer 
wrote:


In this specific case, most likely it's a stale register or 
stack reference. One way I usually use to ensure such things is 
to call a function that destroys the existing stack:


```d
void clobber()
{
   int[2048] x;
}
```

Calling this function will clear out 2048x4 bytes of data to 0 
on the stack.


-Steve


Could you elaborate on how you use this? When do you call it? 
Just, ever so often, or is there thought behind it?


Re: GC doesn't collect where expected

2023-06-19 Thread Steven Schveighoffer via Digitalmars-d-learn

On 6/19/23 12:13 PM, axricard wrote:
I'm doing some experiments with ldc2 GC, by instrumenting it and 
printing basic information (what is allocated and freed)


My first tests are made on this sample :

```

cat test2.d

import core.memory;

class Bar { int bar; }

class Foo {

   this()
   {
     this.bar = new Bar;
   }

   Bar bar;
}


void func()
{
   Foo f2 = new Foo;
}

int main()
{
   Foo f = new Foo;

   func();
   GC.collect();

   return 0;
}

```

When trying to run the instrumented druntime, I get a strange behavior: 
the first collection (done with GC.collect) doesn't sweep anything (in 
particular, it doesn't sweep memory allocated in _func()_). The whole 
sweeping is done when the program finishes, at cleanup. I don't understand 
why: memory allocated in _func()_ shouldn't be accessible from any root at 
the first collection, right?


```
╰─> /instrumented-ldc2 -g -O0 test2.d --disable-gc2stack 
--disable-d-passes --of test2  &&  ./test2 "--DRT-gcopt=cleanup:collect 
fork:0 parallel:0 verbose:2"



[test2.d:26] new 'test2.Foo' (24 bytes) => p = 0x7f3a0454d000
[test2.d:10] new 'test2.Bar' (20 bytes) => p = 0x7f3a0454d020
[test2.d:21] new 'test2.Foo' (24 bytes) => p = 0x7f3a0454d040
[test2.d:10] new 'test2.Bar' (20 bytes) => p = 0x7f3a0454d060

 COLLECTION  =
     = MARKING ==
     marking range: [0x7fff22337a60..0x7fff22339000] (0x15a0)
     range: [0x7f3a0454d000..0x7f3a0454d020] (0x20)
     range: [0x7f3a0454d040..0x7f3a0454d060] (0x20)
     marking range: [0x7f3a0464d720..0x7f3a0464d8b9] (0x199)
     marking range: [0x46c610..0x47b3b8] (0xeda8)
     = SWEEPING ==
=


 COLLECTION  =
     = MARKING ==
     marking range: [0x46c610..0x47b3b8] (0xeda8)
     = SWEEPING ==
     Freeing test2.Foo (test2.d:26; 24 bytes) (0x7f3a0454d000). AGE 
:  1/2
     Freeing test2.Bar (test2.d:10; 20 bytes) (0x7f3a0454d020). AGE 
:  1/2
     Freeing test2.Foo (test2.d:21; 24 bytes) (0x7f3a0454d040). AGE 
:  1/2
     Freeing test2.Bar (test2.d:10; 20 bytes) (0x7f3a0454d060). AGE 
:  1/2

=====
```



In general, the language does not guarantee when the GC will collect 
your item.


In this specific case, most likely it's a stale register or stack 
reference. One way I usually use to ensure such things is to call a 
function that destroys the existing stack:


```d
void clobber()
{
   int[2048] x;
}
```

Calling this function will clear out 2048x4 bytes of data to 0 on the stack.

-Steve


GC doesn't collect where expected

2023-06-19 Thread axricard via Digitalmars-d-learn
I'm doing some experiments with ldc2 GC, by instrumenting it and 
printing basic information (what is allocated and freed)


My first tests are made on this sample :

```

cat test2.d

import core.memory;

class Bar { int bar; }

class Foo {

  this()
  {
this.bar = new Bar;
  }

  Bar bar;
}


void func()
{
  Foo f2 = new Foo;
}

int main()
{
  Foo f = new Foo;

  func();
  GC.collect();

  return 0;
}

```

When trying to run the instrumented druntime, I get a strange 
behavior: the first collection (done with GC.collect) doesn't 
sweep anything (in particular, it doesn't sweep memory allocated 
in _func()_). The whole sweeping is done when the program finishes, at 
cleanup. I don't understand why: memory allocated in _func()_ 
shouldn't be accessible from any root at the first collection, right?


```
╰─> /instrumented-ldc2 -g -O0 test2.d --disable-gc2stack 
--disable-d-passes --of test2  &&  ./test2 
"--DRT-gcopt=cleanup:collect fork:0 parallel:0 verbose:2"



[test2.d:26] new 'test2.Foo' (24 bytes) => p = 0x7f3a0454d000
[test2.d:10] new 'test2.Bar' (20 bytes) => p = 0x7f3a0454d020
[test2.d:21] new 'test2.Foo' (24 bytes) => p = 0x7f3a0454d040
[test2.d:10] new 'test2.Bar' (20 bytes) => p = 0x7f3a0454d060

 COLLECTION  =
= MARKING ==
marking range: [0x7fff22337a60..0x7fff22339000] (0x15a0)
range: [0x7f3a0454d000..0x7f3a0454d020] (0x20)
range: [0x7f3a0454d040..0x7f3a0454d060] (0x20)
marking range: [0x7f3a0464d720..0x7f3a0464d8b9] (0x199)
marking range: [0x46c610..0x47b3b8] (0xeda8)
= SWEEPING ==
=


 COLLECTION  =
= MARKING ==
marking range: [0x46c610..0x47b3b8] (0xeda8)
= SWEEPING ==
Freeing test2.Foo (test2.d:26; 24 bytes) 
(0x7f3a0454d000). AGE :  1/2
Freeing test2.Bar (test2.d:10; 20 bytes) 
(0x7f3a0454d020). AGE :  1/2
Freeing test2.Foo (test2.d:21; 24 bytes) 
(0x7f3a0454d040). AGE :  1/2
Freeing test2.Bar (test2.d:10; 20 bytes) 
(0x7f3a0454d060). AGE :  1/2

=
```



Re: Lazy and GC Allocations

2023-02-20 Thread Etienne via Digitalmars-d-learn
On Monday, 20 February 2023 at 19:58:32 UTC, Steven Schveighoffer 
wrote:

On 2/20/23 1:50 PM, Etienne wrote:
On Monday, 20 February 2023 at 02:50:20 UTC, Steven 
Schveighoffer wrote:
See Adam's bug report: 
https://issues.dlang.org/show_bug.cgi?id=23627




So, according to this bug report, the implementation is 
allocating a closure on the GC even though the spec says it 
shouldn't?


The opposite, the delegate doesn't force a closure, and so when 
the variable goes out of scope, memory corruption ensues.


I've been writing some betterC and the lazy parameter was 
prohibited because it allocates on the GC, so I'm wondering 
what the situation is currently


It shouldn't. Now, lazy can't be `@nogc` (because that's just 
what the compiler dictates), but it won't actually *use* the GC 
if you don't allocate in the function call.


I just tested and you can use lazy parameters with betterC.

-Steve


The @nogc issue might be why it didn't work for me. 
I use it because it's easier to work with betterC, but perhaps I 
should avoid writing @nogc code altogether.


Thanks for the info!

Etienne



Re: Lazy and GC Allocations

2023-02-20 Thread Steven Schveighoffer via Digitalmars-d-learn

On 2/20/23 1:50 PM, Etienne wrote:

On Monday, 20 February 2023 at 02:50:20 UTC, Steven Schveighoffer wrote:

See Adam's bug report: https://issues.dlang.org/show_bug.cgi?id=23627



So, according to this bug report, the implementation is allocating a 
closure on the GC even though the spec says it shouldn't?


The opposite, the delegate doesn't force a closure, and so when the 
variable goes out of scope, memory corruption ensues.


I've been writing some betterC and the lazy parameter was prohibited 
because it allocates on the GC, so I'm wondering what the situation is 
currently


It shouldn't. Now, lazy can't be `@nogc` (because that's just what the 
compiler dictates), but it won't actually *use* the GC if you don't 
allocate in the function call.


I just tested and you can use lazy parameters with betterC.

-Steve
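
A small sketch of what that looks like under -betterC (my own example; the 
file name and function names are made up) — the lazy expression is only 
evaluated where the parameter is actually used, and nothing here touches 
the GC:

```d
// Build with something like: dmd -betterC lazydemo.d (assumed file name)
import core.stdc.stdio : printf;

int expensive()
{
    // stands in for work we only want to do on demand
    return 41 + 1;
}

void log(bool enabled, lazy int value)
{
    // `value` is a hidden delegate; it only runs when referenced here
    if (enabled)
        printf("value = %d\n", value);
}

extern(C) int main()
{
    log(true, expensive());  // expensive() runs, prints "value = 42"
    log(false, expensive()); // expensive() is never evaluated
    return 0;
}
```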


Re: Lazy and GC Allocations

2023-02-20 Thread Etienne via Digitalmars-d-learn
On Monday, 20 February 2023 at 02:50:20 UTC, Steven Schveighoffer 
wrote:
See Adam's bug report: 
https://issues.dlang.org/show_bug.cgi?id=23627


-Steve


So, according to this bug report, the implementation is 
allocating a closure on the GC even though the spec says it 
shouldn't?


I've been writing some betterC and the lazy parameter was 
prohibited because it allocates on the GC, so I'm wondering what 
the situation is currently


Etienne


Re: Lazy and GC Allocations

2023-02-19 Thread Steven Schveighoffer via Digitalmars-d-learn

On 2/19/23 9:15 PM, Steven Schveighoffer wrote:
Indeed, you can't really "save" the hidden delegate somewhere, so the 
calling function knows that the delegate can't escape.


I stand corrected, you can save it (by taking the address of it).

And it's explicitly allowed by the spec.

But it still doesn't allocate a closure!

See Adam's bug report: https://issues.dlang.org/show_bug.cgi?id=23627

-Steve


Re: Lazy and GC Allocations

2023-02-19 Thread Steven Schveighoffer via Digitalmars-d-learn

On 2/19/23 7:50 PM, Etienne wrote:

Hello,

I'm wondering at which moment the following would make an allocation of 
the scope variables on the GC. Should I assume that the second parameter 
of enforce being lazy, we would get a delegate/literal that saves the 
current scope on the GC even if it's not needed? I'm asking purely for a 
performance perspective of avoiding GC allocations.


```
void main() {
  int a = 5;
  enforce(true, format("a: %d", a));
}
```


enforce takes a lazy variable, which I believe is scope by default, so 
no closure should be allocated.


Indeed, you can't really "save" the hidden delegate somewhere, so the 
calling function knows that the delegate can't escape.


-Steve


Lazy and GC Allocations

2023-02-19 Thread Etienne via Digitalmars-d-learn

Hello,

I'm wondering at which moment the following would make an 
allocation of the scope variables on the GC. Should I assume that 
the second parameter of enforce being lazy, we would get a 
delegate/literal that saves the current scope on the GC even if 
it's not needed? I'm asking purely for a performance perspective 
of avoiding GC allocations.


```
import std.exception : enforce;
import std.format : format;

void main() {
 int a = 5;
 enforce(true, format("a: %d", a));
}
```

Thanks

Etienne


Re: GC interaction with malloc/free

2023-01-05 Thread H. S. Teoh via Digitalmars-d-learn
On Thu, Jan 05, 2023 at 08:18:42PM +, DLearner via Digitalmars-d-learn 
wrote:
> On Thursday, 5 January 2023 at 19:54:01 UTC, H. S. Teoh wrote:
[...]
> > core.stdc.stdlib.{malloc,free} *is* the exact same malloc/free that
> > C uses, it has nothing to do with the GC.  The allocated memory is
> > taken from the malloc/free part of the heap, which is disjoint from
> > the heap memory managed by the GC.
> > 
> > So, it should not cause any crashes.
[...]
> That's comforting, but there is a reference in:
> 
> https://dlang.org/blog/2017/09/25/go-your-own-way-part-two-the-heap/
> 
> '...Given that it’s rarely recommended to disable the GC entirely,
> most D programs allocating outside the GC heap will likely also be
> using memory from the GC heap in the same program. In order for the GC
> to properly do its job, it needs to be informed of any non-GC memory
> that contains, or may potentially contain, references to memory from
> the GC heap.'
> 
> Followed by things that have to be done (GC.addRange) to avoid
> interaction effects?

You only need to do this if you will be storing pointers to GC-allocated
objects inside malloc-allocated objects.  E.g., if you malloc a struct
that contains a reference to a GC-allocated class object.

The reason for this precaution is because the GC needs to know all the
root pointers that eventually may point to a prospective object to be
garbage-collected.  If there are pointers to an object outside of the
areas the GC is aware of, e.g., in the malloc heap, then the GC may not
be able to correctly determine that there's still a reference to the
object, and may collect it prematurely, leading to a crash when you next
try to dereference the pointer to the object.

If there are no references from the malloc heap to the GC heap, then you
do not need to use GC.addRange.
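
The case that does need GC.addRange looks roughly like this — a sketch of 
my own, not from the post, with made-up type names: a malloc'd block that 
stores a reference to a GC-allocated object.

```d
import core.memory : GC;
import core.stdc.stdlib : calloc, free;

class Node { int value; }

struct Holder { Node node; }  // class reference stored in non-GC memory

void main()
{
    // zero-initialized C-heap block holding a GC reference
    auto h = cast(Holder*) calloc(1, Holder.sizeof);
    GC.addRange(h, Holder.sizeof);  // let the GC scan this block for roots

    h.node = new Node;              // only reference to this object lives in malloc'd memory
    h.node.value = 42;

    // ... use h.node ...

    GC.removeRange(h);              // stop scanning before freeing the block
    free(h);
}
```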


T

-- 
Build a man a fire, and he is warm for a night. Set a man on fire, and he is 
warm for the rest of his life.


Re: GC interaction with malloc/free

2023-01-05 Thread DLearner via Digitalmars-d-learn

On Thursday, 5 January 2023 at 19:54:01 UTC, H. S. Teoh wrote:
On Thu, Jan 05, 2023 at 07:49:38PM +, DLearner via 
Digitalmars-d-learn wrote:
Suppose there is a D main program (not marked anywhere with 
@nogc),

that _both_

A: Calls one or more C functions that themselves call 
malloc/free; and

also
B: Calls one or more D functions that themselves call 
malloc/free via

`import core.stdc.stdlib;`

Assuming the malloc/free's are used correctly, does this 
situation risk crashing the D main program?

[...]

core.stdc.stdlib.{malloc,free} *is* the exact same malloc/free 
that C uses, it has nothing to do with the GC.  The allocated 
memory is taken from the malloc/free part of the heap, which is 
disjoint from the heap memory managed by the GC.


So, it should not cause any crashes.


T


That's comforting, but there is a reference in:

https://dlang.org/blog/2017/09/25/go-your-own-way-part-two-the-heap/

'...Given that it’s rarely recommended to disable the GC 
entirely, most D programs allocating outside the GC heap will 
likely also be using memory from the GC heap in the same program. 
In order for the GC to properly do its job, it needs to be 
informed of any non-GC memory that contains, or may potentially 
contain, references to memory from the GC heap.'


Followed by things that have to be done (GC.addRange) to avoid 
interaction effects?


Re: GC interaction with malloc/free

2023-01-05 Thread H. S. Teoh via Digitalmars-d-learn
On Thu, Jan 05, 2023 at 07:49:38PM +, DLearner via Digitalmars-d-learn 
wrote:
> Suppose there is a D main program (not marked anywhere with @nogc),
> that _both_
> 
> A: Calls one or more C functions that themselves call malloc/free; and
> also
> B: Calls one or more D functions that themselves call malloc/free via
> `import core.stdc.stdlib;`
> 
> Assuming the malloc/free's are used correctly, does this situation
> risk crashing the D main program?
[...]

core.stdc.stdlib.{malloc,free} *is* the exact same malloc/free that C
uses, it has nothing to do with the GC.  The allocated memory is taken
from the malloc/free part of the heap, which is disjoint from the heap
memory managed by the GC.

So, it should not cause any crashes.
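
As a minimal sketch of that situation (my own example, not from the post): 
the two heaps coexist without any extra bookkeeping, as long as no GC 
references are stored inside the malloc'd block.

```d
import core.stdc.stdlib : malloc, free;

void main()
{
    // C heap: managed manually, invisible to the GC
    auto buf = cast(int*) malloc(100 * int.sizeof);
    scope(exit) free(buf);

    // GC heap: managed by the collector
    auto arr = new int[](100);

    buf[0] = 1;
    arr[0] = 1;
}
```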


T

-- 
Little children, little troubles.


GC interaction with malloc/free

2023-01-05 Thread DLearner via Digitalmars-d-learn
Suppose there is a D main program (not marked anywhere with 
@nogc), that _both_


A: Calls one or more C functions that themselves call 
malloc/free; and also
B: Calls one or more D functions that themselves call malloc/free 
via `import core.stdc.stdlib;`


Assuming the malloc/free's are used correctly, does this 
situation risk crashing the D main program?


Best regards


Re: Idiomatic D using GC as a library writer

2022-12-05 Thread vushu via Digitalmars-d-learn

On Monday, 5 December 2022 at 14:48:33 UTC, cc wrote:

On Sunday, 4 December 2022 at 09:53:41 UTC, vushu wrote:

[...]


If your program runs, does some stuff, and terminates, use the 
GC.
If your program runs, stays up for a while with user 
occasionally interacting with it, use the GC.
If your program runs, and stays up 24/7 doing things in the 
background, use the GC.


[...]


Thanks a lot for your advice :)


Re: Idiomatic D using GC as a library writer

2022-12-05 Thread vushu via Digitalmars-d-learn

On Sunday, 4 December 2022 at 17:47:38 UTC, ryuukk_ wrote:

On Sunday, 4 December 2022 at 09:53:41 UTC, vushu wrote:

[...]



D gives you the choice

But the most important thing is your usecase, what kind of 
library are you making?


Once you answer this question, you can then ask what your 
memory strategy should be, and then it is based on performance 
concerns


D scale from microcontrollers to servers, drivers, games, 
desktop apps


Your audience will determine what you should provide

For a desktop app, a GC is an advantage

For a driver or a game, it's not


I agree with you, it depends on the use case. I will consider that, 
thanks.


Re: Idiomatic D using GC as a library writer

2022-12-05 Thread vushu via Digitalmars-d-learn
On Monday, 5 December 2022 at 10:53:33 UTC, Guillaume Piolat 
wrote:
On Sunday, 4 December 2022 at 21:55:52 UTC, Siarhei Siamashka 
wrote:
Is it possible to filter packages in this list by @nogc or 
@safe compatibility?


You can list DUB packages for "@nogc usage"
https://code.dlang.org/?sort=score&limit=20&category=library.nogc


Cool, it looks like there are only a few nogc-suitable libraries.


Re: Idiomatic D using GC as a library writer

2022-12-05 Thread vushu via Digitalmars-d-learn
On Monday, 5 December 2022 at 10:48:59 UTC, Guillaume Piolat 
wrote:
There are legitimate use cases where you can't afford the 
runtime machinery (attaching/detaching every incoming thread in a 
shared library), more than not being able to afford the GC from 
a performance point of view.


[...]


Thanks for the description of your use case; good to know your 
perspective when considering using a library :)


Re: Idiomatic D using GC as a library writer

2022-12-05 Thread jmh530 via Digitalmars-d-learn

On Sunday, 4 December 2022 at 23:25:34 UTC, Adam D Ruppe wrote:

On Sunday, 4 December 2022 at 22:46:52 UTC, Ali Çehreli wrote:

That's way beyond my pay grade. Explain please. :)


The reason that the GC stops threads right now is to ensure 
that something doesn't change in the middle of its analysis.


[snip]


That's a great explanation. Thanks.


Re: Idiomatic D using GC as a library writer

2022-12-05 Thread cc via Digitalmars-d-learn

On Sunday, 4 December 2022 at 09:53:41 UTC, vushu wrote:

What are your thoughts about using GC as a library writer?


If your program runs, does some stuff, and terminates, use the GC.
If your program runs, stays up for a while with user occasionally 
interacting with it, use the GC.
If your program runs, and stays up 24/7 doing things in the 
background, use the GC.


If your program is a game meant to run at 60+fps, and any sudden 
skip or interrupt is unacceptable, no matter how minor (which it 
should be), plan carefully about how to manage your game objects, 
because naive GC instantiation and discarding isn't going to cut 
it.  malloc/free, pre-allocated lists, and other strategies come 
into play here.  In a desperate pinch you can also manually 
`GC.free` your GC-allocated objects but this is not recommended.  
The GC can still be used for allocations that are not likely to 
significantly affect performance every frame (strings, occasional 
user-generated information requests, first-load data 
instantiation, Steam avatars, etc) -- but also be even more 
careful when you start mixing and matching.


I find that @nogc is a bit of a false idol though, even in 
situations where the GC is deliberately being avoided.  It simply 
adds too much pain to trying to make everything compliant, and 
certain things just plain don't work (amazingly, the 
non-allocating form of toString can't be @nogc), so I simply 
avoid it and "be careful" (and/or hook into the GC so I can 
monitor if an unexpected allocation happens).  If you're writing 
code that's going to run on a space shuttle or life support 
system, then yeah you might consider the extra effort, but in my 
use cases it simply fails the cost-benefit analysis.


For any strategy, it's still a good idea to have a good 
understanding of or profile your allocations/deallocations so 
you're not just spending memory haphazardly or generating 
excessive collections.
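
One rough way to do the "hook into the GC to monitor unexpected 
allocations" part is to compare GC.stats() around a frame. This is my own 
sketch, and usedSize is only a heuristic (a collection in between can 
shrink it):

```d
import core.memory : GC;
import std.stdio : writeln;

void frame()
{
    // per-frame game logic; ideally allocation-free
}

void main()
{
    auto before = GC.stats();
    frame();
    auto after = GC.stats();

    if (after.usedSize > before.usedSize)
        writeln("frame allocated ", after.usedSize - before.usedSize,
                " bytes on the GC heap");
}
```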


Re: Idiomatic D using GC as a library writer

2022-12-05 Thread Guillaume Piolat via Digitalmars-d-learn
On Sunday, 4 December 2022 at 21:55:52 UTC, Siarhei Siamashka 
wrote:
Is it possible to filter packages in this list by @nogc or 
@safe compatibility?


You can list DUB packages for "@nogc usage"
https://code.dlang.org/?sort=score&limit=20&category=library.nogc




Re: Idiomatic D using GC as a library writer

2022-12-05 Thread Guillaume Piolat via Digitalmars-d-learn
There are legitimate use cases where you can't afford the runtime 
machinery (attaching/detaching every incoming thread in a shared 
library), more than not being able to afford the GC from a 
performance point of view.


GC gives you higher productivity and better performance with the 
time gained.


Now, @nogc code is good for performance since (even in a GC 
program) you will have no hidden allocations anymore, if you also 
disable the postblit and the copy ctor, unlike in C++ where hidden copies 
are rampant.
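
As a small illustration of the "disable postblit and copy ctor" point (my 
own sketch, with made-up names), a type like this can't be silently 
copied, so there are no hidden copies or hidden allocations when it's 
passed around:

```d
struct Buffer
{
    ubyte* data;
    size_t length;

    @disable this(this);   // no postblit: accidental copies become compile errors

    @nogc nothrow void release()
    {
        import core.stdc.stdlib : free;
        free(data);
        data = null;
        length = 0;
    }
}
```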



On Sunday, 4 December 2022 at 09:53:41 UTC, vushu wrote:

What are your thoughts about using GC as a library writer?


I don't always use it, but I wish I could.
Meanwhile, I write plenty of nothrow @nogc code.

On Sunday, 4 December 2022 at 09:53:41 UTC, vushu wrote:
If you wan't to include a library into your project aren't you 
more inclined to use a library which is gc free?


Yes I am, but my needs are very specific and only the "betterC" 
subset fits them, and it's certainly not the nominal case in D, nor 
should it be. Some of the D targets have strict requirements; for 
example, Hipreme engine uses audio-formats (nothrow @nogc), but 
audio-formats uses exceptions internally, so maybe that will be an 
issue, depending on the flavour of D runtime it uses.


Re: Idiomatic D using GC as a library writer

2022-12-05 Thread Patrick Schluter via Digitalmars-d-learn

On Sunday, 4 December 2022 at 23:37:39 UTC, Ali Çehreli wrote:

On 12/4/22 15:25, Adam D Ruppe wrote:

> which would trigger the write barrier. The thread isn't
> allowed to complete this operation until the GC is done.

According to my limited understanding of write barriers, the 
thread moving to 800 could continue because order of memory 
operations may have been satisfied. What I don't see is, what 
would the GC thread be waiting for about the write to 800?


I'm not a specialist, but I have the impression that GC write 
barriers and CPU memory-ordering write barriers are two completely 
different concepts that confusingly share the same term.




Would the GC be leaving behind writes to every page it scans, 
which have barriers around so that the other thread can't 
continue? But then the GC's write would finish and the other 
thread's write would finish.


Ok, here is the question: Is there a very long standing partial 
write that the GC can perform like: "I write to 0x42, but I 
will finish it 2 seconds later. So, all other writes should 
wait?"


> The GC finishes its work and releases the barriers.

So, it really is explicit acquisition and releasing of these 
barriers... I think this is provided by the CPU, not the OS. 
How many explicit write barriers are there?


Ali





Re: Idiomatic D using GC as a library writer

2022-12-04 Thread Ali Çehreli via Digitalmars-d-learn

On 12/4/22 15:25, Adam D Ruppe wrote:

> which would trigger the write barrier. The thread isn't
> allowed to complete this operation until the GC is done.

According to my limited understanding of write barriers, the thread 
moving to 800 could continue because order of memory operations may have 
been satisfied. What I don't see is, what would the GC thread be waiting 
for about the write to 800?


Would the GC be leaving behind writes to every page it scans, which have 
barriers around so that the other thread can't continue? But then the 
GC's write would finish and the other thread's write would finish.


Ok, here is the question: Is there a very long standing partial write 
that the GC can perform like: "I write to 0x42, but I will finish it 2 
seconds later. So, all other writes should wait?"


> The GC finishes its work and releases the barriers.

So, it really is explicit acquisition and releasing of these barriers... 
I think this is provided by the CPU, not the OS. How many explicit write 
barriers are there?


Ali



Re: Idiomatic D using GC as a library writer

2022-12-04 Thread Adam D Ruppe via Digitalmars-d-learn

On Sunday, 4 December 2022 at 22:46:52 UTC, Ali Çehreli wrote:

That's way beyond my pay grade. Explain please. :)


The reason that the GC stops threads right now is to ensure that 
something doesn't change in the middle of its analysis.


Consider for example, the GC scans address 0 - 1000 and finds 
nothing. Then a running thread moves a reference from memory 
address 2200 down to address 800 while the GC is scanning 
1000-2000.


Then the GC scans 2000-3000, where the object used to be, but it 
isn't there anymore... and the GC has no clue it needs to scan 
address 800 again. It, never having seen the object, thinks the 
object is just dead and frees it.


Then the thread tries to use the object, leading to a crash.

The current implementation prevents this by stopping all threads. 
If nothing is running, nothing can move objects around while the 
GC is trying to find them.


But, actually stopping everything requires 1) the GC knows which 
threads are there and has a way to stop them and 2) is overkill! 
All it really needs to do is prevent certain operations that 
might change the GC's analysis while it is running, like what 
happened in the example. It isn't important to stop numeric work, 
that won't change the GC. It isn't important to stop pointer 
reads (well not in D's gc anyway, there's some that do need to 
stop this) so it doesn't need to stop them either.


Since what the GC cares about are pointer locations, it is 
possible to hook that specifically, which we call write barriers; 
they either block pointer writes or at least notify the GC about 
them. (And btw not all pointer writes need to be blocked either, 
just ones that would point to a different memory block. So things 
like slice iterations can also be allowed to continue. More on my 
blog 
http://dpldocs.info/this-week-in-d/Blog.Posted_2022_10_31.html#thoughts-on-pointer-barriers )


So what happens then:


GC scans address 0 - 1000 and finds nothing.

Then a running thread moves a reference from memory address 2200 
down to address 800... which would trigger the write barrier. The 
thread isn't allowed to complete this operation until the GC is 
done. Notice that the GC didn't have to know about this thread 
ahead of time, since the running thread is responsible for 
communicating its intentions to the GC as it happens. 
(Essentially, the GC holds a mutex and all pointer writes in 
generated D code are synchronized on it, but there's various 
implementations.)


Then the GC scans 2000-3000, and the object is still there since 
the write is paused! It doesn't free it.


The GC finishes its work and releases the barriers. The thread 
now resumes and finishes the move, with the object still alive 
and well. No crash.


This would be a concurrent GC, not stopping threads that are 
doing self-contained work, but it would also be more compatible 
with external threads, since no matter what the thread, it'd use 
that gc mutex barrier.
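
Purely as a conceptual sketch of the write-barrier idea (this is not how 
druntime actually works, and every name below is invented): route pointer 
stores through a hook that cooperates with the collector.

```d
import core.sync.mutex : Mutex;

// Hypothetical lock the collector would hold while it scans.
__gshared Mutex gcScanLock;

shared static this()
{
    gcScanLock = new Mutex;
}

// Hypothetical barrier: every pointer store the compiler emits would go through here.
void storePointer(T)(ref T* slot, T* value)
{
    gcScanLock.lock();            // if a scan is in progress, wait for it to finish
    scope(exit) gcScanLock.unlock();
    slot = value;                 // the collector never observes this store half-done
}
```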


Re: Idiomatic D using GC as a library writer

2022-12-04 Thread rikki cattermole via Digitalmars-d-learn
All it means is that certain memory operations (such as writes) will tell the 
GC about it.


It's required for pretty much all advanced GC designs; as a result, we are 
pretty much maxing out what we can do.


Worth reading: 
https://www.amazon.com/Garbage-Collection-Handbook-Management-Algorithms/dp/1420082795


Re: Idiomatic D using GC as a library writer

2022-12-04 Thread Ali Çehreli via Digitalmars-d-learn

On 12/4/22 12:17, Adam D Ruppe wrote:

On Sunday, 4 December 2022 at 17:53:00 UTC, Adam D Ruppe wrote:
Interesting... you know, maybe D's GC should formally expose a mutex 
that you can synchronize on for when it is running.


... or compile in write barriers. Then it doesn't matter if the 
thread is unregistered; the write barrier will protect it as needed!


That's way beyond my pay grade. Explain please. :)

Ali



Re: Idiomatic D using GC as a library writer

2022-12-04 Thread Adam D Ruppe via Digitalmars-d-learn
On Sunday, 4 December 2022 at 21:55:52 UTC, Siarhei Siamashka 
wrote:
Do you mean the top of the 
https://code.dlang.org/?sort=score&category=library list?


Well, I was referring to the five that appear on the homepage, 
which shows silly instead of emsi containers.



How do you know that they embrace GC?


I looked at the projects. Except for that arsd-official thing, 
that's a big mystery to me, the code is completely unreadable.


But vibe and dub use it pretty broadly. Unit-threaded and silly 
are test runners, which aren't even really libraries (I find it 
weird that they are consistently at the top of the list), so much 
of them doesn't need the GC anyway; but you can still see that they 
use it without worry when they do want it, like when building the 
test list with ~=.


emsi-containers is built on the allocators thing so it works with 
or without gc (it works better without though as you learn if you 
try to use them.)


Is it possible to filter packages in this list by @nogc or 
@safe compatibility?


No. I do have an idea for it, searching for @nogc attributes or 
attached @nogc unittests, but I haven't gotten around to trying 
it.


Re: Idiomatic D using GC as a library writer

2022-12-04 Thread Siarhei Siamashka via Digitalmars-d-learn

On Sunday, 4 December 2022 at 12:37:08 UTC, Adam D Ruppe wrote:
All of the top 5 most popular libraries on code.dlang.org 
embrace the GC.


Do you mean the top of the 
https://code.dlang.org/?sort=score&category=library list?


How do you know that they embrace GC? Is it possible to filter 
packages in this list by @nogc or @safe compatibility?


Re: Idiomatic D using GC as a library writer

2022-12-04 Thread Adam D Ruppe via Digitalmars-d-learn

On Sunday, 4 December 2022 at 17:53:00 UTC, Adam D Ruppe wrote:
Interesting... you know, maybe D's GC should formally expose a 
mutex that you can synchronize on for when it is running.


... or compile in write barriers. Then it doesn't matter 
if the thread is unregistered; the write barrier will protect it 
as needed!


Re: Idiomatic D using GC as a library writer

2022-12-04 Thread Adam D Ruppe via Digitalmars-d-learn

On Sunday, 4 December 2022 at 16:02:28 UTC, Ali Çehreli wrote:
D's GC needed to stop the world, which meant it would have to 
know what threads were running. You can never be sure whether 
your D library function is being called from a thread you've 
known or whether the Java runtime (or other user code) just 
decided to start another thread.


Interesting... you know, maybe D's GC should formally expose a 
mutex that you can synchronize on for when it is running. So you 
can cooperatively do this in the jni bridge or something. Might 
be worth considering.


I've heard stories about similar things happening with C#.


Re: Idiomatic D using GC as a library writer

2022-12-04 Thread ryuukk_ via Digitalmars-d-learn

On Sunday, 4 December 2022 at 09:53:41 UTC, vushu wrote:

Dear dlang community.


I am unsure about what idiomatic D is.

Some of the DConf talks tell people just to use the GC until 
you can't afford it.

If there are documents that describe what idiomatic D is, then 
I would appreciate it.



So my questions are:


What are your thoughts about using GC as a library writer?


If you want to include a library into your project, aren't you 
more inclined to use a library which is GC-free?



If that is true, then idiomatic D doesn't apply for library 
writers.


Since, to get the most exposure as a D library writer, you kinda need 
to make it GC-free, right?




Cheers.



D gives you the choice

But the most important thing is your usecase, what kind of 
library are you making?


Once you answer this question, you can then ask what your memory 
strategy should be, and then it is based on performance concerns


D scale from microcontrollers to servers, drivers, games, desktop 
apps


Your audience will determine what you should provide

For a desktop app, a GC is an advantage

For a driver or a game, it's not







Re: Idiomatic D using GC as a library writer

2022-12-04 Thread vushu via Digitalmars-d-learn

On Sunday, 4 December 2022 at 15:57:26 UTC, Ali Çehreli wrote:

On 12/4/22 05:58, vushu wrote:

> I was worried if my library should be GC free

May I humbly recommend you question where that thinking comes 
from?


Ali

P.S. I used to be certain that the idea of GC was wrong and the 
creators of runtimes with GC were simpletons. In contrast, 
people like me, people who could understand C++, were 
enlightened. Then I learned.


I also come from C++ and, as you know, the community over there 
isn't quite fond of the GC.
So I just logically think that by excluding the GC you actually 
widen the range of usage.


But if I only want to cater to the D ecosystem, then using the GC is 
the recommended way.





Re: Idiomatic D using GC as a library writer

2022-12-04 Thread Ali Çehreli via Digitalmars-d-learn

On 12/4/22 06:27, Sergey wrote:

> if it will be possible to write
> library in D and use it from
> C/++/Python/R/JVM(JNI)/Erlang(NIF)/nameYourChoice smoothly it will be a
> win.

Years ago we tried to call D from Java. I realized that it was very 
tricky to introduce the calling thread to D's GC. D's GC needed to stop 
the world, which meant it would have to know what threads were running. 
You can never be sure whether your D library function is being called 
from a thread you've known or whether the Java runtime (or other user 
code) just decided to start another thread.


We failed and D was replaced with C++.

Ali
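
For what it's worth, the workaround usually amounts to attaching the 
foreign thread to the D runtime before doing any GC-visible work on it. A 
rough sketch, assuming core.thread's thread_attachThis/thread_detachThis 
and core.runtime's Runtime.initialize; the exported function and its name 
are made up, and I haven't verified this against a real JNI setup:

```d
import core.runtime : Runtime;
import core.thread : thread_attachThis, thread_detachThis;

// Hypothetical entry point exported to the host (e.g. via JNI).
extern(C) int d_entry_point(int arg)
{
    // In a real library, initialize the D runtime once (e.g. in JNI_OnLoad),
    // not on every call.
    Runtime.initialize();

    // Tell the GC about this foreign thread so it can be stopped and scanned.
    thread_attachThis();
    scope(exit) thread_detachThis();

    return arg * 2; // the actual D work goes here
}
```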



Re: Idiomatic D using GC as a library writer

2022-12-04 Thread Ali Çehreli via Digitalmars-d-learn

On 12/4/22 05:58, vushu wrote:

> I was worried if my library should be GC free

May I humbly recommend you question where that thinking comes from?

Ali

P.S. I used to be certain that the idea of GC was wrong and the creators 
of runtimes with GC were simpletons. In contrast, people like me, people 
who could understand C++, were enlightened. Then I learned.




Re: Idiomatic D using GC as a library writer

2022-12-04 Thread Sergey via Digitalmars-d-learn

On Sunday, 4 December 2022 at 12:37:08 UTC, Adam D Ruppe wrote:
All of the top 5 most popular libraries on code.dlang.org 
embrace the GC.


Interesting. It seems that most of the community supposes that a 
“library” should be used from D :-)
But in my opinion, the “foreign library experience” is much more 
important. The usage of D is not that wide… but if it becomes 
possible to write a library in D and use it smoothly from 
C/C++/Python/R/JVM(JNI)/Erlang(NIF)/nameYourChoice, it 
will be a win: running a fast extension/library (that could be Rust or 
Zig) from higher-level/less safe/slower dynamic languages, and not 
only running fast but also writing it fast (here D and Nim could be chosen).


Many languages do not have a GC inside, and others have their own. 
And if your library is going to manipulate objects from other 
languages with a different memory management approach, it could be 
tricky to do that with a GC. You need to make both GCs become 
friends.


Re: Idiomatic D using GC as a library writer

2022-12-04 Thread vushu via Digitalmars-d-learn

On Sunday, 4 December 2022 at 13:03:07 UTC, Hipreme wrote:

On Sunday, 4 December 2022 at 09:53:41 UTC, vushu wrote:

Dear dlang community.


I am unsure about what idiomatic D is.

Some of the DConf talks tell people just to use the GC until 
you can't afford it.

If there are documents that describe what idiomatic D is, then 
I would appreciate it.



So my questions are:


What are your thoughts about using GC as a library writer?


If you want to include a library into your project, aren't you 
more inclined to use a library which is GC-free?



If that is true, then idiomatic D doesn't apply for library 
writers.


Since, to get the most exposure as a D library writer, you kinda 
need to make it GC-free, right?




Cheers.



"Until you can't afford", is something really extreme. There is 
a bunch of ways to deal with GC memory, what I would say that 
can't afford is when you're constantly allocating memory and 
because of that, making the program more prone to execute a 
collection. I haven't had any problem with the GC yet. If you 
think your program is slow, pass it on a profiler and you'll 
know the real problem. Don't think too much about that or else 
you're gonna lose a heck lot of productivity and end up 
creating needlessly unsafe code.




True, that makes sense. I also tried using @nogc in code, but it 
complicates things.

The code is much easier to write when I don't work against the GC.

If you're still gonna be hard-headed against the GC, at least 
use slices when allocating from malloc; it makes your code safer 
and more readable, with fewer variables to think about. Don't use raw 
pointers unnecessarily. Right now, the only reason pointers 
have been used in my code base was not for allocated memory, 
but for being able to modify a variable from another place when 
you need to store a variable reference. If you're only gonna 
modify it inside the function, use `ref` instead.


Thanks for the tips :)


Re: Idiomatic D using GC as a library writer

2022-12-04 Thread vushu via Digitalmars-d-learn

On Sunday, 4 December 2022 at 12:37:08 UTC, Adam D Ruppe wrote:

On Sunday, 4 December 2022 at 09:53:41 UTC, vushu wrote:

What are your thoughts about using GC as a library writer?


Do it. It is lots of gain for very little loss.

If you want to include a library into your project, aren't you 
more inclined to use a library which is GC-free?


No, GC free means the library is necessarily more complicated 
to use and will likely result in a buggier program.


Since, to get the most exposure as a D library writer, you kinda 
need to make it GC-free, right?


All of the top 5 most popular libraries on code.dlang.org 
embrace the GC.


That's great to hear, thanks! I was worried about whether my library 
should be GC-free or not and how it would affect its adoption. 
Seems like there is no concern.





Re: Idiomatic D using GC as a library writer

2022-12-04 Thread bachmeier via Digitalmars-d-learn

On Sunday, 4 December 2022 at 09:53:41 UTC, vushu wrote:

Dear dlang community.


I am unsure about what idiomatic D is.


Idiomatic D code produces the correct result, it's readable, and 
it's easy for others to use.


Some of the DConf talks tell people just to use the GC until 
you can't afford it.


"can't afford it" in what sense? Pauses for garbage collection 
are one thing, overall runtime performance is something 
completely different. Avoiding the GC won't magically make your 
program faster.


If there are documents that describe what idiomatic D is, then 
I would appreciate it.



So my questions are:


What are your thoughts about using GC as a library writer?


Depends on the library, but most of the time it's best to use it. 
D's main problem at this point is a lack of high-quality, 
easy-to-use libraries - not libraries that use the GC.


If you want to include a library into your project, aren't you 
more inclined to use a library which is GC-free?


The moment I have to think about memory management, I start 
looking for a different library. I suppose there's nothing wrong 
if a library avoids the GC internally (since that won't affect 
me). The GC has never caused problems for me. It has made my life 
easier.




Re: Idiomatic D using GC as a library writer

2022-12-04 Thread Hipreme via Digitalmars-d-learn

On Sunday, 4 December 2022 at 09:53:41 UTC, vushu wrote:

Dear dlang community.


I am unsure about what idiomatic D is.

Some of the DConf talks tell people just to use the GC until 
you can't afford it.

If there are documents that describe what idiomatic D is, then 
I would appreciate it.



So my questions are:


What are your thoughts about using GC as a library writer?


If you want to include a library into your project, aren't you 
more inclined to use a library which is GC-free?



If that is true, then idiomatic D doesn't apply for library 
writers.


Since, to get the most exposure as a D library writer, you kinda need 
to make it GC-free, right?




Cheers.



"Until you can't afford", is something really extreme. There is a 
bunch of ways to deal with GC memory, what I would say that can't 
afford is when you're constantly allocating memory and because of 
that, making the program more prone to execute a collection. I 
haven't had any problem with the GC yet. If you think your 
program is slow, pass it on a profiler and you'll know the real 
problem. Don't think too much about that or else you're gonna 
lose a heck lot of productivity and end up creating needlessly 
unsafe code.


If you're still gonna be hard-headed against the GC, at least use 
slices when allocating from malloc; it makes your code safer 
and more readable, with fewer variables to think about. Don't use raw 
pointers unnecessarily. Right now, the only reason pointers 
have been used in my code base was not for allocated memory, but 
for being able to modify a variable from another place when you 
need to store a variable reference. If you're only gonna modify 
it inside the function, use `ref` instead.
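
A tiny sketch of the "slices over malloc" advice (my own example, not from 
the post): wrap the raw pointer in a slice right away and you get bounds 
checking and normal array syntax.

```d
import core.stdc.stdlib : malloc, free;

int[] allocInts(size_t n) @nogc nothrow
{
    auto p = cast(int*) malloc(n * int.sizeof);
    if (p is null)
        return null;
    return p[0 .. n];   // slice over the malloc'd block
}

void main()
{
    auto xs = allocInts(16);
    scope(exit) free(xs.ptr);

    xs[0] = 42;        // bounds-checked like any D slice (in default builds)
    // xs[16] = 1;     // would trigger a range error instead of corrupting memory
}
```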


Re: Idiomatic D using GC as a library writer

2022-12-04 Thread Adam D Ruppe via Digitalmars-d-learn

On Sunday, 4 December 2022 at 09:53:41 UTC, vushu wrote:

What are your thoughts about using GC as a library writer?


Do it. It is lots of gain for very little loss.

If you want to include a library into your project, aren't you 
more inclined to use a library which is GC-free?


No, GC free means the library is necessarily more complicated to 
use and will likely result in a buggier program.


Since, to get the most exposure as a D library writer, you kinda need 
to make it GC-free, right?


All of the top 5 most popular libraries on code.dlang.org embrace 
the GC.


Idiomatic D using GC as a library writer

2022-12-04 Thread vushu via Digitalmars-d-learn

Dear dlang community.


I am unsure about what idiomatic D is.

Some of the DConf talks tell people just to use the GC until 
you can't afford it.

If there are documents that describe what idiomatic D is, then I 
would appreciate it.



So my questions are:


What are your thoughts about using GC as a library writer?


If you want to include a library into your project, aren't you 
more inclined to use a library which is GC-free?



If that is true, then idiomatic D doesn't apply for library 
writers.


Since, to get the most exposure as a D library writer, you kinda need 
to make it GC-free, right?




Cheers.





Re: does dmd --build=profile-gc work with core.stdc.stdlib.exit()?

2022-11-13 Thread mw via Digitalmars-d-learn

On Sunday, 13 November 2022 at 19:02:29 UTC, mw wrote:
BTW, can --build=profile-gc intercept "Ctrl+C" and 
generate a *partial* report file?


And what's the suggested proper way to do early exit, and still 
let --build=profile-gc generate reports?


Is there a profile-gc plugin function I can call in the middle of 
my program to generate a *partial* report file?


Re: does dmd --build=profile-gc work with core.stdc.stdlib.exit()?

2022-11-13 Thread mw via Digitalmars-d-learn

On Sunday, 13 November 2022 at 18:51:17 UTC, mw wrote:

On Sunday, 13 November 2022 at 18:48:42 UTC, mw wrote:
BTW, can --build=profile-gc intercept "Ctrl+C" and 
generate a *partial* report file?


And what's the suggested proper way to do early exit, and 
still let --build=profile-gc generate reports?


I tried pressing "Ctrl+C", and that cannot stop the program; it 
just hangs there.


I have to `kill -9 ` it to get it stopped.


My build command is:
```
/dmd2/linux/bin64/dub build --build=profile-gc --config=... 
--compiler=dmd

```



Re: does dmd --build=profile-gc work with core.stdc.stdlib.exit()?

2022-11-13 Thread mw via Digitalmars-d-learn

On Sunday, 13 November 2022 at 18:48:42 UTC, mw wrote:
BTW, can --build=profile-gc intercept "Ctrl+C" and generate 
a *partial* report file?


And what's the suggested proper way to do early exit, and still 
let --build=profile-gc generate reports?


I tried pressing "Ctrl+C", and that cannot stop the program; it 
just hangs there.


I have to `kill -9 ` it to get it stopped.


does dmd --build=profile-gc work with core.stdc.stdlib.exit()?

2022-11-13 Thread mw via Digitalmars-d-learn

Hi,

I'm mem-profiling a multi-threaded program, and want it to exit 
early, so I added a call

```
core.stdc.stdlib.exit(-1);
```

in a loop in one of the threads.

However, when the program reached this point, it seems to hang: it's 
not exiting, and CPU usage dropped to 0%.


I'm wondering does dmd --build=profile-gc work with 
core.stdc.stdlib.exit()?


And where is the output report file, and the filename? I didn't 
see any report file generated in the current working dir.


BTW, can --build=profile-gc intercept "Ctrl+C" and generate 
a *partial* report file?


And what's the suggested proper way to do early exit, and still 
let --build=profile-gc generate reports?


Thanks!

