Re: How to pass noncopyable variadic arguments with ref?

2022-11-03 Thread tchaloupka via Digitalmars-d-learn

On Friday, 21 October 2022 at 12:05:28 UTC, ryuukk_ wrote:

On Thursday, 20 October 2022 at 14:03:10 UTC, tchaloupka wrote:

void test(Foo...)(Foo foos)

I don't know if that's the 1:1 alternative, but that doesn't 
compile


onlineapp.d(23): Error: struct `onlineapp.Foo` is not 
copyable because it has a disabled postblit


Yeah, I've ended up with this kind of workaround too.
Posted a bug report: 
https://issues.dlang.org/show_bug.cgi?id=23452
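
For reference, one possible shape of such a workaround (a sketch of the idea, not the exact code from the bug report): take the arguments as a template variadic with `auto ref`, so lvalues bind by reference and the non-copyable values are never copied.

```D
import std.stdio : writeln;

struct Foo
{
    @disable this(this);
    int x;
}

// each element of Foos gets its own ref-ness thanks to auto ref
void test(Foos...)(auto ref Foos foos)
{
    // compile-time foreach over the parameter tuple; `ref` aliases each argument
    foreach (ref f; foos)
    {
        writeln(f.x);
        f.x = 0;
    }
}

void main()
{
    Foo f1 = Foo(1);
    Foo f2 = Foo(2);
    test(f1, f2);
    assert(f1.x == 0 && f2.x == 0); // modified in place, no copies made
}
```

The downside compared to the `Foo[] foos...` signature is a new template instantiation per argument count, but it keeps the call site unchanged.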




How to pass noncopyable variadic arguments with ref?

2022-10-20 Thread tchaloupka via Digitalmars-d-learn

Hi,
I've found strange behavior where:

```D
import std.stdio;

struct Foo
{
    @disable this(this);
    int x;
}

void test(Foo[] foos...)
{
    foreach (ref f; foos) {
        writeln(&f, ": ", f.x);
        f.x = 0;
    }
}

void main()
{
    Foo f1 = Foo(1);
    Foo f2 = Foo(2);
    writeln("f1: ", &f1);
    writeln("f2: ", &f2);
    test(f1, f2);
    writeln("f1: ", f1.x);
    writeln("f2: ", f2.x);
}
```

It compiles fine (no error about passing non-copyable arguments to the function), but it's apparently copies of the objects that are passed to the function, as they aren't cleared out in the caller scope.


Shouldn't it at least complain that the objects can't be passed to the function because they aren't copyable?


Re: auto ref function parameter causes that non copyable struct is copied?

2021-11-09 Thread tchaloupka via Digitalmars-d-learn

On Monday, 8 November 2021 at 23:26:39 UTC, tchaloupka wrote:

Bug or feature? :)


I've reported it in 
https://issues.dlang.org/show_bug.cgi?id=22498.


Re: auto ref function parameter causes that non copyable struct is copied?

2021-11-08 Thread tchaloupka via Digitalmars-d-learn

On Tuesday, 9 November 2021 at 02:43:55 UTC, jfondren wrote:

On Tuesday, 9 November 2021 at 02:41:18 UTC, jfondren wrote:
The expectation is probably that `f.move` sets `f` to `Foo.init`, but the docs say:


Posted too fast. Foo qualifies with its @disable:

```d
struct A { int x; }
struct B { int x; @disable this(this); }

unittest {
    import core.lifetime : move;
    auto a = A(5);
    auto b = B(5);
    a.move;
    b.move;
    assert(a == A(5));
    assert(b == B(0));
}
```


Yes, it should qualify, so it should be cleared out when moved.
When I change:

```D
auto ref unwrap(EX)(auto ref EX res) {
    printf("unwrap()\n");
    return res.get();
}
```

I get:

```
~this(0)
~this(0)
~this(0)
unwrap()
~this(42)
~this(0)
~this(42)
```

So the destructor isn't called from `gen()` -> there is 
[NRVO](https://dlang.org/glossary.html#nrvo) in play.


Also, `pragma(msg, __traits(isRef, res));` prints false in unwrap in this case (which is expected), but how does it get there by value when it's not moved but copied instead, even though it has a disabled copy constructor?

`isCopyable!(Value!Foo)` returns false as expected too.
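
As a side note, here is a tiny standalone sketch (my own, independent of the Value/unwrap code) of what `auto ref` and `__traits(isRef, ...)` report for lvalue vs. rvalue arguments:

```D
// probe is a hypothetical name used only for this illustration
void probe(T)(auto ref T x)
{
    pragma(msg, __traits(isRef, x)); // printed at compile time, per instantiation
}

struct S { int n; }

void main()
{
    S s;
    probe(s);    // lvalue: bound by ref, prints true
    probe(S(1)); // rvalue: passed by value, prints false
}
```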


auto ref function parameter causes that non copyable struct is copied?

2021-11-08 Thread tchaloupka via Digitalmars-d-learn

Lets have this code:

```D
import core.lifetime : forward;
import core.stdc.stdio;
import std.algorithm : move;

struct Value(T) {
    private T storage;

    this()(auto ref T val) {
        storage = forward!val;
    }

    ref inout(T) get() inout {
        return storage;
    }
}

Value!T value(T)(auto ref T val) {
    return Value!T(forward!val);
}

auto ref unwrap(EX)(auto ref EX res) {
    return res.get();
}

struct Foo {
    int n;
    @disable this(this);
    ~this() { printf("~this(%d)\n", n); }
}

auto gen() {
    Foo f;
    f.n = 42;
    return value(f.move());
}

void main() {
    Foo f;
    f = gen().unwrap.move;
}
```

As I understand it, `unwrap` in `f = gen().unwrap.move` can't take its argument by ref (as `gen()` returns an rvalue), so it would force the `Value` to be copied; but as it holds a non-copyable struct, it can't be copied, and so it should end up with a compiler error.


But the code outputs:
```
~this(0)
~this(0)
~this(0)
~this(42) <- this is a copy (that shouldn't exist) being destroyed
~this(0)
~this(42)
```

This could cause unwanted resource cleanup on seemingly non-copyable structs.

Bug or feature? :)


Problem using struct with copy constructor with betterC

2021-11-06 Thread tchaloupka via Digitalmars-d-learn
After long fiddling with code that wouldn't compile, I've reduced it to this test case:


```D
struct Foo {
    this(ref return scope Foo rhs) {}
    ~this() {}
}

struct Bar {
    @disable this(this);
    Foo[2] foos;
}

extern (C)
void main() {}
```

When built with the `-betterC` switch using dmd (ldc2 works with it), I get just:
```
Error: `TypeInfo` cannot be used with -betterC
```

Well imagine getting this useful info in a large codebase :/

Anyway, I've traced the problem with a custom-built dmd using `printf` logging here: 
https://github.com/dlang/dmd/blob/a6f49dade85452d61d9ebcf329e2567ecacd5fab/src/dmd/e2ir.d#L2606


So it somehow tries to generate `_d_arrayctor` for the static 
array.


I've tried to add `= void` to the array, but it doesn't help.
Is it a bug (it probably is), and is there any workaround for this?

If I remove the copy constructor or the destructor from `Foo`, it compiles. But I need them both there.


BetterC is great, but a bit of a minefield to use.


Re: GC memory fragmentation

2021-04-14 Thread tchaloupka via Digitalmars-d-learn

On Tuesday, 13 April 2021 at 12:30:13 UTC, tchaloupka wrote:
I'm not so sure whether pages of small (or large) objects that are not completely empty can be reused as a new bucket, or whether only completely free pages can be reused.


Does anyone have some insight into this?

Some kind of GC memory dump and analyzer tool like the mentioned `Diamond` would be of tremendous help in diagnosing this.



As it's now possible to inject one's own GC into the runtime, I've compiled a debug version of the GC implementation and used it to check some debugging printouts with regard to small allocations.


The code I've used to simulate the behavior (with 
`--DRT-gcopt=incPoolSize:0`):


```D
import core.memory : GC;

// fill 1st pool (minsize is 1MB, so it's 256 pages, two small objects in each page)
ubyte[][512] pool1;
foreach (i; 0..pool1.length) pool1[i] = new ubyte[2042];

// no more space in pool1 so new one is allocated and also filled
ubyte[][512] pool2;
foreach (i; 0..pool2.length) pool2[i] = new ubyte[2042];

// free up first small object from first pool first page
pool1[0] = null;
GC.collect();

// allocate another small object
pool1[0] = new ubyte[2042];

```

And shortened result is:

```
// first pool allocations
GC::malloc(gcx = 0x83f6a0, size = 2044 bits = a, ti = TypeInfo_h)
Gcx::allocPage(bin = 13)
Gcx::allocPage(bin = 13)
Gcx::newPool(npages = 1, minPages = 256)
Gcx::allocPage(bin = 13)
  => p = 0x7f709e75c000

// second pool allocations
GC::malloc(gcx = 0x83f6a0, size = 2044 bits = a, ti = TypeInfo_h)
Gcx::allocPage(bin = 13)
Gcx.fullcollect()
preparing mark.
scan stacks.
scan roots[]
scan ranges[]
0x51e4e0 .. 0x53d580
free'ing
free'd 0 bytes, 0 pages from 1 pools
recovered small pages = 0
Gcx::allocPage(bin = 13)
Gcx::newPool(npages = 1, minPages = 256)
Gcx::allocPage(bin = 13)
  => p = 0x7f709e65c000

// collection of first item of first pool first bin
GC.fullCollect()
Gcx.fullcollect()
preparing mark.
scan stacks.
scan roots[]
scan ranges[]
0x51e4e0 .. 0x53d580
free'ing
collecting 0x7f709e75c000
free'd 0 bytes, 0 pages from 2 pools
recovered small pages = 0

// and allocation to test where it'll allocate
GC::malloc(gcx = 0x83f6a0, size = 2044 bits = a, ti = TypeInfo_h)
  recover pool => 0x83d1a0
  => p = 0x7f709e75c000
```

So if I'm not mistaken, bins in pools are reused when there is free space (the last allocation reuses the address of the first item in the first bin of the first pool).


I've also tested big allocations, with the same result.

But if this is so, I don't understand why in our case there is 
95% free space in the GC and it still grows from time to time.


At least the rate of growth is minimized with 
`--DRT-gcopt=incPoolSize:0`.


I'd have to log and analyze the behavior more with a custom GC build.


Re: GC memory fragmentation

2021-04-13 Thread tchaloupka via Digitalmars-d-learn

On Monday, 12 April 2021 at 07:03:02 UTC, Sebastiaan Koppe wrote:


We have similar problems: we see memory usage alternate between plateauing and then slowly growing, until it hits the configured maximum memory for that job and the orchestrator kills it (we run multiple instances and have good failover).


I have reduced the problem by refactoring some of our GC usage, but the problem still persists.


On a side note, it would also be good if the GC could be aware of the max memory it is allotted, so that it knows it needs to do more aggressive collections when nearing it.


I knew this must be a more common problem :)

What I've found in the meantime:

* nice writeup of how the GC actually works by Vladimir Panteleev - 
https://thecybershadow.net/d/Memory_Management_in_the_D_Programming_Language.pdf
  * the described tool (https://github.com/CyberShadow/Diamond) would be very helpful, but I assume it's for D1 and based on some old druntime fork :(
* we've implemented log rotation using `std.zlib` (by just 
`foreach (chunk; fin.byChunk(4096).map!(x => c.compress(x))) fout.rawWrite(chunk);`)
  * oh boy, don't use `std.zlib.Compress` that way; it allocates each chunk, and for large files it creates large GC memory peaks that sometimes don't go down

  * rewritten using `etc.c.zlib` directly, completely outside the GC (see the sketch after this list)
* currently testing with `--DRT-gcopt=incPoolSize:0`, as otherwise the size of each newly allocated pool grows with the number of already allocated pools (by 3MB per pool by default)
* `profile-gc` is not of much help in this case, as it only prints the total allocated memory for each allocation site on application exit, and as it's a long-running service using many various libraries it's just hundreds of lines :)
  * I've considered forking the process periodically, terminating the fork and renaming the created profile statistics, to at least see the differences between the states, but I'm still not sure if it would help much
* as I understand the GC, it uses different algorithms for small allocations and for large objects

  * small (`<=2048` bytes)
    * it categorizes objects into a fixed set of size classes and for each it uses a whole memory page as a bucket with a free list, from which it serves allocation requests
    * when the bucket is full, a new page is allocated and allocations are served from that

  * big - similar, but it allocates N pages as a pool
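
The `etc.c.zlib` rewrite mentioned above goes roughly in this direction (a minimal sketch, not the actual service code; the gzip output format and the 4kB buffers are my assumptions here): stream the file through `deflate` with fixed buffers so nothing lands on the GC heap.

```D
import etc.c.zlib;
import std.stdio : File;

void compressFile(string inName, string outName)
{
    auto fin = File(inName, "rb");
    auto fout = File(outName, "wb");

    z_stream zs;
    // windowBits 15 + 16 selects a gzip wrapper instead of the raw zlib format
    if (deflateInit2(&zs, Z_BEST_SPEED, Z_DEFLATED, 15 + 16, 8, Z_DEFAULT_STRATEGY) != Z_OK)
        throw new Exception("deflateInit2 failed");
    scope (exit) deflateEnd(&zs);

    ubyte[4096] inBuf;
    ubyte[4096] outBuf;

    foreach (chunk; fin.byChunk(inBuf[]))
    {
        zs.next_in = chunk.ptr;
        zs.avail_in = cast(uint) chunk.length;
        do
        {
            zs.next_out = outBuf.ptr;
            zs.avail_out = cast(uint) outBuf.length;
            deflate(&zs, Z_NO_FLUSH);
            fout.rawWrite(outBuf[0 .. $ - zs.avail_out]);
        } while (zs.avail_out == 0);
    }

    // flush whatever is still buffered inside zlib
    int res;
    do
    {
        zs.next_out = outBuf.ptr;
        zs.avail_out = cast(uint) outBuf.length;
        res = deflate(&zs, Z_FINISH);
        fout.rawWrite(outBuf[0 .. $ - zs.avail_out]);
    } while (res != Z_STREAM_END);
}
```

The important part is that the input and output buffers are reused for every chunk instead of being allocated per call.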

So if I understand it correctly, when for example vibe-d initializes a new fiber for some request, the request is handled, and the fiber could be discarded afterwards, this can easily lead to a scenario where the fiber itself is allocated in one page, the page fills up during request processing (so a new page is allocated), and when cleaning up, the bucket with the fiber cannot be freed because the fiber has been added to a `TaskFiber` pool (with a fixed size). This way the fiber's bucket would never be freed and might never be used again during the application's lifetime.


I'm not so sure whether pages of small (or large) objects that are not completely empty can be reused as a new bucket, or whether only completely free pages can be reused.


Does anyone have some insight into this?

Some kind of GC memory dump and analyzer tool like the mentioned `Diamond` would be of tremendous help in diagnosing this.


Re: GC memory fragmentation

2021-04-11 Thread tchaloupka via Digitalmars-d-learn

On Sunday, 11 April 2021 at 12:20:39 UTC, Nathan S. wrote:


One thing that comes to mind: is your application compiled as 
32-bit? The garbage collector is much more likely to leak 
memory with a 32-bit address space since it much more likely 
for a random int to appear to be a pointer to the interior of a 
block of GC-allocated memory.


Nope, it's a 64-bit build.
I've also tried switching to the precise GC, with the same result.

Tom


GC memory fragmentation

2021-04-11 Thread tchaloupka via Digitalmars-d-learn

Hi,
we're using vibe-d (on Linux) for a long-running REST API server and have a problem with constantly growing memory until the system kills it with the OOM killer.


The first guess was some memory leak, so we've added a periodic call to:

```D
// GC is core.memory.GC; malloc_trim comes from glibc and needs a manual
// extern (C) declaration, e.g.: extern (C) int malloc_trim(size_t pad);
GC.collect();    // run a full collection
GC.minimize();   // return unused pools back to the OS where possible
malloc_trim(0);  // ask glibc to return free heap memory to the OS
```

And when it's called very often (i.e. every 500ms), the service's memory stays stable and doesn't grow, so it doesn't look like a memory leak (or at least not one fast enough to explain the service going from 90MB to 900MB in 2 days).


But this is very bad for performance, so we've prolonged the interval to, for example, 30 seconds, and now memory still goes up (not as dramatically, but still).


The stats of the GC when it grew within one 30s interval are these (logged after `GC.collect()` and `GC.minimize()`):


```
GC.stats: usedSize=9994864, freeSize=92765584, total=102760448
GC.stats: usedSize=11571456, freeSize=251621120, total=263192576
```

Before the growth it had 88MB of free space out of 98MB total allocated.
After 30s it had 239MB free out of 251MB allocated.

So it wastes a lot of free space which it can't return back to the OS for some reason.


Can these numbers be caused by memory fragmentation? There are probably a lot of small allocations (postgresql queries, result processing and REST API JSON serialization).


The only explanation that makes some sense is that some operation requires memory allocations that can't be fulfilled from the current memory pool (i.e. due to memory fragmentation in it), so it allocates the data in a new memory segment that can't be returned afterwards, as it still holds the 'live' data. But that should be freed too at some point, and the GC should minimize (if another request doesn't cause an allocation in the 'wrong' page again).


But still, the amount of used vs free memory seems wrong; it's a whopping 95% of free space that can't be minimized :-o. I have a hard time imagining fragmentation like that.


Are there any tools that can help diagnose this further?

Also note that `malloc_trim` is not called by the GC itself, and as the internally used malloc manages its own memory pool, it has its own quirks with returning unused memory back to the OS (it does so only on `free()` in some cases).
See for example: 
http://notes.secretsauce.net/notes/2016/04/08_glibc-malloc-inefficiency.html


The behavior of malloc can be controlled with `mallopt`:

```
M_TRIM_THRESHOLD
When  the  amount  of contiguous free memory at the top of 
the heap grows sufficiently large, free(3) employs sbrk(2) to 
release this memory back to the system.  (This can be useful in 
programs that continue to execute
for a long period after freeing a significant amount of 
memory.)  The M_TRIM_THRESHOLD parameter specifies the minimum 
size (in bytes) that this block of memory must reach before 
sbrk(2) is used to trim the heap.


The default value for this parameter is 128*1024.
```

So by default a 128kB block of free heap memory is needed before trimming activates, but when `malloc_trim` is called manually, a much larger block of memory is often returned. That's puzzling too :)
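
For completeness, a small sketch of tweaking this from D (just an illustration; glibc's `mallopt`/`malloc_trim` are not declared in druntime, so the `extern (C)` prototypes below are written by hand, and `M_TRIM_THRESHOLD = -1` is the constant from glibc's malloc.h):

```D
// hand-written glibc prototypes; these are not part of druntime
extern (C) int mallopt(int param, int value);
extern (C) int malloc_trim(size_t pad);

enum M_TRIM_THRESHOLD = -1; // parameter id from glibc's malloc.h

void tuneGlibcMalloc()
{
    // trim once 64kB of contiguous free memory sits at the top of the heap,
    // i.e. more eagerly than the default 128kB threshold
    mallopt(M_TRIM_THRESHOLD, 64 * 1024);

    // and/or trim explicitly right now, keeping no padding
    malloc_trim(0);
}
```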


Re: Struct field destructor not called when exception is thrown in the main struct destructor

2020-10-17 Thread tchaloupka via Digitalmars-d-learn
On Friday, 16 October 2020 at 16:00:07 UTC, Steven Schveighoffer 
wrote:

On 10/16/20 9:12 AM, tchaloupka wrote:

So when the exception is thrown within the Foo destructor, the Bar field's destructor is not called (throwing from a destructor is bad on its own, but it can easily happen, as destructors aren't nothrow @nogc by default).


Is this behavior expected?


I would say it's a bug. The compiler is going to call the member destructor even if the hand-written destructor does it too. If the compiler wants to take responsibility for cleaning up members, it should take full responsibility. In fact, there is no way to instruct the compiler "I'm handling the destruction of this member", so I don't see why it should be any different if you exit the function via an exception.


-Steve


Thx, added https://issues.dlang.org/show_bug.cgi?id=21322


Struct field destructor not called when exception is thrown in the main struct destructor

2020-10-16 Thread tchaloupka via Digitalmars-d-learn
Found a pretty nasty bug in vibe-d: 
https://github.com/vibe-d/vibe.d/issues/2484


And it's caused by this behavior.

```D
import std;

struct Foo {
    Bar bar;
    bool err;

    ~this() {
        // scope(failure) destroy(bar); // <- this fixes the Bar destructor call

        enforce(!err, "Test err");
    }
}

struct Bar {
    static int refs;
    ~this() { refs--; }
}

void main()
{
    {
        Foo f;
        Bar.refs = 1;
    }
    assert(Bar.refs == 0);

    try () {
        Foo f;
        f.err = true;
        Bar.refs = 1;
    }();
    catch (Exception ex) {}
    assert(Bar.refs == 0);
}
```

So when the exception is thrown within the Foo destructor, the Bar field's destructor is not called (throwing from a destructor is bad on its own, but it can easily happen, as destructors aren't nothrow @nogc by default).


Is this behavior expected?


Re: std.datetime & timezone specifier: 2018-11-06T16:52:03+01:00

2020-03-08 Thread tchaloupka via Digitalmars-d-learn

On Sunday, 8 March 2020 at 17:28:33 UTC, Robert M. Münch wrote:

On 2020-03-07 12:10:27 +, Jonathan M Davis said:

DateTime dt = 
DateTime.fromISOExtString(split("2018-11-06T16:52:03+01:00", 
regex("\\+"))[0]);


IMO such a string should be feedable directly to the function.


You just need to use the SysTime.fromISO*String functions for that; DateTime doesn't work with time zones, SysTime does.
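
For illustration (my own minimal example, not from the thread):

```D
import std.datetime : SysTime;
import std.stdio : writeln;

void main()
{
    // SysTime parses and keeps the UTC offset; no need to strip it first
    auto t = SysTime.fromISOExtString("2018-11-06T16:52:03+01:00");
    writeln(t.toISOExtString()); // 2018-11-06T16:52:03+01:00
}
```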


Re: SQLite 3 support?

2020-02-26 Thread tchaloupka via Digitalmars-d-learn

On Wednesday, 26 February 2020 at 20:06:20 UTC, mark wrote:
There seems to be some support for SQLite 3 in std. lib. etc 
when looking at the stable docs:

https://dlang.org/phobos/etc_c_sqlite3.html

But this isn't visible when looking at stable (ddox).

Is this the best SQLite 3 library to use or is a third-party 
library best?

For example https://github.com/biozic/d2sqlite3


What's in Phobos is just the D binding to the sqlite3 C API, and probably not very up to date either. I've used d2sqlite3 a lot some time ago and I'd definitely recommend it. It also provides a nice higher-level API that helps you work with it in a more D-friendly and productive way.


But I'm not following sqlite3 updates much nowadays.


Re: D create many thread

2020-02-06 Thread tchaloupka via Digitalmars-d-learn

On Wednesday, 5 February 2020 at 13:05:59 UTC, Eko Wahyudin wrote:

Hi all,

I'm creating a small (hello world) application with DMD.
But my program creates 7 annoying threads when I create an empty class.




If you don't want the parallel sweep enabled for your app, you can turn it off, e.g. with:


```
extern(C) __gshared string[] rt_options = [ "gcopt=parallel:0" ];
```

Or in various other ways, as described in the mentioned documentation: 
https://dlang.org/spec/garbage.html#gc_config





Re: Calling D library from other languages on linux using foreign threads

2019-03-25 Thread tchaloupka via Digitalmars-d-learn

On Saturday, 23 March 2019 at 17:33:31 UTC, tchaloupka wrote:
I've no idea what should be done with C's main thread, as it can't be attached because it'll hang.
In one of the forum threads I've also read the idea of not using foreign threads with the GC at all and instead delegating their work to a D thread.
Can this work somehow? `new Thread()` would still need to be called from the C side, or from a `static this()`, but in C's thread.


I've tried this approach here: 
https://github.com/tchaloupka/dlangsharedlib/tree/master/workaround.


I must say, it's an ugly and error prone hack.. :(

It seems to be working, but questions remain:
* is it ok to initialize D runtime and then use GC in the same C 
thread (to create worker Thread)?

* is this reliable?
* should foreign threads work with D's runtime at all?

In the current state, D's libs seems to be pretty useless to be 
used from other languages if GC is needed.


Re: Calling D library from other languages on linux using foreign threads

2019-03-23 Thread tchaloupka via Digitalmars-d-learn

On Saturday, 23 March 2019 at 15:28:34 UTC, Ali Çehreli wrote:

On 03/22/2019 12:34 PM, tchaloupka wrote:
> I've searched a lot and it should be working at least on linux, but
> apparently is not or I'm doing something totally wrong..
>
> Our use case is to call shared D library from C# (.Net Core) and from
> different threads.

We needed to do the same from Java. I opened this discussion:

  https://forum.dlang.org/thread/p0dm8f$ij5$1...@digitalmars.com

and abandoned this pull request:

  https://github.com/dlang/druntime/pull/1989

We didn't need to pursue this further because the person who 
was pushing for D was leaving the company, so he rewrote the 
library in C++ before doing so.


I don't think it's possible to call into D's GC from another 
thread in a safe way. If I remember correctly, there is no 
absolute way in POSIX (or just Linux?) of knowing that a 
foreign thread has died. D's GC would be holding on to an 
already dead thread.


Ali


That's pretty unfortunate.
I know about your thread and PR; that's why I've tried to solve this by calling thread_attachThis()/thread_detachThis() in every worker function call.


But that doesn't work with compilers newer than dmd-2.078.1.

Actually after fiddling with this some more, I've discovered that 
when I call this method:


```D
void* entry_point2(void*)
{
    printf("+entry_point2\n");
    scope (exit) printf("-entry_point2\n");

    // This thread gets registered in druntime, does some work and gets
    // unregistered to be cleaned up manually
    if (!thread_isMainThread()) // thread_attachThis will hang otherwise
    {
        printf("+entry_point2 - thread_attachThis()\n");
        thread_attachThis();
        rt_moduleTlsCtor();
    }

    // simulate GC work
    auto x = new int[100];
    GC.collect();

    if (!thread_isMainThread())
    {
        printf("+entry_point2 - thread_detachThis()\n");
        rt_moduleTlsDtor();
        thread_detachThis();
    }
    return null;
}
```

from the main thread, it works too. I've observed that:

* `auto x = new int[100];` needs to be there, as it doesn't work otherwise
* but it works only with some array lengths (I guess due to some GC decision on when to kick in) - so it's unreliable

* it still hangs sometimes on thread_detachThis()


I've no idea what should be done with C's main thread, as it can't be attached because it'll hang.
In one of the forum threads I've also read the idea of not using foreign threads with the GC at all and instead delegating their work to a D thread.
Can this work somehow? `new Thread()` would still need to be called from the C side, or from a `static this()`, but in C's thread.




Re: Shared library loading and static constructor

2019-03-23 Thread tchaloupka via Digitalmars-d-learn

On Saturday, 23 March 2019 at 15:58:07 UTC, Sobaya wrote:
What I am saying is that it cannot be read when code (a.d) importing the code that includes the static constructor (b.d) is compiled into a shared library.

Hi. I've tried to add your case to the repository and it seems to be working for me.
At least with dmd-2.085.0.

When run with `make dynamicd`:

```
main shared static this
+main()
utils shared static this
worker shared static this
libworker.so is loaded
entry_point1() function is found
entry_point2() function is found
...
-main()
unloading libworker.so
worker shared static ~this
utils shared static ~this
main shared static ~this
```


Re: Calling D library from other languages on linux using foreign threads

2019-03-23 Thread tchaloupka via Digitalmars-d-learn

On Saturday, 23 March 2019 at 09:47:55 UTC, Andre Pany wrote:

On Friday, 22 March 2019 at 19:34:14 UTC, tchaloupka wrote:
Just to make sure, could you test it with dmd 2.78?


Actually, when I remove the explicit GC call within the unregistered thread (which is fixed in https://github.com/dlang/druntime/commit/42b4e0a9614ac794d4549ed5b2455fd0f805e123), it works with dmd-2.078.1 but not with 2.079.1.


Well, almost. Sometimes it just hangs for me on:

```
#0  0x7fd5887d48ee in sigsuspend () from /usr/lib64/libc.so.6
#1  0x004526ec in core.thread.thread_suspendHandler(int).op(void*) ()
#2  0x0045274c in core.thread.callWithStackShell(scope void(void*) nothrow delegate) ()
#3  0x00452679 in thread_suspendHandler ()
#4  <signal handler called>
#5  0x7fd588b1aacb in __pthread_timedjoin_ex () from /usr/lib64/libpthread.so.0
#6  0x004277ad in D main () at main.d:89
```

So the thread is unregistered with:
```
rt_moduleTlsDtor();
thread_detachThis();
```

But it hangs on pthread_join. Isn't it suspended by the GC?

Manu's https://issues.dlang.org/show_bug.cgi?id=18815 seems to be related, but for me it passes thread_attachThis() fine and crashes in the GC during the work.


Calling D library from other languages on linux using foreign threads

2019-03-22 Thread tchaloupka via Digitalmars-d-learn
I've searched a lot and it should be working, at least on Linux, but apparently it is not, or I'm doing something totally wrong.


Our use case is to call a shared D library from C# (.NET Core) and from different threads.


What I've read about this is that a foreign thread should be registered with `thread_attachThis()`.
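
The registration pattern I mean looks roughly like this (a minimal sketch with an assumed exported function `work`, not the actual library code):

```D
import core.memory : GC;
import core.thread;

extern (C) void work()
{
    // register the calling (foreign) thread with druntime...
    thread_attachThis();
    // ...and unregister it again before returning to the foreign code
    scope (exit) thread_detachThis();

    auto buf = new ubyte[256]; // GC allocation should now be legal in this thread
    GC.collect();
}
```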


But even with this, there is trouble when the GC runs in that thread, which ends up with:


```D
Thread 3 "main" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x77aeb700 (LWP 5850)]
0x0045bed7 in 
_D4core6thread15scanAllTypeImplFNbMDFNbEQBmQBk8ScanTypePvQcZvQgZv 
()

(gdb) bt
#0  0x0045bed7 in 
_D4core6thread15scanAllTypeImplFNbMDFNbEQBmQBk8ScanTypePvQcZvQgZv 
()
#1  0x0045be7f in 
_D4core6thread18thread_scanAllTypeUNbMDFNbEQBpQBn8ScanTypePvQcZvZ__T9__lambda2TQvZQoMFNbQBeZv ()
#2  0x0044dc10 in core.thread.callWithStackShell(scope 
void(void*) nothrow delegate) ()

#3  0x0045be56 in thread_scanAllType ()
#4  0x004599aa in thread_scanAll ()
#5  0x004582a9 in 
_D2gc4impl12conservativeQw3Gcx__T7markAllS_DQBqQBqQBoQBzQBe16markConservativeMFNbNlPvQcZvZQCfMFNbbZv ()
#6  0x00453f1d in 
_D2gc4impl12conservativeQw3Gcx11fullcollectMFNbbZm ()
#7  0x0045732f in 
_D2gc4impl12conservativeQw14ConservativeGC__T9runLockedS_DQCeQCeQCcQCnQBs11fullCollectMFNbZ2goFNbPSQDtQDtQDrQEc3GcxZmTQvZQCyMFNbKQBgZm ()
#8  0x0045167d in 
_D2gc4impl12conservativeQw14ConservativeGC11fullCollectMFNbZm ()
#9  0x0045165e in 
_D2gc4impl12conservativeQw14ConservativeGC7collectMFNbZv ()

#10 0x0042f2bd in gc_collect ()
#11 0x0042ca11 in core.memory.GC.collect() ()
#12 0x0042c8a6 in entry_point2 (_param_0=0x0) at 
worker.d:36
#13 0x0042c62e in threadFun (arg=0x42c864 ) 
at main.d:24
#14 0x77f6e58e in start_thread () from 
/usr/lib64/libpthread.so.0

#15 0x77cee6a3 in clone () from /usr/lib64/libc.so.6

```

I've tried to compile phobos with debug symbols and ended up with 
this:


```D
Thread 3 "main" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x77aeb700 (LWP 11470)]
0x0043d7e3 in invariant._d_invariant(Object) 
(o=0x76eeb000) at src/rt/invariant.d:27

27  c = typeid(o);
(gdb) bt
#0  0x0043d7e3 in invariant._d_invariant(Object) 
(o=0x76eeb000) at src/rt/invariant.d:27
#1  0x0043927a in 
_D4core6thread6Thread6removeFNbNiCQBgQBeQBaZv (t=0x76eeb000) 
at src/core/thread.d:1864
#2  0x0043947b in thread_detachThis () at 
src/core/thread.d:2263
#3  0x004378b0 in entry_point2 (_param_0=0x0) at 
worker.d:39
#4  0x0043762e in threadFun (arg=0x437864 ) 
at main.d:24
#5  0x77f6e58e in start_thread () from 
/usr/lib64/libpthread.so.0

#6  0x77cee6a3 in clone () from /usr/lib64/libc.so.6
```

So it actually happens after the manual GC.collect() call in the test method, which is really strange.


I've tested this with dmd-2.085.0 because of this fix: 
https://github.com/dlang/druntime/commit/f60eb358ccbc14a1a5fc1774eab505ed0132e999 which seemed to be exactly what is needed for this to work.


I've even created a github repo with my tests using 4 variations:

* static lib from D
* dynamic lib from D
* static lib from C
* dynamic lib from C

all using pthread as a foreign thread on linux.

You can see it here: https://github.com/tchaloupka/dlangsharedlib

All tests fail with the same result.
The C tests differ only in that they explicitly call `rt_init()` to initialize druntime; they should be the same otherwise.


I wonder where the problem is.
Is the GC supposed to work with foreign threads once they are registered?


Strange appender behavior

2019-03-13 Thread tchaloupka via Digitalmars-d-learn

Is this expected?:

```
import std.stdio;
import std.algorithm;
import std.array;

void main()
{
    auto d = Appender!string();
    //auto d = appender!string(); // works

    string[] arr = ["foo", "bar", "baz"];
    arr.joiner("\n").copy(d);
    writeln(d.data);
}
```

Using Appender outputs nothing; using appender works OK.


Re: SysTime bug or feature?

2015-10-06 Thread tchaloupka via Digitalmars-d-learn
On Tuesday, 6 October 2015 at 05:54:44 UTC, Jonathan M Davis 
wrote:
It is by design, albeit undesirable. When SysTime was 
originally written, it was impossible to have a default value 
for a class reference other than null. So, unless SysTime was 
going to take the performance hit of constantly checking 
whether its TimeZone was null, SysTime.init was going to 
segfault if you did anything with it that required its 
TimeZone. And I wasn't about to have it constantly checking for 
null. In the vast majority of cases, the value of a SysTime 
comes from Clock.currTime() or from parsing a string, and if 
code is trying to do anything but assign to a SysTime which is 
SysTime.init, then that means that it failed to initialize it 
like it should have.


Thanks for the thorough explanation.
I found the problem using vibe.d and a REST API with a SysTime argument with a default value (which didn't work due to the bug there) when I tried to print out the passed value and ended up with the segfault. So I guess it doesn't bite devs often, as SysTime is mostly used the way you wrote.
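
A small illustration of the safe pattern described above (my own snippet, not from the thread):

```D
import std.datetime : Clock, SysTime;
import std.stdio : writeln;

void main()
{
    SysTime t = Clock.currTime(); // carries a valid TimeZone, unlike SysTime.init
    writeln(t);                   // safe to format and print
}
```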


SysTime bug or feature?

2015-10-05 Thread tchaloupka via Digitalmars-d-learn

This code:

import std.stdio;
import std.datetime;

void main()
{
    SysTime t = SysTime.init;
    writeln(t);
}

results in a segfault with dmd-2.068.2.

Is that OK?

Backtrace:

#0  0x004733f3 in std.datetime.SysTime.adjTime() const ()
#1  0x004730b9 in std.datetime.SysTime.toSimpleString() 
const ()

#2  0x00473339 in std.datetime.SysTime.toString() const ()
#3  0x00463dc4 in 
std.format.formatObject!(std.stdio.File.LockingTextWriter, 
std.datetime.SysTime, char).formatObject(ref 
std.stdio.File.LockingTextWriter, ref std.datetime.SysTime, ref 
std.format.FormatSpec!(char).FormatSpec) ()
#4  0x00463cb7 in 
std.format.formatValue!(std.stdio.File.LockingTextWriter, 
std.datetime.SysTime, 
char).formatValue(std.stdio.File.LockingTextWriter, ref 
std.datetime.SysTime, ref 
std.format.FormatSpec!(char).FormatSpec) ()
#5  0x00463c5a in 
std.format.formatGeneric!(std.stdio.File.LockingTextWriter, 
std.datetime.SysTime, 
char).formatGeneric(std.stdio.File.LockingTextWriter, 
const(void)*, ref std.format.FormatSpec!(char).FormatSpec) ()
#6  0x00463b63 in 
std.format.formattedWrite!(std.stdio.File.LockingTextWriter, 
char, 
std.datetime.SysTime).formattedWrite(std.stdio.File.LockingTextWriter, const(char[]), std.datetime.SysTime) ()
#7  0x00463675 in 
std.stdio.File.write!(std.datetime.SysTime, 
char).write(std.datetime.SysTime, char)

()
#8  0x00463591 in 
std.stdio.writeln!(std.datetime.SysTime).writeln(std.datetime.SysTime) ()

#9  0x00461b38 in D main ()




Contracts with interface

2015-09-19 Thread tchaloupka via Digitalmars-d-learn

This bites me again:

import std.stdio;

interface ITest
{
    void test();

    void test2()
    in { writeln("itest2"); }

    void test3()
    in { writeln("itest3"); }

    void test4()
    in { writeln("itest4"); assert(false); }
}

class Test: ITest
{
    void test()
    in { writeln("ctest"); }
    body { }

    void test2()
    {
    }

    void test3()
    in { writeln("ctest3"); }
    body {}

    void test4()
    in { writeln("ctest4"); }
    body {}
}

void main()
{
    auto t = new Test();
    t.test();
    t.test2();
    t.test3();
    t.test4();
}

What is the expected output?

The docs say just:
Interface member functions can have contracts even though there 
is no body for the function. The contracts are inherited by any 
class member function that implements that interface member 
function.


and:
If a function in a derived class overrides a function in its 
super class, then only one of the in contracts of the function 
and its base functions must be satisfied. Overriding functions 
then becomes a process of loosening the in contracts.
A function without an in contract means that any values of the 
function parameters are allowed. This implies that if any 
function in an inheritance hierarchy has no in contract, then 
in contracts on functions overriding it have no useful effect.


What I expected is that if there is no contract in the interface but there is one in the class implementation, the class's contract will be called. Or if the interface has a contract and the class implementation doesn't, the interface's contract will be called.


But apparently it works in such a way that you have to have the same `in` contract in both the interface and the class implementation to be safe. So it works the same way as with class inheritance, per the docs.
That seems, at least to me, a bit strange and not very usable. What's the point of defining a contract in an interface just to write it again in the implementing class?
It's simpler to just write the check in the class method body and not use `in` contracts at all.

At least a warning would be nice.


Re: Should this compile?

2015-08-26 Thread tchaloupka via Digitalmars-d-learn
On Tuesday, 25 August 2015 at 18:29:08 UTC, Vladimir Panteleev 
wrote:


I think this is a bug, but is easily worked around with:

auto test(string a) {
    return .test(a, "b");
}



Thanks, this worked.
Filed it: https://issues.dlang.org/show_bug.cgi?id=14965


Should this compile?

2015-08-25 Thread tchaloupka via Digitalmars-d-learn

import std.stdio;
import std.range : chain;

auto test(string a) {
    return test(a, "b");
}

auto test(string a, string b) {
    return chain(a, b);
}

void main() {
    writeln(test("a"));
}

Ends with: Error: forward reference to inferred return type of 
function call 'test'


I know this exact sample is solvable with a default parameter, but there are cases where that is not possible. What to do then?


Speed of horizontal flip

2015-04-01 Thread tchaloupka via Digitalmars-d-learn

Hi,
I have a bunch of square r16 and png images which I need to flip
horizontally.

My flip method looks like this:
void hFlip(T)(T[] data, int w)
{
   import std.datetime : StopWatch;

   StopWatch sw;
   sw.start();

   foreach(int i; 0..w)
   {
      auto row = data[i*w..(i+1)*w];
      row.reverse();
   }

   sw.stop();
   writeln("Img flipped in: ", sw.peek().msecs, "[ms]");
}

With the simple r16 file format it's pretty fast, but with RGB PNG
files (2048x2048) I noticed it's somewhat slow, so I tried to
compare it with C# and was pretty surprised by the results.

C#:
PNG load - 90ms
PNG flip - 10ms
PNG save - 380ms

D using dlib (http://code.dlang.org/packages/dlib):
PNG load - 500ms
PNG flip - 30ms
PNG save - 950ms

D using imageformats
(http://code.dlang.org/packages/imageformats):
PNG load - 230ms
PNG flip - 30ms
PNG save - 1100ms

I used dmd-2.067 with -release -inline -O.
C# was just a debug build with Visual Studio attached to the process
for debugging, and even with that it is much faster.

I know that System.Drawing is using Windows GDI+, which can be
used from D too, but not on Linux.
If we ignore the PNG loading and saving (I haven't tried libpng
yet), even the flip method itself is 3 times slower - I don't know D
well enough to be sure there isn't some more efficient way to do
the flip. I like how slices can be used here.

For a C# user who expects things to just work as fast as possible
in a systems programming language, it can be somewhat disappointing
to see that the pure D version is about 3 times slower.

Am I doing something utterly wrong?
Note that this example is not critical for me; it's just a simple
hobby script I use to move and flip some images - I can wait. But
I post it to see if this can be brought somewhat closer to what can
be expected from a systems programming language.

dlib:
auto im = loadPNG(name);
hFlip(cast(ubyte[3][])im.data, cast(int)im.width);
savePNG(im, newName);

imageformats:
auto im = read_image(name);
hFlip(cast(ubyte[3][])im.pixels, cast(int)im.w);
write_image(newName, im.w, im.h, im.pixels);

C# code:
static void Main(string[] args)
{
    var files = Directory.GetFiles(args[0]);

    foreach (var f in files)
    {
        var sw = Stopwatch.StartNew();
        var img = Image.FromFile(f);

        Debug.WriteLine("Img loaded in {0}[ms]",
            (int)sw.Elapsed.TotalMilliseconds);
        sw.Restart();

        img.RotateFlip(RotateFlipType.RotateNoneFlipX);
        Debug.WriteLine("Img flipped in {0}[ms]",
            (int)sw.Elapsed.TotalMilliseconds);
        sw.Restart();

        img.Save(Path.Combine(args[0], "test_" +
            Path.GetFileName(f)));
        Debug.WriteLine("Img saved in {0}[ms]",
            (int)sw.Elapsed.TotalMilliseconds);
        sw.Stop();
    }
}


Re: Speed of horizontal flip

2015-04-01 Thread tchaloupka via Digitalmars-d-learn

On Wednesday, 1 April 2015 at 14:00:52 UTC, bearophile wrote:

tchaloupka:


Am I doing something utterly wrong?


If you have to perform performance benchmarks then use ldc or 
gdc.




I tried it on my slower Linux box (i5-2500K vs i7-2600K), without code changes, with these results:


C# (mono with its own GDI+ library):
Img loaded in 108[ms]
Img flipped in 22[ms]
Img saved in 492[ms]

dmd-2.067:
Png loaded in: 150[ms]
Img flipped in: 28[ms]
Png saved in: 765[ms]

gdc-4.8.3:
Png loaded in: 121[ms]
Img flipped in: 4[ms]
Png saved in: 686[ms]

ldc2-0_15:
Png loaded in: 106[ms]
Img flipped in: 4[ms]
Png saved in: 610[ms]

I'm ok with that, thx.


Re: Speed of horizontal flip

2015-04-01 Thread tchaloupka via Digitalmars-d-learn

On Wednesday, 1 April 2015 at 16:08:14 UTC, John Colvin wrote:

On Wednesday, 1 April 2015 at 13:52:06 UTC, tchaloupka wrote:
snip

I'm pretty sure that the flipping happens in GDI+ as well. You might be writing C#, but the code you're calling that's doing all the work is C and/or C++, quite possibly carefully optimised over many years by Microsoft.




Yes, that's right; load, flip and save are all performed by GDI+, so it's just a P/Invoke into optimised code from C#.