Re: CTFE and BetterC compatibility

2022-04-27 Thread Claude via Digitalmars-d-learn
On Wednesday, 27 April 2022 at 14:34:27 UTC, rikki cattermole 
wrote:

This works:


Cool, thanks.

Unfortunately, with that implementation, I need to know the 
maximum size for the array. It works for that particular example, 
but in the context of an XML file analysis, it's a bit awkward.
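For reference, the fixed-capacity variant suggested there presumably looks something like this (the capacity of 32 digits is my own arbitrary assumption, not from the original post):

```d
struct Data
{
    int[32] digits;  // maximum size must be known up front (assumed 32)
    size_t count;
}

int parseDigit(char c) pure
{
    return c - '0';
}

Data parse(string str) pure
{
    Data data;

    while (str.length != 0)
    {
        // Skip spaces
        while (str[0] == ' ')
            str = str[1 .. $];

        // Store into the fixed-size array instead of appending,
        // so no GC/TypeInfo support from druntime is needed
        data.digits[data.count++] = parseDigit(str[0]);

        // Consume digit
        str = str[1 .. $];
    }

    return data;
}
```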


Regarding my comment above, I tried using cork functions for the 
missing symbols: it also works! However, the linker does not 
optimize those functions out (I see the symbols in the executable 
binary)...


Re: CTFE and BetterC compatibility

2022-04-27 Thread Claude via Digitalmars-d-learn
On Wednesday, 27 April 2022 at 14:27:43 UTC, Stanislav Blinov 
wrote:
This is a long-standing pain point with BetterC (see 
https://issues.dlang.org/show_bug.cgi?id=19268).


That's what I was afraid of... Thanks for the link to the 
bug-report.


On Wednesday, 27 April 2022 at 14:27:43 UTC, Stanislav Blinov 
wrote:
When not using BetterC, but not linking against druntime 
either, you have to provide your own implementation for those 
functions. This is e.g. so you can replace druntime with your 
own version.


Yeah... The problem is that there will be a lot of those functions 
to define (for a whole XML parser). I suppose I can use cork 
functions with empty bodies?


I will check whether the linker optimizes them out...
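The "cork" functions I have in mind would be no-op stubs like the following (the symbol name is taken from the linker errors quoted below; note that extern(C) mangling only uses the function name, so the parameter list does not even have to match druntime's for the link to succeed):

```d
// No-op stub to satisfy the linker's undefined reference to _d_arraybounds.
// It should never actually run, since the array append only happens in CTFE;
// halt loudly if it somehow does.
extern(C) void _d_arraybounds()
{
    assert(0, "druntime stub called at run-time");
}
```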


CTFE and BetterC compatibility

2022-04-27 Thread Claude via Digitalmars-d-learn

Hello,

I want to make a SAX XML parser in D that I could use both at 
run-time and at compile-time.


Also when I use it at compile-time, I would like to use BetterC 
so I don't have to link D-runtime.


But I have some compilation problems. I use GDC (GCC 9.4.0).

Here's a reduced sample code:

```
struct Data
{
int[] digits;
}

int parseDigit(char c) pure
{
return c - '0';
}

Data parse(string str) pure
{
Data data;

while (str.length != 0)
{
// Skip spaces
while (str[0] == ' ')
str = str[1 .. $];

// Parse single digit integer
data.digits ~= parseDigit(str[0]);

// Consume digit
str = str[1 .. $];
}

return data;
}

enum Data parsedData = parse("5 4 2 6 9");

extern(C) int main()
{
pragma(msg, "First digit=", parsedData.digits[0]);
return 0;
}
```

If I compile and link against D-runtime, it works:
```
$ gcc test.d -lgdruntime -o test
First digit=5
```

If I compile with BetterC (no D-runtime for GDC), I get a 
compilation error about RTTI:

```
$ gcc test.d -fno-druntime -o test
test.d: In function ‘parse’:
test.d:25:21: error: ‘object.TypeInfo’ cannot be used with 
-fno-rtti

   25 | data.digits ~= parseDigit(str[0]);
  | ^
```

If I compile without the BetterC switch, compilation actually 
works but I'll have some linker issues:

```
$ gcc test.d -o test
First digit=5
/tmp/ccuPwjdv.o : In function 
« _D5test5parseFNaAyaZS5test4Data » :

test.d:(.text+0x137) : undefined reference to « _d_arraybounds »
test.d:(.text+0x183) : undefined reference to « _d_arraybounds »

etc...
```

The operation requiring the D-runtime is appending to the array, 
but it should **only** happen at compile-time.


I don't understand why it requires linking against the D-runtime 
when it is only needed at compile-time (and the compilation and 
CTFE interpretation work, as we can see in the last example).


Is there a way to force the compiler to not emit any object code 
for those functions?


Or am I missing something?

Regards,

Claude


Re: Problem with GC - linking C++ & D (with gdc)

2022-04-26 Thread Claude via Digitalmars-d-learn

On Tuesday, 26 April 2022 at 12:49:21 UTC, Alain De Vos wrote:

PS :
I use
```
ldc2 --gcc=cc ,
cc -v : clang version 11.0.1
```


We only have gcc in our toolchain (we target an ARM-based 
embedded system).


---

I also encountered problems while trying to use CTFE-only 
functions (using BetterC so I don't have to link 
Phobos/D-runtime).


However, if those functions use the GC (like appending to a 
dynamic array), I am required to link D-runtime, even though I 
only use them at compile-time. So I'm a bit confused... 
I'll try to get more information and reduce a code sample.


Re: Problem with GC - linking C++ & D (with gdc)

2022-04-26 Thread Claude via Digitalmars-d-learn

On Tuesday, 26 April 2022 at 10:29:39 UTC, Iain Buclaw wrote:

On Tuesday, 26 April 2022 at 10:23:15 UTC, Claude wrote:

Hello,



Hello,

<%--SNIP--%>



Does anyone have any idea what's going on?

(if I just compile a single D file with "int main() { int* a = 
new int(42); return *a; }", it works as intended.)


The `new` keyword requests the druntime GC to allocate memory, 
however you haven't initialized the D run-time in your program.


main.cpp
```D
extern "C" int rt_init();
extern "C" const int* ct_parse();

int main(int argc, char ** argv)
{
rt_init();
return *ct_parse();
}
```


Ok, thanks!
I should have suspected something like this.
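For symmetry, druntime also exports rt_term(), so a fuller version of that main.cpp would presumably pair the two calls (untested sketch; it still needs to be linked against druntime):

```cpp
// main.cpp -- initialize and terminate the D runtime around its use.
extern "C" int rt_init();   // returns non-zero on success
extern "C" int rt_term();   // runs module destructors, finalizes the GC
extern "C" const int* ct_parse();

int main(int argc, char** argv)
{
    if (!rt_init())
        return 1;
    const int result = *ct_parse();
    rt_term();
    return result;
}
```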


Re: Problem with GC - linking C++ & D (with gdc)

2022-04-26 Thread Claude via Digitalmars-d-learn

On Tuesday, 26 April 2022 at 10:23:15 UTC, Claude wrote:

It seg-faults...


Just to make it clear, it seg-faults at run-time (not at 
compilation or link time) when I launch the executable "test".


Problem with GC - linking C++ & D (with gdc)

2022-04-26 Thread Claude via Digitalmars-d-learn

Hello,

I'm working on a C++ project requiring an XML parser. I decided 
to make it in D so I could easily parse at run-time or 
compile-time as I wish.


As our project uses a gcc tool-chain, I naturally use GDC (GCC 
9.4.0).


But I have a few problems with D, linking with it, trying to use 
better-C and CTFE, etc.


Here's a reduced sample of one of my problems:

parser.d
```
extern(C) int* ct_parse()
{
int* a = new int(42);
return a;
}
```

main.cpp
```
extern "C" const int* ct_parse();

int main(int argc, char ** argv)
{
return *ct_parse();
}
```

Compiling/linking using the following command-lines:
```
gcc -c parser.d -o parser.o
gcc -std=c++17 -c main.cpp -o main.o
gcc main.o parser.o -lstdc++ -lgphobos -lgdruntime -o test
```

It seg-faults...

Here's the output of gdb:
```
Program received signal SIGSEGV, Segmentation fault.
0x7777858a in gc_qalloc () from 
/usr/lib/x86_64-linux-gnu/libgdruntime.so.76

```

Does anyone have any idea what's going on?

(if I just compile a single D file with "int main() { int* a = 
new int(42); return *a; }", it works as intended.)


Re: Conditional Compilation Multiple Versions

2017-01-09 Thread Claude via Digitalmars-d-learn

Druntime uses this for its translation of POSIX header files:

https://github.com/dlang/druntime/blob/master/src/core/sys/posix/config.d

An example:

https://github.com/dlang/druntime/blob/master/src/core/sys/posix/sys/resource.d#L96


Ok, I see. Thanks!
(I've gotta try reggae someday) :)




Re: Conditional Compilation Multiple Versions

2017-01-06 Thread Claude via Digitalmars-d-learn

On Friday, 6 January 2017 at 13:27:06 UTC, Mike Parker wrote:

version(Windows)
enum bool WindowsSupported = true;
else
enum bool WindowsSupported = false;


Well, yes, that was a bad example. I thought to change it before 
sending my post but I couldn't find any other meaningful alternative.
My point was that you can re-define WindowsSupported as a 
version even if it is already defined, but not as an enum. And 
sometimes, you cannot simply use the else statement without 
creating another indented block (which seems a bit awkward).


Yes, it works quite well for most use cases and should 
generally be preferred. I disagree that it scales, though. At 
some point (a point that is highly project-dependent), it 
breaks down, requiring either very large modules or duplicated 
versions across multiple modules.


Yes, in that case, you would probably break it down into several 
specialized config modules. I meant it forces you not to put 
version(Windows) directly into your code, but rather 
version(ThatFeatureSupportedByWindowsAmongstOtherOSs).


My position is that I will always choose version blocks first, 
but if I find myself in a situation where I have to choose 
between duplicating version statements (e.g. version(A) 
{version=AorB; version=AorC;}) across multiple modules and 
restructuring my code to accommodate versioning, I much prefer 
to use the enum alternative as an escape hatch.


Ok, that's interesting.
Do you have code samples where you do that? I'm just curious.


Re: Conditional Compilation Multiple Versions

2017-01-06 Thread Claude via Digitalmars-d-learn

On Thursday, 20 October 2016 at 09:58:07 UTC, Claude wrote:
I'm digging up that thread, as I want to do some multiple 
conditional compilation as well.


Well I'm digging up that thread again, but to post some positive 
experience feedback this time as I've found an answer to my own 
questions, and I thought I could share them.


I wanted to convert some C preprocessor code to D: thousands of 
conditional compilation directives (#ifdef, #if defined()) used in 
a program to determine the capabilities of a platform (number of 
CPU cores, SIMD availability, etc). So it had to check compiler 
types and versions, combined with the target architecture, the OS, 
the endianness and so on.


So the C implementation is a stream of:

#if defined(MYOS) || defined(ARCHITECTURE) && defined(__weirdstuff)
#define SPECIFIC_FEATURE
#else
# blabla
...

And I thought I would have to use some || and && operators in my D 
code as well.


So I did. I used the trick from Mike Parker and anonymous (see 
above in the thread) of declaring "enum bool"s to be checked later 
with "static if"s to implement specific features.


So I had a stream of:

version (Win32)
  enum bool WindowsSupported = true;
else
  enum bool WindowsSupported = false;

version (Win64)
  enum bool WindowsSupported = true; //Ooops
else
  enum bool WindowsSupported = false; //Ooops

It turned out to be not so readable (even when using a "string 
mixin" to make the code tighter), and I cannot define an enum 
twice without using "static if", which was a deal-breaker. Also, 
the small number of version identifiers for the D compilers (only 
4: DMD, GDC, LDC and SDC), as well as for the different OSs, made 
the code a lot tighter than the C version.


So I just dropped the enum definition thing and just used 
"version" as it was designed to be used:


version (Win32)
  version = WindowsSupported;
else version (Win64)
  version = WindowsSupported;
else etc...

So to my older question:

* Is there an "idiomatic" or "elegant" way of doing it? Should 
we use Mike Parker's solution, or the "template 
Version(string name)" solution (which basically just circumvents 
"version"'s specific limitation)?


That little experience showed that using version as it is 
currently designed is enough to elegantly cover my needs. And it 
seemed to scale well.
Also, I think it may force developers to handle all 
version-specific stuff in one dedicated module, defining their own 
version identifiers to list features derived from the compiler, OS 
and target-architecture identifiers; which is good coding practice 
anyway.


So:

module mylib.platform;

version (ThisOs)
 version = ThatFeature;
else
 version = blabla;
etc...

And:

module mylib.feature;

void doFeature()
{
version (ThatFeature)
  blabla;
}

But again, that's just my feedback from one single experience 
(even though I think that kind of code is quite common in C/C++ 
cross-platform libraries).


So I'm still curious as to why Walter designed "version" that 
particular way, and whether anyone has bumped into "version"'s 
(quasi-)limitations and what they think about it!




Re: Conditional Compilation Multiple Versions

2016-10-20 Thread Claude via Digitalmars-d-learn

On Saturday, 13 June 2015 at 12:21:50 UTC, ketmar wrote:

On Fri, 12 Jun 2015 20:41:59 -0400, bitwise wrote:


Is there a way to compile for multiple conditions?

Tried all these:

version(One | Two){ }
version(One || Two){ }
version(One && Two){ }
version(One) |  version(Two){ }
version(One) || version(Two){ }
version(One) && version(Two){ }

   Bit


nope. Walter is against that, so we'll not have it, despite the 
triviality of the patch.


I'm digging up that thread, as I want to do some multiple 
conditional compilation as well.


I have a couple of questions:
* Why is Walter against that? There must be some good reasons.
* Is there an "idiomatic" or "elegant" way of doing it? Should we 
use Mike Parker's solution, or the "template Version(string 
name)" solution (which basically just circumvents "version"'s 
specific limitation)?


Here's the kind of stuff I'd like to translate from C:

#if defined(_MSC_VER) && !defined(__INTEL_COMPILER)
#define YEP_MICROSOFT_COMPILER
#elif defined(__GNUC__) && !defined(__clang__) && !defined(__INTEL_COMPILER) && !defined(__CUDA_ARCH__)
#define YEP_GNU_COMPILER
#elif defined(__INTEL_COMPILER)
...

#if defined(_M_IX86) || defined(i386) || defined(__i386) || defined(__i386__) || defined(_X86_) || defined(__X86__) || defined(__I86__) || defined(__INTEL__) || defined(__THW_INTEL__)
#define YEP_X86_CPU
#define YEP_X86_ABI
#elif defined(_M_X64) || defined(_M_AMD64) || defined(__amd64__) || defined(__amd64) || defined(__x86_64__) || defined(__x86_64)
...
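For reference, the "template Version(string name)" workaround mentioned above is (as I understand it) just a string-mixin wrapper that turns a version identifier into a bool, so it can be combined with || and && in a static if; the identifiers below are illustrative:

```d
// Lowers a version identifier to a compile-time bool.
template Version(string name)
{
    mixin("version (" ~ name ~ ") enum Version = true;
           else enum Version = false;");
}

// Roughly analogous to the C #if chain above:
static if (Version!"X86" || Version!"X86_64")
{
    enum YEP_X86_CPU = true;
}
else
{
    enum YEP_X86_CPU = false;
}
```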


Re: Meta-programming: iterating over a container with different types

2016-10-20 Thread Claude via Digitalmars-d-learn

On Friday, 23 September 2016 at 12:55:42 UTC, deed wrote:

// Maybe you can try using std.variant?



Thanks for your answer.
However I cannot use variants, as I have to store the components 
natively in a void[] array (for cache coherency reasons).


So I found a way to solve that problem: delegate callbacks.
There may be more elegant solutions but well, it works.

Basically I register some kind of accessor delegate of the form:

void accessor(Entity e, Component* c)
{
  // Do stuff, like save the component struct for that entity in 
a file

}

And it is stored in the entity class in an array of delegates:

  void delegate(Entity e, void* c);


Here's a very basic implementation:

class Entity
{
public:
  void register!Component(Component val);
  void unregister!Component();
  Component getComponent!Component();

  alias CompDg = void delegate(Entity e, void* c);

  void accessor!Component(void delegate(Entity e, Component* c) 
dg) @property

  {
auto compId = getComponentId!Component;
mCompDg[compId] = cast(CompDg)dg;
  }


  // Iterating over the components
  void iterate()
  {
// For every possible components
foreach (compId; 0..mMaxNbComponents)
  if (isRegistered(compId))
if (mCompDg[compId] !is null)
  mCompDg[compId](this, getComponentStoragePtr(compId));
  }

private:
  void* getComponentStoragePtr(uint compId);
  bool isRegistered(uint compId);

  void[]   mComponentStorage;  // Flat contiguous storage of all 
components

  CompDg[] mCompDg;
  // ...
}

unittest
{
  // registering
  auto e = new Entity;
  e.register!int(42);
  e.register!string("loire");
  e.register!float(3.14);

  assert(e.getComponent!float() == 3.14); // that is OK

  e.accessor!int    = (Entity et, int i)    { writefln("%d", i); };
  e.accessor!string = (Entity et, string s) { writefln("%s", s); };
  e.accessor!float  = (Entity et, float f)  { writefln("%f", f); };


  // Print the values
  e.iterate();
}


Meta-programming: iterating over a container with different types

2016-09-23 Thread Claude via Digitalmars-d-learn
It's more a general meta-programming question than specific D 
stuff.


For an entity-component engine, I am trying to do some run-time 
composition: registering a certain type (component) to a 
structure (entity).


I would like to know how I can iterate an entity and get the 
different type instances registered to it.


Here is a simple example to clarify:

class Entity
{
  void register!Component(Component val);
  void unregister!Component();
  Component getComponent!Component();

  //iterating over the components (?!??)
  void opApply(blabla);
}

unittest
{
  // registering
  auto e = new Entity;
  e.register!int(42);
  e.register!string("loire");
  e.register!float(3.14);

  assert(e.getComponent!float() == 3.14); // that is OK

  // the code below is wrong, but how can I make that work??
  foreach (c; e)
  {
writeln(c); // it would display magically 42, "loire" and 3.14
// and c would have the correct type at each step
  }
}


Re: LDC with ARM backend

2016-08-09 Thread Claude via Digitalmars-d-learn

On Monday, 1 August 2016 at 06:21:48 UTC, Kai Nacke wrote:

Thanks! That's really awesome!

Did you manage to build more complex applications? EABI is a 
bit different from the hardfloat ABI and there may be still 
bugs lurking in LDC...


Unfortunately no, I didn't have the time.

I was interested in building audio applications in D, but I do 
not use much float arithmetic on embedded systems (I prefer 
integer/fixed-point over it). Anyway I have some pieces of DSP 
algorithms I could try out in float (FFT, biquads, FIR etc).


I could also try to run the phobos test suite on the board I use, 
if there is an easy way to do it (I'm pretty new to all this).


On Tuesday, 2 August 2016 at 04:19:15 UTC, Joakim wrote:
Sorry, I didn't see this thread till now, or I could have saved 
you some time by telling you not to apply the llvm patch on 
non-Android linux.  Note that you don't have to compile llvm 
yourself at all, as long as the system llvm has the ARM backend 
built in, as it often does.


Ah ok. I am totally new to llvm. I did it the hard way. :)


Re: LDC with ARM backend

2016-07-21 Thread Claude via Digitalmars-d-learn

On Thursday, 21 July 2016 at 10:30:55 UTC, Andrea Fontana wrote:

On Thursday, 21 July 2016 at 09:59:53 UTC, Claude wrote:
I can build a "Hello world" program on ARM GNU/Linux, with 
druntime and phobos.

I'll write a doc page about that.


It's a good idea :)


Done:

https://wiki.dlang.org/LDC_cross-compilation_for_ARM_GNU/Linux

I based it totally on Kai's previous page for LDC on Android.

It lacks the build for druntime/phobos unit-tests.


Re: LDC with ARM backend

2016-07-21 Thread Claude via Digitalmars-d-learn

On Wednesday, 20 July 2016 at 16:10:48 UTC, Claude wrote:

R_ARM_TLS_IE32 used with non-TLS symbol ??


Oh, that was actually quite obvious... If I revert the first 
android patch on LLVM sources, and build it back it works!


I can build a "Hello world" program on ARM GNU/Linux, with 
druntime and phobos.

I'll write a doc page about that.


Re: LDC with ARM backend

2016-07-20 Thread Claude via Digitalmars-d-learn
So I'm trying to build druntime correctly, I corrected some 
problems here and there, but I still cannot link with 
libdruntime-ldc.a:



/opt/arm-2009q1/bin/arm-none-linux-gnueabi-gcc loire.o 
lib/libdruntime-ldc.a -o loire



I get many errors like:

/opt/arm-2009q1/bin/../lib/gcc/arm-none-linux-gnueabi/4.3.3/../../../../arm-none-linux-gnueabi/bin/ld:
 
lib/libdruntime-ldc.a(libunwind.o)(.text._D3ldc2eh6common61__T21eh_personality_commonTS3ldc2eh9libunwind13NativeContextZ21eh_personality_commonUKS3ldc2eh9libunwind13NativeContextZ3acbMFNbNcNiNfZPS3ldc2eh6common18ActiveCleanupBlock[_D3ldc2eh6common61__T21eh_personality_commonTS3ldc2eh9libunwind13NativeContextZ21eh_personality_commonUKS3ldc2eh9libunwind13NativeContextZ3acbMFNbNcNiNfZPS3ldc2eh6common18ActiveCleanupBlock]+0x38):
 R_ARM_TLS_IE32 used with non-TLS symbol 
_D3ldc2eh6common21innermostCleanupBlockPS3ldc2eh6common18ActiveCleanupBlock




R_ARM_TLS_IE32 used with non-TLS symbol ??


Re: LDC with ARM backend

2016-07-20 Thread Claude via Digitalmars-d-learn

I think my cross-compile LDC is fine.

I tried to build this D program:

/// loire.d
int main()
{
return 42;
}



However, the run-time is not (neither is Phobos); most of the 
linker issues come from druntime. So...


I wrote my own druntime. Here's the code:

/// dummyruntime.d
// from rt/sections_elf_shared.d, probably don't need it right 
now...

extern(C) void _d_dso_registry(void* data)
{
}

// from rt/dmain2.d, just call my D main(), ignore args...
private alias extern(C) int function(char[][] args) MainFunc;

extern (C) int _d_run_main(int argc, char **argv, MainFunc 
mainFunc)

{
return mainFunc(null);
}



I built everything:

# Compilation
./bin/ldc2 -mtriple=arm-none-linux-gnueabi -c loire.d
./bin/ldc2 -mtriple=arm-none-linux-gnueabi -c dummyruntime.d
# Link
/opt/arm-2009q1/bin/arm-none-linux-gnueabi-gcc loire.o 
dummyruntime.o -o loire



And I ran it successfully on my ARM target:

$> loire
$> echo $?
42



So now I know I have a proper LDC cross-compiler! :)

I'm just missing a proper druntime and phobos for GNU/Linux ARM.


Re: LDC with ARM backend

2016-07-19 Thread Claude via Digitalmars-d-learn

On Friday, 15 July 2016 at 15:24:36 UTC, Kai Nacke wrote:
There is a reason why we do not distribute a binary version of 
LDC with all LLVM targets enabled. LDC still uses the real 
format of the host. This is different on ARM (80bit on 
Linux/x86 vs. 64bit on Linux/ARM). Do not expect that 
applications using real type work correctly.
(The Windows version of LDC uses 64bit reals. The binary build 
has the ARM target enabled.)


Regards,
Kai


Hello Kai,

Thanks for your answer.

From the link https://wiki.dlang.org/Build_LDC_for_Android , I 
did exactly the same steps described in section "Compile LLVM" 
(patch applied).


At section "Build ldc for Android/ARM", I did it quite the same. 
I applied the patch ldc_1.0.0_android_arm, but changed 
runtime/CMakeList.txt, instead of using Android specific stuff, I 
did:



Line 15:
set(D_FLAGS   -w;-mtriple=arm-none-linux-gnueabi  
  CACHE STRING  "Runtime build flags, separated by ;")


Line 505:
#
# Set up build targets.
#
set(RT_CFLAGS "-g")
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_C_COMPILER 
/opt/arm-2009q1/bin/arm-none-linux-gnueabi-gcc)
set(CMAKE_CXX_COMPILER 
/opt/arm-2009q1/bin/arm-none-linux-gnueabi-c++)




On the command line, I aliased DMD to /usr/bin/dmd and ran cmake 
as described...


Afterwards, I ran make for ldc2, phobos2-ldc and druntime-ldc, but 
I did not apply the patches on phobos and runtime. It looked like 
the patch introduced some static compilation towards Android, so I 
thought it would not apply to my needs.


So here's what I get if I do a "ldc2 -version":


LDC - the LLVM D compiler (1.0.0):
  based on DMD v2.070.2 and LLVM 3.8.1
  built with DMD64 D Compiler v2.071.1
  Default target: x86_64-unknown-linux-gnu
  Host CPU: westmere
  http://dlang.org - http://wiki.dlang.org/LDC

  Registered Targets:
arm - ARM
armeb   - ARM (big endian)
thumb   - Thumb
thumbeb - Thumb (big endian)



I can compile (but not link) a "hello world" program:
./bin/ldc2 -mtriple=arm-none-linux-gnueabi test.d

I get the expected "test.o"

But I don't know how to link it. I don't have "clang". I tried to 
link it with the gcc from the GNU ARM toolchain with 
libdruntime-ldc.a, libldc.a and libphobos2-ldc.a, but it fails 
miserably: many undefined symbols (pthread and some other 
OS-related stuff).


Re: LDC with ARM backend

2016-07-15 Thread Claude via Digitalmars-d-learn

On Friday, 15 July 2016 at 15:02:15 UTC, Radu wrote:

Hi,
LDC on Linux ARM is fairly complete. I think it is a fully 
supported platform (all tests are passing). Check in 
https://wiki.dlang.org/Compilers the LDC column.


This is the closest to a tutorial for cross-compiling builds: 
https://wiki.dlang.org/Build_LDC_for_Android


Great, I didn't see it.

However I don't use Android on my ARM target, I have a 
arm-none-linux-gnueabi toolchain.


I think I have to change the Android patch, keep the "80-bit 
float" stuff, and modify the build scripts somehow to use the GNU 
version.


LDC with ARM backend

2016-07-15 Thread Claude via Digitalmars-d-learn

Hello,

I would like to cross-compile a D program from a x86 machine to 
an ARM target.


I work on GNU/Linux Ubuntu 64-bit.
I have an ARM gcc toolchain, which I can use to make programs on 
an ARM Cortex-A9 architecture running a Linux kernel 3.4.11+.


I managed to build and install LLVM 3.8.1 with LDC 1.1-alpha1, 
which works fine to build and run native programs.


I read some documentation here:
http://wiki.dlang.org/Minimal_semihosted_ARM_Cortex-M_%22Hello_World%22

... but it seems to target bare-metal programming, whereas I 
already have GNU/Linux running on my ARM target and want to use 
it. It does not tell how to get an LDC with an ARM backend.


So I'm a bit confused about the current state of LDC+ARM. 
For example, is the run-time fully ported to ARM/Linux?


What would be the steps to have an LDC cross-compiling to ARM?

Thanks


Re: Dynamic arrays, emplace and GC

2016-07-05 Thread Claude via Digitalmars-d-learn

On Tuesday, 5 July 2016 at 12:43:14 UTC, ketmar wrote:

On Tuesday, 5 July 2016 at 10:04:05 UTC, Claude wrote:

So here's my question: Is it normal???


yes. `ubyte` arrays by definition cannot hold pointers, so GC 
doesn't bother to scan 'em.


Ah ok. I tried using a void[size] static array and it seems to 
work without having to use GC.addRange().
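My understanding of why that works (an assumption on my part about the conservative GC, not something I verified in the druntime sources): ubyte[] storage is treated as pointer-free (NO_SCAN), while void[] is assumed to possibly contain pointers and is scanned. So in the Pool example below, switching the Storage field type changes whether emplaced references are kept alive:

```d
class Foo { int aa; }

struct Storage
{
    // Scanned conservatively: a Foo reference emplaced here keeps
    // its referent alive, no GC.addRange() needed.
    void[__traits(classInstanceSize, Foo)] data;

    // With ubyte instead, the block is treated as pointer-free
    // (NO_SCAN), so the same reference would be invisible to the GC:
    //ubyte[__traits(classInstanceSize, Foo)] data;
}
```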


Dynamic arrays, emplace and GC

2016-07-05 Thread Claude via Digitalmars-d-learn

Hello,

I've been working on some kind of allocator using a dynamic array 
as a memory pool. I used emplace to allocate class instances 
within that array, and I was surprised to see I had to use 
GC.addRange() to keep the GC from destroying stuff referenced from 
that array.


Here's a chunk of code[1]:

struct Pool(T)
{
public:
T alloc(Args...)(Args args)
{
mData.length++;
import core.memory : GC;
//GC.addRange(mData[$ - 1].data.ptr, mData[$ - 1].data.length);

import std.conv : emplace;
auto t = emplace!T(mData[$ - 1].data, args);
return t;
}

private:
struct Storage
{
ubyte[__traits(classInstanceSize, T)] data;
}

Storage[] mData;
}

class Foo
{
this(int a)
{
aa = a;
}
~this()
{
import std.stdio; writefln("DTOR");
aa = 0;
}
int aa;
}

class Blob
{
this(int b)
{
foo = new Foo(b);
}

Foo foo;
}

void main()
{
Pool!Blob pool;

Blob blob;
foreach(a; 0 .. 1)
{
blob = pool.alloc(6);
}
while(true){}
import std.stdio; writefln("END");
}

Basically, Blob instances are allocated in the array using 
emplace, and Blob holds a reference to a Foo. If I comment out 
GC.addRange(), I see that Foo's destructor is called by the GC[2]. 
If I leave it uncommented, the GC leaves the array alone.


So here's my question: Is it normal???
I thought that allocating memory in a dynamic array using 
"mData.length++;" was GC-compliant (unlike 
core.stdc.stdlib.malloc()), and that I did not have to explicitly 
use GC.addRange().



[1] I left out alignment management code. It's not the issue here.
[2] I used the helpful destructor tracker function of p0nce 
there: https://p0nce.github.io/d-idioms/#GC-proof-resource-class


Re: Procedural drawing using ndslice

2016-02-12 Thread Claude via Digitalmars-d-learn

Thanks for your replies, John and Ali. I wasn't sure I was clear.

I'm going to try to see if I can fit Ali's concept (totally lazy, 
which is what I was looking for) within ndslices, so that I can 
also use it in 3D, apply a window() function to the result and 
mess around with it.


Procedural drawing using ndslice

2016-02-11 Thread Claude via Digitalmars-d-learn

Hello,

I come from the C world and am trying to do some procedural 
terrain generation. I thought ndslice would help me make things 
look clean, but I'm very new to those semantics and I need help.


Here's my problem: I have a C-style rough implementation of a 
function drawing a disk into a 2D buffer. Here it is:



import std.math;
import std.stdio;

void draw(ref float[16][16] buf, int x0, int y0, int x1, int y1)
{
float xc = cast(float)(x0 + x1) / 2;
float yc = cast(float)(y0 + y1) / 2;
float xr = cast(float)(x1 - x0) / 2;
float yr = cast(float)(y1 - y0) / 2;

float disk(size_t x, size_t y)
{
float xx, yy;
xx = (x - xc) / xr;
yy = (y - yc) / yr;
return 1.0 - sqrt(xx * xx + yy * yy);
}

for (int y = 0; y < 16; y++)
{
for (int x = 0; x < 16; x++)
{
buf[x][y] = disk(x, y);
writef(" % 3.1f", buf[x][y]);
}
writeln("");
}
}

void main()
{
float[16][16] buf;

draw(buf, 2, 2, 10, 10);
}


The final buffer contains values where positive floats are inside 
the disk, negative ones are outside, and 0's represent the 
perimeter of the disk.


I would like to simplify the code of draw() to make it look more 
something like:


Slice!(stuff) draw(int x0, int y0, int x1, int y1)
{
float disk(size_t x, size_t y)
{
// ...same as above
}

return Slice!stuff.something!disk.somethingElseMaybe;
}

Is it possible?

Do I need to back the slice with an array, or could the slice 
be used lazily and modified as I want using some other drawing 
functions?


auto diskNoiseSlice = diskSlice.something!AddNoiseFunction;

... until I do a:

auto buf = mySlice.array;

... where the buffer would be allocated in memory and filled with 
the values according to all the drawing primitives I used on the 
slice.


Re: Segfault while compiling simple program

2015-12-16 Thread Claude via Digitalmars-d-learn

I tested it on linux (64-bit distro), and it segfaults as well:

-

$ echo "struct S { ushort a, b; ubyte c, d; } struct T { ushort 
e; S s; }" > test.d


$ dmd -v test.d
binary    dmd
version   v2.069.0
config    /etc/dmd.conf
parse     test
importall test
import    object (/usr/include/dmd/druntime/import/object.d)
semantic  test
Segmentation fault

$ uname -r
3.13.0-37-generic

$ cat /etc/issue
Linux Mint 17.1 Rebecca \n \l

$ dmd --version
DMD64 D Compiler v2.069.0
Copyright (c) 1999-2015 by Digital Mars written by Walter Bright

-

It doesn't crash if compiled in 32-bit:

-

$ dmd -v -m32 test.d
binary    dmd
version   v2.069.0
config    /etc/dmd.conf
parse     test
importall test
import    object (/usr/include/dmd/druntime/import/object.d)
semantic  test
semantic2 test
semantic3 test
code      test
gcc test.o -o test -m32 -L/usr/lib/i386-linux-gnu -Xlinker 
--export-dynamic -Xlinker -Bstatic -lphobos2 -Xlinker -Bdynamic 
-lpthread -lm -lrt -ldl
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../lib32/crt1.o: In 
function `_start':

(.text+0x18): undefined reference to `main'
collect2: error: ld returned 1 exit status
--- errorlevel 1

-

Using ubyte[2] or swapping the fields also solves the issue, as 
mentioned above.


I can also reproduce the issue using DMD v2.069.2.

So it may be good to add that information in the bug-report.




Re: Define methods using templates

2015-01-08 Thread Claude via Digitalmars-d-learn
I just saw this post, which is essentially the same question as 
Basile Burg's. I hope that a college (in France?) is teaching D 
and that this is a homework assignment. Cool stuff! :)


Maybe using templates to create properties is a bit overkill in 
this example. But I could not solve what I thought would be a 
very simple and straightforward template use-case (initially 
I'm an embedded RT system C/asm developer).


I'm doing this for a personal project of a 3D engine. As I know 
little about C++/Java or other OO language, I thought I would do 
it directly in D, which seems very promising to me (but 
unfortunately not taught in France as far as I know).


Define methods using templates

2014-12-30 Thread Claude via Digitalmars-d-learn
Hello, I'm trying to use templates to define several methods 
(property setters) within a class to avoid some code duplication.

Here is an attempt:

class Camera
{
private:
Vector4 m_pos;
float m_fov, m_ratio, m_near, m_far;
bool m_matrixCalculated;

public:
void SetProperty(Tin, alias Field)(ref Tin param) @property 
pure @safe

{
Field = param;
m_matrixCalculated = false;
}

alias pos   = SetProperty!(float[], m_pos);
alias pos   = SetProperty!(Vector4, m_pos);
alias ratio = SetProperty!(float,   m_ratio);
alias near  = SetProperty!(float,   m_near);
alias far   = SetProperty!(float,   m_far);
}

I get this kind of compilation error:
Error: template instance SetProperty!(float[], m_pos) cannot use 
local 'm_pos' as parameter to non-global template 
SetProperty(Tin, alias Field)(ref Tin param)


I don't understand why that error occurs.

And I cannot find any elegant solution (even with mixins) to 
declare a template and then instantiate it in a single line to 
define the methods I want.


Does any of you have an idea?

Thanks


Re: Define methods using templates

2014-12-30 Thread Claude via Digitalmars-d-learn

Thanks Steven and Daniel for your explanations.


mixin template opAssign(alias Field) {
void opAssign(Tin)(auto ref Tin param) @property pure 
@safe

{
Field = param;
m_matrixCalculated = false;
}
}

mixin opAssign!(m_pos)   pos;


I tested both the string mixin and opAssign implementations, 
and they work like a charm.
I would have never thought of using both @property and 
opAssign, but it looks like a secure way of doing it, as 
compilation fails nicely if I type a wrong field in.


src/camera.d(58): Error: undefined identifier m_os, did you mean 
variable m_pos?
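Putting the pieces of this thread together, a complete compilable sketch might look like this (Vector4 is a stand-in type, and the field/alias names follow the earlier posts):

```d
struct Vector4 { float x = 0, y = 0, z = 0, w = 0; }

class Camera
{
private:
    Vector4 m_pos;
    float m_ratio = 0;
    bool m_matrixCalculated;

public:
    // One mixin template generates a property-like setter per field.
    mixin template Setter(alias Field)
    {
        void opAssign(Tin)(auto ref Tin param) @property pure @safe
        {
            Field = param;
            m_matrixCalculated = false;
        }
    }

    mixin Setter!(m_pos)   pos;
    mixin Setter!(m_ratio) ratio;
}

unittest
{
    auto cam = new Camera;
    cam.pos   = Vector4(1, 2, 3, 4);  // dispatches to pos.opAssign
    cam.ratio = 16.0f / 9.0f;
}
```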