Re: Linking on MS Windows.

2016-08-06 Thread Kai Nacke via Digitalmars-d-learn

On Friday, 5 August 2016 at 18:28:48 UTC, ciechowoj wrote:
Is the default dmd linker on MS Windows (OPTLINK) supposed to 
link against static libraries created with Visual Studio?


Specifically I want to link a project compiled on windows with 
dmd against pre-compiled library `libclang.lib` from LLVM 
suite. I'm pretty sure they used Visual Studio to compile the 
library.


If you are already using Visual Studio and LLVM/clang then why 
not use ldc? The compiler itself is built with this toolchain...
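For context on the original question: OPTLINK only understands 32-bit OMF object files, while libraries built with Visual Studio are COFF. With dmd, building for 64-bit switches to the Microsoft linker, which can consume such libraries. A command sketch (assumes Visual Studio is installed and `app.d` / `libclang.lib` are in the current directory):

```shell
# Build with the MS COFF toolchain instead of OPTLINK,
# so a Visual-Studio-built static library can be linked.
dmd -m64 app.d libclang.lib
```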


Regards,
Kai


Re: LDC with ARM backend

2016-08-01 Thread Kai Nacke via Digitalmars-d-learn

On Thursday, 21 July 2016 at 13:13:39 UTC, Claude wrote:

On Thursday, 21 July 2016 at 10:30:55 UTC, Andrea Fontana wrote:

On Thursday, 21 July 2016 at 09:59:53 UTC, Claude wrote:
I can build a "Hello world" program on ARM GNU/Linux, with 
druntime and phobos.

I'll write a doc page about that.


It's a good idea :)


Done:

https://wiki.dlang.org/LDC_cross-compilation_for_ARM_GNU/Linux

I based it totally on Kai's previous page for LDC on Android.

It lacks the build for druntime/phobos unit-tests.


Thanks! That's really awesome!

Did you manage to build more complex applications? EABI is a bit 
different from the hardfloat ABI, and there may still be bugs 
lurking in LDC...


Regards,
Kai


Re: LDC with ARM backend

2016-07-15 Thread Kai Nacke via Digitalmars-d-learn

Hi Claude!

On Friday, 15 July 2016 at 14:09:40 UTC, Claude wrote:

Hello,

I would like to cross-compile a D program from a x86 machine to 
an ARM target.

[...]
So I'm a bit confused of what the current state of LDC+ARM is. 
For example, is the run-time fully ported on ARM/Linux?


LDC is fully ported to Linux/ARM. The current release also 
includes LDC pre-compiled for ARMv7 with hard floats (e.g. it 
matches recent Raspberry Pi hardware). On such a platform you can 
simply unpack the binary package and LDC should run out of the 
box.



What would be the steps to have an LDC cross-compiling to ARM?


That is a somewhat different story. First, you need to build LLVM 
with support for ARM and then compile LDC against this version of 
LLVM. You can run ldc2 -version to see if the ARM target is 
supported. As Radu already mentioned, most of the required steps 
are described in the wiki at 
https://wiki.dlang.org/Build_LDC_for_Android. There is also an 
old news post about cross-compiling to AArch64: 
http://forum.dlang.org/post/fhwvxatxezkafnalw...@forum.dlang.org.
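The steps described above might look roughly like this (an untested command sketch; directory layout, versions, and the `LLVM_ROOT_DIR` path are illustrative):

```shell
# 1. Build LLVM with the ARM backends enabled.
cmake -G Ninja ../llvm \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_TARGETS_TO_BUILD="X86;ARM;AArch64"
ninja

# 2. Build LDC against that LLVM.
cmake -G Ninja ../ldc -DLLVM_ROOT_DIR=/path/to/llvm-install
ninja

# 3. Check that the ARM target is supported.
bin/ldc2 -version   # the "Registered Targets" list should include arm
```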


There is a reason why we do not distribute a binary version of 
LDC with all LLVM targets enabled: LDC still uses the real format 
of the host, and this differs on ARM (80-bit on Linux/x86 vs. 
64-bit on Linux/ARM). Do not expect applications that use the 
real type to work correctly.
(The Windows version of LDC uses 64-bit reals. Its binary build 
has the ARM target enabled.)
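The host real format mentioned above can be inspected with a few lines of D (a small sketch; the printed values depend on the platform):

```d
// Print the properties of `real` on the current platform.
// On Linux/x86 `real` is the 80-bit x87 format (64-bit mantissa);
// on Linux/ARM and Windows it is a 64-bit double (53-bit mantissa).
import std.stdio;

void main()
{
    writeln("real.sizeof:   ", real.sizeof);    // storage size in bytes
    writeln("real.mant_dig: ", real.mant_dig);  // 64 on x87, 53 where real == double
}
```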


Regards,
Kai


Re: bigint compile time errors

2015-07-10 Thread Kai Nacke via Digitalmars-d-learn

On Tuesday, 7 July 2015 at 22:19:22 UTC, Paul D Anderson wrote:

On Sunday, 5 July 2015 at 20:35:03 UTC, Kai Nacke wrote:

On Friday, 3 July 2015 at 04:08:32 UTC, Paul D Anderson wrote:

On Friday, 3 July 2015 at 03:57:57 UTC, Anon wrote:
On Friday, 3 July 2015 at 02:37:00 UTC, Paul D Anderson 
wrote:



[...]


Should be plusTwo(in BigInt n) instead.



Yes, I had aliased BigInt to bigint.

And I checked: it compiles for me too on Windows with -m64. 
That makes it seem more like a bug than a feature.


I'll open a bug report.

Paul


The point here is that x86 uses an assembler-optimized 
implementation (std.internal.math.biguintx86), and every other 
CPU architecture (including x64) uses a D version 
(std.internal.math.biguintnoasm). Because of the inline 
assembler, the x86 version is not CTFE-enabled.


Regards,
Kai


Could we add a version or some other flag that would allow the 
use of .biguintnoasm with the x86?


Paul


biguintx86 could import biguintnoasm. Every function would need 
to check for CTFE and, if so, call the noasm function. That 
should work but requires some effort.


Regards,
Kai


Re: bigint compile time errors

2015-07-05 Thread Kai Nacke via Digitalmars-d-learn

On Friday, 3 July 2015 at 04:08:32 UTC, Paul D Anderson wrote:

On Friday, 3 July 2015 at 03:57:57 UTC, Anon wrote:

On Friday, 3 July 2015 at 02:37:00 UTC, Paul D Anderson wrote:



enum BigInt test1 = BigInt(123);
enum BigInt test2 = plusTwo(test1);

public static BigInt plusTwo(in bigint n)


Should be plusTwo(in BigInt n) instead.



Yes, I had aliased BigInt to bigint.

And I checked: it compiles for me too on Windows with -m64. That 
makes it seem more like a bug than a feature.


I'll open a bug report.

Paul


The point here is that x86 uses an assembler-optimized 
implementation (std.internal.math.biguintx86), and every other CPU 
architecture (including x64) uses a D version 
(std.internal.math.biguintnoasm). Because of the inline 
assembler, the x86 version is not CTFE-enabled.


Regards,
Kai


Re: DMD 64 bit on Windows

2015-04-14 Thread Kai Nacke via Digitalmars-d-learn

On Tuesday, 14 April 2015 at 01:31:27 UTC, Etienne wrote:
I'm currently experiencing Out Of Memory errors when compiling 
in DMD on Windows


Has anyone found a way to compile a DMD x86_64 compiler on 
Windows?


Short recipe:
Download Visual Studio 2013 Community Edition.
Download the DMD source OR clone it from the GitHub repository.
Start VS 2013.
Open the solution dmd_msc_vs10.sln (in folder src).
Right-click the solution dmd_msc_vs10 and select Properties.
Change Configuration to Release and Platform to x64.
Right-click the solution dmd_msc_vs10 and select Rebuild.
The result is a 64-bit exe, dmd_msc.exe, in folder src.
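The same rebuild can presumably be driven from a VS 2013 command prompt with msbuild instead of the IDE (an untested sketch of the equivalent command):

```shell
# Rebuild the 64-bit Release configuration from the command line.
msbuild dmd_msc_vs10.sln /t:Rebuild /p:Configuration=Release /p:Platform=x64
```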

Regards,
Kai