Re: using DCompute

2017-07-27 Thread James Dean via Digitalmars-d-learn

On Friday, 28 July 2017 at 00:23:35 UTC, Nicholas Wilson wrote:

On Thursday, 27 July 2017 at 21:33:29 UTC, James Dean wrote:
I'm interested in trying it out; it says it's just for LDC. Can 
we simply compile it using LDC, then import it and use DMD, 
LDC, or GDC afterwards?


The ability to write kernels is limited to LDC, though there is 
no practical reason that, once compiled, you couldn't use the 
resulting generated files with GDC or DMD (as long as the 
mangling matches, which it should). This is not a priority to 
get working, since the assumption is that if you're trying to 
use the GPU to boost your computing power, then you likely care 
enough to use LDC, as opposed to DMD (GDC is still a bit behind 
DMD, so I don't consider it), to get good optimisations in the 
first place.




Yes, but dmd is still good for development since LDC sometimes 
has problems.


Can we compile kernels in LDC and import them into a D project 
seamlessly? Basically, keep an LDC project that deals with the 
kernels while using DMD for the bulk of the program. I mean, is 
it a simple import/export type of issue?
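
For reference, a device-only kernel module for DCompute looks 
roughly like the sketch below, adapted from the examples shipped 
with DCompute (the module names ldc.dcompute and 
dcompute.std.index, and the attribute spellings, are my 
assumptions from those examples). Only this module needs LDC; 
the host-side code that launches it is ordinary D.

@compute(CompileFor.deviceOnly) module kernels;
import ldc.dcompute;        // @kernel, @compute, GlobalPointer
import dcompute.std.index;  // GlobalIndex

// SAXPY: res[i] = alpha * x[i] + y[i]
@kernel void saxpy(GlobalPointer!float res,
                   float alpha,
                   GlobalPointer!float x,
                   GlobalPointer!float y,
                   size_t N)
{
    auto i = GlobalIndex.x;
    if (i >= N) return;
    res[i] = alpha * x[i] + y[i];
}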






using DCompute

2017-07-27 Thread James Dean via Digitalmars-d-learn
I'm interested in trying it out; it says it's just for LDC. Can we 
simply compile it using LDC, then import it and use DMD, LDC, or 
GDC afterwards?


---
a SPIRV-capable LLVM (available here) to build LDC with support 
for SPIR-V (required for OpenCL),
or LDC built with any LLVM 3.9.1 or greater that has the NVPTX 
backend enabled, to support CUDA.

---

Does the LDC from the download page have these enabled?
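
As far as I know, ldc2 -version lists the registered LLVM 
targets, so one rough way to check a particular build (this is 
my assumption about the stock packages, not something from the 
DCompute docs) is:

$ ldc2 -version | grep -i -e nvptx -e spirv

NVPTX is a standard LLVM backend and is usually enabled in 
release builds, while the SPIR-V target needs LDC built against 
the SPIRV-LLVM fork, so it generally won't appear in a prebuilt 
download.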

Also, can DCompute, or GPU code in general, render efficiently 
because the data is already on the GPU, or does one have to jump 
through hoops to, say, render a buffer?


E.g., suppose I want to compute a 3D mathematical function and 
visualize its volume. Do I go into the GPU, do the compute, come 
back out to the CPU, then go to the graphics system 
(OpenGL/DirectX), or can I essentially do it all on the GPU?




Randomized/encoded code

2017-07-27 Thread James Dean via Digitalmars-d-learn
I would like to encode code in such a way that each compilation 
produces "random" code as compared to what the previous 
compilation produced, but ultimately the same code is run each 
time (same effect).


Basically, we can code a function that does a job X in many 
different ways. Each way looks different in binary but does the 
same job (same effect). I'd like a way to randomly 
sample/generate the different functions that do the same job.



The easiest way to wrap your head around this is to realize that 
certain instructions and groups of instructions can be 
rearranged, producing a binary that is different while the 
effect is the same. Ultimately, that is probably all that can be 
done (certain other tricks could be added to increase the 
sampling coverage, such as nop-like instructions, dummy 
instructions, etc.).


The main issue is how to take an actual D function and transform 
it into a new D function which, when run, ultimately does the 
same thing as the original but is not the same binary.
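
One crude way to approach this in D, without touching the 
compiler, is to generate the function body as a string at 
compile time, seeded by something that changes per build (here 
__TIMESTAMP__), and mix it in. A minimal sketch follows; all 
names are made up, and an optimizing compiler will likely strip 
the dead padding, so this only illustrates the mixin mechanism, 
not real binary diversity:

module randomized;
import std.conv : to;

// Tiny compile-time xorshift step, purely to vary the padding.
uint nextState(uint s)
{
    s ^= s << 13; s ^= s >> 17; s ^= s << 5;
    return s;
}

// Build a body computing a + b, padded with a build-dependent
// amount of semantically dead arithmetic so each compilation
// emits a differently shaped (but equivalent) function.
string buildAdd(uint seed)
{
    string code = "int tmp = a;\n";
    uint s = seed;
    foreach (i; 0 .. (seed % 5) + 1)
    {
        s = nextState(s);
        code ~= "int pad" ~ i.to!string ~ " = tmp ^ "
              ~ (s & 0xff).to!string ~ ";\n";
        code ~= "pad" ~ i.to!string ~ " += "
              ~ ((s >> 8) & 0xff).to!string ~ ";\n";
    }
    return code ~ "return tmp + b;\n";
}

// Seed derived from the build timestamp, so each build differs.
enum uint buildSeed = () {
    uint h = 2166136261u;
    foreach (c; __TIMESTAMP__) h = (h ^ c) * 16777619u;
    return h;
}();

int add(int a, int b) { mixin(buildAdd(buildSeed)); }

unittest { assert(add(2, 3) == 5); }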


Encryption is a subset of this problem, since we can take any 
string, use it to encode the code, and then decrypt it. That may 
be usable, but then the encryption and decryption instructions 
must themselves be randomizable, or else we are back at square 
one. It might be easier, though, to use the encryption method to 
randomize the original function, since the encryption routine is 
known while the original function is not (it could be any 
function).


I'm not looking for a mathematical solution, just something that 
works well in practice, i.e., the most skilled human reading the 
disassembly would find it very difficult to interpret what is 
going on. He might be able to figure out one encryption routine, 
say, but when he sees the "same code" (same effect) he will have 
to start from scratch to understand it, because it's been 
"randomized".


The best way I can see to do this is to have a list of 
well-known encoding routines that take an arbitrary function and 
encrypt it. Each routine can be "randomized" by using various 
techniques to disguise it, such as those mentioned earlier. This 
expands the list of functions tremendously: if there are N 
functions and M different ways to alter each of them, then there 
are N*M total functions that we can use to encrypt the original 
function. If we further allow composition of these functions, we 
get several orders of magnitude more complexity even with a 
small N (composing k of the N*M variants already gives (N*M)^k 
possibilities).


The goal, though, is to do this efficiently and effectively, in 
a way that can be amended later. It would be useful for copy 
protection, and combined with other techniques it becomes much 
more effective. Ultimately the weak point of the encryption 
approach is decrypting the functions, but composing encryption 
routines makes it stronger.


Any ideas how to achieve this in D nicely?







Code Construction

2017-07-14 Thread James Dean via Digitalmars-d-learn
Here's a module I just started working on, completely 
incomplete, but it demonstrates an idea that might be very 
useful in D: code construction.


The idea is very simple: We have code strings like "class 
%%name%% { }"


and %%name%% is replaced with the name of a type T.

The idea is that we can map a type to a code string and have the 
type "fill in the blanks", rather than having to do it all 
directly in D, which is far more verbose and tedious.


Eventually the idea is that we can take any type T, map it in to 
a code string, then map that string to a new type that relates to 
T.


What I use this for is to simplify generating related members.

enum eState
{
    Ready,
    Starting,
    Running,
    Pausing,
    Paused,
    Unpausing,
    Stopping,
    Stopped,
    Finished,
}


Can be used to generate new stuff:


mixin(sCodeConstruction.MemberMap!(
    "\t\tsMultiCallback!callbackSignature Callback_%%name%%;\n", eState));

Generates the following code:

sMultiCallback!callbackSignature Callback_Ready;
sMultiCallback!callbackSignature Callback_Starting;
sMultiCallback!callbackSignature Callback_Running;
sMultiCallback!callbackSignature Callback_Pausing;
sMultiCallback!callbackSignature Callback_Paused;
sMultiCallback!callbackSignature Callback_Unpausing;
sMultiCallback!callbackSignature Callback_Stopping;
sMultiCallback!callbackSignature Callback_Stopped;
sMultiCallback!callbackSignature Callback_Finished;


and if the enum changes, one does not have to regenerate the 
callbacks. (Obviously this is a simple case and D can handle it 
reasonably easily, but the more complex cases are far more 
verbose than what can be done with substitutions.)


Using a substitution grammar is a lot easier IMO, and D needs 
something like it. Unfortunately, I won't be finishing it 
anytime soon, so I thought I'd mention the idea and maybe 
someone else could tackle it.
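
Since MemberMap itself didn't make it into the excerpt below, 
here is a minimal, self-contained sketch of what it might look 
like (MemberMap's body and the stand-in Callback struct are my 
guesses, not code from the original module):

import std.array : replace;

// Substitute %%name%% with each member name of an enum and
// concatenate the results into a string suitable for mixin().
template MemberMap(string code, E) if (is(E == enum))
{
    enum MemberMap = () {
        string result;
        foreach (m; __traits(allMembers, E))
            result ~= code.replace("%%name%%", m);
        return result;
    }();
}

// Usage:
enum eState { Ready, Running, Stopped }
struct Callback(string sig) {} // stand-in for sMultiCallback!callbackSignature

mixin(MemberMap!("Callback!\"void()\" Callback_%%name%%;\n", eState));

static assert(is(typeof(Callback_Ready) == Callback!"void()"));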




module mCodeConstruction;
import std.meta, std.traits, std.string, std.algorithm, std.array, std.conv;
import std.typecons : Tuple; // needed for Tuple below; missing from the original imports

/*
    Code construction that simplifies common tasks.
*/
struct sCodeConstruction
{
    // [For the examples, let X in myModule.myClass.X represent the member
    //  `int X = 4;` and Y in myModule.myClass.Y be `bool Y(double q)`.]

public static:

    enum Tokens : string
    {
        Name             = "%%name%%",               // The member name (X => "X", Y => "Y")
        FullName         = "%%fullName%%",           // The full member name (X => "myClass.X", Y => "myClass.Y")
        CompleteName     = "%%completeName%%",       // The complete member name (X => "myModule.myClass.X", Y => "myModule.myClass.Y")
        Value            = "%%value%%",              // The default value of the member (X => 4, Y => "")
        Type             = "%%type%%",               // The type of the member (X => int, Y => bool delegate(double))
        Module           = "%%module%%",             // The module the member is in (X => "myModule", Y => "myModule")
        ParentType       = "%%parentType%%",         // The type of the parent (X => "class", Y => "class")
        MethodReturn     = "%%method/return%%",      // The return type of a member, if it has one (X => "", Y => "bool")
        MethodParamNName = "%%methodParam/N/Name%%", // The Nth parameter's name (Y => "q")
        MethodParamNType = "%%methodParam/N/type%%", // The Nth parameter's type (Y => "double")

        DoublePercent    = "%%%",                    // the %% symbol
    }

    /*
        A Resolver is a function that takes a grammar symbol and returns the
        corresponding element for it from T.

        The following resolvers are provided for common use:

        MemberResolver: Resolves member information:
            MemberResolver!("%%name%%", myClass.X) = "X".
    */
    auto MemberResolver(string S, alias T)()
    {
        mixin("import " ~ moduleName!(T) ~ ";");
        mixin("alias val = " ~ fullyQualifiedName!(T) ~ ";");

        switch (S)
        {
            case Tokens.Name: return to!string(__traits(identifier, T));
            default: break;
        }

        //pragma(msg, name);
        static if (is(typeof(T) == function) || is(typeof(*T) == function))
        {
            // Get the parameters in a string array.
            enum p = split(Parameters!(T).stringof[1 .. $ - 1], ",").map!(n => strip(n)).array;

            Tuple!(string, string)[] params;
            foreach (a; aliasSeqOf!(p))
            {
            }
        }

        return "";
    }

    auto StandardResolver(string S, alias

teething troubles

2014-07-17 Thread Dean via Digitalmars-d-learn

Hi all,

  I need a little helping hand with dmd on a 32 bit Debian box. I
installed dmd from http://d-apt.sourceforge.net/

i) First trial:


$cat test.d

import std.stdio;
void main() {
   writeln("hello");
}

$ time dmd test.d

real    0m2.355s
user    0m1.652s
sys     0m0.364s

$./test
hello

$ dmd -v test.d | wc -l
84


Seems to be working. My only concern is whether 2.35s for
compiling such a trivial file is normal.

ii) 2nd trial

==
I installed gdc. Now I get
$ time gdc test.d

real    0m6.286s
user    0m3.856s
sys     0m0.884s
==

Given the dmd and gdc timings, it seems I am doing something
wrong.

iii) 3rd trial

I installed Tango from http://d-apt.sourceforge.net/ (I know 
this can ruffle feathers; please assume good faith). I am just 
trying to learn from the Tango book.

===

$ls /usr/include/dmd/tango

core  io  math  net  stdc  sys  text  time  util

$cat test.d

import tango.io.Stdout;

void main() {
   Stdout ("hello").newline;
}

$dmd -I/usr/include/dmd/tango  -v test.d


binary    dmd
version   v2.065
config    /etc/dmd.conf
parse     test
importall test
import    object    (/usr/include/dmd/druntime/import/object.di)
import    tango.io.Stdout   (tango/io/Stdout.d)
test.d(1): Error: module Stdout is in file 'tango/io/Stdout.d'
which cannot be read
import path[0] = /usr/include/dmd/tango
import path[1] = /usr/include/dmd/phobos
import path[2] = /usr/include/dmd/druntime/import


It seems that in spite of specifying /usr/include/dmd/tango it 
cannot import tango.io.Stdout.

Here is my /etc/dmd.conf

[Environment32]
DFLAGS=-I/usr/include/dmd/phobos
-I/usr/include/dmd/druntime/import -L-L/usr/lib/i386-linux-gnu
-L--export-dynamics

What am I doing wrong?

Thanks for the help.

-- Dean


Re: teething troubles

2014-07-17 Thread Dean via Digitalmars-d-learn
On a Windows 32 bit box that little program compiles in 0.74 
seconds (warmed up time) using an old CPU. Modern CPUs should 
take about 0.5 seconds.


Thanks for checking the timings. Wait, so I am not alone in 
using a 32-bit box!

Mine is an Athlon 1045.456 MHz. The minimum of 3 consecutive
compilation runs that I get is 2.3 seconds.


Keep in mind that writeln and std.stdio are a lot of stuff.


Just to be clear, I am not complaining that dmd is slow. I am 
just concerned that I am doing something wrong.


I have also tried this C++ version:

#include <iostream>
int main() {
    std::cout << "hello" << std::endl;
    return 0;
}

With gcc 4.8.0 it takes me 0.48 seconds to compile (warmed up 
time).


On my box this compiles in 1.2 seconds. So it seems somewhat
consistent (as in 3 times slower for both). I got worried because
I expected dmd to compile hello world substantially faster than
g++. I have heard that dmd is instantaneous.


I am still at a loss about the Tango for D2 problem. Shouldn't 
providing the -I option with the path to Tango work? Does any 
other magic need to happen? I don't know the internal mechanics 
of importing modules.


Re: teething troubles

2014-07-17 Thread Dean via Digitalmars-d-learn

On Thursday, 17 July 2014 at 09:32:24 UTC, bearophile wrote:

Dean:


Mine is an Athlon 1045.456 MHz.

   
Didn't notice that before hitting send.


The minimum of 3 consecutive compilation runs that I get is 2.3 
seconds.
Then I think your timings could be OK, I am using an old 2.3 
GHz CPU.


Glad to know that I am not doing something stupid, yet.

dmd compiles very quickly, but to compile writeln D has to 
digest a good amount of Phobos code.


Are the reasons for this similar to why the C++ STL is not an 
object code library?


I am still at a loss about tango for D2 problem.

I think I've never used Tango with dmd.


Not even sure if it's a Tango problem. Dmd doesn't seem to be 
picking up the path that I specify with -I. Perhaps the 
mechanics of module loading are not as simple as I imagine. I 
initially thought it was a permission problem, but that is not 
the case.


Re: teething troubles

2014-07-17 Thread Dean via Digitalmars-d-learn

On Thursday, 17 July 2014 at 10:13:46 UTC, bearophile wrote:

Dean:

dmd compiles very quickly, but to compile writeln D has to 
digest a good amount of Phobos code.


Are the reasons for this similar to why C++ STL is not an 
object

code library ?


The reasons for the large amount of code compiled for a writeln 
are that writeln is more powerful, and Phobos modules import 
each other a lot. And several parts of Phobos are not 
precompiled, because there are templates everywhere. Take a look 
at the Phobos sources and you will see.


Bye,
bearophile


Apologies, I wasn't clear. I was talking about the reason behind 
compiling the code from source as opposed to linking precompiled 
objects. I was speculating whether the reasons are similar to 
those for the STL, i.e. specializing to the type as late as 
possible.
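
For what it's worth, that intuition is roughly right: writeln is 
a variadic template, so its code can only be generated once it 
is instantiated with the concrete argument types at the call 
site, much like the STL. A trivial illustration:

import std.stdio;

void main()
{
    writeln("hello");       // instantiated for a single string argument
    writeln(42, " ", 3.14); // a different instantiation: (int, string, double)
}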


Re: teething troubles

2014-07-17 Thread Dean via Digitalmars-d-learn

On Thursday, 17 July 2014 at 11:08:16 UTC, Mike Parker wrote:

On 7/17/2014 7:01 PM, Dean wrote:



Not even sure if it's a Tango problem. Dmd doesn't seem to be 
picking up the path that I specify with -I. Perhaps the 
mechanics of module loading are not as simple as I imagine. I 
initially thought it was a permission problem, but that is not 
the case.


What does your tango source tree look like? Is it a) or b)?

a) /usr/include/dmd/tango/io/Stdout.d
b) /usr/include/dmd/tango/tango/io/Stdout.d

When you pass -I/usr/include/dmd/tango, then it needs to look 
like b). If it's a), then you should pass -I/usr/include/dmd. 
The reason is that 'tango' is the top-level package directory. 
Its *parent* directory needs to be on the import path.
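
To recap the two cases in the same transcript style as the rest 
of the thread (these are just the two layouts above with their 
matching flags):

a) /usr/include/dmd/tango/io/Stdout.d
   => dmd -I/usr/include/dmd test.d

b) /usr/include/dmd/tango/tango/io/Stdout.d
   => dmd -I/usr/include/dmd/tango test.d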


Hi Mike,

  that was it. Thanks a lot.