Tutorial on C++ Integration?

2015-09-28 Thread Mike McKee via Digitalmars-d-learn
I'm using Qt/C++ on a Mac. I want to try my hand at making a 
dylib in D that can receive a C++ string, reverse it, and respond 
with the result back to Qt/C++.


Are there any easy tutorials out there with something simple like 
that?


You probably want me to type some code to show that I at 
least tried something. Okay, here's me just guessing:


// test.dylib

auto _cstr2dstr(inout(char)* cstr) {
    import core.stdc.string : strlen;
    return cstr ? cstr[0 .. strlen(cstr)] : cstr[0 .. 0];
}

extern(C++) const(char)* reverseString(const(char)* cstr) {
    import std.string : toStringz;
    import std.algorithm : reverse;
    auto s = _cstr2dstr(cstr).dup; // make a mutable copy
    s.reverse();                   // std.algorithm.reverse works in place
    return s.toStringz;            // appends the terminating NUL
}

Also, what do I type to define this as a class that C++ can call, 
such as the following C++?


Foo *foo = new Foo();
qDebug() << foo->reverseString("bar");

Next, I'm a bit fuzzy on how to compile that into a dylib, as 
well as how to call it inside of Qt/C++ with Qt Creator.


Re: Can someone help me make a binary heap with std.container.binaryheap? It's not working... AssertError

2015-09-28 Thread Ali Çehreli via Digitalmars-d-learn

On 09/25/2015 08:22 PM, Enjoys Math wrote:

Init:

programResultsQ = heapify!(compareResults,
Array!(Results!(O,I)))(Array!(Results!(O,I))([Results!(O,I)()]), 1);

Decl:

alias ProgramResultsQueue(O,I) = BinaryHeap!(Array!(Results!(O,I)),
compareResults);

Error:

assert error in std.container.array.d (line 381)

upon running.  Compiles fine.  I'd like to initialize the heap to empty
if possible.


import std.container;
import std.algorithm;
import std.traits;

void main()
{
    // The data to start with
    auto data = [ 10, 5, -7, 20, 0, 3 ];

    // This will move elements around to make a binary heap
    auto heap = heapify(data);

    // Yes, it is a BinaryHeap:
    static assert(isInstanceOf!(BinaryHeap, typeof(heap)));

    // Yes, the elements are in binary heap order:
    assert(data == [20, 10, 3, 5, 0, -7]);

    /* As is the case with binary heaps, although the data is
       ordered à la binary heap, that data is the equivalent
       of the following binary tree:

                20
               /  \
             10    3
            /  \  /
           5   0 -7

    */

    // Let's visit the elements through the binary heap
    // range. They should appear in descending order:
    assert(heap.equal([20, 10, 5, 3, 0, -7]));
}

> I'd like to initialize the heap to empty
> if possible.

The following works (at least for int elements):

import std.container;
import std.algorithm;
import std.traits;

void main()
{
    // Let's start with an empty array:
    Array!int data;

    auto heap = heapify(data);
    assert(heap.empty);

    heap.insert(42);
    heap.insert(-10);
    heap.insert(7);
    assert(heap.equal([ 42, 7, -10 ]));
}

Ali



Re: Testing Return Value Optimization (RVO)

2015-09-28 Thread chmike via Digitalmars-d-learn

Oops found it my self.

I had to use

auto x = foo();





Re: Testing Return Value Optimization (RVO)

2015-09-28 Thread chmike via Digitalmars-d-learn

I tried your code as this and it doesn't work.

#!/usr/bin/rdmd -O

import std.stdio;

struct S {
    int a;
    @disable this(this);
}

S foo() {
    S v;
    v.a = 1;
    writeln(&v);
    return v;
}

void main()
{
    S x;
    x = foo();
    writeln(&x);
}

I even tried with dmd -O without success.

What am I doing wrong?
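For what it's worth, the fix mentioned upthread comes down to construction 
versus assignment: RVO/NRVO elides the copy when the return value 
initializes a variable, not when it is assigned to an existing one. A 
minimal sketch (the matching addresses are typical compiler behaviour, not 
a language guarantee):

```d
import std.stdio;

struct S {
    int a;
    @disable this(this); // no postblit, so copies are disallowed
}

S foo() {
    S v;
    v.a = 1;
    writeln(&v); // with NRVO this is the caller's storage
    return v;
}

void main() {
    auto x = foo(); // initialization: the copy can be elided
    writeln(&x);    // expected to print the same address as above
}
```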


Re: Server side command execution.

2015-09-28 Thread holo via Digitalmars-d-learn

On Monday, 28 September 2015 at 04:55:30 UTC, tcak wrote:

On Sunday, 27 September 2015 at 23:56:10 UTC, holo wrote:

Hello

Im trying to execute commands on server side. Here is my 
server based on other example from forum:


[...]


You are comparing the whole buffer to "exit".


I changed my condition to:

if (to!string(buffer[0 .. received]) == "exit")
{
    break;
}


But it still didn't help.


Re: Server side command execution.

2015-09-28 Thread anonymous via Digitalmars-d-learn
On Monday 28 September 2015 11:59, holo wrote:

> I changed my condition to:
> 
> if(to!string(buffer[0..received]) == "exit")
>   {
>   
>   break;
>   }
> 
> 
> But it still didn't help.

The client probably sends a newline; i.e. buffer[0 .. received] is "exit\n".


Re: Server side command execution.

2015-09-28 Thread anonymous via Digitalmars-d-learn
On Monday 28 September 2015 12:40, anonymous wrote:

> The client probably sends a newline; i.e. buffer[0 .. received] is
> "exit\n".

Or more likely it's "exit\r\n".
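One way to make the comparison robust against either line ending is to 
strip trailing whitespace before comparing. A sketch, where `buffer` and 
`received` stand in for the server's receive loop:

```d
import std.stdio;
import std.string : strip;

void main()
{
    // Stand-ins for the server's receive buffer and byte count.
    ubyte[1024] buffer;
    const msg = "exit\r\n";
    buffer[0 .. msg.length] = cast(const(ubyte)[]) msg;
    const received = msg.length;

    // strip removes surrounding whitespace, including "\n" and "\r\n".
    if ((cast(const(char)[]) buffer[0 .. received]).strip() == "exit")
        writeln("matched");
}
```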


Re: Reduce parameters [was pi program]

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
On Sat, 2015-09-26 at 10:46 +, John Colvin via Digitalmars-d-learn
wrote:
> […]

I guess the summary is: "It's a breaking change, so do it." "No, we can't
do that, it's a breaking change."

Seems lame given all the other breaking changes that there have been. Sad
given that reduce is probably the single most important operation in
parallel programming.

> It's been argued about a lot.
> 
> https://issues.dlang.org/show_bug.cgi?id=8755
> https://github.com/D-Programming-Language/phobos/pull/861
> https://github.com/D-Programming-Language/phobos/pull/1955
> https://github.com/D-Programming-Language/phobos/pull/2033
-- 
Russel.
=
Dr Russel Winder  t: +44 20 7585 2200   voip: sip:russel.win...@ekiga.net
41 Buckmaster Road   m: +44 7770 465 077   xmpp: rus...@winder.org.uk
London SW11 1EN, UK   w: www.russel.org.uk  skype: russel_winder





Re: Parallel processing and further use of output

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
On Sat, 2015-09-26 at 12:32 +, Zoidberg via Digitalmars-d-learn
wrote:
> > Here's a correct version:
> > 
> > import std.parallelism, std.range, std.stdio, core.atomic;
> > void main()
> > {
> > shared ulong i = 0;
> > foreach (f; parallel(iota(1, 100+1)))
> > {
> > i.atomicOp!"+="(f);
> > }
> > i.writeln;
> > }
> 
> Thanks! Works fine. So "shared" and "atomic" is a must?

Yes and no. But mostly no. If you have to do this as an explicit
iteration (very 1970s) then yes: to avoid doing things wrong you have to
ensure the update to the shared mutable state is atomic.

A more modern (1930s/1950s) way of doing things is to use implicit
iteration – something Java, C++, etc. are all getting into more and
more. Here, you should use a reduce call. People have previously mentioned:

taskPool.reduce!"a + b"(iota(1UL,101))

which I would suggest has to be seen as the best way of writing this
algorithm.

 


Re: Parallel processing and further use of output

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
On Sat, 2015-09-26 at 14:33 +0200, anonymous via Digitalmars-d-learn
wrote:
> […]
> I'm pretty sure atomicOp is faster, though.

Rough and ready anecdotal evidence would indicate that this is a
reasonable statement, by quite a long way. However a proper benchmark
is needed for statistical significance.

On the other hand std.parallelism.taskPool.reduce surely has to be the
correct way of expressing the algorithm?



Re: Parallel processing and further use of output

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
On Sat, 2015-09-26 at 15:56 +, Jay Norwood via Digitalmars-d-learn
wrote:
> std.parallelism.reduce documentation provides an example of a 
> parallel sum.
> 
> This works:
> auto sum3 = taskPool.reduce!"a + b"(iota(1.0,101.0));
> 
> This results in a compile error:
> auto sum3 = taskPool.reduce!"a + b"(iota(1UL,101UL));
> 
> I believe there was discussion of this problem recently ...

Which may or may not already have been fixed, or…

On the other hand:

taskPool.reduce!"a + b"(1UL, iota(101));

seems to work fine.
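Note that std.parallelism's reduce reuses the explicit seed for every work 
unit, so a non-identity seed such as 1UL can be counted more than once; 
the identity element is the safe choice. A sketch:

```d
import std.parallelism, std.range, std.stdio;

void main()
{
    // 0UL fixes the accumulator type as ulong and, being the
    // identity for +, is harmless even if seeded once per work unit.
    auto sum = taskPool.reduce!"a + b"(0UL, iota(1, 101));
    writeln(sum); // 5050
}
```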



Re: Parallel processing and further use of output

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
On Sat, 2015-09-26 at 17:20 +, Jay Norwood via Digitalmars-d-learn
wrote:
> This is a work-around to get a ulong result without having the 
> ulong as the range variable.
> 
> ulong getTerm(int i)
> {
> return i;
> }
> auto sum4 = taskPool.reduce!"a + 
> b"(std.algorithm.map!getTerm(iota(11)));

Not needed as reduce can take an initial value that sets the type of
the template. See previous email.



Re: pi program

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
On Fri, 2015-09-25 at 18:45 +, Meta via Digitalmars-d-learn wrote:
> […]
> 
> The main difference is that "method call" style is more amenable 
> to chaining (and IMO, it looks cleaner as you don't have nesting 
> parentheses.

I guess coming from Clojure I was less worried about Lisp-style code.

I'll try to be more Python/Java/Scala-ish ;-)



Re: Reduce parameters [was pi program]

2015-09-28 Thread John Colvin via Digitalmars-d-learn

On Monday, 28 September 2015 at 11:04:56 UTC, Russel Winder wrote:
On Sat, 2015-09-26 at 10:46 +, John Colvin via 
Digitalmars-d-learn wrote:

[…]


I guess the summary is: it's a breaking change, so do it. No we 
can't do that it's a breaking change.


Seems lame given all the other breaking changes that have been. 
Sad given that reduce is probably the single most important 
operation in parallel programming.


My thoughts exactly, even though it was partly me that pointed 
out the breaking changes...


I still think we should add fold[lr]? just to fix the ordering.
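For context, the ordering complaint with a concrete example: when reduce 
takes an explicit seed, the seed comes before the range, so under UFCS the 
seed, not the range, becomes the subject of the chain. A sketch:

```d
import std.algorithm : reduce;
import std.range : iota;

void main()
{
    // Plain call: seed first, then range.
    auto a = reduce!"a + b"(0, iota(1, 101));

    // UFCS: reads as if we were reducing the seed itself.
    auto b = 0.reduce!"a + b"(iota(1, 101));

    assert(a == 5050 && b == 5050);
}
```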


Re: Parallel processing and further use of output

2015-09-28 Thread John Colvin via Digitalmars-d-learn

On Monday, 28 September 2015 at 11:31:33 UTC, Russel Winder wrote:
On Sat, 2015-09-26 at 14:33 +0200, anonymous via 
Digitalmars-d-learn wrote:

[…]
I'm pretty sure atomicOp is faster, though.


Rough and ready anecdotal evidence would indicate that this is 
a reasonable statement, by quite a long way. However a proper 
benchmark is needed for statistical significance.


On the other hand std.parallelism.taskPool.reduce surely has to 
be the correct way of expressing the algorithm?


It would be really great if someone knowledgeable did a full 
review of std.parallelism to find out the answer, hint, hint...  
:)


Re: Server side command execution.

2015-09-28 Thread holo via Digitalmars-d-learn

On Monday, 28 September 2015 at 10:52:07 UTC, anonymous wrote:

On Monday 28 September 2015 12:40, anonymous wrote:

The client probably sends a newline; i.e. buffer[0 .. 
received] is "exit\n".


Or more likely it's "exit\r\n".


I changed condition to:

if (to!string(buffer[0 .. received]) == "exit\n")
{
    break;
}

and now it is working. Thank you all for the help.




Re: Threading Questions

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
I hadn't answered as I do not have answers to the questions you ask. My
reason: people should not be writing their code using these low-level
shared-memory techniques. Data-parallel things should use the
std.parallelism module. Dataflow-style things should use spawn and
channels – akin to the way you do things in Go.

So to give you an answer I would go back a stage, forget threads,
mutexes, synchronized, etc., and ask: what do you want your workers to do?
If they are to do something and return a result then spawn and channel
is exactly the right abstraction to use. Think "farmer–worker": the
farmer spawns the workers and then collects their results. No shared
memory anywhere – at least not mutable.
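A minimal farmer–worker sketch along those lines with std.concurrency 
(names illustrative):

```d
import std.concurrency;
import std.stdio;

// Worker: does its job and sends the result back to the farmer.
void worker(Tid farmer, int job)
{
    farmer.send(job * job);
}

void main()
{
    // The farmer spawns the workers...
    foreach (job; 1 .. 5)
        spawn(&worker, thisTid, job);

    // ...and then collects their results. No shared mutable state.
    int total = 0;
    foreach (_; 1 .. 5)
        total += receiveOnly!int();
    writeln(total); // 1 + 4 + 9 + 16 = 30
}
```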

On Fri, 2015-09-25 at 15:19 +, bitwise via Digitalmars-d-learn
wrote:
> Hey, I've got a few questions if anybody's got a minute.
> 
> I'm trying to wrap my head around the threading situation in D. 
> So far, things seem to be working as expected, but I want to 
> verify my solutions.
> 
> 1) Are the following two snippets exactly equivalent(not just in 
> observable behaviour)?
> a)
> 
> Mutex mut;
> mut.lock();
> scope(exit) mut.unlock();
> 
> b)
> Mutex mut;
> synchronized(mut) { }
> 
> Will 'synchronized' call 'lock' on the Mutex, or do something 
> else(possibly related to the interface Object.Monitor)?
> 
> 2) Phobos has 'Condition' which takes a Mutex in the constructor. 
> The documentation doesn't exactly specify this, but should I 
> assume it works the same as std::condition_variable in C++?
> 
> For example, is this correct?
> 
> Mutex mut;
> Condition cond = new Condition(mut);
> 
> // mut must be locked before calling Condition.wait
> synchronized(mut)  // depends on answer to (1)
> {
>  // wait() unlocks the mutex and enters wait state
>  // wait() must re-acquire the mutex before returning when 
> cond is signalled
>  cond.wait();
> }
> 
> 3) Why do I have to pass a "Mutex" to "Condition"? Why can't I 
> just pass an "Object"?
> 
> 4) Will D's Condition ever experience spurious wakeups?
> 
> 5) Why doesn't D's Condition.wait take a predicate? I assume this 
> is because the answer to (4) is no.
> 
> 6) Does 'shared' actually have any effect on non-global variables 
> beside the syntactic regulations?
> 
> I know that all global variables are TLS unless explicitly marked 
> as 'shared', but someone once told me something about 'shared' 
> affecting member variables in that accessing them from a separate 
> thread would return T.init instead of the actual value... or 
> something like that. This seems to be wrong(thankfully).
> 
> For example, I have created this simple Worker class which seems 
> to work fine without a 'shared' keyword in sight(thankfully). I'm 
> wondering though, if there would be any unexpected consequences 
> of doing things this way.
> 
> http://dpaste.com/2ZG2QZV
> 
> 
> 
> 
> Thanks!
>  Bit


Re: Reduce parameters [was pi program]

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
On Mon, 2015-09-28 at 11:37 +, John Colvin via Digitalmars-d-learn
wrote:
> 
[…]
> My thoughts exactly, even though it was partly me that pointed 
> out the breaking changes...

Curses, if no-one had pointed out it was breaking maybe no-one would
have noticed, and just made the change?

> I still think we should add fold[lr]? just to fix the ordering.

Whatever happens in std.algorithm must also happen in std.parallelism.
reduce in std.parallelism was constructed to be in harmony with
std.algorithm and not UFCS.



Re: Parallel processing and further use of output

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
On Mon, 2015-09-28 at 11:38 +, John Colvin via Digitalmars-d-learn
wrote:
> […]
> 
> It would be really great if someone knowledgeable did a full 
> review of std.parallelism to find out the answer, hint, hint...  
> :)

Indeed, I would love to be able to do this. However I don't have time
in the next few months to do this on a volunteer basis, and no-one is
paying money whereby this review could happen as a side effect. Sad,
but…


Re: Parallel processing and further use of output

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
As a single data point:

==  anonymous_fix.d ==
5050

real    0m0.168s
user    0m0.200s
sys     0m0.380s
==  colvin_fix.d ==
5050

real    0m0.036s
user    0m0.124s
sys     0m0.000s
==  norwood_reduce.d ==
5050

real    0m0.009s
user    0m0.020s
sys     0m0.000s
==  original.d ==
218329750363

real    0m0.024s
user    0m0.076s
sys     0m0.000s


Original is the original: not entirely slow, but broken :-). anonymous_fix
is anonymous' synchronized-keyword version, slow. colvin_fix is
John Colvin's use of atomicOp, correct but only ok-ish on speed. Jay
Norwood first proposed the reduce answer on the list; I amended it a
tiddly bit, but clearly it is a resounding speed winner.

I guess we need a benchmark framework that can run these 100 times
taking processor times and then do the statistics on them. Most people
would assume normal distribution of results and do mean/std deviation
and median. 
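As a starting point, Phobos already has a crude repeated-timing helper; a 
sketch of a 100-run harness (the statistics would still need to be done by 
hand):

```d
import std.datetime : benchmark;
import std.stdio;

// The code under test; here a stand-in sequential reduce.
void workload()
{
    import std.algorithm : reduce;
    import std.range : iota;

    auto sum = reduce!"a + b"(0UL, iota(1, 101));
    assert(sum == 5050);
}

void main()
{
    enum runs = 100;
    // benchmark runs the function `runs` times and returns the timings.
    auto r = benchmark!workload(runs);
    writeln("total for ", runs, " runs: ", r[0].msecs, " ms");
}
```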



Re: Parallel processing and further use of output

2015-09-28 Thread John Colvin via Digitalmars-d-learn

On Monday, 28 September 2015 at 12:18:28 UTC, Russel Winder wrote:

As a single data point:

==  anonymous_fix.d ==
5050

real    0m0.168s
user    0m0.200s
sys     0m0.380s
==  colvin_fix.d ==
5050

real    0m0.036s
user    0m0.124s
sys     0m0.000s
==  norwood_reduce.d ==
5050

real    0m0.009s
user    0m0.020s
sys     0m0.000s
==  original.d ==
218329750363

real    0m0.024s
user    0m0.076s
sys     0m0.000s


Original is the original: not entirely slow, but broken :-). 
anonymous_fix is anonymous' synchronized-keyword version, slow. 
colvin_fix is John Colvin's use of atomicOp, correct but only 
ok-ish on speed. Jay Norwood first proposed the reduce answer 
on the list; I amended it a tiddly bit, but clearly it is a 
resounding speed winner.


Pretty much as expected. Locks are slow, shared accumulators 
suck, much better to write to thread local and then merge.


Re: Server side command execution.

2015-09-28 Thread Adam D. Ruppe via Digitalmars-d-learn

On Monday, 28 September 2015 at 11:44:32 UTC, holo wrote:

if(to!string(buffer[0..received]) == "exit\n")


You shouldn't need that to!string by the way. I believe that will 
work just comparing the buffer directly.


Converting to string is more important when you are storing a 
copy than just comparing it.
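Indeed, array/slice comparison in D is by contents, so the slice can be 
compared against the literal directly. A sketch:

```d
void main()
{
    // Stand-in for the received bytes.
    ubyte[] buffer = cast(ubyte[]) "exit\n".dup;
    const received = buffer.length;

    // Element-wise comparison; no intermediate string allocation.
    assert(cast(const(char)[]) buffer[0 .. received] == "exit\n");
}
```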




Re: Parallel processing and further use of output

2015-09-28 Thread Russel Winder via Digitalmars-d-learn
On Mon, 2015-09-28 at 12:46 +, John Colvin via Digitalmars-d-learn
wrote:
> […]
> 
> Pretty much as expected. Locks are slow, shared accumulators 
> suck, much better to write to thread local and then merge.

Quite. Dataflow is where the parallel action is (except for those
writing concurrency and parallelism libraries). Anyone doing concurrency
and parallelism with shared-memory multi-threading, locks,
synchronized, mutexes, etc. is doing it wrong. This has been known
since the 1970s, but the programming community got sidetracked by a lack
of abstraction (*) for a couple of decades.


(*) I blame C, C++ and Java. And programmers who programmed before (or
worse, without) thinking.



Re: Do users need to install VS runtime redistributable if linking with Microsoft linker?

2015-09-28 Thread ponce via Digitalmars-d-learn

On Tuesday, 22 September 2015 at 09:38:12 UTC, thedeemon wrote:

On Monday, 21 September 2015 at 15:00:24 UTC, ponce wrote:

All in the title.

DMD 64-bit links with the VS linker.
Do users need to install the VS redistributable libraries?


I think they don't.
Generated .exe seems to depend only on kernel32.dll and 
shell32.dll, i.e. things users already have.


So I've released software built with LDC 0.16.0-alpha4 Win64, and one 
user sent me this: http://i.imgur.com/xbU1VeS.png


I thought it was only used for linking :(

Does it also affect executable made with DMD and linked with MS 
linker?


Re: Tutorial on C++ Integration?

2015-09-28 Thread Kagamin via Digitalmars-d-learn
http://wiki.dlang.org/Vision/2015H1 C++ integration was planned 
to be available by the end of 2015. May be too optimistic still.


Re: Do users need to install VS runtime redistributable if linking with Microsoft linker?

2015-09-28 Thread Sebastiaan Koppe via Digitalmars-d-learn

On Monday, 28 September 2015 at 15:10:25 UTC, ponce wrote:

On Tuesday, 22 September 2015 at 09:38:12 UTC, thedeemon wrote:

On Monday, 21 September 2015 at 15:00:24 UTC, ponce wrote:

All in the title.

DMD 64-bit links with the VS linker.
Do users need to install the VS redistributable libraries?


I think they don't.
Generated .exe seems to depend only on kernel32.dll and 
shell32.dll, i.e. things users already have.


So I've released software with LDC 0.16.0-alpha4 Win64, and one 
user send me that http://i.imgur.com/xbU1VeS.png


I thought it was only used for linking :(

Does it also affect executable made with DMD and linked with MS 
linker?


Basically your executable is bound to whatever runtime you had 
installed when linking the thing. If that runtime isn't installed on 
the end user's machine, you get that error. Pretty neat, huh?


I had the same problem trying your Vibrant game.

I could not find out which redistributable I had to install (what 
version of VS did you have installed / on what version of windows 
are you?). I decided to install them all, but couldn't install 
the one for 2015 (due to Windows6.1-KB2999226-x64.msu). After 
trying some workarounds I gave up.


I am on windows 7 by the way.


Re: Do users need to install VS runtime redistributable if linking with Microsoft linker?

2015-09-28 Thread ponce via Digitalmars-d-learn
On Monday, 28 September 2015 at 16:01:54 UTC, Sebastiaan Koppe 
wrote:

On Monday, 28 September 2015 at 15:10:25 UTC, ponce wrote:

On Tuesday, 22 September 2015 at 09:38:12 UTC, thedeemon wrote:

On Monday, 21 September 2015 at 15:00:24 UTC, ponce wrote:

All in the title.

DMD 64-bit links with the VS linker.
Do users need to install the VS redistributable libraries?


I think they don't.
Generated .exe seems to depend only on kernel32.dll and 
shell32.dll, i.e. things users already have.


So I've released software with LDC 0.16.0-alpha4 Win64, and 
one user send me that http://i.imgur.com/xbU1VeS.png


I thought it was only used for linking :(

Does it also affect executable made with DMD and linked with 
MS linker?


Basically your executable is bound to whatever runtime you had 
installed when linking the thing. If that runtime isn't installed on 
the end user's machine, you get that error. Pretty neat, huh?




OK, but why does that need to happen? I don't get why linking 
with the MS linker implies a runtime dependency.


I thought we would be left out of these sort of problems when 
using D :(






Re: Do users need to install VS runtime redistributable if linking with Microsoft linker?

2015-09-28 Thread Dicebot via Digitalmars-d-learn
Maybe LDC uses dynamic linking by default and DMD static one, 
same as on Linux?


Re: Tutorial on C++ Integration?

2015-09-28 Thread Jacob Carlborg via Digitalmars-d-learn

On 2015-09-28 09:08, Mike McKee wrote:

I'm using Qt/C++ on a Mac. I want to try my hand at making a dylib in D


Dynamic libraries are not officially supported on OS X.

--
/Jacob Carlborg


Re: Do users need to install VS runtime redistributable if linking with Microsoft linker?

2015-09-28 Thread kinke via Digitalmars-d-learn
Maybe LDC uses dynamic linking by default and DMD static one, 
same as on Linux?


Yes, LDC does use dynamic linking by default (as MSVC iirc). 
Static linking can be enabled by providing the 
-DLINK_WITH_MSVCRT=OFF switch to the CMake command.


OK, but why does that need to happen? I don't get why linking 
with the MS linker implies a runtime dependency.
I thought we would be left out of these sort of problems when 
using D :(


druntime is a layer on top of a C runtime, as redoing all of that 
low-level and platform-specific stuff in D doesn't make a lot of 
sense. Then it's just a choice of linking dynamically or 
statically against it. Doesn't have anything to do with the used 
linker.


Re: Mac IDE with Intellisense

2015-09-28 Thread wobbles via Digitalmars-d-learn
On Sunday, 27 September 2015 at 22:55:38 UTC, Johannes Loher 
wrote:
On Saturday, 26 September 2015 at 18:27:52 UTC, Mike McKee 
wrote:

On Saturday, 26 September 2015 at 10:31:13 UTC, wobbles wrote:

Have you installed dkit for sublime?


As in?

https://github.com/yazd/DKit

Looks like it's alpha and doesn't run on Mac? No homebrew 
install?


I'm using this and it works great. It uses DCD, so you need to 
install that first (via homebrew). As DKit is a Sublime Text 
plugin, you don't install it via homebrew. Usually you'd 
install Sublime packages via Package Control, but sadly DKit is 
not in the repositories yet. There is an issue about that 
already (https://github.com/yazd/DKit/issues/18). So instead, 
you install the package manually (just follow the instructions 
in the readme).


I don't know where you got the idea it doesn't work on OS X, it 
works like a charm for me.


Yeah, me too (on Linux and Windows). I used to try out quite a 
few of the D IDEs (Mono-D, DDT, Visual D, etc.) but this one plugin 
has replaced them all. For now at least...




Re: Do users need to install VS runtime redistributable if linking with Microsoft linker?

2015-09-28 Thread ponce via Digitalmars-d-learn

On Monday, 28 September 2015 at 15:10:25 UTC, ponce wrote:


Does it also affect executable made with DMD and linked with MS 
linker?


Just tested: no.




Re: Purity of std.conv.to!string

2015-09-28 Thread Nordlöw via Digitalmars-d-learn

On Sunday, 27 September 2015 at 05:52:26 UTC, Jack Stouffer wrote:
Please make an issue on https://issues.dlang.org and I'll take 
a look a this later. Most of the functions in std.conv are 
templated so it must be some internal function that's not 
properly annotated, or it's using manual memory management.


Already filed here:

https://issues.dlang.org/show_bug.cgi?id=3437
https://issues.dlang.org/show_bug.cgi?id=4850


SList and DList init doesn't work in global variable

2015-09-28 Thread Martin Krejcirik via Digitalmars-d-learn
Is this an intended or known issue? It works with 2.066.

SList!int gslist = [1,2,3,4,5,6]; // broken since 2.067
// Error: reinterpreting cast from NodeWithoutPayload* to Node* is not
supported in CTFE

DList!int gdlist = [1,2,3,4,5,6]; // broken since 2.067
// Error: non-constant expression ...

void main()
{
    SList!int lslist = [1,2,3,4,5,6]; // OK
    DList!int ldlist = [1,2,3,4,5,6]; // OK
}

-- 
mk


Re: SList and DList init doesn't work in global variable

2015-09-28 Thread Ali Çehreli via Digitalmars-d-learn

On 09/28/2015 02:15 PM, Martin Krejcirik wrote:

Is this intended or known issue ? It works with 2.066.

SList!int gslist = [1,2,3,4,5,6]; // broken since 2.067
// Error: reinterpreting cast from NodeWithoutPayload* to Node* is not
supported in CTFE

DList!int gdlist = [1,2,3,4,5,6]; // broken since 2.067
// Error: non-constant expression ...

void main()
{
 SList!int lslist = [1,2,3,4,5,6]; // OK
 DList!int ldlist = [1,2,3,4,5,6]; // OK
}



Yes, it looks like there are problems with their implementations. Those 
questions aside, I would initialize such module variables in a module 
static this() anyway:


import std.container;

SList!int gslist;
DList!int gdlist;

static this() {
    gslist = [1,2,3,4,5,6];
    gdlist = [1,2,3,4,5,6];
}

void main() {
}

If you want the lists to be immutable, then the following initialization 
pattern works:


import std.container;

immutable SList!int gslist;
immutable DList!int gdlist;

auto init_gslist()
{
    return SList!int([1,2,3,4,5,6]);
}

auto init_gd_list()
{
    return DList!int([1,2,3,4,5,6]);
}

static this() {
    gslist = init_gslist();
    gdlist = init_gd_list();
}

void main() {
}

The fact that I needed the two module-level init_* functions is also 
problematic, right? For example, moving those inside static this() does 
not work.


Ali



Re: Parallel processing and further use of output

2015-09-28 Thread Jay Norwood via Digitalmars-d-learn

On Saturday, 26 September 2015 at 15:56:54 UTC, Jay Norwood wrote:

This results in a compile error:
auto sum3 = taskPool.reduce!"a + b"(iota(1UL,101UL));

I believe there was discussion of this problem recently ...


https://issues.dlang.org/show_bug.cgi?id=14832

https://issues.dlang.org/show_bug.cgi?id=6446

looks like the problem has been reported a couple of times.  I 
probably saw the discussion of the 8/22 bug.





Re: Threading Questions

2015-09-28 Thread bitwise via Digitalmars-d-learn

On Monday, 28 September 2015 at 11:47:38 UTC, Russel Winder wrote:
I hadn't answered as I do not have answers to the questions you 
ask. My reason: people should not be doing their codes using 
these low-level shared memory techniques. Data parallel things 
should be using the std.parallelism module. Dataflow-style 
things should be using spawn and channels – akin to the way you 
do things in Go.


So to give you an answer I would go back a stage, forget 
threads, mutexes, synchronized, etc., and ask: what do you want 
your workers to do? If they are to do something and return a 
result then spawn and channel is exactly the right abstraction 
to use. Think "farmer–worker": the farmer spawns the workers 
and then collects their results. No shared memory anywhere – at 
least not mutable.


https://www.youtube.com/watch?v=S7pGs7JU7eM

Bit


Re: Server side command execution.

2015-09-28 Thread holo via Digitalmars-d-learn

On Monday, 28 September 2015 at 13:01:25 UTC, Adam D. Ruppe wrote:

On Monday, 28 September 2015 at 11:44:32 UTC, holo wrote:

if(to!string(buffer[0..received]) == "exit\n")


You shouldn't need that to!string by the way. I believe that 
will work just comparing the buffer directly.


Converting to string is more important when you are storing a 
copy than just comparing it.


Yep, it is like you wrote. Thank you for the advice.


enum to flags

2015-09-28 Thread Nicholas Wilson via Digitalmars-d-learn
So I have a bunch of enums (0 .. n) that I also want to represent 
as flags (1 << n foreach n). Is there any way to do this other 
than a string mixin?


use like:

enum blah
{
    foo,
    bar,
    baz,
}

alias blahFlags = EnumToFlags!blah;

static assert(blahFlags.baz == 1 << blah.baz);


Re: enum to flags

2015-09-28 Thread Cauterite via Digitalmars-d-learn
On Tuesday, 29 September 2015 at 03:31:44 UTC, Nicholas Wilson 
wrote:
So I have a bunch of enums (0 .. n) that I also want to 
represent as flags (1 << n foreach n). Is there any way to do 
this other than a string mixin?


You could cheat with operator overloading:

enum blah {
    foo,
    bar,
    baz,
}

struct EnumToFlags(alias E) {
    template opDispatch(string Name) {
        enum opDispatch = 1 << __traits(getMember, E, Name);
    }
}

alias blahFlags = EnumToFlags!blah;

static assert(blahFlags.foo == (1 << blah.foo));
static assert(blahFlags.bar == (1 << blah.bar));
static assert(blahFlags.baz == (1 << blah.baz));
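For completeness, a self-contained version of the above with a usage 
check showing that the generated flags combine as usual (a sketch):

```d
enum blah { foo, bar, baz }

// Each member name maps to 1 << (its position in the enum).
struct EnumToFlags(alias E) {
    template opDispatch(string Name) {
        enum opDispatch = 1 << __traits(getMember, E, Name);
    }
}

alias blahFlags = EnumToFlags!blah;

void main()
{
    auto flags = blahFlags.bar | blahFlags.baz; // 0b010 | 0b100
    assert(flags & blahFlags.baz);
    assert(!(flags & blahFlags.foo));
}
```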