Re: So how exactly does one make a persistent range object?

2011-06-06 Thread Andrej Mitrovic
On 6/6/11, Jonathan M Davis jmdavisp...@gmx.com wrote:
 On the whole, I believe that ranges were generally intended to be processed
 and then tossed, which is usually what happens with iterators..

That's what I thought, but I wasn't sure whether that was really the
case. I don't really have a solid C++ background (I only experimented
briefly with C++ years ago), so I've never used iterators. But I did
read Andrei's paper on ranges and the documentation in std.range, plus
there are some NG posts which show the inception of the range design
for D, which I haven't fully read yet. I'm referring to these:

http://www.digitalmars.com/d/archives/digitalmars/D/announce/RFC_on_range_design_for_D2_12922.html

http://www.digitalmars.com/d/archives/digitalmars/D/announce/Revised_RFC_on_range_design_for_D2_13211.html

They seem like a nice bit of history that could be linked from the
main site, in some sort of 'trivia' section. :)

On 6/6/11, Jonathan M Davis jmdavisp...@gmx.com wrote:
 So, anything you do on your own could be polymorphic, but as soon as you get 
 ranges from Phobos, you lose the polymorphism.

Yeah, I've noticed that. I wouldn't want to lose the ability to call
into std.algorithm/std.range or even Philippe's dranges library, which
looks really neat. I guess I can use take() on a range and then
array() to get the underlying type, which I could easily pass to other
functions. I'll see what I can come up with as I experiment with these
features.
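
Something like this is what I have in mind -- just a rough sketch, and
the Fibonacci range here is only a stand-in example:

import std.array : array;
import std.range : recurrence, take;
import std.stdio : writeln;

void main()
{
    // An infinite lazy range (Fibonacci numbers, as a stand-in).
    auto fib = recurrence!("a[n-1] + a[n-2]")(1, 1);

    // take() limits it to a finite prefix; array() eagerly copies that
    // prefix into a plain int[], which is easy to pass around.
    int[] firstTen = array(take(fib, 10));

    writeln(firstTen); // [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
}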


Re: So how exactly does one make a persistent range object?

2011-06-06 Thread Timon Gehr
Andrej Mitrovic wrote:
 On 6/6/11, Jonathan M Davis jmdavisp...@gmx.com wrote:
 So, anything you do on your own could be polymorphic, but as soon as you
 get ranges from Phobos, you lose the polymorphism.

 Yeah, I've noticed that. I wouldn't want to lose the ability to call
 into std.algorithm/std.range or even Philippe's dranges library, which
 looks really neat. I guess I can use take() on a range and then
 array() to get the underlying type which I could easily pass to other
 functions. I'll see what I can come up with as I experiment with these
 features.

I don't get what the issue is. Static polymorphism can be turned into dynamic
polymorphism very easily. Just write a polymorphic wrapper template for ranges.
This will work with Phobos ranges just as well as with your own. ;)
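
A minimal sketch of what I mean (the IInputRange/RangeObject/wrap names
here are just for the example; if your Phobos is recent enough,
std.range also ships an InputRange interface and inputRangeObject along
these lines):

import std.range;
import std.stdio : writeln;

// A run-time (dynamic) interface for input ranges of element type E.
interface IInputRange(E)
{
    @property bool empty();
    @property E front();
    void popFront();
}

// Wraps any compile-time (static) input range R behind that interface.
class RangeObject(R) : IInputRange!(ElementType!R)
{
    private R r;
    this(R r) { this.r = r; }
    @property bool empty() { return r.empty; }
    @property ElementType!R front() { return r.front; }
    void popFront() { r.popFront(); }
}

// Convenience constructor so the wrapped range type is inferred.
IInputRange!(ElementType!R) wrap(R)(R r) if (isInputRange!R)
{
    return new RangeObject!R(r);
}

void main()
{
    // An int[] (or the result of a Phobos algorithm) can now be used
    // through the same run-time interface.
    IInputRange!int r = wrap([1, 2, 3]);
    for (; !r.empty; r.popFront())
        writeln(r.front);
}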

Timon


Re: Is this a good way of setting up a timer?

2011-06-06 Thread Steven Schveighoffer
On Fri, 03 Jun 2011 17:37:40 -0400, Andrej Mitrovic  
andrej.mitrov...@gmail.com wrote:



On 6/3/11, Jonathan M Davis jmdavisp...@gmx.com wrote:


Generally, you'd just put it to sleep for the period of time that you
want to wait for. The only reason that I see to keep waking it up is
if it could be interrupted and effectively told to wake up - in which
case you would be sleeping and waking up over and over again, checking
to see if enough time had passed or if you had been signaled to stop
waiting.


Yeah, I should have made a better example. Basically I have a loop
that ends either after a specific time period, or it could end by
receiving a signal. I just use a shared bool for that; this would be
it:

while (engineActive)  // shared bool
{
    if (Clock.currTime > finalTime)
        break;

    Thread.sleep(dur!("seconds")(1));
}



I'm going to put on my old-school concurrency hat here :)

How I would write this is with a mutex and a condition.  Then instead
of sleeping, I'd wait on the condition.  I.e.:


while(engineActive) // shared bool
{
    auto curTime = Clock.currTime;
    if(curTime > finalTime)
        break;
    synchronized(mutex) cond.wait(finalTime - curTime);
}

Then, you have a function that sets the bool and signals the condition:

void endProgram()
{
    synchronized(mutex)
    {
        if(engineActive)
        {
            engineActive = false;
            cond.notifyAll();
        }
    }
}
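
Pulling those two fragments together, a compilable sketch might look
like this (the 3-second finalTime and the helper thread that calls
endProgram() are just invented for the demo):

import core.sync.condition : Condition;
import core.sync.mutex : Mutex;
import core.thread : Thread;
import core.time : dur;
import std.datetime : Clock;
import std.stdio : writeln;

__gshared Mutex mutex;
__gshared Condition cond;
__gshared bool engineActive = true;

void endProgram()
{
    synchronized(mutex)
    {
        if(engineActive)
        {
            engineActive = false;
            cond.notifyAll();
        }
    }
}

void main()
{
    mutex = new Mutex;
    cond  = new Condition(mutex);

    auto finalTime = Clock.currTime + dur!"seconds"(3);

    // Some other thread ends the "engine" after one second.
    auto stopper = new Thread({ Thread.sleep(dur!"seconds"(1)); endProgram(); });
    stopper.start();

    while(engineActive)
    {
        auto curTime = Clock.currTime;
        if(curTime > finalTime)
            break;

        // Re-check the flag while holding the mutex so a notify can't be
        // missed between the while test and the wait.
        synchronized(mutex)
        {
            if(engineActive)
                cond.wait(finalTime - curTime);
        }
    }

    writeln("loop exited");
    stopper.join();
}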

Taking my hat off, I think the better way to do this is with  
std.concurrency.  But I have zero experience with it.  I'd guess there are  
ways to wait on your message queue, and some way to broadcast a message to  
all threads.
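
Something along these lines, I'd guess -- a rough sketch using
std.concurrency's receiveTimeout (the Stop message type is made up for
the example, and the "broadcast" would just be sending it to every Tid
you know about):

import core.thread : thread_joinAll;
import core.time : dur;
import std.concurrency;
import std.stdio : writeln;

struct Stop {}   // hypothetical "please shut down" message

void worker()
{
    bool running = true;
    while(running)
    {
        // Wait up to one second for a message; otherwise do periodic work.
        bool got = receiveTimeout(dur!"seconds"(1),
                                  (Stop s) { running = false; });
        if(!got)
            writeln("tick");   // timed out -> periodic work goes here
    }
    writeln("worker stopping");
}

void main()
{
    auto tid = spawn(&worker);

    // ... later, when it's time to end the program:
    send(tid, Stop());

    thread_joinAll();   // wait for the worker to exit
}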


In any case, sleeping for a set period of time, and then checking a  
boolean works, but is both inefficient and less responsive than waiting  
for the exact condition you are looking for.


Trouble is, in some cases, a thread is waiting for *multiple* conditions,  
like input from a stream or exiting the program.  On some OSes, it's very  
difficult to wait for both of those.  There are ways to solve this with  
helper threads, but this can also be wasteful.  Without further details on  
what your thread is doing, I can't say whether this would be a problem for  
you.


What I've done in the past is wait for the condition that needs to be more  
responsive (i.e. handling I/O) and use a timeout that allows a reasonable  
response to the other condition.


-Steve


Re: Template error and parallel foreach bug?

2011-06-06 Thread Steven Schveighoffer
On Sat, 04 Jun 2011 08:11:01 -0400, simendsjo simen.end...@pandavre.com  
wrote:



On 04.06.2011 14:02, simendsjo wrote:

The template implementation works on low numbers, but not on 1000
(haven't checked when it fails). It gives me the following error:
euler1.d(23): Error: template instance
euler1.SumMultiple3Or5(499LU,Below,57918LU) recursive expansion


Ehem.. Guess it fails at recursion depth 500 :)
This is perhaps even documented somewhere? Or is it possible to change  
the value?


Not sure if it's documented somewhere, but I think it is dmd's attempt to  
avoid infinite template recursion.  Preventing infinite recursion is  
equivalent to solving the halting problem, so I don't think it's possible  
to solve perfectly.


You can likely fix it by modifying the source for dmd.
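
One way to sidestep the limit without touching the compiler is to do
the computation with CTFE (an ordinary loop evaluated at compile time)
instead of template recursion; a rough sketch for the Euler #1 case
above:

import std.stdio : writeln;

// A plain function, but forced to run at compile time by the enum
// below, so there is no template recursion depth to run into.
ulong sumMultiples3Or5(ulong below)
{
    ulong sum = 0;
    foreach(n; 0 .. below)
        if(n % 3 == 0 || n % 5 == 0)
            sum += n;
    return sum;
}

enum answer = sumMultiples3Or5(1000);   // evaluated at compile time

void main()
{
    writeln(answer);   // 233168
}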

-Steve


Are spawn'ed threads waited automatically?

2011-06-06 Thread Ali Çehreli
First, the answer may be as simple as "use core.thread.thread_joinAll".
Is that the proper way of waiting for all threads?


Second, my question may not be a valid example, as starting a thread
without communicating with it may fall under the umbrella of
parallelization rather than concurrency. Maybe in concurrency, threads
communicate with each other so that the following situation should not
occur in practice.


Third is my question: :) If I spawn a single thread in main, the single 
thread seems to run to completion.


import std.stdio;
import std.concurrency;
import core.thread;

void foo()
{
    foreach (i; 0 .. 5) {
        Thread.sleep(dur!"msecs"(500));
        writeln(i, " foo");
    }
}

void main()
{
    spawn(&foo);
    writeln("main done");
}

I get all of foo's output after "main done":

main done
0 foo
1 foo
2 foo
3 foo
4 foo

If I introduce an intermediate thread that spawns the foo thread, now 
foo sometimes terminates early:


import std.stdio;
import std.concurrency;
import core.thread;

void foo()
{
    foreach (i; 0 .. 5) {
        Thread.sleep(dur!"msecs"(500));
        writeln(i, " foo");
    }
}

void intermediate()
{
    spawn(&foo);
    writeln("intermediate done");
}

void main()
{
    spawn(&intermediate);
    writeln("main done");
}

The output is inconsistent. Sometimes there is nothing from foo:

main done
intermediate done

Sometimes it runs fully:

main done
intermediate done
0 foo
1 foo
2 foo
3 foo
4 foo

Is the inconsistency a bug or a natural consequence of something? :)
(Perhaps even the first example, which seems to run correctly, just
has a higher probability of showing this behavior.)


I am aware of thread_joinAll(). Is that the recommended way of waiting 
for all threads?
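
For reference, the explicit variant I have in mind would be something
like this (a sketch based on the second example; whether this should
really be necessary is exactly what I'm asking):

import std.stdio;
import std.concurrency;
import core.thread;
import core.time;

void foo()
{
    foreach (i; 0 .. 5) {
        Thread.sleep(dur!"msecs"(500));
        writeln(i, " foo");
    }
}

void intermediate()
{
    spawn(&foo);
    writeln("intermediate done");
    thread_joinAll();   // explicitly wait for foo before this thread exits
}

void main()
{
    spawn(&intermediate);
    writeln("main done");
}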


Thank you,
Ali


Re: Is this a good way of setting up a timer?

2011-06-06 Thread Andrej Mitrovic
On 6/6/11, Steven Schveighoffer schvei...@yahoo.com wrote:
 Then, you have a function that sets the bool and signals the condition:

 void endProgram()
 {
     synchronized(mutex)
     {
         if(engineActive)
         {
             engineActive = false;
             cond.notifyAll();
         }
     }
 }

Interesting, thanks. There are some WinAPI functions like
WaitForMultipleObjects; I've read about them too.

Other than that, the while loop will go away and be replaced with some
kind of front-end for the user (e.g. a GUI), while the background
thread crunches some numbers. The background thread should be able to
signal if something went wrong. Throwing exceptions from the work
thread is off limits, because they don't propagate to the foreground
thread, so I'm left with either using some kind of global boolean or
using std.concurrency.send() to signal that something went wrong.
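
Roughly what I have in mind for the send() route (the WorkDone and
WorkFailed message types are made up for the sketch; the owner's Tid is
passed to the worker explicitly):

import std.concurrency;
import std.stdio : writeln;

struct WorkDone {}                      // hypothetical "success" message
struct WorkFailed { string reason; }    // hypothetical "error" message

void worker(Tid owner)
{
    try
    {
        // ... crunch numbers here ...
        send(owner, WorkDone());
    }
    catch (Exception e)
    {
        // Exceptions don't cross the thread boundary on their own,
        // so report the failure manually.
        send(owner, WorkFailed(e.msg));
    }
}

void main()
{
    spawn(&worker, thisTid);

    // The front-end (GUI) side would poll or block on messages like this:
    receive(
        (WorkDone d)   { writeln("worker finished"); },
        (WorkFailed f) { writeln("worker failed: ", f.reason); }
    );
}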


Re: Are spawn'ed threads waited automatically?

2011-06-06 Thread Steven Schveighoffer

On Mon, 06 Jun 2011 14:09:25 -0400, Ali Çehreli acehr...@yahoo.com wrote:

First, the answer may be as simple as use core.thread.thread_joinAll.  
Is that the proper way of waiting for all threads?


main (the C main, not D main) does this already:

https://github.com/D-Programming-Language/druntime/blob/master/src/rt/dmain2.d#L512

But note that daemonized threads will not be included:

http://www.digitalmars.com/d/2.0/phobos/core_thread.html#isDaemon
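
(For reference, marking a thread as a daemon so that the runtime's
exit-time join skips it looks roughly like this:)

import core.thread;

void main()
{
    auto t = new Thread({ /* background work */ });
    t.isDaemon = true;   // the runtime won't wait for this thread at exit
    t.start();
}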

-Steve


Re: Are spawn'ed threads waited automatically?

2011-06-06 Thread Ali Çehreli

On 06/06/2011 12:07 PM, Steven Schveighoffer wrote:

On Mon, 06 Jun 2011 14:09:25 -0400, Ali Çehreli acehr...@yahoo.com wrote:


First, the answer may be as simple as use
core.thread.thread_joinAll. Is that the proper way of waiting for all
threads?


main (the C main, not D main) does this already:

https://github.com/D-Programming-Language/druntime/blob/master/src/rt/dmain2.d#L512


But note that daemonized threads will not be included:

http://www.digitalmars.com/d/2.0/phobos/core_thread.html#isDaemon

-Steve


Thank you but it doesn't explain the inconsistent behavior. It seems 
like thread_joinAll() has a different idea at different times. Now I 
also print the result of isDaemon():


import std.stdio;
import std.concurrency;
import core.thread;

void tell_daemon_state(string name)
{
    writeln(name, " isDaemon: ", Thread.getThis.isDaemon);
}

void foo()
{
    tell_daemon_state("foo");

    foreach (i; 0 .. 5) {
        Thread.sleep(dur!"msecs"(500));
        writeln(i, " foo");
    }
}

void intermediate()
{
    tell_daemon_state("intermediate");

    spawn(&foo);
    writeln("intermediate done");
}

void main()
{
    tell_daemon_state("main");

    spawn(&intermediate);
    writeln("main done");
}

I see that only the main thread is a daemon:

main isDaemon: true
main done
intermediate isDaemon: false
intermediate done
foo isDaemon: false
0 foo
1 foo
2 foo
3 foo
4 foo

That makes sense.

There is a race condition: Just because I added the printing of the
isDaemon state, now foo() runs to completion seemingly every time I
start the program. When I remove the printing AND run the program
under 'time', I get inconsistent behavior.


The following are two consecutive runs:

$ time ./deneme
main done
intermediate done
0 foo                <--- foo()'s output is present
1 foo
2 foo
3 foo
4 foo

real    0m2.504s
user    0m0.000s
sys     0m0.000s

$ time ./deneme
main done
intermediate done    <--- foo()'s output is missing

real    0m0.003s
user    0m0.000s
sys     0m0.000s

As if thread_joinAll() misses the fact that there is still the 
non-daemon foo() thread.


Note that it's not a failure to flush stdout either: the program runs
for a shorter time in the case where foo()'s output is missing.


Thank you,
Ali


Re: Are spawn'ed threads waited automatically?

2011-06-06 Thread Jonathan M Davis
On 2011-06-06 14:37, Ali Çehreli wrote:
 On 06/06/2011 12:07 PM, Steven Schveighoffer wrote:
  On Mon, 06 Jun 2011 14:09:25 -0400, Ali Çehreli acehr...@yahoo.com 
wrote:
  First, the answer may be as simple as use
  core.thread.thread_joinAll. Is that the proper way of waiting for all
  threads?
  
  main (the C main, not D main) does this already:
  
  https://github.com/D-Programming-Language/druntime/blob/master/src/rt/dmain2.d#L512
  
  
  But note that daemonized threads will not be included:
  
  http://www.digitalmars.com/d/2.0/phobos/core_thread.html#isDaemon
  
  -Steve
 
 Thank you but it doesn't explain the inconsistent behavior. It seems
 like thread_joinAll() has a different idea at different times. Now I
 also print the result of isDaemon():
 
 import std.stdio;
 import std.concurrency;
 import core.thread;

 void tell_daemon_state(string name)
 {
     writeln(name, " isDaemon: ", Thread.getThis.isDaemon);
 }

 void foo()
 {
     tell_daemon_state("foo");

     foreach (i; 0 .. 5) {
         Thread.sleep(dur!"msecs"(500));
         writeln(i, " foo");
     }
 }

 void intermediate()
 {
     tell_daemon_state("intermediate");

     spawn(&foo);
     writeln("intermediate done");
 }

 void main()
 {
     tell_daemon_state("main");

     spawn(&intermediate);
     writeln("main done");
 }
 
 I see that only the main thread is a daemon:
 
 main isDaemon: true
 main done
 intermediate isDaemon: false
 intermediate done
 foo isDaemon: false
 0 foo
 1 foo
 2 foo
 3 foo
 4 foo
 
 That makes sense.
 
 There is a race condition: Just because I added the printing of the
 isDaemon state, now foo() runs to completion seemingly every time I
 start the program. When I remove the printing AND run the program
 under 'time', I get inconsistent behavior.
 
 The following are two consecutive runs:
 
 $ time ./deneme
 main done
 intermediate done
 0 foo                <--- foo()'s output is present
 1 foo
 2 foo
 3 foo
 4 foo
 
 real 0m2.504s
 user 0m0.000s
 sys 0m0.000s
 
 $ time ./deneme
 main done
 intermediate done    <--- foo()'s output is missing
 
 real 0m0.003s
 user 0m0.000s
 sys 0m0.000s
 
 As if thread_joinAll() misses the fact that there is still the
 non-daemon foo() thread.
 
 Note that it's not failing to flush stdout either. The program runs
 shorter in the case where foo()'s output is missing.

Unless the code has changed (and Sean was working on it a couple of months 
back, so I'm not sure what the current state is), on Linux, none of the 
spawned threads ever get joined, and they're all joinable - which causes 
problems. As I understand it, they should all be detached (as in the pthread 
concept of detached, not detached from the GC like core.Thread talks about) 
rather than joinable.

At this point, I don't trust spawn at all (on Linux at least). I've had too 
many problems with it. But I don't know what the current state is, because 
Sean was at least working on improving the situation. It's possible that the 
joinable issues and whatnot were worked out, but I don't know and kind of 
doubt it.

Regardless, spawned threads aren't intended to be joined by you. They should 
run until they're done doing whatever they're doing and then exit. And the 
program should wait for them all to exit, even if main finishes. If that's not 
happening, then there are bugs that need to be fixed. You shouldn't ever have 
to worry about joining spawned threads.

- Jonathan M Davis


Re: Are spawn'ed threads waited automatically?

2011-06-06 Thread Ali Çehreli

On 06/06/2011 03:52 PM, Jonathan M Davis wrote:

 At this point, I don't trust spawn at all (on Linux at least). I've had
 too many problems with it.

Thank you.

I've spawned threads from within threads and have now received
ThreadException and segmentation faults as well:


  http://d.puremagic.com/issues/show_bug.cgi?id=6116

Ali



Conventions for installing third party libraries?

2011-06-06 Thread Jonathan Sternberg
I know for C/C++, include files are usually placed in prefix/include and
libraries in prefix/lib. What's the convention for D files (as the module
files would need to be available for imports, and I'm assuming that you
don't recompile the D source files for every project)?

Where are these files typically placed on a normal Linux installation?
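
For concreteness, I mean the analogue of the usual C/C++ workflow,
something like this (the paths and library name are hypothetical):

    dmd myapp.d -I/usr/local/include/d -L-L/usr/local/lib -L-lsomedlib

i.e. -I for wherever the importable .d/.di module files live, and -L to
forward the library path and the library itself to the linker.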


Re: Problem with dsss 0.78 rebuild error: out of memory

2011-06-06 Thread armando sano
I am trying to compile dsss from source using dmd v2.051 and get the
same problem when the dsss executable (v0.78) is compiled by rebuild
with:

 ./rebuild/rebuild -full -Irebuild sss/main.d -ofdsss

Linux here too. Has the problem been fixed?

armando


Re: Conventions for installing third party libraries?

2011-06-06 Thread Jonathan M Davis
On 2011-06-06 18:56, Jonathan Sternberg wrote:
 I know for C/C++, include files are usually placed in prefix/include and
 libraries in prefix/lib. What's the convention for D files (as the module
 files would need to be available for imports, and I'm assuming that you
 don't recompile the D source files for every project)?
 
 Where are these files typically placed on a normal Linux installation?

I don't think that there _are_ any D libraries in a typical Linux 
installation. Some distros have dmd in their repositories, so then you get 
druntime and Phobos, but beyond that, I'm not aware of any D libraries in any 
distros. And where dmd and the standard libraries go is entirely up to the 
distro. They just have to make sure that the dmd.conf that they provide points 
to them. You can look at the deb and rpm packages on www.digitalmars.com to 
see where they put it.
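
For example, on Linux a dmd.conf generally looks something like this
(the paths are only illustrative; each package points them at wherever
it installs Phobos and druntime):

[Environment]
DFLAGS=-I/usr/include/dmd/phobos -I/usr/include/dmd/druntime/import -L-L/usr/lib -L--export-dynamic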

But as for libraries beyond the standard ones, I believe that they
typically get built by the people who use them. If the project in
question is set up to be a library as opposed to just providing the
source files, then it may be that people build that library and then
reuse it between projects, but that's entirely up to the people using
them.

D isn't mainstream enough yet for Linux to really have a standard way of 
dealing with where to put its libraries and source files.

Personally, I don't even install dmd or Phobos on my system. I just set
up my path to point to where I unzipped dmd, and it works. I then keep
the code for each of my projects with the project itself. But I don't
currently use any libraries beyond druntime and Phobos.

- Jonathan M Davis