Re: [racket-users] Re: Do futures actually work? What are they useful for?

2017-09-11 Thread James Swaine
They do work.  They do exactly as advertised in the documentation.  As far
as their usefulness goes, I think it is fair to say that they are not as
useful as they could be, or, put another way, the set of operations that
are future-safe is too small to make them practical for many folks.
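
For a concrete sense of that boundary, here is a minimal sketch (untimed):
a loop of unboxed flonum operations is future-safe, while something as
innocuous as displayln blocks the future until the runtime thread touches
it.

    #lang racket
    (require racket/future racket/flonum)

    ;; Future-safe: stays in unboxed flonum arithmetic, so this
    ;; future can run in parallel with the main thread.
    (define f-ok
      (future (λ ()
                (for/fold ([s 0.0]) ([i (in-range 10000000)])
                  (fl+ s 1.0)))))

    ;; Not future-safe: displayln blocks, so this future makes no
    ;; progress until it is touched on the runtime thread.
    (define f-blocked
      (future (λ () (displayln "hello") 42)))

    (list (touch f-ok) (touch f-blocked))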

On Mon, Sep 11, 2017 at 5:53 PM mrmyers.random.suffix wrote:

> Oh. That does seem troubling then.
>
> On Monday, September 11, 2017 at 6:45:45 PM UTC-4, Jon Zeppieri wrote:
>
>>
>>
>> > On Sep 11, 2017, at 6:39 PM, mrmyers.ra...@gmail.com wrote:
>> >
>> > As far as I'm aware, futures usually shouldn't improve performance
>> outside of networking or hardware-latency type situations. The main goal of
>> futures is just time-sharing, not improving performance. It doesn't
>> genuinely do things in parallel; it just interleaves the execution of
>> several things at once.
>>
>> This isn't true. Futures are for parallelism; they just happen to be
>> defeated by many, many operations. More to the point, they're not for
>> interleaving work. Racket's threads are for that.
>



Re: [racket-users] Re: Do futures actually work? What are they useful for?

2017-09-11 Thread mrmyers.random.suffix
Oh. That does seem troubling then.

On Monday, September 11, 2017 at 6:45:45 PM UTC-4, Jon Zeppieri wrote:
>
>
>
> > On Sep 11, 2017, at 6:39 PM, mrmyers.ra...@gmail.com  
> wrote: 
> > 
> > As far as I'm aware, futures usually shouldn't improve performance 
> outside of networking or hardware-latency type situations. The main goal of 
> futures is just time-sharing, not improving performance. It doesn't 
> genuinely do things in parallel; it just interleaves the execution of 
> several things at once. 
>
> This isn't true. Futures are for parallelism; they just happen to be 
> defeated by many, many operations. More to the point, they're not for 
> interleaving work. Racket's threads are for that.



Re: [racket-users] Re: Do futures actually work? What are they useful for?

2017-09-11 Thread Jon Zeppieri


> On Sep 11, 2017, at 6:39 PM, mrmyers.random.suf...@gmail.com wrote:
> 
> As far as I'm aware, futures usually shouldn't improve performance outside of 
> networking or hardware-latency type situations. The main goal of futures is 
> just time-sharing, not improving performance. It doesn't genuinely do things 
> in parallel; it just interleaves the execution of several things at once.

This isn't true. Futures are for parallelism; they just happen to be defeated 
by many, many operations. More to the point, they're not for interleaving work. 
Racket's threads are for that.
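
To make the distinction concrete, here's a minimal sketch (untimed):

    #lang racket
    (require racket/future racket/flonum)

    ;; Threads: concurrency. Both loops make progress, but they are
    ;; interleaved within a single OS thread, so multiple cores buy
    ;; nothing.
    (define t1 (thread (λ () (for ([i (in-range 1000000)]) (void)))))
    (define t2 (thread (λ () (for ([i (in-range 1000000)]) (void)))))
    (thread-wait t1)
    (thread-wait t2)

    ;; Futures: parallelism. Because the body sticks to future-safe
    ;; flonum operations, the two computations can run simultaneously
    ;; on separate cores.
    (define (work)
      (for/fold ([s 0.0]) ([i (in-range 1000000)])
        (fl+ s 1.0)))
    (define f1 (future work))
    (define f2 (future work))
    (list (touch f1) (touch f2))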



[racket-users] Re: Do futures actually work? What are they useful for?

2017-09-11 Thread mrmyers.random.suffix
As far as I'm aware, futures usually shouldn't improve performance outside 
of networking or hardware-latency type situations. The main goal of futures 
is just time-sharing, not improving performance. It doesn't genuinely do 
things in parallel; it just interleaves the execution of several things at 
once.

To actually have multiple things happening at the same time, you would need 
to use places, since those run on separate OS-level threads.
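
For example, here is a minimal sketch with places (the worker protocol is
invented for illustration):

    #lang racket
    (require racket/place)

    ;; Each place runs in its own Racket instance on its own OS
    ;; thread, so the work genuinely proceeds in parallel.
    (define (start-worker)
      (place ch
        ;; Receive an upper bound n; reply with 0 + 1 + ... + (n - 1).
        (define n (place-channel-get ch))
        (place-channel-put ch (for/sum ([i (in-range n)]) i))))

    (module+ main
      (define workers (for/list ([_ (in-range 4)]) (start-worker)))
      ;; Hand each worker its input, then collect the four results.
      (for ([w (in-list workers)])
        (place-channel-put w 10000000))
      (displayln (for/list ([w (in-list workers)])
                   (place-channel-get w))))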

On Monday, September 11, 2017 at 4:46:18 PM UTC-4, Alexis King wrote:
>
> Yesterday, a user asked a question on Stack Overflow[1] about attempting 
> to parallelize a solution to the n-queens problem. The program in 
> question is short, so I can reproduce it here in its entirety: 
>
>   #lang racket 
>
>   ; returns true if any two queens share a diagonal: 
>   (define (check-diagonals bd) 
>     (for*/or ([r1 (in-range (length bd))] 
>               [r2 (in-range (add1 r1) (length bd))]) 
>       (= (abs (- r1 r2)) 
>          (abs (- (list-ref bd r1) 
>                  (list-ref bd r2)))))) 
>
>   ; set board size: 
>   (define N 8) 
>   ; start searching: 
>   (for ([brd (in-permutations (range N))]) 
>     (unless (check-diagonals brd) 
>       (displayln brd))) 
>
> I make no comment on the efficiency of the algorithm (or even its 
> correctness), as I didn’t write it. Either way, it does seem like the 
> sort of thing that could be parallelized. The above program runs very 
> quickly, but if N is increased to something larger, like 11, it takes 
> an appreciable amount of time: 
>
>   $ time racket n-queens.rkt 
>   [program output elided] 
>   14.44 real  13.92 user   0.32 sys 
>
> Fortunately, this seems like a textbook use case for fine-grained 
> parallel evaluation of subexpressions. Theoretically, it seems like it 
> should be possible to use for/async from racket/future to parallelize 
> the top-level for loop. However, making that change does not improve 
> performance at all; in fact, it significantly reduces it. Merely running 
> this version with N=9 takes ~6.5 seconds; with N=10, it takes ~55. 
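>
> For reference, that change amounts to roughly the following (a 
> sketch; only the top-level loop is replaced): 
>
>   (require racket/future) 
>
>   (for/async ([brd (in-permutations (range N))]) 
>     (unless (check-diagonals brd) 
>       (displayln brd))) 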
>
> This is, however, not too surprising. Running the code with the futures 
> visualizer (using N=7) indicates that displayln is not legal from within 
> a future, preventing any parallel execution from ever actually taking 
> place. Presumably, we can fix this by creating futures that just compute 
> the results, then print them serially: 
>
>   (define results-futures 
>     (for/list ([brd (in-permutations (range N))]) 
>       (future (λ () (cons (check-diagonals brd) brd))))) 
>
>   (for ([result-future (in-list results-futures)]) 
>     (let ([result (touch result-future)]) 
>       (unless (car result) 
>         (displayln (cdr result))))) 
>
> With this change, we get a small speedup over the naïve attempt with 
> for/async, but we’re still far slower than the serial version. Now, with 
> N=9, it takes ~4.58 seconds, and with N=10, it takes ~44. 
>
> Taking a look at the futures visualizer for this program (again, with 
> N=7), there are now no blocks, but there are some syncs (on 
> jit_on_demand and allocate memory). However, after some time spent 
> jitting, execution seems to get going, and it actually runs a lot of 
> futures in parallel[2]! After a little bit of that, however, the 
> parallel execution seems to run out of steam, and things seem to start 
> running relatively serially again[3]. 
>
> I wasn’t really sure what was going on here, but I thought maybe it was 
> because some of the futures are just too *small*. It seems possible that 
> the overhead of scheduling thousands of futures (or, in the case of 
> N=10, millions) was far outweighing the actual runtime of the work being 
> done in the futures. Fortunately, this seems like something that could 
> be solved by just grouping the work into chunks, something that’s 
> relatively doable using in-slice: 
>
>   (define results-futures 
>     (for/list ([brds (in-slice 10000 (in-permutations (range N)))]) 
>       (future 
>        (λ () 
>          (for/list ([brd (in-list brds)]) 
>            (cons (check-diagonals brd) brd)))))) 
>
>   (for* ([results-future (in-list results-futures)] 
>          [result (in-list (touch results-future))]) 
>     (unless (car result) 
>       (displayln (cdr result)))) 
>
> It seems that my suspicion was correct, because that change helps a 
> whole lot. Now, running the parallel version of the program only takes 
> ~3.9 seconds for N=10, a speedup of more than a factor of 10 over the 
> previous version using futures. However, unfortunately, that’s still 
> *slower* than the completely serial version, which only takes ~1.4 
> seconds. Increasing N to 11 makes the parallel version take ~44 seconds, 
> and if the chunk size provided to in-slice is increased to 100,000, it 
> takes even longer, ~55 seconds. 
>
> Taking a look at the future visualizer for that version of the program, 
> with N=11 and a chunk size of 

[racket-users] Re: Do futures actually work? What are they useful for?

2017-09-11 Thread Alex Harsanyi


On Tuesday, September 12, 2017 at 4:46:18 AM UTC+8, Alexis King wrote:
>
>
> Furthermore, if this problem really is a lost cause, what are futures 
> actually useful for? The documentation would imply that they are useful 
> for lots of number crunching using machine integers or flonums, but 
> that’s a pretty specific domain. I would hope that they would be able 
> to work with simple list-based operations as well, since operations that 
> are truly exclusively numeric are relatively uncommon in my experience. 
>

This has been my experience as well: futures seem to block on pretty much 
any operation that is not an `fl+`, `fl-`, or similar.  I also noticed that 
accessing data stored in lists and vectors (even if these are not 
modified) seems to block the futures.  This is even more frustrating 
because, even after you get a parallel implementation working, a simple and 
seemingly harmless change to the code can silently break the parallelism.
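
One way to catch that kind of silent regression is to re-check with the
futures visualizer after each change, along these lines (a minimal sketch):

    #lang racket
    (require future-visualizer racket/future racket/flonum)

    ;; Wrap the parallel code in visualize-futures: blocks and syncs
    ;; show up on the timeline, so a silently serialized future is
    ;; easy to spot.
    (visualize-futures
     (let ([f (future (λ ()
                        (for/fold ([s 0.0]) ([i (in-range 1000000)])
                          (fl+ s 1.0))))])
       (touch f)))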

In my case, I just gave up on futures and optimized the algorithms for 
single-threaded execution.  So far, that has been fast enough.

Best Regards,
Alex.
