Following up - how do I even begin to determine what's eating up memory in 
the remote processes? Is there a tool out there that can give me a per-worker 
memory report?
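Ideally I'd want something like the rough sketch below (this assumes Linux, 
where each process can read its own /proc entry; report_rss is just a name I 
made up):

# A sketch, not a real profiler: have every process (master and workers)
# print its current resident set size from /proc/<pid>/status.
@everywhere function report_rss()
    for line in open(readlines, "/proc/$(getpid())/status")
        if startswith(line, "VmRSS")
            println("process $(myid()): ", strip(line))
        end
    end
end
@everywhere report_rss()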

On Saturday, September 5, 2015 at 5:53:54 PM UTC-7, Seth wrote:
>
> I've finally made some progress in parallelizing my code. However, at the 
> end of the run, I have my answer in the main process (the REPL) while each 
> worker process is still holding about 1 GB of memory. Is there a way to 
> tell the worker processes to free that memory? @everywhere gc() didn't seem 
> to do it, and I don't know where the memory is coming from, since the only 
> thing the worker processes ran was
>
> @sync @parallel for s in i
>     state = dijkstra_shortest_paths_sparse(spmx, s, distmx, true)
>     if endpoints
>         _parallel_accumulate_endpoints!(betweenness, state, s)
>     else
>         _parallel_accumulate_basic!(betweenness, state, s)
>     end
> end
>
> Every large structure I'm passing to the remote workers is some form of 
> shared array (spmx, distmx, betweenness). (The answer I need is in the 
> betweenness shared array.)
>
> Any ideas? Thank you.
>
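
One thing that might at least rule the shared arrays in or out is a quick 
size estimate. This is just a sketch - and for a sparse structure like spmx 
it will overestimate, since length counts every entry of the full matrix. 
(As I understand it, a SharedArray is backed by a single mmapped segment, so 
it shows up in each local worker's RSS without actually being duplicated.)

# Back-of-the-envelope sizes for the shared structures (a sketch, not a
# profiler). length(A) * sizeof(eltype(A)) is the dense footprint in bytes.
for (name, A) in [("spmx", spmx), ("distmx", distmx), ("betweenness", betweenness)]
    println(name, ": ", length(A) * sizeof(eltype(A)) / 2^20, " MB")
end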
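
Failing that, the bluntest fallback I can think of is to recycle the workers 
once the answer has been copied out of betweenness - their memory definitely 
goes back to the OS when the processes exit (the process count below is just 
an example):

rmprocs(workers())   # retire all current workers; the OS reclaims their memory
addprocs(4)          # example count - start fresh workers for the next run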
