Thanks for your reply.

> It is clear to you that if you run 10 processes in parallel you will need 10
> times more memory, right?

That is clear to me. However, 2 GB × 10 ≈ 20 GB, and the machine has 260 GB of
memory, so unless the algorithm creates 10 copies of the graph within each
iteration, it should be well within bounds.

Ideally the parallelization would not copy the entire graph. Since the edges
are fixed, each process only needs its own copy of the vertex properties. I
haven't figured out how to do that yet, and I was hoping someone might have
tips on how to do it in graph-tool. There are ways to do this with other
objects (data frames, lists), but I am not sure how to approach it with
graph-tool graphs.

> Maybe you can reproduce the problem for a smaller graph, or show how
> much memory you are actually using for a single process. It would also
> be important to tell us what version of graph-tool you are running.

Will do this soon.
_______________________________________________
graph-tool mailing list -- graph-tool@skewed.de
To unsubscribe send an email to graph-tool-le...@skewed.de
