hi,

here is a simple example where i run minimize_blockmodel_dl() 10 times in 
parallel using multiprocessing and collect the entropies. when i run this, i 
get the exact same entropy value every single time.

```
import multiprocessing as mp
import numpy as np
import graph_tool.all as gt

# load graph
g = gt.collection.data["celegansneural"]

N_iter = 10

def get_sbm_entropy():
    np.random.seed()  # reseed numpy's RNG in this worker from OS entropy
    state = gt.minimize_blockmodel_dl(g)
    return state.entropy()

def _parallel_mc(n=N_iter):
    with mp.Pool(10) as pool:
        future_res = [pool.apply_async(get_sbm_entropy) for _ in range(n)]
        return [f.get() for f in future_res]

def parallel_monte_carlo(n=N_iter):
    return _parallel_mc(n)

print(parallel_monte_carlo())
```

result: [8331.810102822546, 8331.810102822546, 8331.810102822546, 
8331.810102822546, 8331.810102822546, 8331.810102822546, 8331.810102822546, 
8331.810102822546, 8331.810102822546, 8331.810102822546]

ultimately i would like to keep the entropy as well as the block membership 
vector from each iteration

any ideas?

cheers,
-sam
_______________________________________________
graph-tool mailing list -- graph-tool@skewed.de
To unsubscribe send an email to graph-tool-le...@skewed.de