On Sep 2, 2015, at 11:31 PM, Davies Liu wrote:
Could you share a short script to reproduce this?
Good point. Here you go. This is Python 3.4.3 on Ubuntu 15.04.
```
import pandas as pd  # must be in default path for interpreter
import pyspark
```
On Sep 3, 2015, at 12:39 PM, Davies Liu wrote:
I don't think this is a problem with PySpark; you would also see this if you profiled this script:
```
list(map(map_, range(sc.defaultParallelism)))
```
```
     ncalls  tottime  percall  cumtime  percall
81777/80874    0.086    0.000    0.360    0.000
```
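The point above can be demonstrated without Spark at all: if the mapped function triggers an import, importlib's `_find_and_load` shows up in the profile. A minimal sketch, assuming nothing from the thread beyond that mechanism (the `work` function, the use of `json`, and the forced cache eviction are all illustrative):

```python
import cProfile
import io
import pstats
import sys

def work(i):
    # Drop the cached module so every call exercises the full import
    # machinery, roughly as a fresh worker process would.
    sys.modules.pop("json", None)
    import json
    return json.dumps(i)

profiler = cProfile.Profile()
profiler.enable()
list(map(work, range(200)))
profiler.disable()

# Restrict the report to the import-machinery frames.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats("_find_and_load")
report = out.getvalue()
print(report)
```

On a cached import (no `sys.modules.pop`), recent CPython versions short-circuit in C and `_find_and_load` largely disappears from the profile, which is consistent with the overhead being per-(re)import rather than per-call.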
Hello,
I have a PySpark computation that relies on Pandas and NumPy. Currently, my
inner loop iterates 2,000 times. I’m seeing the following show up in my
profiling:
```
     ncalls  tottime  percall  cumtime  percall  filename:lineno(function)
74804/29102    0.204    0.000    2.173    0.000  <frozen importlib._bootstrap>:2234(_find_and_load)
74804/29102    0.145    0.000    1.867    0.000
```