This should work, and I don't think it triggers any actions: df.rdd.partitions.length
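For context, a minimal sketch (assuming a running SparkSession named `spark`; `myDF` is a hypothetical DataFrame). Accessing `partitions.length` only inspects the RDD's partition metadata, so no job is launched; `getNumPartitions` is an equivalent convenience method on RDD since Spark 1.6:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("partition-count").getOrCreate()

// Hypothetical DataFrame for illustration
val myDF = spark.range(0, 1000000)

// Both lines read partition metadata only -- no Spark action is triggered
val n1 = myDF.rdd.partitions.length
val n2 = myDF.rdd.getNumPartitions
```

Note that `df.rdd` does plan the conversion from Dataset to RDD, but counting partitions itself does not execute the job.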
On Fri, Jul 22, 2016 at 2:20 PM, Neil Chang <iam...@gmail.com> wrote:
> Seems no function does this in Spark 2.0 preview?

--
Pedro Rodriguez
PhD Student in Distributed Machine Learning | CU Boulder
UC Berkeley AMPLab Alumni
ski.rodrig...@gmail.com | pedrorodriguez.io | 909-353-4423
Github: github.com/EntilZha | LinkedIn: https://www.linkedin.com/in/pedrorodriguezscience