[ https://issues.apache.org/jira/browse/SPARK-1308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Reynold Xin updated SPARK-1308:
-------------------------------

    Assignee: Syed A. Hashmi  (was: Reynold Xin)

> Add getNumPartitions() method to PySpark RDDs
> ---------------------------------------------
>
>                 Key: SPARK-1308
>                 URL: https://issues.apache.org/jira/browse/SPARK-1308
>             Project: Spark
>          Issue Type: New Feature
>          Components: PySpark
>    Affects Versions: 0.9.0
>            Reporter: Nicholas Chammas
>            Assignee: Syed A. Hashmi
>            Priority: Minor
>             Fix For: 1.1.0
>
>
> In Spark, you can do this:
> {code}
> // Scala
> val a = sc.parallelize(List(1, 2, 3, 4), 4)
> a.partitions.size
> {code}
> Please make this possible in PySpark too.
> The available work-around is quite simple:
> {code}
> # Python
> a = sc.parallelize([1, 2, 3, 4], 4)
> a._jrdd.splits().size()
> {code}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
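For illustration, here is a minimal pure-Python sketch of the API shape being requested. The `FakeRDD` class below is hypothetical (not the real PySpark `RDD`, which is backed by a JVM object); it only shows what a `getNumPartitions()` convenience method would return for a collection split into a fixed number of partitions:

```python
# Hypothetical stand-in for an RDD, for illustration only -- not the
# real PySpark implementation.
class FakeRDD:
    def __init__(self, data, num_partitions):
        # Distribute elements round-robin into num_partitions buckets.
        self.partitions = [data[i::num_partitions] for i in range(num_partitions)]

    def getNumPartitions(self):
        # The requested convenience method: just report the partition count,
        # without reaching into private attributes like _jrdd.
        return len(self.partitions)

a = FakeRDD([1, 2, 3, 4], 4)
print(a.getNumPartitions())  # 4
```

The point of the request is ergonomics: users get the partition count from a public method rather than the private `_jrdd.splits().size()` work-around shown above.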