[ https://issues.apache.org/jira/browse/SPARK-1308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14019742#comment-14019742 ]
Syed A. Hashmi commented on SPARK-1308:
---------------------------------------

Created pull request https://github.com/apache/spark/pull/995 to address this issue.

[~matei]: Can you please assign this JIRA to me and review the PR?

> Add partitions() method to PySpark RDDs
> ----------------------------------------
>
>                 Key: SPARK-1308
>                 URL: https://issues.apache.org/jira/browse/SPARK-1308
>             Project: Spark
>          Issue Type: New Feature
>          Components: PySpark
>    Affects Versions: 0.9.0
>            Reporter: Nicholas Chammas
>            Priority: Minor
>
> In Spark, you can do this:
> {code}
> // Scala
> val a = sc.parallelize(List(1, 2, 3, 4), 4)
> a.partitions.size
> {code}
> Please make this possible in PySpark too.
> The work-around available is quite simple:
> {code}
> # Python
> a = sc.parallelize([1, 2, 3, 4], 4)
> a._jrdd.splits().size()
> {code}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
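For reference, a minimal sketch of how the requested feature might look from the user's side. This assumes PySpark is installed and a local master is available; the method name {{getNumPartitions()}} is the name PySpark eventually exposed for this, and may differ from what the linked PR first proposed.

{code}
# Python -- sketch only, requires a working Spark installation
from pyspark import SparkContext

sc = SparkContext("local[4]", "partitions-example")

a = sc.parallelize([1, 2, 3, 4], 4)

# Work-around from the issue description (reaches into the wrapped Java RDD,
# so it depends on a private attribute and may break across Spark versions):
#   a._jrdd.splits().size()

# First-class equivalent requested by this issue:
print(a.getNumPartitions())  # expected: 4

sc.stop()
{code}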