Yes, the SparkContext in the Python API holds a reference to the JavaSparkContext (jsc):
https://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.SparkContext

Through it you can access the Hadoop configuration.
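For example, a minimal sketch (the config keys, path, and record length below
are only illustrative, and the property that actually controls split size can
vary with your Hadoop version and input format):

    from pyspark import SparkContext

    sc = SparkContext(appName="blocksize-demo")

    # sc._jsc is the JavaSparkContext; hadoopConfiguration() returns the
    # underlying org.apache.hadoop.conf.Configuration via Py4J.
    hadoop_conf = sc._jsc.hadoopConfiguration()

    # For testing, force blocks smaller than the default so binaryRecords
    # yields more partitions (property names are assumptions; check your
    # Hadoop version):
    hadoop_conf.set("dfs.blocksize", str(1024 * 1024))        # 1 MB (Hadoop 2.x)
    hadoop_conf.set("fs.local.block.size", str(1024 * 1024))  # local filesystem

    # hypothetical input file and record length, just for illustration:
    rdd = sc.binaryRecords("/path/to/records.bin", recordLength=16)
    print(rdd.getNumPartitions())

Note that _jsc is underscore-prefixed, i.e. not officially part of the public
API, so it may change between releases.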

On Tue, May 12, 2015 at 6:39 AM, ayan guha <guha.a...@gmail.com> wrote:

> Hi
>
> I found this method in scala API but not in python API (1.3.1).
>
> Basically, I want to change the blocksize in order to read a binary file using
> sc.binaryRecords but with multiple partitions (for testing I want to
> generate partitions smaller than the default blocksize).
>
> Is it possible in Python? If so, how?
>
> --
> Best Regards,
> Ayan Guha
>
