[ https://issues.apache.org/jira/browse/SPARK-23646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16394392#comment-16394392 ]

Hyukjin Kwon commented on SPARK-23646:
--------------------------------------

This sounds more like a question than a bug report. I would recommend asking 
on the dev mailing list first before filing an issue here.

> pyspark DataFrameWriter ignores customized settings?
> ----------------------------------------------------
>
>                 Key: SPARK-23646
>                 URL: https://issues.apache.org/jira/browse/SPARK-23646
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.2.1
>            Reporter: Chuan-Heng Hsiao
>            Priority: Major
>
> I am using spark-2.2.1-bin-hadoop2.7 in standalone mode
> (Python 3.5.2 on Ubuntu 16.04).
> I intended to have a DataFrame written to HDFS with a customized block size,
> but the setting is ignored: the files come out with the default 128 MB blocks.
> However, writing the corresponding RDD honors the customized block size.
>  
>  
> The following is the test code.
> (dfs.namenode.fs-limits.min-block-size has been set to 131072 in HDFS;
> see the hdfs-site.xml sketch below.)
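>
> A minimal hdfs-site.xml sketch of that namenode setting (property name and
> value as noted above; placement in hdfs-site.xml is my assumption about the
> cluster setup):
>
> <property>
>   <name>dfs.namenode.fs-limits.min-block-size</name>
>   <value>131072</value>
> </property>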
>  
>  
> ##########
> # init
> ##########
> from pyspark import SparkContext, SparkConf
> from pyspark.sql import SparkSession
>  
> import hdfs
> from hdfs import InsecureClient
> import os
>  
> import numpy as np
> import pandas as pd
> import logging
>  
> os.environ['SPARK_HOME'] = '/opt/spark-2.2.1-bin-hadoop2.7'
>  
> block_size = 512 * 1024
>  
> conf = (SparkConf()
>         .setAppName("DCSSpark")
>         .setMaster("spark://spark1:7077")
>         .set("spark.cores.max", 20)
>         .set("spark.executor.cores", 10)
>         .set("spark.executor.memory", "10g")
>         # spark.hadoop.* entries are copied into the Hadoop Configuration
>         .set("spark.hadoop.dfs.blocksize", str(block_size))
>         .set("spark.hadoop.dfs.block.size", str(block_size)))
>  
> spark = SparkSession.builder.config(conf=conf).getOrCreate()
> spark.sparkContext._jsc.hadoopConfiguration().setInt("dfs.blocksize", block_size)
> spark.sparkContext._jsc.hadoopConfiguration().setInt("dfs.block.size", block_size)
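> # optional sanity check (a sketch, not part of the original run):
> # Configuration.get() should now return the customized value as a string
> print(spark.sparkContext._jsc.hadoopConfiguration().get("dfs.blocksize"))
> # expected: '524288'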
>  
> ##########
> # main
> ##########
> # create DataFrame
> df_txt = spark.createDataFrame([{'temp': "hello"}, {'temp': "world"}, {'temp': "!"}])
>  
> # save using DataFrameWriter: results in the default 128 MB block size
> df_txt.write.mode('overwrite').format('parquet').save('hdfs://spark1/tmp/temp_with_df')
>  
> # save using the underlying RDD: results in the customized 512 KB block size
> client = InsecureClient('http://spark1:50070')
> client.delete('/tmp/temp_with_rdd', recursive=True)
> df_txt.rdd.saveAsTextFile('hdfs://spark1/tmp/temp_with_rdd')
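>
> # How I check the resulting block sizes (a sketch using the same
> # InsecureClient; 'blockSize' is a standard field of the WebHDFS FileStatus
> # response returned by client.status):
> for name in client.list('/tmp/temp_with_df'):
>     print(name, client.status('/tmp/temp_with_df/' + name)['blockSize'])
> # -> 134217728 (the default 128 MB), despite the settings above
> for name in client.list('/tmp/temp_with_rdd'):
>     print(name, client.status('/tmp/temp_with_rdd/' + name)['blockSize'])
> # -> 524288 (512 KB), as configured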


