Re: pyspark DataFrameWriter ignores customized settings?

2018-03-16 Thread chhsiao1981
Hi all,

Found the answer at the following link:

https://forums.databricks.com/questions/918/how-to-set-size-of-parquet-output-files.html

I can successfully set the Parquet block size with
spark.hadoop.parquet.block.size.

The following is the sample code:

# init
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession

block_size = 512 * 1024

conf = (SparkConf()
        .setAppName("myapp")
        .setMaster("spark://spark1:7077")
        .set('spark.cores.max', 20)
        .set("spark.executor.cores", 10)
        .set("spark.executor.memory", "10g")
        .set('spark.hadoop.parquet.block.size', str(block_size))
        .set("spark.hadoop.dfs.blocksize", str(block_size))
        .set("spark.hadoop.dfs.block.size", str(block_size))
        .set("spark.hadoop.dfs.namenode.fs-limits.min-block-size", str(131072)))

sc = SparkContext(conf=conf)
spark = SparkSession(sc)

# create DataFrame
df_txt = spark.createDataFrame([{'temp': "hello"}, {'temp': "world"}, {'temp': "!"}])

# save using DataFrameWriter, resulting in a 512 KB block size
df_txt.write.mode('overwrite').format('parquet').save('hdfs://spark1/tmp/temp_with_df')
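
For reference, a minimal sketch of how the resulting block size could be
verified, using the WebHDFS endpoint (http://spark1:50070) and the hdfs Python
client that also appear later in this thread; the loop and field names come
from the WebHDFS FileStatus response and are illustrative, not part of the
original run:

from hdfs import InsecureClient

# check the HDFS block size reported for each written Parquet part-file;
# 524288 (512 KB) is expected after setting spark.hadoop.dfs.blocksize
client = InsecureClient('http://spark1:50070')
for name in client.list('/tmp/temp_with_df'):
    status = client.status('/tmp/temp_with_df/' + name)
    if status['type'] == 'FILE':
        print(name, status['blockSize'], status['length'])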








Re: pyspark DataFrameWriter ignores customized settings?

2018-03-16 Thread chhsiao1981
Hi all,

Looks like it's a Parquet-specific issue.

I can successfully write with a 512 KB block size if I use df.write.csv() or
df.write.text(). (The CSV write works once I put hadoop-lzo-0.4.15-cdh5.13.0.jar
into the jars dir.)

sample code:


from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession
from hdfs import InsecureClient

block_size = 512 * 1024

conf = (SparkConf()
        .setAppName("myapp")
        .setMaster("spark://spark1:7077")
        .set('spark.cores.max', 20)
        .set("spark.executor.cores", 10)
        .set("spark.executor.memory", "10g")
        .set("spark.hadoop.dfs.blocksize", str(block_size))
        .set("spark.hadoop.dfs.block.size", str(block_size))
        .set("spark.hadoop.dfs.namenode.fs-limits.min-block-size", str(131072)))

sc = SparkContext(conf=conf)
spark = SparkSession(sc)

# create DataFrame
df_txt = spark.createDataFrame([{'temp': "hello"}, {'temp': "world"}, {'temp': "!"}])

# save using DataFrameWriter, resulting in a 128 MB block size
df_txt.write.mode('overwrite').format('parquet').save('hdfs://spark1/tmp/temp_with_df')

# save using DataFrameWriter.csv, resulting in a 512 KB block size
df_txt.write.mode('overwrite').csv('hdfs://spark1/tmp/temp_with_df_csv')

# save using DataFrameWriter.text, resulting in a 512 KB block size
df_txt.write.mode('overwrite').text('hdfs://spark1/tmp/temp_with_df_text')

# save using rdd, resulting in a 512 KB block size
client = InsecureClient('http://spark1:50070')
client.delete('/tmp/temp_with_rrd', recursive=True)
df_txt.rdd.saveAsTextFile('hdfs://spark1/tmp/temp_with_rrd')
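
Reusing the client created above, a minimal sketch for comparing the block
sizes of the different outputs (the print_block_sizes helper is illustrative,
not part of the original run); per the results above, the Parquet output is
expected to report 128 MB (134217728) and the csv/text outputs 512 KB (524288):

# print the HDFS block size of each part-file under a directory,
# to compare the parquet output with the csv/text outputs
def print_block_sizes(path):
    for name in client.list(path):
        status = client.status(path + '/' + name)
        if status['type'] == 'FILE':
            print(path, name, status['blockSize'])

print_block_sizes('/tmp/temp_with_df')       # parquet: expected 134217728
print_block_sizes('/tmp/temp_with_df_csv')   # csv: expected 524288
print_block_sizes('/tmp/temp_with_df_text')  # text: expected 524288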


