[ https://issues.apache.org/jira/browse/SPARK-10199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14725211#comment-14725211 ]

Vinod KC edited comment on SPARK-10199 at 9/1/15 2:15 PM:
----------------------------------------------------------

[~mengxr]
I've measured the overhead of reflection in the save/load operations; please refer 
to the results at this link:
https://github.com/vinodkc/xtique/blob/master/overhead_duetoReflection.csv

I've also measured the performance gain in the save/load methods without reflection, 
averaged over 5 test executions.
Please refer to the performance gain percentages at these two links:
https://github.com/vinodkc/xtique/blob/master/performance_Benchmark_save.csv
https://github.com/vinodkc/xtique/blob/master/performance_Benchmark_load.csv



was (Author: vinodkc):
[~mengxr]
I've measured the overhead of reflexion in save/load operation, please refer 
the results in this link
https://github.com/vinodkc/xtique/blob/master/overhead_duetoReflection.csv

Also I've measured the performance gain in save/load methods without reflexion 
after taking  average of 5  times test executions
Please refer the performance gain % in this two links
https://github.com/vinodkc/xtique/blob/master/performance_Benchmark_save.csv
https://github.com/vinodkc/xtique/blob/master/performance_Benchmark_load.csv


> Avoid using reflections for parquet model save
> ----------------------------------------------
>
>                 Key: SPARK-10199
>                 URL: https://issues.apache.org/jira/browse/SPARK-10199
>             Project: Spark
>          Issue Type: Improvement
>          Components: ML, MLlib
>            Reporter: Feynman Liang
>            Priority: Minor
>
> These items are not high priority since the overhead of writing to Parquet is 
> much greater than that of runtime reflection.
> Multiple model save/load implementations in MLlib use case classes to infer a 
> schema for the data frame saved to Parquet. However, inferring a schema from 
> case classes or tuples uses [runtime 
> reflection|https://github.com/apache/spark/blob/d7b4c095271c36fccc7f9ded267ecf5ec66fac803/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala#L361]
>  which is unnecessary, since the types are already known at the time `save` is 
> called.
> It would be better to specify the schema for the data frame directly, 
> using {{sqlContext.createDataFrame(dataRDD, schema)}}.
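For illustration, a minimal sketch of the explicit-schema approach the issue suggests. The names `dataRDD`, `path`, and the field names/types are assumptions for the example, not the actual MLlib save code:

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{DoubleType, IntegerType, StructField, StructType}

// Reflection-based path (conceptually what the current save code does):
//   case class Data(clusterId: Int, weight: Double)
//   sqlContext.createDataFrame(dataRDD.map { case (id, w) => Data(id, w) })
//   // schema is inferred from the case class via runtime reflection

// Explicit-schema path: the types are known at save time, so declare them directly.
val schema = StructType(Seq(
  StructField("clusterId", IntegerType, nullable = false),
  StructField("weight", DoubleType, nullable = false)))

// Convert each record to a generic Row matching the declared schema.
val rowRDD = dataRDD.map { case (id: Int, w: Double) => Row(id, w) }

// No reflection needed: the schema is passed in explicitly.
val df = sqlContext.createDataFrame(rowRDD, schema)
df.write.parquet(path)
```

The trade-off is a few extra lines of boilerplate per model format in exchange for skipping schema inference on every save call.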



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
