[jira] [Commented] (SPARK-8277) SparkR createDataFrame is slow
[ https://issues.apache.org/jira/browse/SPARK-8277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14969993#comment-14969993 ]

Apache Spark commented on SPARK-8277:
-------------------------------------

User 'saurfang' has created a pull request for this issue:
https://github.com/apache/spark/pull/9234

> SparkR createDataFrame is slow
> ------------------------------
>
>                 Key: SPARK-8277
>                 URL: https://issues.apache.org/jira/browse/SPARK-8277
>             Project: Spark
>          Issue Type: Bug
>          Components: SparkR
>    Affects Versions: 1.4.0
>            Reporter: Shivaram Venkataraman
>
> For example, calling `createDataFrame` on the data from
> http://s3-us-west-2.amazonaws.com/sparkr-data/flights.csv takes a really long
> time. This is mainly because we try to convert a DataFrame to a List in order
> to parallelize it by rows, and the conversion from DF to list is very slow for
> large data frames.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)

To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
[ https://issues.apache.org/jira/browse/SPARK-8277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14608689#comment-14608689 ]

Shivaram Venkataraman commented on SPARK-8277:
----------------------------------------------

Yeah, so the bottleneck is converting R data frames from columns to a list of rows. It would be interesting to see if we can serialize one column at a time and then somehow add them as columns to the Scala DataFrame (or do a column-to-row conversion in Scala). [~cafreeman] was looking at some related stuff at some point.
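To make the two strategies in the comment above concrete, here is a language-neutral sketch in Python (not SparkR code; all names and data below are invented for illustration). An R data.frame is column-oriented, like a dict of equal-length vectors; the slow path materializes one object per row, while the proposed path serializes each column as a single contiguous unit and leaves the column-to-row transposition to the JVM side:

```python
import pickle

# Stand-in for a column-oriented data frame (hypothetical data, not from the issue).
columns = {
    "origin": ["PDX", "SEA", "PDX"],
    "delay":  [5, 12, 0],
}

# Slow path (roughly what createDataFrame does today): build one row object
# per record before parallelizing by rows. Cost grows with n_rows * n_cols
# individual element accesses and allocations.
n_rows = len(next(iter(columns.values())))
rows = [tuple(col[i] for col in columns.values()) for i in range(n_rows)]

# Proposed path: serialize each column in one shot and ship the serialized
# columns, deferring the column-to-row conversion to the receiving side.
serialized_columns = {name: pickle.dumps(col) for name, col in columns.items()}
```

The point of the sketch is the shape of the work, not the serializer: per-column serialization touches each column once as a unit, whereas the row-wise path constructs a separate object for every row, which is where the R-side slowdown reported in this issue comes from.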
[ https://issues.apache.org/jira/browse/SPARK-8277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607190#comment-14607190 ]

Felix Cheung commented on SPARK-8277:
-------------------------------------

What would be a better approach? Would it work to serialize the R native DataFrame into bytes and then run a version of SQLUtils.bytesToRow?