[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/11664#issuecomment-196526223

Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/53109/
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user SparkQA commented on the pull request: https://github.com/apache/spark/pull/11664#issuecomment-196526180

**[Test build #53109 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53109/consoleFull)** for PR 11664 at commit [`4f9cf91`](https://github.com/apache/spark/commit/4f9cf91a50e15f0246087b70ee855d08f84b4c3e).
 * This patch **fails MiMa tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/11664#issuecomment-196526222

Merged build finished. Test FAILed.
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user SparkQA commented on the pull request: https://github.com/apache/spark/pull/11664#issuecomment-196522347

**[Test build #53109 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53109/consoleFull)** for PR 11664 at commit [`4f9cf91`](https://github.com/apache/spark/commit/4f9cf91a50e15f0246087b70ee855d08f84b4c3e).
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user SparkQA commented on the pull request: https://github.com/apache/spark/pull/11664#issuecomment-196518907

**[Test build #53106 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53106/consoleFull)** for PR 11664 at commit [`a859392`](https://github.com/apache/spark/commit/a85939264610870d41402225ba8983b101814476).
 * This patch **fails Scala style tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/11664#issuecomment-196518923

Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/53106/
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/11664#issuecomment-196518917

Merged build finished. Test FAILed.
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user SparkQA commented on the pull request: https://github.com/apache/spark/pull/11664#issuecomment-196518282

**[Test build #53106 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/53106/consoleFull)** for PR 11664 at commit [`a859392`](https://github.com/apache/spark/commit/a85939264610870d41402225ba8983b101814476).
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user rxin commented on a diff in the pull request: https://github.com/apache/spark/pull/11664#discussion_r55956646

--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala ---
@@ -220,7 +222,61 @@ abstract class SparkPlan extends QueryPlan[SparkPlan] with Logging with Serializ
    * Runs this query returning the result as an array.
    */
   def executeCollect(): Array[InternalRow] = {
-    execute().map(_.copy()).collect()
+    // Packing the UnsafeRows into byte array for faster serialization.
+    // The byte arrays are in the following format:
+    // [size] [bytes of UnsafeRow] [size] [bytes of UnsafeRow] ... [-1]
+    val byteArrayRdd = execute().mapPartitionsInternal { iter =>
+      new Iterator[Array[Byte]] {
--- End diff --

I also find this more understandable if you just write it imperatively within the mapPartitions; something like:

```scala
execute().mapPartitionsInternal { iter =>
  while (iter.hasNext) {
    // write each row to a buffer
  }
  Iterator(buffer)
}
```
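For illustration, a fleshed-out version of that imperative sketch. This is only a sketch: it assumes the whole partition fits in a single 1 MB buffer, which the patch under review cannot assume (its iterator emits a new buffer whenever one fills up); `execute()`, `mapPartitionsInternal`, and the `UnsafeRow` calls are the ones from the diff above.

```scala
execute().mapPartitionsInternal { iter =>
  // Pack the partition's rows into one length-prefixed byte array:
  // [size][row bytes] ... [-1].
  val buffer = java.nio.ByteBuffer.allocate(1 << 20)  // 1 MB, as in the diff
  while (iter.hasNext) {
    val row = iter.next().asInstanceOf[UnsafeRow]
    buffer.putInt(row.getSizeInBytes)
    row.writeTo(buffer)
  }
  buffer.putInt(-1)  // ending mark
  // Trim the backing array down to the bytes actually written.
  val bytes = new Array[Byte](buffer.position())
  System.arraycopy(buffer.array(), 0, bytes, 0, buffer.position())
  Iterator(bytes)
}
```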
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user rxin commented on a diff in the pull request: https://github.com/apache/spark/pull/11664#discussion_r55956582

--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala ---
@@ -220,7 +222,61 @@ abstract class SparkPlan extends QueryPlan[SparkPlan] with Logging with Serializ
    * Runs this query returning the result as an array.
    */
   def executeCollect(): Array[InternalRow] = {
-    execute().map(_.copy()).collect()
+    // Packing the UnsafeRows into byte array for faster serialization.
+    // The byte arrays are in the following format:
+    // [size] [bytes of UnsafeRow] [size] [bytes of UnsafeRow] ... [-1]
+    val byteArrayRdd = execute().mapPartitionsInternal { iter =>
+      new Iterator[Array[Byte]] {
+        private var row: UnsafeRow = _
+        override def hasNext: Boolean = row != null || iter.hasNext
+        override def next: Array[Byte] = {
--- End diff --

next() rather than next
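For reference, the Scala convention behind this comment: parameterless methods with side effects keep their empty parens, while pure accessors drop them. A minimal, self-contained illustration (not code from the patch):

```scala
val it: Iterator[Int] = new Iterator[Int] {
  private var n = 0
  override def hasNext: Boolean = n < 3  // pure accessor: no parens
  override def next(): Int = {           // mutates state: keeps ()
    n += 1
    n
  }
}
println(it.toList)  // List(1, 2, 3)
```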
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user rxin commented on a diff in the pull request: https://github.com/apache/spark/pull/11664#discussion_r55956568

--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala ---
@@ -220,7 +222,61 @@ abstract class SparkPlan extends QueryPlan[SparkPlan] with Logging with Serializ
    * Runs this query returning the result as an array.
    */
   def executeCollect(): Array[InternalRow] = {
-    execute().map(_.copy()).collect()
+    // Packing the UnsafeRows into byte array for faster serialization.
+    // The byte arrays are in the following format:
+    // [size] [bytes of UnsafeRow] [size] [bytes of UnsafeRow] ... [-1]
+    val byteArrayRdd = execute().mapPartitionsInternal { iter =>
+      new Iterator[Array[Byte]] {
+        private var row: UnsafeRow = _
+        override def hasNext: Boolean = row != null || iter.hasNext
+        override def next: Array[Byte] = {
+          var cap = 1 << 20  // 1 MB
+          if (row != null) {
+            // the buffered row could be larger than the default buffer size
+            cap = Math.max(cap, 4 + row.getSizeInBytes + 4)  // reserve 4 bytes for ending mark (-1)
+          }
+          val buffer = ByteBuffer.allocate(cap)
+          if (row != null) {
+            buffer.putInt(row.getSizeInBytes)
+            row.writeTo(buffer)
+            row = null
+          }
+          while (iter.hasNext) {
+            row = iter.next().asInstanceOf[UnsafeRow]
--- End diff --

Are we always taking UnsafeRow now?
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user rxin commented on a diff in the pull request: https://github.com/apache/spark/pull/11664#discussion_r55956440

--- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala ---
@@ -220,7 +222,61 @@ abstract class SparkPlan extends QueryPlan[SparkPlan] with Logging with Serializ
    * Runs this query returning the result as an array.
    */
   def executeCollect(): Array[InternalRow] = {
-    execute().map(_.copy()).collect()
+    // Packing the UnsafeRows into byte array for faster serialization.
+    // The byte arrays are in the following format:
+    // [size] [bytes of UnsafeRow] [size] [bytes of UnsafeRow] ... [-1]
+    val byteArrayRdd = execute().mapPartitionsInternal { iter =>
+      new Iterator[Array[Byte]] {
+        private var row: UnsafeRow = _
+        override def hasNext: Boolean = row != null || iter.hasNext
+        override def next: Array[Byte] = {
+          var cap = 1 << 20  // 1 MB
+          if (row != null) {
+            // the buffered row could be larger than the default buffer size
+            cap = Math.max(cap, 4 + row.getSizeInBytes + 4)  // reserve 4 bytes for ending mark (-1)
+          }
+          val buffer = ByteBuffer.allocate(cap)
+          if (row != null) {
+            buffer.putInt(row.getSizeInBytes)
+            row.writeTo(buffer)
+            row = null
+          }
+          while (iter.hasNext) {
+            row = iter.next().asInstanceOf[UnsafeRow]
+            // Reserve last 4 bytes for ending mark
+            if (4 + row.getSizeInBytes + 4 <= buffer.remaining()) {
+              buffer.putInt(row.getSizeInBytes)
+              row.writeTo(buffer)
+              row = null
+            } else {
+              buffer.putInt(-1)
+              return buffer.array()
+            }
+          }
+          buffer.putInt(-1)
+          // copy the used bytes to make it smaller
+          val bytes = new Array[Byte](buffer.limit())
+          System.arraycopy(buffer.array(), 0, bytes, 0, buffer.limit())
+          bytes
+        }
+      }
+    }
+    // Collect the byte arrays back to driver, then decode them as UnsafeRows.
+    val nFields = schema.length
+    byteArrayRdd.collect().flatMap { bytes =>
--- End diff --

I think this block would be more readable if we just write it imperatively, e.g.:

```scala
val results = new ArrayBuffer[InternalRow]
byteArrayRdd.collect().foreach { bytes =>
  val buffer = ByteBuffer.wrap(bytes)
  var sizeOfNextRow = buffer.getInt()
  while (sizeOfNextRow >= 0) {
    val row = new UnsafeRow(nFields)
    row.pointTo(buffer.array(), Platform.BYTE_ARRAY_OFFSET + buffer.position(), sizeOfNextRow)
    buffer.position(buffer.position() + sizeOfNextRow)
    results += row
    sizeOfNextRow = buffer.getInt()
  }
}
results.toArray
```
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user SparkQA commented on the pull request: https://github.com/apache/spark/pull/11664#issuecomment-195614031

**[Test build #52962 has finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52962/consoleFull)** for PR 11664 at commit [`ac1a40b`](https://github.com/apache/spark/commit/ac1a40b3f04fd934ed2c4e82f7c62c28c4059e35).
 * This patch **fails Spark unit tests**.
 * This patch merges cleanly.
 * This patch adds no public classes.
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/11664#issuecomment-195614082

Test FAILed. Refer to this link for build results (access rights to CI server needed): https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/52962/
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user AmplabJenkins commented on the pull request: https://github.com/apache/spark/pull/11664#issuecomment-195614080

Merged build finished. Test FAILed.
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user SparkQA commented on the pull request: https://github.com/apache/spark/pull/11664#issuecomment-195602888

**[Test build #52962 has started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52962/consoleFull)** for PR 11664 at commit [`ac1a40b`](https://github.com/apache/spark/commit/ac1a40b3f04fd934ed2c4e82f7c62c28c4059e35).
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
Github user davies commented on the pull request: https://github.com/apache/spark/pull/11664#issuecomment-195602213

cc @nongli @rxin
[GitHub] spark pull request: [SPARK-XX] [SQL] fast serialization for collec...
GitHub user davies opened a pull request:

    https://github.com/apache/spark/pull/11664

    [SPARK-XX] [SQL] fast serialization for collecting DataFrame/Dataset

## What changes were proposed in this pull request?

When we call DataFrame/Dataset.collect(), the Java serializer (or Kryo serializer) is used to serialize the UnsafeRows on the executors and deserialize them back into UnsafeRows on the driver. The Java serializer (and the Kryo serializer) are slow on millions of rows, because they try to find identical rows, but usually there are none.

This PR packs the UnsafeRows into byte arrays first, so the Java serializer (or Kryo serializer) only has to serialize the bytes, which is very fast (there are fewer blocks, and byte arrays are not compared by content).

Tested with:
```
sqlContext.range(5 << 20).collect()
```
After this PR, the collect() finishes in 2 seconds (instead of 12 seconds before this PR).

(this requires #11659)

## How was this patch tested?

Existing unit tests.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/davies/spark serialize_row

Alternatively you can review and apply these changes as the patch at: https://github.com/apache/spark/pull/11664.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #11664

commit ac1a40b3f04fd934ed2c4e82f7c62c28c4059e35
Author: Davies Liu
Date: 2016-03-11T23:18:20Z

    fast serialization for collecting DataFrame/Dataset
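For illustration, a minimal self-contained sketch of the length-prefixed framing this description refers to; `pack` and `unpack` are hypothetical names for this sketch, not Spark APIs:

```scala
import java.nio.ByteBuffer

// Pack variable-length records as [size][bytes] ... [-1], so a whole
// batch of records becomes one flat byte array.
def pack(records: Seq[Array[Byte]]): Array[Byte] = {
  val total = records.map(4 + _.length).sum + 4  // +4 for the -1 ending mark
  val buffer = ByteBuffer.allocate(total)
  records.foreach { r =>
    buffer.putInt(r.length)
    buffer.put(r)
  }
  buffer.putInt(-1)
  buffer.array()
}

// Walk the size prefixes back out until the -1 ending mark.
def unpack(bytes: Array[Byte]): Seq[Array[Byte]] = {
  val buffer = ByteBuffer.wrap(bytes)
  val out = Seq.newBuilder[Array[Byte]]
  var size = buffer.getInt()
  while (size >= 0) {
    val r = new Array[Byte](size)
    buffer.get(r)
    out += r
    size = buffer.getInt()
  }
  out.result()
}

// One flat byte array serializes much faster through the Java/Kryo
// serializer than millions of individual row objects:
// unpack(pack(Seq("a".getBytes, "bc".getBytes))).map(new String(_))
//   == Seq("a", "bc")
```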