[jira] [Commented] (SPARK-19353) Support binary I/O in PipedRDD
[ https://issues.apache.org/jira/browse/SPARK-19353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15904058#comment-15904058 ]

Apache Spark commented on SPARK-19353:
--------------------------------------

User 'superbobry' has created a pull request for this issue:
https://github.com/apache/spark/pull/17230

> Support binary I/O in PipedRDD
> ------------------------------
>
>                 Key: SPARK-19353
>                 URL: https://issues.apache.org/jira/browse/SPARK-19353
>             Project: Spark
>          Issue Type: Improvement
>            Reporter: Sergei Lebedev
>            Priority: Minor
>
> The current design of RDD.pipe is very restrictive.
> It is line-based: each element of the input RDD [gets
> serialized|https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/rdd/PipedRDD.scala#L143]
> into one or more lines. Similarly, for the output of the child process, one
> line corresponds to a single element of the output RDD.
> The output format can be customized via {{printRDDElement}}, but the input
> format cannot.
> It is not designed for extensibility. The only way to get a "BinaryPipedRDD"
> is to copy/paste most of it and change the relevant parts.
> These limitations have been discussed on
> [SO|http://stackoverflow.com/questions/27986830/how-to-pipe-binary-data-in-apache-spark]
> and the mailing list, but no issue had been created until now.
> A possible solution to at least the first two limitations is to factor the
> format out into a separate object (or objects), for instance {{InputWriter}}
> and {{OutputReader}}, following the Hadoop streaming API:
> {code}
> trait InputWriter[T] {
>   def write(os: OutputStream, elem: T)
> }
>
> trait OutputReader[T] {
>   def read(is: InputStream): T
> }
> {code}
> The default configuration would write and read in the line-based format, but
> users would also be able to selectively swap in the appropriate
> implementations.
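The proposal above can be sketched as a small self-contained example: the two traits paired with line-based default implementations that reproduce PipedRDD's current behaviour. This is a hedged illustration only; the trait names come from the issue description, while {{TextInputWriter}} and {{TextOutputReader}} (and their exact signatures) are hypothetical and do not correspond to any merged Spark API.

```scala
import java.io.{BufferedReader, InputStream, InputStreamReader, OutputStream}
import java.nio.charset.StandardCharsets

// Extension points as proposed in the issue (names from the description).
trait InputWriter[T] {
  def write(os: OutputStream, elem: T): Unit
}

trait OutputReader[T] {
  def read(is: InputStream): T
}

// Hypothetical line-based defaults, mirroring what PipedRDD does today:
// each element is one newline-terminated UTF-8 line.
class TextInputWriter extends InputWriter[String] {
  def write(os: OutputStream, elem: String): Unit = {
    os.write(elem.getBytes(StandardCharsets.UTF_8))
    os.write('\n')
  }
}

class TextOutputReader extends OutputReader[String] {
  // Wrap the stream once and keep the buffered reader across calls,
  // so buffered bytes are not lost between reads.
  private var reader: BufferedReader = _

  def read(is: InputStream): String = {
    if (reader == null) {
      reader = new BufferedReader(new InputStreamReader(is, StandardCharsets.UTF_8))
    }
    reader.readLine() // returns null at end of stream
  }
}
```

A binary format would then be just another pair of implementations (e.g. length-prefixed byte arrays) passed to the pipe call, instead of a copy/pasted "BinaryPipedRDD".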
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org For additional commands, e-mail: issues-h...@spark.apache.org
[jira] [Commented] (SPARK-19353) Support binary I/O in PipedRDD
[ https://issues.apache.org/jira/browse/SPARK-19353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15852769#comment-15852769 ]

Apache Spark commented on SPARK-19353:
--------------------------------------

User 'superbobry' has created a pull request for this issue:
https://github.com/apache/spark/pull/16805
[jira] [Commented] (SPARK-19353) Support binary I/O in PipedRDD
[ https://issues.apache.org/jira/browse/SPARK-19353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15852498#comment-15852498 ]

Sergei Lebedev commented on SPARK-19353:
----------------------------------------

For reference: we have a fully backward-compatible [implementation|https://github.com/criteo-forks/spark/pull/26] of binary PipedRDD in our GitHub fork.