[GitHub] spark pull request #17230: [SPARK-19353][CORE] Generalize PipedRDD to use I/...

2017-12-07 Thread superbobry
Github user superbobry closed the pull request at:

https://github.com/apache/spark/pull/17230





[GitHub] spark pull request #17230: [SPARK-19353][CORE] Generalize PipedRDD to use I/...

2017-03-28 Thread superbobry
Github user superbobry commented on a diff in the pull request:

https://github.com/apache/spark/pull/17230#discussion_r108351585
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/PipedRDD.scala ---
@@ -198,17 +183,114 @@ private[spark] class PipedRDD[T: ClassTag](
         val t = childThreadException.get()
         if (t != null) {
           val commandRan = command.mkString(" ")
-          logError(s"Caught exception while running pipe() operator. Command ran: $commandRan. " +
-            s"Exception: ${t.getMessage}")
-          proc.destroy()
+          logError("Caught exception while running pipe() operator. " +
+            s"Command ran: $commandRan.", t)
           cleanup()
+          proc.destroy()
           throw t
         }
       }
     }
   }
 }
 
+/** Specifies how to write the elements of the input [[RDD]] into the pipe. */
+trait InputWriter[T] extends Serializable {
+  def write(dos: DataOutput, elem: T): Unit
+}
+
+/** Specifies how to read the elements from the pipe into the output [[RDD]]. */
+trait OutputReader[T] extends Serializable {
+  /**
+   * Reads the next element.
+   *
+   * The input is guaranteed to have at least one byte.
+   */
+  def read(dis: DataInput): T
+}
+
+class TextInputWriter[I](
+    encoding: String = Codec.defaultCharsetCodec.name,
+    printPipeContext: (String => Unit) => Unit = null,
+    printRDDElement: (I, String => Unit) => Unit = null
+) extends InputWriter[I] {
+
+  private[this] val lineSeparator = System.lineSeparator().getBytes(encoding)
+  private[this] var initialized = printPipeContext == null
+
+  private def writeLine(dos: DataOutput, s: String): Unit = {
+    dos.write(s.getBytes(encoding))
+    dos.write(lineSeparator)
+  }
+
+  override def write(dos: DataOutput, elem: I): Unit = {
+    if (!initialized) {
+      printPipeContext(writeLine(dos, _))
+      initialized = true
+    }
+
+    if (printRDDElement == null) {
+      writeLine(dos, String.valueOf(elem))
+    } else {
+      printRDDElement(elem, writeLine(dos, _))
+    }
+  }
+}
+
+class TextOutputReader(
+    encoding: String = Codec.defaultCharsetCodec.name
+) extends OutputReader[String] {
+
+  private[this] val lf = "\n".getBytes(encoding)
+  private[this] val cr = "\r".getBytes(encoding)
+  private[this] val crlf = cr ++ lf
+  private[this] var buf = Array.ofDim[Byte](64)
+  private[this] var used = 0
+
+  @inline
+  /** Checks that the suffix of [[buf]] matches [[other]]. */
+  private def endsWith(other: Array[Byte]): Boolean = {
+    var i = used - 1
+    var j = other.length - 1
+    (j <= i) && {
+      while (j >= 0) {
+        if (buf(i) != other(j)) {
+          return false
+        }
+        i -= 1
+        j -= 1
+      }
+      true
+    }
+  }
+
+  override def read(dis: DataInput): String = {
--- End diff --

I initially had [`readLine`](https://docs.oracle.com/javase/7/docs/api/java/io/DataInput.html#readLine()) here, but the problem with it is that it assumes ASCII and therefore does not work for arbitrary encodings. For example, reading a UTF-32 encoded "foobar\n" would leave extra zeros at the end of the string.

I have yet to make another pass over these changes, but previous benchmarking suggested that the bottleneck is, to my surprise, the `String(byte[], Charset)` constructor. Of course, there is always the possibility that the profiler is biased :)
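
To make the encoding problem concrete, here is a minimal standalone sketch (not part of the patch; the charset choice and the counts are just illustrative) of how `DataInput.readLine` mangles a multi-byte encoding:

```scala
import java.io.{ByteArrayInputStream, DataInputStream}

object ReadLineEncodingDemo {
  def main(args: Array[String]): Unit = {
    // "foobar\n" encoded with a four-byte-per-character charset.
    val bytes = "foobar\n".getBytes("UTF-32LE")

    // DataInput.readLine (deprecated) widens each byte to a char and stops at
    // the first 0x0A byte, so the padding bytes survive as NUL characters.
    val dis = new DataInputStream(new ByteArrayInputStream(bytes))
    val line = dis.readLine()

    println(line.length)                // 24 instead of the expected 6
    println(line.count(_ == '\u0000'))  // 18 stray NUL characters
  }
}
```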







[GitHub] spark pull request #17230: [SPARK-19353][CORE] Generalize PipedRDD to use I/...

2017-03-27 Thread jodersky
Github user jodersky commented on a diff in the pull request:

https://github.com/apache/spark/pull/17230#discussion_r108307572
  
--- Diff: core/src/main/scala/org/apache/spark/rdd/PipedRDD.scala ---
@@ -198,17 +183,114 @@ private[spark] class PipedRDD[T: ClassTag](
         val t = childThreadException.get()
         if (t != null) {
           val commandRan = command.mkString(" ")
-          logError(s"Caught exception while running pipe() operator. Command ran: $commandRan. " +
-            s"Exception: ${t.getMessage}")
-          proc.destroy()
+          logError("Caught exception while running pipe() operator. " +
+            s"Command ran: $commandRan.", t)
           cleanup()
+          proc.destroy()
           throw t
         }
       }
     }
   }
 }
 
+/** Specifies how to write the elements of the input [[RDD]] into the pipe. */
+trait InputWriter[T] extends Serializable {
+  def write(dos: DataOutput, elem: T): Unit
+}
+
+/** Specifies how to read the elements from the pipe into the output [[RDD]]. */
+trait OutputReader[T] extends Serializable {
+  /**
+   * Reads the next element.
+   *
+   * The input is guaranteed to have at least one byte.
+   */
+  def read(dis: DataInput): T
+}
+
+class TextInputWriter[I](
+    encoding: String = Codec.defaultCharsetCodec.name,
+    printPipeContext: (String => Unit) => Unit = null,
+    printRDDElement: (I, String => Unit) => Unit = null
+) extends InputWriter[I] {
+
+  private[this] val lineSeparator = System.lineSeparator().getBytes(encoding)
+  private[this] var initialized = printPipeContext == null
+
+  private def writeLine(dos: DataOutput, s: String): Unit = {
+    dos.write(s.getBytes(encoding))
+    dos.write(lineSeparator)
+  }
+
+  override def write(dos: DataOutput, elem: I): Unit = {
+    if (!initialized) {
+      printPipeContext(writeLine(dos, _))
+      initialized = true
+    }
+
+    if (printRDDElement == null) {
+      writeLine(dos, String.valueOf(elem))
+    } else {
+      printRDDElement(elem, writeLine(dos, _))
+    }
+  }
+}
+
+class TextOutputReader(
+    encoding: String = Codec.defaultCharsetCodec.name
+) extends OutputReader[String] {
+
+  private[this] val lf = "\n".getBytes(encoding)
+  private[this] val cr = "\r".getBytes(encoding)
+  private[this] val crlf = cr ++ lf
+  private[this] var buf = Array.ofDim[Byte](64)
+  private[this] var used = 0
+
+  @inline
+  /** Checks that the suffix of [[buf]] matches [[other]]. */
+  private def endsWith(other: Array[Byte]): Boolean = {
+    var i = used - 1
+    var j = other.length - 1
+    (j <= i) && {
+      while (j >= 0) {
+        if (buf(i) != other(j)) {
+          return false
+        }
+        i -= 1
+        j -= 1
+      }
+      true
+    }
+  }
+
+  override def read(dis: DataInput): String = {
--- End diff --

Could a `dis.readLine()` be used here and would it be more efficient?





[GitHub] spark pull request #17230: [SPARK-19353][CORE] Generalize PipedRDD to use I/...

2017-03-09 Thread superbobry
GitHub user superbobry opened a pull request:

https://github.com/apache/spark/pull/17230

[SPARK-19353][CORE] Generalize PipedRDD to use I/O formats

## What changes were proposed in this pull request?

This patch makes it possible to use arbitrary input and output formats when
streaming data to and from the piped process. The API uses
java.io.Data{Input,Output} for I/O; therefore all methods operating
on multibyte primitives assume big-endian byte order.

The change is fully backward-compatible in terms of both the public API
and behaviour. Additionally, the existing line-based format is available
via TextInputWriter/TextOutputReader.
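
As an illustration only (the element type, the class names, and how they would be wired into `pipe()` are assumptions; only the `InputWriter`/`OutputReader` traits come from the patch), a custom binary format could look roughly like this:

```scala
import java.io.{DataInput, DataOutput}

// Hypothetical fixed-width format: each element is a single big-endian Int,
// following the java.io.Data{Input,Output} convention described above.
// Relies on the InputWriter/OutputReader traits introduced by this patch.
class IntInputWriter extends InputWriter[Int] {
  override def write(dos: DataOutput, elem: Int): Unit = dos.writeInt(elem)
}

class IntOutputReader extends OutputReader[Int] {
  // Per the trait contract, read() is only called when at least one byte is
  // available, so readInt() simply blocks for the remaining three bytes.
  override def read(dis: DataInput): Int = dis.readInt()
}
```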

## How was this patch tested?

PipedRDD unit tests and in-house integration tests.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/criteo-forks/spark pipe-binary-io-upstream

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/spark/pull/17230.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #17230


commit 1265fda3c7399a635b985565b0d915d901d48382
Author: Sergei Lebedev 
Date:   2017-01-26T17:30:25Z

[SPARK-19353][CORE] Generalize PipedRDD to use I/O formats

This commit makes it possible to use arbitrary input and output formats when
streaming data to and from the piped process. The API uses
java.io.Data{Input,Output} for I/O; therefore all methods operating
on multibyte primitives assume big-endian byte order.

The change is fully backward-compatible in terms of both the public API
and behaviour. Additionally, the existing line-based format is available
via TextInputWriter/TextOutputReader.



