Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20673#discussion_r170537137

    --- Diff: core/src/main/scala/org/apache/spark/util/JsonProtocol.scala ---
    @@ -100,7 +102,18 @@ private[spark] object JsonProtocol {
            executorMetricsUpdateToJson(metricsUpdate)
          case blockUpdate: SparkListenerBlockUpdated =>
            blockUpdateToJson(blockUpdate)
    -      case _ => parse(mapper.writeValueAsString(event))
    +      case _ =>
    +        // Use piped streams to avoid extra memory consumption
    +        val outputStream = new PipedOutputStream()
    +        val inputStream = new PipedInputStream(outputStream)
    +        try {
    +          mapper.writeValue(outputStream, event)

    --- End diff --

    Wait wait .. does this lazily work for sure? Can we add a test (or a manual test in the PR description) that reads some more data (maybe more than the buffer size in that pipe)?
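For context on the concern: java.io piped streams have a small internal buffer (1024 bytes by default), and a write() beyond that blocks until a reader on another thread drains the pipe. If `mapper.writeValue` is not lazy, writing and then reading on the same thread, as the diff appears to do, would deadlock for any event larger than the buffer. A minimal Java sketch of the two-thread pattern that does work, using a hypothetical 100 KB payload in place of a large serialized event:

```java
import java.io.*;

public class PipedDemo {
    // Writes `payload` into a PipedOutputStream from a separate thread and
    // drains the matching PipedInputStream on the current thread, returning
    // the number of bytes read.
    static int pipeRoundTrip(byte[] payload) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out);

        // The write must run on its own thread: once the pipe's internal
        // buffer (1024 bytes by default) fills, write() blocks until a
        // reader drains it, so a same-thread write-then-read deadlocks
        // for payloads larger than the buffer.
        Thread writer = new Thread(() -> {
            try (OutputStream o = out) {
                o.write(payload);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        writer.start();

        int total = 0;
        byte[] buf = new byte[4096];
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
        }
        writer.join();
        return total;
    }

    public static void main(String[] args) throws Exception {
        // 100 KB payload, well past the 1024-byte pipe buffer.
        System.out.println(pipeRoundTrip(new byte[100_000])); // prints 100000
    }
}
```

This is why a test reading more than the pipe's buffer size would be telling: it either confirms the serialization is consumed lazily or exposes the hang.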