[GitHub] [spark] dgd-contributor commented on a change in pull request #33317: [SPARK-36095][CORE] Grouping exception in core/rdd

2021-07-27 Thread GitBox


dgd-contributor commented on a change in pull request #33317:
URL: https://github.com/apache/spark/pull/33317#discussion_r677461744



##
File path: core/src/main/scala/org/apache/spark/errors/SparkCoreErrors.scala
##
@@ -0,0 +1,144 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.errors
+
+import java.io.IOException
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.storage.{BlockId, RDDBlockId}
+
+/**
+ * Object for grouping error messages from (most) exceptions thrown during query execution.
+ */
+private[spark] object SparkCoreErrors {

Review comment:
   Thanks!
   Is this ready to merge?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] [spark] dgd-contributor commented on a change in pull request #33317: [SPARK-36095][CORE] Grouping exception in core/rdd

2021-07-26 Thread GitBox


dgd-contributor commented on a change in pull request #33317:
URL: https://github.com/apache/spark/pull/33317#discussion_r677089979



##
File path: core/src/main/scala/org/apache/spark/errors/SparkCoreErrors.scala
##
@@ -0,0 +1,144 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.errors
+
+import java.io.IOException
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.storage.{BlockId, RDDBlockId}
+
+/**
+ * Object for grouping error messages from (most) exceptions thrown during query execution.
+ */
+private[spark] object SparkCoreErrors {
+  def rddBlockNotFoundError(blockId: BlockId, id: Int): Throwable = {
+    new Exception(s"Could not compute split, block $blockId of RDD $id not found")
+  }
+
+  def blockHaveBeenRemovedError(string: String): Throwable = {
+    new SparkException(s"Attempted to use $string after its blocks have been removed!")
+  }
+
+  def histogramOnEmptyRDDOrContainingInfinityOrNaNError(): Throwable = {
+    new UnsupportedOperationException(
+      "Histogram on either an empty RDD or RDD containing +/-infinity or NaN")
+  }
+
+  def emptyRDDError(): Throwable = {
+    new UnsupportedOperationException("empty RDD")
+  }
+
+  def pathNotSupportedError(path: String): Throwable = {
+    new IOException(s"Path: ${path} is a directory, which is not supported by the " +
+      "record reader when `mapreduce.input.fileinputformat.input.dir.recursive` is false.")
+  }
+
+  def checkpointRDDBlockIdNotFoundError(rddBlockId: RDDBlockId): Throwable = {
+    new SparkException(
+      s"""
+         |Checkpoint block $rddBlockId not found! Either the executor
+         |that originally checkpointed this partition is no longer alive, or the original RDD is
+         |unpersisted. If this problem persists, you may consider using `rdd.checkpoint()`
+         |instead, which is slower than local checkpointing but more fault-tolerant.
+       """.stripMargin.replaceAll("\n", " "))
+  }
+
+  def endOfStreamError(): Throwable = {
+    new java.util.NoSuchElementException("End of stream")
+  }
+
+  def cannotUseMapSideCombiningWithArrayKeyError(): Throwable = {
+    new SparkException("Cannot use map-side combining with array keys.")
+  }
+
+  def hashPartitionerCannotPartitionArrayKeyError(): Throwable = {
+    new SparkException("HashPartitioner cannot partition array keys.")
+  }
+
+  def reduceByKeyLocallyNotSupportArrayKeysError(): Throwable = {
+    new SparkException("reduceByKeyLocally() does not support array keys")
+  }
+
+  def noSuchElementException(): Throwable = {
+    new NoSuchElementException()
+  }
+
+  def rddLacksSparkContextError(): Throwable = {
+    new SparkException("This RDD lacks a SparkContext. It could happen in the following cases: " +

Review comment:
   If we do this, the message will end up on multiple lines, since stripMargin only removes the "|" character (and the whitespace before it), not the newlines.
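   
   A minimal sketch of the behavior in question (plain Scala, my own illustration rather than PR code):
   
   // stripMargin drops the leading whitespace and the "|" on each line,
   // but the newline characters themselves survive, so without the
   // replaceAll the message still spans multiple lines.
   val raw =
     """
       |first line
       |second line
     """.stripMargin
   raw.contains("\n")                        // true: line breaks are still there
   raw.replaceAll("\n", " ").contains("\n")  // false: flattened onto one line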







[GitHub] [spark] dgd-contributor commented on a change in pull request #33317: [SPARK-36095][CORE] Grouping exception in core/rdd

2021-07-26 Thread GitBox


dgd-contributor commented on a change in pull request #33317:
URL: https://github.com/apache/spark/pull/33317#discussion_r676743275



##
File path: core/src/main/scala/org/apache/spark/errors/SparkCoreErrors.scala
##
@@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.errors
+
+import java.io.IOException
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.storage.{BlockId, RDDBlockId}
+
+/**
+ * Object for grouping error messages from (most) exceptions thrown during query execution.
+ */
+private[spark] object SparkCoreErrors {
+  def rddBlockNotFoundError(blockId: BlockId, id: Int): Throwable = {
+    new Exception(s"Could not compute split, block $blockId of RDD $id not found")
+  }
+
+  def blockHaveBeenRemovedError(string: String): Throwable = {
+    new SparkException(s"Attempted to use $string after its blocks have been removed!")
+  }
+
+  def histogramOnEmptyRDDOrContainingInfinityOrNaNError(): Throwable = {
+    new UnsupportedOperationException(
+      "Histogram on either an empty RDD or RDD containing +/-infinity or NaN")
+  }
+
+  def emptyRDDError(): Throwable = {
+    new UnsupportedOperationException("empty RDD")
+  }
+
+  def pathNotSupportedError(path: String): Throwable = {
+    new IOException(s"Path: ${path} is a directory, which is not supported by the " +
+      s"record reader when `mapreduce.input.fileinputformat.input.dir.recursive` is false.")
+  }
+
+  def checkpointRDDBlockIdNotFoundError(rddBlockId: RDDBlockId): Throwable = {
+    new SparkException(s"Checkpoint block $rddBlockId not found! Either the executor " +
+      s"that originally checkpointed this partition is no longer alive, or the original RDD is " +
+      s"unpersisted. If this problem persists, you may consider using `rdd.checkpoint()` " +
+      s"instead, which is slower than local checkpointing but more fault-tolerant.")
+  }
+
+  def endOfStreamError(): Throwable = {
+    new java.util.NoSuchElementException("End of stream")
+  }
+
+  def cannotUseMapSideCombiningWithArrayKeyError(): Throwable = {
+    new SparkException("Cannot use map-side combining with array keys.")
+  }
+
+  def hashPartitionerCannotPartitionArrayKeyError(): Throwable = {
+    new SparkException("HashPartitioner cannot partition array keys.")
+  }
+
+  def reduceByKeyLocallyNotSupportArrayKeysError(): Throwable = {
+    new SparkException("reduceByKeyLocally() does not support array keys")
+  }
+
+  def noSuchElementException(): Throwable = {
+    new NoSuchElementException()
+  }
+
+  def rddLacksSparkContextError(): Throwable = {
+    new SparkException("This RDD lacks a SparkContext. It could happen in the following cases: " +
+      "\n(1) RDD transformations and actions are NOT invoked by the driver, but inside of other " +
+      "transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid " +
+      "because the values transformation and count action cannot be performed inside of the " +
+      "rdd1.map transformation. For more information, see SPARK-5063.\n(2) When a Spark " +
+      "Streaming job recovers from checkpoint, this exception will be hit if a reference to " +
+      "an RDD not defined by the streaming job is used in DStream operations. For more " +
+      "information, See SPARK-13758.")

Review comment:
   Can you write your solution down? I don't fully understand it. If we break the line, wouldn't we need replaceAll("\n", " "), which would also replace the "\n" between the two paragraphs?







[GitHub] [spark] dgd-contributor commented on a change in pull request #33317: [SPARK-36095][CORE] Grouping exception in core/rdd

2021-07-23 Thread GitBox


dgd-contributor commented on a change in pull request #33317:
URL: https://github.com/apache/spark/pull/33317#discussion_r675428291



##
File path: core/src/main/scala/org/apache/spark/errors/SparkCoreErrors.scala
##
@@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.errors
+
+import java.io.IOException
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.storage.{BlockId, RDDBlockId}
+
+/**
+ * Object for grouping error messages from (most) exceptions thrown during query execution.
+ */
+private[spark] object SparkCoreErrors {
+  def rddBlockNotFoundError(blockId: BlockId, id: Int): Throwable = {
+    new Exception(s"Could not compute split, block $blockId of RDD $id not found")
+  }
+
+  def blockHaveBeenRemovedError(string: String): Throwable = {
+    new SparkException(s"Attempted to use $string after its blocks have been removed!")
+  }
+
+  def histogramOnEmptyRDDOrContainingInfinityOrNaNError(): Throwable = {
+    new UnsupportedOperationException(
+      "Histogram on either an empty RDD or RDD containing +/-infinity or NaN")
+  }
+
+  def emptyRDDError(): Throwable = {
+    new UnsupportedOperationException("empty RDD")
+  }
+
+  def pathNotSupportedError(path: String): Throwable = {
+    new IOException(s"Path: ${path} is a directory, which is not supported by the " +
+      s"record reader when `mapreduce.input.fileinputformat.input.dir.recursive` is false.")
+  }
+
+  def checkpointRDDBlockIdNotFoundError(rddBlockId: RDDBlockId): Throwable = {
+    new SparkException(s"Checkpoint block $rddBlockId not found! Either the executor " +
+      s"that originally checkpointed this partition is no longer alive, or the original RDD is " +
+      s"unpersisted. If this problem persists, you may consider using `rdd.checkpoint()` " +
+      s"instead, which is slower than local checkpointing but more fault-tolerant.")
+  }
+
+  def endOfStreamError(): Throwable = {
+    new java.util.NoSuchElementException("End of stream")
+  }
+
+  def cannotUseMapSideCombiningWithArrayKeyError(): Throwable = {
+    new SparkException("Cannot use map-side combining with array keys.")
+  }
+
+  def hashPartitionerCannotPartitionArrayKeyError(): Throwable = {
+    new SparkException("HashPartitioner cannot partition array keys.")
+  }
+
+  def reduceByKeyLocallyNotSupportArrayKeysError(): Throwable = {
+    new SparkException("reduceByKeyLocally() does not support array keys")
+  }
+
+  def noSuchElementException(): Throwable = {
+    new NoSuchElementException()
+  }
+
+  def rddLacksSparkContextError(): Throwable = {
+    new SparkException("This RDD lacks a SparkContext. It could happen in the following cases: " +
+      "\n(1) RDD transformations and actions are NOT invoked by the driver, but inside of other " +
+      "transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid " +
+      "because the values transformation and count action cannot be performed inside of the " +
+      "rdd1.map transformation. For more information, see SPARK-5063.\n(2) When a Spark " +
+      "Streaming job recovers from checkpoint, this exception will be hit if a reference to " +
+      "an RDD not defined by the streaming job is used in DStream operations. For more " +
+      "information, See SPARK-13758.")

Review comment:
   Can we split the paragraphs like """ ... """.stripMargin.replaceAll("\n", " ") + "\n" + """ ... """.stripMargin.replaceAll("\n", " ")?







[GitHub] [spark] dgd-contributor commented on a change in pull request #33317: [SPARK-36095][CORE] Grouping exception in core/rdd

2021-07-23 Thread GitBox


dgd-contributor commented on a change in pull request #33317:
URL: https://github.com/apache/spark/pull/33317#discussion_r675420166



##
File path: core/src/main/scala/org/apache/spark/errors/SparkCoreErrors.scala
##
@@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.errors
+
+import java.io.IOException
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.storage.{BlockId, RDDBlockId}
+
+/**
+ * Object for grouping error messages from (most) exceptions thrown during query execution.
+ */
+private[spark] object SparkCoreErrors {
+  def rddBlockNotFoundError(blockId: BlockId, id: Int): Throwable = {
+    new Exception(s"Could not compute split, block $blockId of RDD $id not found")
+  }
+
+  def blockHaveBeenRemovedError(string: String): Throwable = {
+    new SparkException(s"Attempted to use $string after its blocks have been removed!")
+  }
+
+  def histogramOnEmptyRDDOrContainingInfinityOrNaNError(): Throwable = {
+    new UnsupportedOperationException(
+      "Histogram on either an empty RDD or RDD containing +/-infinity or NaN")
+  }
+
+  def emptyRDDError(): Throwable = {
+    new UnsupportedOperationException("empty RDD")
+  }
+
+  def pathNotSupportedError(path: String): Throwable = {
+    new IOException(s"Path: ${path} is a directory, which is not supported by the " +
+      s"record reader when `mapreduce.input.fileinputformat.input.dir.recursive` is false.")
+  }
+
+  def checkpointRDDBlockIdNotFoundError(rddBlockId: RDDBlockId): Throwable = {
+    new SparkException(s"Checkpoint block $rddBlockId not found! Either the executor " +
+      s"that originally checkpointed this partition is no longer alive, or the original RDD is " +
+      s"unpersisted. If this problem persists, you may consider using `rdd.checkpoint()` " +
+      s"instead, which is slower than local checkpointing but more fault-tolerant.")
+  }
+
+  def endOfStreamError(): Throwable = {
+    new java.util.NoSuchElementException("End of stream")
+  }
+
+  def cannotUseMapSideCombiningWithArrayKeyError(): Throwable = {
+    new SparkException("Cannot use map-side combining with array keys.")
+  }
+
+  def hashPartitionerCannotPartitionArrayKeyError(): Throwable = {
+    new SparkException("HashPartitioner cannot partition array keys.")
+  }
+
+  def reduceByKeyLocallyNotSupportArrayKeysError(): Throwable = {
+    new SparkException("reduceByKeyLocally() does not support array keys")
+  }
+
+  def noSuchElementException(): Throwable = {
+    new NoSuchElementException()
+  }
+
+  def rddLacksSparkContextError(): Throwable = {
+    new SparkException("This RDD lacks a SparkContext. It could happen in the following cases: " +
+      "\n(1) RDD transformations and actions are NOT invoked by the driver, but inside of other " +
+      "transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid " +
+      "because the values transformation and count action cannot be performed inside of the " +
+      "rdd1.map transformation. For more information, see SPARK-5063.\n(2) When a Spark " +
+      "Streaming job recovers from checkpoint, this exception will be hit if a reference to " +
+      "an RDD not defined by the streaming job is used in DStream operations. For more " +
+      "information, See SPARK-13758.")

Review comment:
   But if we do that, the second line will exceed the 100-character limit, right?
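   
   For context (a hypothetical snippet; Spark's scalastyle limits source lines to 100 characters):
   
   // Keeping a whole paragraph on a single "|" line would push that one
   // source line past the 100-column limit, so the text has to be wrapped
   // across several "|" lines and re-joined with replaceAll("\n", " ").
   val paragraph =
     """
       |(1) RDD transformations and actions are NOT invoked by the driver,
       |but inside of other transformations; for example,
       |rdd1.map(x => rdd2.values.count() * x) is invalid.
     """.stripMargin.replaceAll("\n", " ")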







[GitHub] [spark] dgd-contributor commented on a change in pull request #33317: [SPARK-36095][CORE] Grouping exception in core/rdd

2021-07-22 Thread GitBox


dgd-contributor commented on a change in pull request #33317:
URL: https://github.com/apache/spark/pull/33317#discussion_r675278202



##
File path: core/src/main/scala/org/apache/spark/errors/SparkCoreErrors.scala
##
@@ -0,0 +1,144 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.errors
+
+import java.io.IOException
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.storage.{BlockId, RDDBlockId}
+
+/**
+ * Object for grouping error messages from (most) exceptions thrown during query execution.
+ */
+private[spark] object SparkCoreErrors {

Review comment:
   @cloud-fan 

##
File path: core/src/main/scala/org/apache/spark/errors/SparkCoreErrors.scala
##
@@ -0,0 +1,144 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.errors
+
+import java.io.IOException
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.storage.{BlockId, RDDBlockId}
+
+/**
+ * Object for grouping error messages from (most) exceptions thrown during query execution.
+ */
+private[spark] object SparkCoreErrors {

Review comment:
   cc @cloud-fan 







[GitHub] [spark] dgd-contributor commented on a change in pull request #33317: [SPARK-36095][CORE] Grouping exception in core/rdd

2021-07-22 Thread GitBox


dgd-contributor commented on a change in pull request #33317:
URL: https://github.com/apache/spark/pull/33317#discussion_r674659342



##
File path: core/src/main/scala/org/apache/spark/errors/SparkCoreErrors.scala
##
@@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.errors
+
+import java.io.IOException
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.storage.{BlockId, RDDBlockId}
+
+/**
+ * Object for grouping error messages from (most) exceptions thrown during query execution.
+ */
+private[spark] object SparkCoreErrors {
+  def rddBlockNotFoundError(blockId: BlockId, id: Int): Throwable = {
+    new Exception(s"Could not compute split, block $blockId of RDD $id not found")
+  }
+
+  def blockHaveBeenRemovedError(string: String): Throwable = {
+    new SparkException(s"Attempted to use $string after its blocks have been removed!")
+  }
+
+  def histogramOnEmptyRDDOrContainingInfinityOrNaNError(): Throwable = {
+    new UnsupportedOperationException(
+      "Histogram on either an empty RDD or RDD containing +/-infinity or NaN")
+  }
+
+  def emptyRDDError(): Throwable = {
+    new UnsupportedOperationException("empty RDD")
+  }
+
+  def pathNotSupportedError(path: String): Throwable = {
+    new IOException(s"Path: ${path} is a directory, which is not supported by the " +
+      s"record reader when `mapreduce.input.fileinputformat.input.dir.recursive` is false.")

Review comment:
   Done!
   







[GitHub] [spark] dgd-contributor commented on a change in pull request #33317: [SPARK-36095][CORE] Grouping exception in core/rdd

2021-07-22 Thread GitBox


dgd-contributor commented on a change in pull request #33317:
URL: https://github.com/apache/spark/pull/33317#discussion_r674656856



##
File path: core/src/main/scala/org/apache/spark/errors/SparkCoreErrors.scala
##
@@ -0,0 +1,140 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.errors
+
+import java.io.IOException
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.storage.{BlockId, RDDBlockId}
+
+/**
+ * Object for grouping error messages from (most) exceptions thrown during query execution.
+ */
+private[spark] object SparkCoreErrors {
+  def rddBlockNotFoundError(blockId: BlockId, id: Int): Throwable = {
+    new Exception(s"Could not compute split, block $blockId of RDD $id not found")
+  }
+
+  def blockHaveBeenRemovedError(string: String): Throwable = {
+    new SparkException(s"Attempted to use $string after its blocks have been removed!")
+  }
+
+  def histogramOnEmptyRDDOrContainingInfinityOrNaNError(): Throwable = {
+    new UnsupportedOperationException(
+      "Histogram on either an empty RDD or RDD containing +/-infinity or NaN")
+  }
+
+  def emptyRDDError(): Throwable = {
+    new UnsupportedOperationException("empty RDD")
+  }
+
+  def pathNotSupportedError(path: String): Throwable = {
+    new IOException(s"Path: ${path} is a directory, which is not supported by the " +
+      s"record reader when `mapreduce.input.fileinputformat.input.dir.recursive` is false.")
+  }
+
+  def checkpointRDDBlockIdNotFoundError(rddBlockId: RDDBlockId): Throwable = {
+    new SparkException(s"Checkpoint block $rddBlockId not found! Either the executor " +
+      s"that originally checkpointed this partition is no longer alive, or the original RDD is " +
+      s"unpersisted. If this problem persists, you may consider using `rdd.checkpoint()` " +
+      s"instead, which is slower than local checkpointing but more fault-tolerant.")
+  }
+
+  def endOfStreamError(): Throwable = {
+    new java.util.NoSuchElementException("End of stream")
+  }
+
+  def cannotUseMapSideCombiningWithArrayKeyError(): Throwable = {
+    new SparkException("Cannot use map-side combining with array keys.")
+  }
+
+  def hashPartitionerCannotPartitionArrayKeyError(): Throwable = {
+    new SparkException("HashPartitioner cannot partition array keys.")
+  }
+
+  def reduceByKeyLocallyNotSupportArrayKeysError(): Throwable = {
+    new SparkException("reduceByKeyLocally() does not support array keys")
+  }
+
+  def noSuchElementException(): Throwable = {
+    new NoSuchElementException()
+  }
+
+  def rddLacksSparkContextError(): Throwable = {
+    new SparkException("This RDD lacks a SparkContext. It could happen in the following cases: " +
+      "\n(1) RDD transformations and actions are NOT invoked by the driver, but inside of other " +
+      "transformations; for example, rdd1.map(x => rdd2.values.count() * x) is invalid " +
+      "because the values transformation and count action cannot be performed inside of the " +
+      "rdd1.map transformation. For more information, see SPARK-5063.\n(2) When a Spark " +
+      "Streaming job recovers from checkpoint, this exception will be hit if a reference to " +
+      "an RDD not defined by the streaming job is used in DStream operations. For more " +
+      "information, See SPARK-13758.")

Review comment:
   There are "\n" characters in this message, so it can't simply use replaceAll("\n", " ").
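   
   A quick check of why those breaks matter (hypothetical message text, not PR code):
   
   // The "(1)"/"(2)" cases are meant to start on fresh lines when the
   // exception is printed, which only works if the literal "\n" before
   // each marker survives into the final message.
   val msg = "It could happen in the following cases: " +
     "\n(1) transformations invoked inside other transformations" +
     "\n(2) a streaming job recovering from checkpoint"
   println(msg)
   // It could happen in the following cases: 
   // (1) transformations invoked inside other transformations
   // (2) a streaming job recovering from checkpoint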







[GitHub] [spark] dgd-contributor commented on a change in pull request #33317: [SPARK-36095][CORE] Grouping exception in core/rdd

2021-07-22 Thread GitBox


dgd-contributor commented on a change in pull request #33317:
URL: https://github.com/apache/spark/pull/33317#discussion_r674580229



##
File path: core/src/main/scala/org/apache/spark/errors/ExecutionErrors.scala
##
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.errors
+
+import java.io.IOException
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.storage.{BlockId, RDDBlockId}
+
+/**
+ * Object for grouping error messages from (most) exceptions thrown during query execution.
+ * This does not include exceptions thrown during the eager execution of commands, which are
+ * grouped into [[CompilationErrors]].
+ */
+private[spark] object ExecutionErrors {
+  def blockOfRddNotFoundError(blockId: BlockId, id: Int): Throwable = {
+    new Exception(s"Could not compute split, block $blockId of RDD $id not found")
+  }
+
+  def blockHaveBeenRemovedError(): Throwable = {
+    new SparkException("Attempted to use %s after its blocks have been removed!".format(toString))

Review comment:
   Thanks, I will.







[GitHub] [spark] dgd-contributor commented on a change in pull request #33317: [SPARK-36095][CORE] Grouping exception in core/rdd

2021-07-20 Thread GitBox


dgd-contributor commented on a change in pull request #33317:
URL: https://github.com/apache/spark/pull/33317#discussion_r672738789



##
File path: core/src/main/scala/org/apache/spark/errors/ExecutionErrors.scala
##
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.errors
+
+import java.io.IOException
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.storage.{BlockId, RDDBlockId}
+
+/**
+ * Object for grouping error messages from (most) exceptions thrown during query execution.
+ * This does not include exceptions thrown during the eager execution of commands, which are
+ * grouped into [[CompilationErrors]].
+ */
+private[spark] object ExecutionErrors {
+  def blockOfRddNotFoundError(blockId: BlockId, id: Int): Throwable = {
+    new Exception(s"Could not compute split, block $blockId of RDD $id not found")
+  }
+
+  def blockHaveBeenRemovedError(): Throwable = {
+    new SparkException("Attempted to use %s after its blocks have been removed!".format(toString))

Review comment:
   @beliefer Should we replace the parameterless methods with val instead of def?
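   
   For context, a sketch of the trade-off (my own illustration, not code from the PR):
   
   // def builds a fresh Throwable per call, so each thrown exception
   // carries the stack trace of the call site that requested it. A val
   // would create one shared instance whose stack trace points at object
   // initialization, which is misleading when it is rethrown later.
   object ErrorsSketch {
     def endOfStreamError(): Throwable =
       new java.util.NoSuchElementException("End of stream")  // new trace each call
     val sharedEndOfStream: Throwable =
       new java.util.NoSuchElementException("End of stream")  // one frozen trace
   }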







[GitHub] [spark] dgd-contributor commented on a change in pull request #33317: [SPARK-36095][CORE] Grouping exception in core/rdd

2021-07-18 Thread GitBox


dgd-contributor commented on a change in pull request #33317:
URL: https://github.com/apache/spark/pull/33317#discussion_r672023006



##
File path: core/src/main/scala/org/apache/spark/errors/ExecutionErrors.scala
##
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.errors
+
+import java.io.IOException
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.storage.{BlockId, RDDBlockId}
+
+/**
+ * Object for grouping error messages from (most) exceptions thrown during query execution.
+ * This does not include exceptions thrown during the eager execution of commands, which are
+ * grouped into [[CompilationErrors]].
+ */
+private[spark] object ExecutionErrors {

Review comment:
   As @allisonwang-db has created JIRA tickets for Spark core, I think we will.







[GitHub] [spark] dgd-contributor commented on a change in pull request #33317: [SPARK-36095][CORE] Grouping exception in core/rdd

2021-07-16 Thread GitBox


dgd-contributor commented on a change in pull request #33317:
URL: https://github.com/apache/spark/pull/33317#discussion_r671383750



##
File path: core/src/main/scala/org/apache/spark/errors/ExecutionErrors.scala
##
@@ -0,0 +1,142 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *    http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.errors
+
+import java.io.IOException
+
+import org.apache.hadoop.fs.Path
+
+import org.apache.spark.SparkException
+import org.apache.spark.storage.{BlockId, RDDBlockId}
+
+/**
+ * Object for grouping error messages from (most) exceptions thrown during query execution.
+ * This does not include exceptions thrown during the eager execution of commands, which are
+ * grouped into [[CompilationErrors]].
+ */
+private[spark] object ExecutionErrors {

Review comment:
   The Query*Errors.scala objects live in a different module (SQL) and are not callable from here (Core), so I created a new object to group the errors into. Is the naming reasonable, or should I change it?



