Github user squito commented on the issue:
https://github.com/apache/spark/pull/18145
merged to master
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17603
merged to master
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17603
lgtm
rerunning tests just to be safe since it's been so long
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17634
merged to master
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17113
> then you get fetch failure again and iterate until job failure
At first I was thinking the node goes bad, but you first detect it via
fetch failures -- in that case, you wouldn't n
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17113#discussion_r119150136
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -145,6 +146,72 @@ private[scheduler] class BlacklistTracker
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17113#discussion_r119150113
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -145,6 +146,72 @@ private[scheduler] class BlacklistTracker
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17113#discussion_r119151885
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/BlacklistTracker.scala ---
@@ -145,6 +146,72 @@ private[scheduler] class BlacklistTracker
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17113#discussion_r119152848
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSchedulerImpl.scala ---
@@ -54,7 +54,7 @@ import org.apache.spark.util.{AccumulatorV2
Github user squito commented on the issue:
https://github.com/apache/spark/pull/18145
just a resubmit of https://github.com/apache/spark/pull/17893, all credit
to @lshmouse
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17893
since it seems @lshmouse is not updating this, I've resubmitted this here
https://github.com/apache/spark/pull/18145, but all credit to the original
author.
---
GitHub user squito opened a pull request:
https://github.com/apache/spark/pull/18145
[SPARK-20633][SQL] FileFormatWriter should not wrap FetchFailedException
## What changes were proposed in this pull request?
Explicitly handle the FetchFailedException in FileFormatWriter
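The idea is to catch the fetch failure before the writer's generic error handling can wrap it, so the scheduler still sees it and can resubmit the stage rather than failing the job. A minimal sketch of that pattern, using stand-in exception classes (not Spark's real `org.apache.spark.shuffle.FetchFailedException`, whose constructor takes more arguments, and a hypothetical wrapper type):

```scala
// Hedged sketch of the unwrap-on-fetch-failure pattern; the exception
// classes below are simplified stand-ins, not Spark's actual types.
class FetchFailedException(msg: String) extends Exception(msg)
class TaskWriteFailedException(msg: String, cause: Throwable)
  extends Exception(msg, cause)

def runWithUnwrappedFetchFailure(body: () => Unit): Unit = {
  try {
    body()
  } catch {
    // Let fetch failures propagate unwrapped so the scheduler can
    // recognize them and resubmit the upstream stage.
    case f: FetchFailedException => throw f
    // Everything else gets the writer's usual wrapping.
    case t: Throwable =>
      throw new TaskWriteFailedException("Task failed while writing rows", t)
  }
}
```

With the wrapping in place, a fetch failure would otherwise surface as a generic task failure and count against the job's retry budget instead of triggering stage resubmission.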
Github user squito commented on the issue:
https://github.com/apache/spark/pull/15505
@djvulee this is https://issues.apache.org/jira/browse/SPARK-19108. Note
that there is some discussion there about this being a bit harder than what I
originally thought, though I think it's still
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17925
sorry didn't review this earlier, but in any case, lgtm
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17634
lgtm.
sorry for such a late review. Since it's been a while I'll trigger tests
again just to be safe before merging.
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/10991
whoops, now I see this is already present as
https://issues.apache.org/jira/browse/SPARK-18683
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/10991
@stanzhai I think you have a good point. I don't think removing the REST
API for access to basic app info in the standalone master was really necessary
for the original goals of SPARK-12299
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17925#discussion_r115887910
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1054,15 +1053,17 @@ class DAGScheduler(
return
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17893
Jenkins, retest this please
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17893
Jenkins, retest this please
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17893
Jenkins, ok to test
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17893
yeah, this isn't a bug anymore, but let's fix it -- cleaner error msg, and
at least it will be easier for folks to see the fix in the future, as multiple
people have stumbled on it and not realized
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17854
> It took 3~4 minutes to start an executor on an NM (most of the time was
spent on container localization: downloading spark jar, application jar and
etc. from the hdfs staging folder).
Github user squito commented on the issue:
https://github.com/apache/spark/pull/16781
great! thanks @ueshin
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17866
gosh I don't know about this off the top of my head -- maybe @tgravescs or
@vanzin know? if not I'll take a closer look next week.
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17854
also cc @tgravescs @vanzin
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17854
It looks to me like this is actually making 2 behavior changes:
1) throttle the requests for new containers, as you describe in your
description
2) drop newly received containers
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17854
Jenkins, ok to test
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/16781
@ueshin sorry it took me a while to figure out how a table partitioned by
timestamps works (I didn't even realize that was possible, I don't think it is
in hive?) and I was traveling
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17762
Jenkins, retest this please
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17700
thanks for the reminder, sorry it took me a couple days to circle back.
merged to master and 2.2
And thanks a lot for this issue -- for taking another look at these after
that last
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17762
Jenkins, ok to test
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/16781
@ueshin updated per your feedback.
I should have explained that the last update *did* handle partition tables
(it added the second call to `getStorageTzOptions` in `HiveMetastoreCatalog
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17700#discussion_r112730552
--- Diff: core/src/main/scala/org/apache/spark/ui/exec/ExecutorsPage.scala
---
@@ -114,10 +114,16 @@ private[spark] object ExecutorsPage {
val
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17700
Jenkins, retest this please
---
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/16781#discussion_r112503018
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/ParquetHiveCompatibilitySuite.scala
---
@@ -141,4 +160,326 @@ class
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/16781#discussion_r112502905
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/ParquetHiveCompatibilitySuite.scala
---
@@ -42,6 +52,15 @@ class ParquetHiveCompatibilitySuite
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17700
Thanks for doing this quickly @jerryshao. I wasn't very clear about this on
the jira, but my original idea was to add another level of nesting in the json,
and put all the new memory related metrics
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17631
this doesn't seem safe in general. Spark treats quotes as a valid part of
the name, e.g.:
```scala
scala> val df = sc.parallelize(1 to 10).map { x => (x, (x + 1).toLong, (x - 1).to
```
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/16781#discussion_r112362704
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/ParquetHiveCompatibilitySuite.scala
---
@@ -141,4 +160,326 @@ class
Github user squito commented on the issue:
https://github.com/apache/spark/pull/16781
@ueshin I've pushed an update which addresses your comments. I also
realized that partitioned tables weren't handled correctly! I fixed that as
well.
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/16781
@ueshin thanks for taking a look. Yes, that understanding is correct.
Another way to think about it is to compare those same operations with
different file formats, eg. textfile. Those work more
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/16781#discussion_r112048338
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowConverter.scala
---
@@ -261,18 +266,20 @@ private[parquet
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/16781#discussion_r112042439
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -498,6 +498,11 @@ object DateTimeUtils {
false
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17656
yeah, looks like the right change, I think it was just overlooked in
https://issues.apache.org/jira/browse/SPARK-14245
I'd ask that you add an assertion to this unit test:
https
Github user squito commented on the issue:
https://github.com/apache/spark/pull/14617
hi @jerryshao good points. First, we should probably move this discussion
to jira so it's more visible -- feel free to open two issues for these if you
want, or first discuss on dev@. (Sorry, it's my
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17364
merged to master
sorry I forgot to take a look at this for a while @steveloughran, thanks for
the reminder
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17625
@tgravescs what do you think about breaking this into two parts -- the
internal plumbing, and the UI stuff? by itself the plumbing part wouldn't do
anything, but I think it would be easier
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17625
yeah taking another look at the UI, I agree with @jerryshao and @tgravescs,
the memory tab is pretty weird. I think putting this info on the "stages" tab
makes sense -- that is reall
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17625#discussion_r111422141
--- Diff: core/src/test/scala/org/apache/spark/util/JsonProtocolSuite.scala
---
@@ -432,6 +439,25 @@ class JsonProtocolSuite extends SparkFunSuite
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17625#discussion_r111420946
--- Diff: core/src/test/scala/org/apache/spark/util/JsonProtocolSuite.scala
---
@@ -432,6 +439,25 @@ class JsonProtocolSuite extends SparkFunSuite
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17625#discussion_r111274638
--- Diff:
core/src/main/scala/org/apache/spark/network/netty/NettyBlockTransferService.scala
---
@@ -22,8 +22,12 @@ import java.nio.ByteBuffer
import
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17625
Jenkins, ok to test
---
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17625#discussion_r111275191
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -259,6 +290,46 @@ private[spark] class EventLoggingListener
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17625#discussion_r111275150
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -259,6 +290,46 @@ private[spark] class EventLoggingListener
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17625#discussion_r111274944
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/EventLoggingListener.scala ---
@@ -87,6 +88,10 @@ private[spark] class EventLoggingListener
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r111233891
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1080,6 +1122,25 @@ class DAGScheduler
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r111264930
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -472,6 +472,47 @@ class DAGScheduler
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r111271432
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -1080,6 +1122,25 @@ class DAGScheduler
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r111269273
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -472,6 +472,47 @@ class DAGScheduler
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r111273237
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -168,6 +169,8 @@ private[spark] class TaskSetManager(
t.epoch
Github user squito commented on the issue:
https://github.com/apache/spark/pull/16781
@ueshin thanks for taking a look earlier, sorry it has taken me some time
to update this.
Things to note since last time:
1) Hive has since been updated in
[HIVE-16231](https
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r110434977
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala ---
@@ -1139,6 +1138,19 @@ class TaskSetManagerSuite extends
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/17533#discussion_r110434821
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -512,6 +521,51 @@ private[spark] class TaskSetManager
Github user squito commented on the issue:
https://github.com/apache/spark/pull/14617
yeah, we definitely don't want to start logging more events. But it seems
like this info is already available -- taskEnd.taskMetrics.updatedBlocks
already has everything, doesn't it?
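The claim above -- that block updates already ride along with task-end events, so no extra event logging is needed -- can be sketched with simplified stand-in types (these are not Spark's real listener classes; the actual hook is `SparkListener.onTaskEnd`, and the field names here are hypothetical):

```scala
// Hedged model with stand-in case classes, not Spark's real API:
// shows how per-executor storage-memory usage could be rebuilt from
// the block updates carried by each task-end event, without logging
// a separate block-update event stream.
import scala.collection.mutable

case class BlockStatus(memSize: Long)
case class TaskMetrics(updatedBlocks: Seq[(String, BlockStatus)]) // (blockId, status)
case class TaskEnd(execId: String, taskMetrics: TaskMetrics)

class StorageMemoryTracker {
  // (executorId, blockId) -> latest in-memory size; a later update
  // for the same block replaces the earlier one.
  private val blocks = mutable.Map.empty[(String, String), Long]

  def onTaskEnd(event: TaskEnd): Unit =
    for ((blockId, status) <- event.taskMetrics.updatedBlocks)
      blocks((event.execId, blockId)) = status.memSize

  def memUsed(execId: String): Long =
    blocks.collect { case ((ex, _), size) if ex == execId => size }.sum
}
```

Replaying task-end events from the event log through such a tracker is one way the history server could recover memory usage after the fact.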
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/16354
closing this since on the jira we decided that a different approach is
needed.
---
Github user squito closed the pull request at:
https://github.com/apache/spark/pull/16354
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/14617
btw, anybody interested in looking at getting the memory to show up in the
history server as well? this issue we were discussing earlier:
>> AFAIK we don't record block update
Github user squito commented on the issue:
https://github.com/apache/spark/pull/14617
merged to master. Thanks @jerryshao
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17471
this is basically the same as this old pr, right?
https://github.com/apache/spark/pull/15616
the last comment on there is a request for a test, which I think still
applies here. I also
Github user squito commented on the issue:
https://github.com/apache/spark/pull/14617
lgtm, thanks for the updates
any more comments @tgravescs @CodingCat @ajbozarth @jsoltren?
---
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109830934
--- Diff: core/src/main/scala/org/apache/spark/status/api/v1/api.scala ---
@@ -111,7 +115,11 @@ class RDDDataDistribution private[spark](
val
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109832240
--- Diff: core/src/main/scala/org/apache/spark/storage/StorageUtils.scala
---
@@ -176,26 +178,51 @@ class StorageStatus(val blockManagerId:
BlockManagerId
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109830395
--- Diff: core/src/main/scala/org/apache/spark/ui/exec/ExecutorsPage.scala
---
@@ -115,8 +115,9 @@ private[spark] object ExecutorsPage {
val
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109830705
--- Diff: core/src/main/scala/org/apache/spark/ui/exec/ExecutorsPage.scala
---
@@ -81,6 +115,11 @@ private[spark] object ExecutorsPage {
val
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109831142
--- Diff: core/src/main/scala/org/apache/spark/storage/StorageUtils.scala
---
@@ -35,7 +35,13 @@ import org.apache.spark.internal.Logging
* class
Github user squito commented on the issue:
https://github.com/apache/spark/pull/14617
> AFAIK we don't record block update events in history server, so we could
not calculate the used memory from event log.
good point, sorry I had totally forgotten about it. Seems l
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109184652
--- Diff:
core/src/main/scala/org/apache/spark/storage/StorageStatusListener.scala ---
@@ -74,8 +74,11 @@ class StorageStatusListener(conf: SparkConf
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r109015878
--- Diff:
core/src/main/scala/org/apache/spark/storage/StorageStatusListener.scala ---
@@ -74,8 +74,11 @@ class StorageStatusListener(conf: SparkConf
Github user squito commented on the issue:
https://github.com/apache/spark/pull/14617
@jerryshao sorry to be very particular about this, but can you add a test
case where the off-heap and on-heap memory *used* is non-zero? It looks like
you only changed the max memory. (You should
Github user squito commented on the issue:
https://github.com/apache/spark/pull/16781
thanks @ueshin ... I am going to chat w/ some folks involved in that hive
patch, that was not my understanding conceptually of their patch. I heard that
there is a bug they need to fix so maybe its
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/15332#discussion_r108681155
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala
---
@@ -237,6 +238,24 @@ object DateTimeUtils
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/15332#discussion_r108682129
--- Diff:
sql/core/src/main/java/org/apache/spark/sql/execution/datasources/parquet/VectorizedColumnReader.java
---
@@ -362,7 +375,15 @@ private void
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17297
@sitalkedia This change is pretty contentious, there are lot of questions
about whether or not this is a good change. I don't think discussing this here
in github comments on a PR is the best form
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17208
Looks like the tests were manually killed (-9).
Thanks for catching that and fixing @liujianhuiouc
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17208
Jenkins, retest this please
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17297
btw I filed https://issues.apache.org/jira/browse/SPARK-20128 for the test
timeout -- fwiw I don't think it's a problem w/ the test but a potential real
issue with the metrics system, though I don't
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17297
@sitalkedia how are you trying to run the test? Works fine for me on my
laptop on master. Note that the test is referencing a var which is only
defined if "spark.testing" is a syste
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17208
lgtm assuming tests pass
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17297
@sitalkedia I took a closer look -- I think this is from
"o.a.s.InternalAccumulatorSuite: 'internal accumulators in resubmitted
stages'". From the console output on jenkins, that was the
Github user squito commented on the issue:
https://github.com/apache/spark/pull/16781
@ueshin I've updated this to remove the conf entirely. So updating your
previous description, the new behavior can be described as:
when creating table:
* if a property
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r107940464
--- Diff: core/src/main/scala/org/apache/spark/storage/StorageUtils.scala
---
@@ -47,24 +50,30 @@ class StorageStatus(val blockManagerId: BlockManagerId
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r107939502
--- Diff:
core/src/main/scala/org/apache/spark/storage/StorageStatusListener.scala ---
@@ -74,8 +74,9 @@ class StorageStatusListener(conf: SparkConf
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r107940526
--- Diff: core/src/main/scala/org/apache/spark/storage/StorageUtils.scala
---
@@ -47,24 +50,30 @@ class StorageStatus(val blockManagerId: BlockManagerId
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/14617#discussion_r107939634
--- Diff: core/src/main/scala/org/apache/spark/storage/StorageUtils.scala
---
@@ -47,24 +50,30 @@ class StorageStatus(val blockManagerId: BlockManagerId
Github user squito commented on the issue:
https://github.com/apache/spark/pull/16781
> Do you mean the default SQLConf.PARQUET_TABLE_INCLUDE_TIMEZONE should be
false for now?
If so, I agree with it.
I was proposing something more drastic -- eliminating that conf entir
Github user squito commented on the issue:
https://github.com/apache/spark/pull/16781
@ueshin thanks for taking a look. Yes, I think your summary is exactly
correct.
Your suggestion of using the session timezone makes a lot of sense. I can
update the pr accordingly. But I
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17276
no worries, I'm just not sure when to look again, with all the
notifications from your commits. Committers tend to think that something is
ready to review if it's passing tests, so it's helpful to add
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17276
@jinxing64 do you mind closing this pr for now (or marking as [WIP] at
least)?
---
Github user squito commented on the issue:
https://github.com/apache/spark/pull/17364
> Note that as the exception handler tries to close resources before
calling committer.abortTask(taskAttemptContext), without this patch a failing
releaseResources() means that abortTask() is