Hello.

I am running Spark 2.3.0 on YARN.  I have a Spark Streaming application in
which the driver threw an uncaught OutOfMemoryError:

19/01/31 13:00:59 ERROR Utils: Uncaught exception in thread element-tracking-store-worker
java.lang.OutOfMemoryError: GC overhead limit exceeded
        at org.apache.spark.util.kvstore.KVTypeInfo$MethodAccessor.get(KVTypeInfo.java:154)
        at org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.compare(InMemoryStore.java:248)
        at org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.lambda$iterator$0(InMemoryStore.java:203)
        at org.apache.spark.util.kvstore.InMemoryStore$InMemoryView$$Lambda$27/1691147907.compare(Unknown Source)
        at java.util.TimSort.binarySort(TimSort.java:296)
        at java.util.TimSort.sort(TimSort.java:239)
        at java.util.Arrays.sort(Arrays.java:1512)
        at java.util.ArrayList.sort(ArrayList.java:1462)
        at java.util.Collections.sort(Collections.java:175)
        at org.apache.spark.util.kvstore.InMemoryStore$InMemoryView.iterator(InMemoryStore.java:203)
        at scala.collection.convert.Wrappers$JIterableWrapper.iterator(Wrappers.scala:54)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at org.apache.spark.status.AppStatusListener$$anonfun$org$apache$spark$status$AppStatusListener$$cleanupStages$1.apply(AppStatusListener.scala:894)
        at org.apache.spark.status.AppStatusListener$$anonfun$org$apache$spark$status$AppStatusListener$$cleanupStages$1.apply(AppStatusListener.scala:874)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.status.AppStatusListener.org$apache$spark$status$AppStatusListener$$cleanupStages(AppStatusListener.scala:874)
        at org.apache.spark.status.AppStatusListener$$anonfun$3.apply$mcVJ$sp(AppStatusListener.scala:84)
        at org.apache.spark.status.ElementTrackingStore$$anonfun$write$1$$anonfun$apply$1$$anonfun$apply$mcV$sp$1.apply(ElementTrackingStore.scala:109)
        at org.apache.spark.status.ElementTrackingStore$$anonfun$write$1$$anonfun$apply$1$$anonfun$apply$mcV$sp$1.apply(ElementTrackingStore.scala:107)
        at scala.collection.immutable.List.foreach(List.scala:381)
        at org.apache.spark.status.ElementTrackingStore$$anonfun$write$1$$anonfun$apply$1.apply$mcV$sp(ElementTrackingStore.scala:107)
        at org.apache.spark.status.ElementTrackingStore$$anonfun$write$1$$anonfun$apply$1.apply(ElementTrackingStore.scala:105)
        at org.apache.spark.status.ElementTrackingStore$$anonfun$write$1$$anonfun$apply$1.apply(ElementTrackingStore.scala:105)
        at org.apache.spark.util.Utils$.tryLog(Utils.scala:2001)
        at org.apache.spark.status.ElementTrackingStore$$anon$1.run(ElementTrackingStore.scala:91)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Despite the uncaught exception, the Streaming application never terminated,
but no new batches were started either.  As a result, the job did not process
data for some period of time, until our ancillary monitoring noticed the issue.

*Ask: What can we do to ensure that the driver is shut down when this type
of exception occurs?*
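
For reference, one mitigation we are considering (untested on our side, so
treat it as a sketch rather than a recommendation) is to have the driver JVM
exit as soon as an OutOfMemoryError is thrown, instead of relying on Spark to
handle it, e.g. via the driver's extra Java options at submit time:

    # Sketch only: make the driver JVM exit immediately on OutOfMemoryError.
    # -XX:+ExitOnOutOfMemoryError requires JDK 8u92 or later; on older JVMs
    # -XX:OnOutOfMemoryError="kill -9 %p" is the usual substitute.
    spark-submit \
      --master yarn \
      --deploy-mode cluster \
      --conf "spark.driver.extraJavaOptions=-XX:+ExitOnOutOfMemoryError" \
      ... (our usual application jar and arguments)

The hope is that the driver process then dies promptly on OOM and YARN reports
the application as failed (or retries it, depending on spark.yarn.maxAppAttempts),
which should at least make the failure visible to our monitoring.  Is that a
reasonable direction, or does Spark offer a supported way to fail fast here?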

Regards,

Bryan Jeffrey
