[ https://issues.apache.org/jira/browse/SPARK-22166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16185722#comment-16185722 ]
吴志龙 commented on SPARK-22166:
-----------------------------

The JVM throws this exception while spilling data to disk. Increasing --executor-memory works around it for me. Perhaps spilling to disk sooner, or lowering the memory proportion, would reduce the exception.

> java.lang.OutOfMemoryError: error while calling spill()
> --------------------------------------------------------
>
>                 Key: SPARK-22166
>                 URL: https://issues.apache.org/jira/browse/SPARK-22166
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 2.2.0
>         Environment: spark 2.2
>                      hadoop 2.6.0
>                      jdk 1.8
>            Reporter: 吴志龙
>
> ${SPARK_HOME}/bin/spark-sql --master=yarn --queue lx_etl --driver-memory 4g \
>   --driver-java-options -XX:MaxMetaspaceSize=512m --num-executors 12 \
>   --executor-memory 3g --hiveconf hive.cli.print.header=false --conf \
>   spark.executor.extraJavaOptions=" -Xmn768m -XX:+UseG1GC \
>   -XX:MaxMetaspaceSize=512m -XX:MaxGCPauseMillis=400 -XX:G1ReservePercent=30 \
>   -XX:SoftRefLRUPolicyMSPerMB=0 -XX:InitiatingHeapOccupancyPercent=35" -e ""
>
> java.lang.OutOfMemoryError: error while calling spill() on org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter@1b813200 : /home/fqlhadoop/datas/hadoop/tmp-hadoop-biadmin/nm-local-dir/usercache/biadmin/appcache/application_1504095691482_250304/blockmgr-3347e81a-150c-4dee-94a7-727494bf4fe4/0c/temp_local_08a15a87-0d7b-4055-bae7-cc511e48dbd8
> 	at org.apache.spark.memory.TaskMemoryManager.acquireExecutionMemory(TaskMemoryManager.java:161)
> 	at org.apache.spark.memory.TaskMemoryManager.allocatePage(TaskMemoryManager.java:245)
> 	at org.apache.spark.memory.TaskMemoryManager.allocatePage(TaskMemoryManager.java:272)
> 	at org.apache.spark.memory.MemoryConsumer.allocatePage(MemoryConsumer.java:121)
> 	at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.acquireNewPageIfNecessary(UnsafeExternalSorter.java:378)
> 	at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.insertRecord(UnsafeExternalSorter.java:402)
> 	at org.apache.spark.sql.execution.UnsafeExternalRowSorter.insertRow(UnsafeExternalRowSorter.java:109)
> 	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.sort_addToSorter$(Unknown Source)
> 	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
> 	at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
> 	at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:377)
> 	at org.apache.spark.sql.execution.window.WindowExec$$anonfun$14$$anon$1.fetchNextRow(WindowExec.scala:301)
>
> FetchFailed(null, shuffleId=3, mapId=-1, reduceId=24, message=
> org.apache.spark.shuffle.MetadataFetchFailedException: Missing an output location for shuffle 3
> 	at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$2.apply(MapOutputTracker.scala:697)
> 	at org.apache.spark.MapOutputTracker$$anonfun$org$apache$spark$MapOutputTracker$$convertMapStatuses$2.apply(MapOutputTracker.scala:693)
> 	at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
> 	at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> 	at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
> 	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
> 	at org.apache.spark.MapOutputTracker$.org$apache$spark$MapOutputTracker$$convertMapStatuses(MapOutputTracker.scala:693)
> 	at org.apache.spark.MapOutputTracker.getMapSizesByExecutorId(MapOutputTracker.scala:147)
> 	at org.apache.spark.shuffle.BlockStoreShuffleReader.read(BlockStoreShuffleReader.scala:49)
> 	at org.apache.spark.sql.execution.ShuffledRowRDD.compute(ShuffledRowRDD.scala:169)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
> 	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
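The workaround the comment describes (more executor memory, and a smaller execution-memory proportion so data spills to disk sooner) could be sketched as an adjusted spark-sql invocation. This is only an illustrative sketch: the concrete values (6g heap, fraction 0.5, storageFraction 0.3) are assumptions, not values taken from this issue, and would need tuning per workload. `spark.memory.fraction` and `spark.memory.storageFraction` are the standard Spark 2.x unified-memory settings.

```shell
# Hedged sketch of the suggested workaround, not a confirmed fix:
# - raise --executor-memory above the original 3g (6g here is an assumed value)
# - lower spark.memory.fraction so execution memory is capped sooner and
#   the sorter spills to disk earlier instead of failing mid-spill
${SPARK_HOME}/bin/spark-sql \
  --master=yarn --queue lx_etl \
  --driver-memory 4g \
  --num-executors 12 \
  --executor-memory 6g \
  --conf spark.memory.fraction=0.5 \
  --conf spark.memory.storageFraction=0.3 \
  --hiveconf hive.cli.print.header=false \
  -e "SELECT 1"
```

Lowering `spark.memory.fraction` trades throughput for stability: tasks spill more often, but the `TaskMemoryManager` is less likely to hit an unrecoverable allocation failure while a spill is already in progress.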