[ 
https://issues.apache.org/jira/browse/SYSTEMML-1283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15874106#comment-15874106
 ] 

Matthias Boehm edited comment on SYSTEMML-1283 at 2/20/17 6:36 AM:
-------------------------------------------------------------------

It is not related - the issue is that we enter the branch that converts the 
input to a frame instead of a matrix (see FrameRDDConverterUtils). Furthermore, 
the error itself is just the GC overhead limit, most likely caused by 
converting doubles to Double objects and then to Strings (the default, given 
the unspecified frame schema). In any case, even if the conversion did not 
crash with an OOM, the script would fail on the first operation that is not 
supported over frames - this needs to be fixed at the API level.
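
For illustration, one API-level workaround is to declare the input explicitly 
as a matrix, so that the matrix conversion path is taken instead of the 
string-typed frame path. A minimal Scala sketch against the new MLContext API 
- here df, dmlText, numRows, and numCols are illustrative placeholders, not 
taken from this issue:

{code}
import org.apache.sysml.api.mlcontext.{MLContext, MatrixMetadata}
import org.apache.sysml.api.mlcontext.ScriptFactory.dml

val ml = new MLContext(sc)  // sc: an existing SparkContext

// Passing explicit matrix metadata marks X as a matrix input, which should
// avoid the frame branch (and its per-cell String conversion) entirely.
val meta = new MatrixMetadata(numRows, numCols)  // dimensions of df
val script = dml(dmlText).in("X", df, meta).out("Y")
val results = ml.execute(script)
{code}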


was (Author: mboehm7):
it is not related - the issue is that we enter the branch of converting the 
input to a frame instead of matrix (see FrameRDDConverterUtils).

> Out of memory error
> -------------------
>
>                 Key: SYSTEMML-1283
>                 URL: https://issues.apache.org/jira/browse/SYSTEMML-1283
>             Project: SystemML
>          Issue Type: Bug
>            Reporter: Brendan Dwyer
>
> Possibly related to [SYSTEMML-1281]
> When a matrix X containing ~13,000 rows and ~30 unique values is passed into 
> the following DML script, it errors out on my laptop but passes on my 5-node 
> cluster.
> {code}
>   # encode DML function for one-hot encoding
>   encode_onehot = function(matrix[double] X) return(matrix[double] Y) {
>     N = nrow(X)
>     Y = table(seq(1, N, 1), X)
>   }
>   
>   # a dummy read, which allows SystemML to attach variables
>   X = read("")
>   
>   col_idx = $onehot_index
>   
>   nc = ncol(X)
>   if (col_idx < 1 | col_idx > nc) {
>     stop("one hot index out of range")
>   }
>   Y = matrix(0, rows=1, cols=1)
>   oneHot = encode_onehot(X[,col_idx:col_idx])
>   if (col_idx == 1) {
>     if (col_idx < nc) {
>       X_tmp = X[, col_idx+1:nc]
>       Y = append(oneHot, X_tmp)
>     } else {
>       Y = oneHot
>     }
>   } else if (1 < col_idx & col_idx < nc) {
>     Y = append(append(X[,1:col_idx-1], oneHot), X[, col_idx+1:nc])
>   } else { # col_idx == nc
>     Y = append(X[,1:col_idx-1], oneHot)
>   }
>   # a dummy write, which allows SystemML to attach variables
>   write(Y, "")
> {code}
> Error:
> {code}
> 17/02/17 16:57:35 ERROR Executor: Exception in task 0.0 in stage 63.0 (TID 1739)
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>       at java.lang.Double.valueOf(Double.java:519)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply_853$(Unknown Source)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(Unknown Source)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at org.apache.spark.util.Utils$$anon$4.next(Utils.scala:1778)
>       at org.apache.spark.util.Utils$$anon$4.next(Utils.scala:1772)
>       at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:31)
>       at org.apache.sysml.runtime.instructions.spark.utils.FrameRDDConverterUtils$DataFrameToBinaryBlockFunction.call(FrameRDDConverterUtils.java:748)
>       at org.apache.sysml.runtime.instructions.spark.utils.FrameRDDConverterUtils$DataFrameToBinaryBlockFunction.call(FrameRDDConverterUtils.java:715)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:186)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:186)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>       at org.apache.spark.scheduler.Task.run(Task.scala:99)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> 17/02/17 16:57:35 ERROR TaskSetManager: Task 0 in stage 63.0 failed 1 times; aborting job
> 17/02/17 16:57:36 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-20,5,main]
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>       at java.lang.Double.valueOf(Double.java:519)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply_853$(Unknown Source)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(Unknown Source)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at org.apache.spark.util.Utils$$anon$4.next(Utils.scala:1778)
>       at org.apache.spark.util.Utils$$anon$4.next(Utils.scala:1772)
>       at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:31)
>       at org.apache.sysml.runtime.instructions.spark.utils.FrameRDDConverterUtils$DataFrameToBinaryBlockFunction.call(FrameRDDConverterUtils.java:748)
>       at org.apache.sysml.runtime.instructions.spark.utils.FrameRDDConverterUtils$DataFrameToBinaryBlockFunction.call(FrameRDDConverterUtils.java:715)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:186)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:186)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>       at org.apache.spark.scheduler.Task.run(Task.scala:99)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> 17/02/17 16:57:36 ERROR RBackendHandler: executeScript on 117277 failed
> java.lang.reflect.InvocationTargetException
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
>       at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
>       at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
>       at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
>       at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
>       at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
>       at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
>       at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
>       at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
>       at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
>       at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
>       at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
>       at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
>       at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
>       at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>       at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:652)
>       at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:575)
>       at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:489)
>       at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:451)
>       at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
>       at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.sysml.runtime.DMLRuntimeException: org.apache.sysml.runtime.DMLRuntimeException: ERROR: Runtime error in program block generated from statement block between lines 16 and 17 -- Error evaluating instruction: SPARK°rangeReIndex°X- MATRIX- DOUBLE°1- SCALAR- INT- true°_Var178- SCALAR- INT- false°9- SCALAR- INT- true°9- SCALAR- INT- true°_mVar179- MATRIX- DOUBLE°MULTI_BLOCK
>       at org.apache.sysml.runtime.controlprogram.Program.execute(Program.java:130)
>       at org.apache.sysml.api.MLContext.executeUsingSimplifiedCompilationChain(MLContext.java:1655)
>       at org.apache.sysml.api.MLContext.compileAndExecuteScript(MLContext.java:1520)
>       at org.apache.sysml.api.MLContext.executeScript(MLContext.java:1469)
>       at org.apache.sysml.api.MLContext.executeScript(MLContext.java:1455)
>       at org.apache.sysml.api.MLContext.executeScript(MLContext.java:1413)
>       at org.apache.sysml.api.MLContext.executeScript(MLContext.java:1419)
>       ... 36 more
> Caused by: org.apache.sysml.runtime.DMLRuntimeException: ERROR: Runtime error in program block generated from statement block between lines 16 and 17 -- Error evaluating instruction: SPARK°rangeReIndex°X- MATRIX- DOUBLE°1- SCALAR- INT- true°_Var178- SCALAR- INT- false°9- SCALAR- INT- true°9- SCALAR- INT- true°_mVar179- MATRIX- DOUBLE°MULTI_BLOCK
>       at org.apache.sysml.runtime.controlprogram.ProgramBlock.executeSingleInstruction(ProgramBlock.java:320)
>       at org.apache.sysml.runtime.controlprogram.ProgramBlock.executeInstructions(ProgramBlock.java:221)
>       at org.apache.sysml.runtime.controlprogram.ProgramBlock.execute(ProgramBlock.java:168)
>       at org.apache.sysml.runtime.controlprogram.Program.execute(Program.java:123)
>       ... 42 more
> Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 63.0 failed 1 times, most recent failure: Lost task 0.0 in stage 63.0 (TID 1739, localhost, executor driver): java.lang.OutOfMemoryError: GC overhead limit exceeded
>       at java.lang.Double.valueOf(Double.java:519)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply_853$(Unknown Source)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(Unknown Source)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at org.apache.spark.util.Utils$$anon$4.next(Utils.scala:1778)
>       at org.apache.spark.util.Utils$$anon$4.next(Utils.scala:1772)
>       at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:31)
>       at org.apache.sysml.runtime.instructions.spark.utils.FrameRDDConverterUtils$DataFrameToBinaryBlockFunction.call(FrameRDDConverterUtils.java:748)
>       at org.apache.sysml.runtime.instructions.spark.utils.FrameRDDConverterUtils$DataFrameToBinaryBlockFunction.call(FrameRDDConverterUtils.java:715)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:186)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:186)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>       at org.apache.spark.scheduler.Task.run(Task.scala:99)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Driver stacktrace:
>       at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
>       at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>       at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>       at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
>       at scala.Option.foreach(Option.scala:257)
>       at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
>       at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
>       at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
>       at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
>       at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
>       at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1918)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1931)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1944)
>       at org.apache.spark.SparkContext.runJob(SparkContext.scala:1958)
>       at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:935)
>       at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>       at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>       at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>       at org.apache.spark.rdd.RDD.collect(RDD.scala:934)
>       at org.apache.spark.api.java.JavaRDDLike$class.collect(JavaRDDLike.scala:361)
>       at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:45)
>       at org.apache.sysml.runtime.controlprogram.context.SparkExecutionContext.toMatrixBlock(SparkExecutionContext.java:783)
>       at org.apache.sysml.runtime.instructions.spark.MatrixIndexingSPInstruction.processInstruction(MatrixIndexingSPInstruction.java:151)
>       at org.apache.sysml.runtime.controlprogram.ProgramBlock.executeSingleInstruction(ProgramBlock.java:290)
>       ... 45 more
> Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
>       at java.lang.Double.valueOf(Double.java:519)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply_853$(Unknown Source)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(Unknown Source)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at org.apache.spark.util.Utils$$anon$4.next(Utils.scala:1778)
>       at org.apache.spark.util.Utils$$anon$4.next(Utils.scala:1772)
>       at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:31)
>       at org.apache.sysml.runtime.instructions.spark.utils.FrameRDDConverterUtils$DataFrameToBinaryBlockFunction.call(FrameRDDConverterUtils.java:748)
>       at org.apache.sysml.runtime.instructions.spark.utils.FrameRDDConverterUtils$DataFrameToBinaryBlockFunction.call(FrameRDDConverterUtils.java:715)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:186)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:186)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>       at org.apache.spark.scheduler.Task.run(Task.scala:99)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       ... 1 more
> Error: HydraR[sysml.execute]: DML returned error: Error in handleErrors(returnStatus, conn): org.apache.sysml.runtime.DMLRuntimeException: org.apache.sysml.runtime.DMLRuntimeException: ERROR: Runtime error in program block generated from statement block between lines 16 and 17 -- Error evaluating instruction: SPARK°rangeReIndex°X- MATRIX- DOUBLE°1- SCALAR- INT- true°_Var178- SCALAR- INT- false°9- SCALAR- INT- true°9- SCALAR- INT- true°_mVar179- MATRIX- DOUBLE°MULTI_BLOCK
>       at org.apache.sysml.runtime.controlprogram.Program.execute(Program.java:130)
>       at org.apache.sysml.api.MLContext.executeUsingSimplifiedCompilationChain(MLContext.java:1655)
>       at org.apache.sysml.api.MLContext.compileAndExecuteScript(MLContext.java:1520)
>       at org.apache.sysml.api.MLContext.executeScript(MLContext.java:1469)
>       at org.apache.sysml.api.MLContext.executeScript(MLContext.java:1455)
>       at org.apache.sysml.api.MLContext.executeScript(MLContext.java:1413)
>       at org.apache.sysml.api.MLContext.executeScript(MLContext.java:1419)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at org.apache.spark.api.r.RBackendHandler.handleMethodCall(RBackendHandler.scala:167)
>       at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:108)
>       at org.apache.spark.api.r.RBackendHandler.channelRead0(RBackendHandler.scala:40)
>       at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
>       at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
>       at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
>       at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
>       at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
>       at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
>       at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:293)
>       at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:267)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
>       at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:346)
>       at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:367)
>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:353)
>       at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
>       at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
>       at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:652)
>       at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:575)
>       at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:489)
>       at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:451)
>       at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:140)
>       at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
>       at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.sysml.runtime.DMLRuntimeException: ERROR: Runtime error in program block generated from statement block between lines 16 and 17 -- Error evaluating instruction: SPARK°rangeReIndex°X- MATRIX- DOUBLE°1- SCALAR- INT- true°_Var178- SCALAR- INT- false°9- SCALAR- INT- true°9- SCALAR- INT- true°_mVar179- MATRIX- DOUBLE°MULTI_BLOCK
>       at org.apache.sysml.runtime.controlprogram.ProgramBlock.executeSingleInstruction(ProgramBlock.java:320)
>       at org.apache.sysml.runtime.controlprogram.ProgramBlock.executeInstructions(ProgramBlock.java:221)
>       at org.apache.sysml.runtime.controlprogram.ProgramBlock.execute(ProgramBlock.java:168)
>       at org.apache.sysml.runtime.controlprogram.Program.execute(Program.java:123)
>       ... 42 more
> Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 63.0 failed 1 times, most recent failure: Lost task 0.0 in stage 63.0 (TID 1739, localhost, executor driver): java.lang.OutOfMemoryError: GC overhead limit exceeded
>       at java.lang.Double.valueOf(Double.java:519)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply_853$(Unknown Source)
>       at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(Unknown Source)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
>       at org.apache.spark.util.Utils$$anon$4.next(Utils.scala:1778)
>       at org.apache.spark.util.Utils$$anon$4.next(Utils.scala:1772)
>       at scala.collection.convert.Wrappers$IteratorWrapper.next(Wrappers.scala:31)
>       at org.apache.sysml.runtime.instructions.spark.utils.FrameRDDConverterUtils$DataFrameToBinaryBlockFunction.call(FrameRDDConverterUtils.java:748)
>       at org.apache.sysml.runtime.instructions.spark.utils.FrameRDDConverterUtils$DataFrameToBinaryBlockFunction.call(FrameRDDConverterUtils.java:715)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:186)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:186)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>       at org.apache.spark.scheduler.Task.run(Task.scala:99)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
> Driver stacktrace:
>       at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
>       at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
>       at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>       at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>       at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
>       at org.apache.spark.scheduler.DAGSchedul
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
