[ https://issues.apache.org/jira/browse/SYSTEMML-1898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Janardhan reassigned SYSTEMML-1898:
-----------------------------------

    Assignee: Janardhan

> DataFrame to MatrixBlock Out of Bounds 
> ---------------------------------------
>
>                 Key: SYSTEMML-1898
>                 URL: https://issues.apache.org/jira/browse/SYSTEMML-1898
>             Project: SystemML
>          Issue Type: Question
>         Environment: spark 2.1.0 with systemml-0.14.0
>            Reporter: Augusto
>            Assignee: Janardhan
>            Priority: Major
>              Labels: starter
>
> When I run SystemML on a data set of about 1000 instances and 30000 
> features (in fact only about 15 non-zero features per instance), the 
> task always fails with the output below (a reproduction sketch follows 
> the stack trace):
> [Stage 28:>                                                       (0 + 12) / 12]
> 17/09/08 10:21:33 WARN scheduler.TaskSetManager: Lost task 2.0 in stage 28.0 (TID 225, sd002021.skydata.com, executor 6): java.lang.ArrayIndexOutOfBoundsException: 0
>       at org.apache.sysml.runtime.matrix.data.SparseRow.append(SparseRow.java:215)
>       at org.apache.sysml.runtime.matrix.data.SparseBlockMCSR.append(SparseBlockMCSR.java:253)
>       at org.apache.sysml.runtime.matrix.data.MatrixBlock.appendValue(MatrixBlock.java:663)
>       at org.apache.sysml.runtime.instructions.spark.utils.RDDConverterUtils$DataFrameToBinaryBlockFunction.call(RDDConverterUtils.java:1076)
>       at org.apache.sysml.runtime.instructions.spark.utils.RDDConverterUtils$DataFrameToBinaryBlockFunction.call(RDDConverterUtils.java:1008)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:186)
>       at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$7$1.apply(JavaRDDLike.scala:186)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
>       at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:796)
>       at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>       at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
>       at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
>       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
>       at org.apache.spark.scheduler.Task.run(Task.scala:99)
>       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
>       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>       at java.lang.Thread.run(Thread.java:745)
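>
> For reference, a minimal sketch of how the failing conversion can be 
> driven through the MLContext API. This is a hypothetical reconstruction, 
> not my exact job: the data generation, the column name "features", and 
> the input name "X" are made up to match the shape described above.
>
>     import org.apache.spark.ml.linalg.Vectors
>     import org.apache.sysml.api.mlcontext.{MLContext, ScriptFactory}
>
>     // assumes a spark-shell session (spark: SparkSession)
>     // ~1000 rows x 30000 columns, ~15 non-zeros per row, as sparse ML vectors
>     val data = (0 until 1000).map { i =>
>       val idx = Array.tabulate(15)(j => i % 2000 + j * 2000) // sorted, distinct
>       Tuple1(Vectors.sparse(30000, idx, Array.fill(15)(1.0)))
>     }
>     val df = spark.createDataFrame(data).toDF("features")
>
>     // Binding the DataFrame as a matrix input triggers the
>     // DataFrame-to-binary-block conversion seen in the stack trace.
>     val ml = new MLContext(spark)
>     val script = ScriptFactory.dml("print(sum(X))").in("X", df)
>     ml.execute(script)
>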
> Can someone help, please? 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
