Github user spmallette commented on the issue:

    https://github.com/apache/tinkerpop/pull/457
  
    Sorry to say I'm still getting errors. 
    
    ```text
    Results :
    
    Tests in error:
      GroupTestV3d0$Traversals>GroupTestV3d0.g_V_group_byXoutE_countX_byXnameX:139->AbstractGremlinTest.printTraversalForm:252 ? IllegalState
      SideEffectCapTest$Traversals>SideEffectCapTest.g_V_chooseXlabel_person__age_groupCountXaX__name_groupCountXbXX_capXa_bX:64->AbstractGremlinTest.printTraversalForm:252 ? IllegalState
      GroovyHasTest$Traversals>HasTest.g_V_hasXname_markoX:135->get_g_V_hasXname_markoX:54 ? IllegalState
      GroovyHasTest$Traversals>HasTest.g_V_hasXage_gt_30X:152->get_g_V_hasXage_gt_30X:95 ? IllegalState
      GroovyGroupCountTest$Traversals>GroupCountTest.g_V_filterXfalseX_groupCount:142->get_g_V_hasXnoX_groupCount:59 ? IllegalState
    ```
    
    You might want to rebase this on the latest from tp31, but I'm not so sure that will help, as the problems are still in Spark. Here's a sample of the stack trace from one of the errors:
    
    ```text
    
    g_V_filterXfalseX_groupCount(org.apache.tinkerpop.gremlin.process.traversal.step.sideEffect.GroovyGroupCountTest$Traversals)  Time elapsed: 0.215 sec  <<< ERROR!
    java.lang.IllegalStateException: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 6870.0 failed 1 times, most recent failure: Lost task 0.0 in stage 6870.0 (TID 8212, localhost): java.io.IOException: Mkdirs failed to create /usr/src/tinkerpop/spark-gremlin/target/test-case-data/SparkHadoopGraphProvider/graph-provider-data/~reducing/_temporary/0/_temporary/attempt_201611051613_15726_r_000000_0 (exists=false, cwd=file:/usr/src/tinkerpop/spark-gremlin)
        at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:450)
        at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:435)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
        at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1135)
        at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:273)
        at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:530)
        at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getSequenceWriter(SequenceFileOutputFormat.java:64)
        at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getRecordWriter(SequenceFileOutputFormat.java:75)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1030)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1014)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

    Driver stacktrace:
        at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
        at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
        at org.apache.tinkerpop.gremlin.process.traversal.util.TraversalScriptHelper.compute(TraversalScriptHelper.java:46)
        at org.apache.tinkerpop.gremlin.process.traversal.util.TraversalScriptHelper.compute(TraversalScriptHelper.java:57)
        at org.codehaus.groovy.vmplugin.v7.IndyInterface.selectMethod(IndyInterface.java:228)
        at org.apache.tinkerpop.gremlin.process.traversal.step.sideEffect.GroovyGroupCountTest$Traversals.get_g_V_hasXnoX_groupCount(GroovyGroupCountTest.groovy:59)
    Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 6870.0 failed 1 times, most recent failure: Lost task 0.0 in stage 6870.0 (TID 8212, localhost): java.io.IOException: Mkdirs failed to create /usr/src/tinkerpop/spark-gremlin/target/test-case-data/SparkHadoopGraphProvider/graph-provider-data/~reducing/_temporary/0/_temporary/attempt_201611051613_15726_r_000000_0 (exists=false, cwd=file:/usr/src/tinkerpop/spark-gremlin)
        at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:450)
        at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:435)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
        at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1135)
        at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:273)
        at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:530)
        at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getSequenceWriter(SequenceFileOutputFormat.java:64)
        at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getRecordWriter(SequenceFileOutputFormat.java:75)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1030)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1014)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

    Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1283)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1271)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1270)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1270)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1496)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1458)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1447)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1824)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1837)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1914)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1055)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:998)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:998)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
        at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:998)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:938)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:930)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:930)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:310)
        at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:930)
        at org.apache.spark.api.java.JavaPairRDD.saveAsNewAPIHadoopFile(JavaPairRDD.scala:809)
        at org.apache.tinkerpop.gremlin.spark.structure.io.OutputFormatRDD.writeMemoryRDD(OutputFormatRDD.java:65)
        at org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer.lambda$submitWithExecutor$1(SparkGraphComputer.java:271)
        at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    Caused by: java.io.IOException: Mkdirs failed to create /usr/src/tinkerpop/spark-gremlin/target/test-case-data/SparkHadoopGraphProvider/graph-provider-data/~reducing/_temporary/0/_temporary/attempt_201611051613_15726_r_000000_0 (exists=false, cwd=file:/usr/src/tinkerpop/spark-gremlin)
        at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:450)
        at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:435)
        at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:909)
        at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1135)
        at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:273)
        at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:530)
        at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getSequenceWriter(SequenceFileOutputFormat.java:64)
        at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat.getRecordWriter(SequenceFileOutputFormat.java:75)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1030)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1$$anonfun$12.apply(PairRDDFunctions.scala:1014)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:88)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
    ```
    
    This could just be environmental in some way, but I don't seem to get it on the tp31 branch itself; I just had a clean build of that this morning.
    
    @dkuppitz could you give this a shot? Also, do you recognize the exception I pasted? It seems familiar to me. Did we ever diagnose it?
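    
    If it does turn out to be environmental, one quick sanity check (just a hypothetical throwaway probe, not part of the build, and the `mkdirs-check` directory name is made up) would be to confirm whether the JVM running the tests can even create directories under the spark-gremlin target, since that is exactly what Hadoop's ChecksumFileSystem is failing to do:
    
    ```java
    import java.io.File;
    
    // Throwaway probe: try to create a nested directory under the same target path
    // the failing test writes to. If mkdirs() comes back false here, the problem is
    // local permissions/ownership of target/, not anything in this PR.
    public class MkdirsCheck {
        public static void main(String[] args) {
            File dir = new File("/usr/src/tinkerpop/spark-gremlin/target/test-case-data/mkdirs-check");
            System.out.println("exists=" + dir.exists()
                    + ", mkdirs=" + dir.mkdirs()
                    + ", canWrite=" + dir.canWrite());
        }
    }
    ```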

