Re: If you use Spark 1.5 and disabled Tungsten mode ...

2015-11-05 Thread Sjoerd Mulder
Hi Reynold,

I had Janino version 2.6.1 in my project, which was provided by the fine
folks from spring-boot-dependencies.

I have now overridden it to 2.7.8 :)
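
For anyone hitting the same clash, a minimal build sketch of the override —
assuming an sbt build here for illustration (Maven users would pin the version
in dependencyManagement instead). The `Invalid character input "@"` in the log
below suggests the older Janino parser choking on the `@Override` annotation
in the generated code:

```scala
// build.sbt — force Janino 2.7.8 over the 2.6.1 that a BOM or transitive
// dependency pulls in. The coordinates are Janino's published ones; the
// rest of the build definition is assumed.
dependencyOverrides += "org.codehaus.janino" % "janino" % "2.7.8"
```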

Sjoerd

2015-11-01 8:22 GMT+01:00 Reynold Xin :

> Thanks for reporting it, Sjoerd. You might have a different version of
> Janino brought in from somewhere else.
>
> This should fix your problem: https://github.com/apache/spark/pull/9372
>
> Can you give it a try?
>
>
>
> On Tue, Oct 27, 2015 at 9:12 PM, Sjoerd Mulder 
> wrote:
>
>> No, the job doesn't actually fail, but since our tests are generating all
>> these stacktraces I have disabled Tungsten mode just to be sure (and don't
>> have a gazillion stacktraces in production).
>>
>> 2015-10-27 20:59 GMT+01:00 Josh Rosen :
>>
>>> Hi Sjoerd,
>>>
>>> Did your job actually *fail* or did it just generate many spurious
>>> exceptions? While the stacktrace that you posted does indicate a bug, I
>>> don't think that it should have stopped query execution because Spark
>>> should have fallen back to an interpreted code path (note the "Failed
>>> to generate ordering, fallback to interpreted" in the error message).
>>>
>>> On Tue, Oct 27, 2015 at 12:56 PM Sjoerd Mulder 
>>> wrote:
>>>
 I have disabled it because it started generating ERRORs when
 upgrading from Spark 1.4 to 1.5.1

 [... ERROR log with the CompileException and generated code snipped;
 identical to the original 2015-10-27 message below ...]

Re: If you use Spark 1.5 and disabled Tungsten mode ...

2015-11-01 Thread Reynold Xin
Thanks for reporting it, Sjoerd. You might have a different version of
Janino brought in from somewhere else.

This should fix your problem: https://github.com/apache/spark/pull/9372

Can you give it a try?



On Tue, Oct 27, 2015 at 9:12 PM, Sjoerd Mulder 
wrote:

> No, the job doesn't actually fail, but since our tests are generating all
> these stacktraces I have disabled Tungsten mode just to be sure (and don't
> have a gazillion stacktraces in production).
>
> 2015-10-27 20:59 GMT+01:00 Josh Rosen :
>
>> Hi Sjoerd,
>>
>> Did your job actually *fail* or did it just generate many spurious
>> exceptions? While the stacktrace that you posted does indicate a bug, I
>> don't think that it should have stopped query execution because Spark
>> should have fallen back to an interpreted code path (note the "Failed to
>> generate ordering, fallback to interpreted" in the error message).
>>
>> On Tue, Oct 27, 2015 at 12:56 PM Sjoerd Mulder 
>> wrote:
>>
>>> I have disabled it because it started generating ERRORs when
>>> upgrading from Spark 1.4 to 1.5.1
>>>
>>> [... ERROR log with the CompileException and generated code snipped;
>>> identical to the original 2015-10-27 message below ...]

Re: If you use Spark 1.5 and disabled Tungsten mode ...

2015-10-27 Thread Sjoerd Mulder
No, the job doesn't actually fail, but since our tests are generating all
these stacktraces I have disabled Tungsten mode just to be sure (and don't
have a gazillion stacktraces in production).
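
For context, a minimal sketch of the switch being discussed — assuming an
existing SparkContext `sc`; in Spark 1.5 the Tungsten code paths are gated by
the `spark.sql.tungsten.enabled` flag, which is on by default:

```scala
import org.apache.spark.sql.SQLContext

// Minimal sketch, assuming an existing SparkContext `sc`. Turning the
// 1.5-era flag off avoids the TungstenSort.newOrdering code path that
// appears in the stack trace below.
val sqlContext = new SQLContext(sc)
sqlContext.setConf("spark.sql.tungsten.enabled", "false")
```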

2015-10-27 20:59 GMT+01:00 Josh Rosen :

> Hi Sjoerd,
>
> Did your job actually *fail* or did it just generate many spurious
> exceptions? While the stacktrace that you posted does indicate a bug, I
> don't think that it should have stopped query execution because Spark
> should have fallen back to an interpreted code path (note the "Failed to
> generate ordering, fallback to interpreted" in the error message).
>
> On Tue, Oct 27, 2015 at 12:56 PM Sjoerd Mulder 
> wrote:
>
>> I have disabled it because it started generating ERRORs when
>> upgrading from Spark 1.4 to 1.5.1
>>
>> [... ERROR log with the CompileException and generated code snipped;
>> identical to the original 2015-10-27 message below ...]

Re: If you use Spark 1.5 and disabled Tungsten mode ...

2015-10-27 Thread Sjoerd Mulder
I have disabled it because it started generating ERRORs when upgrading
from Spark 1.4 to 1.5.1

2015-10-27T20:50:11.574+0100 ERROR TungstenSort.newOrdering() - Failed to
generate ordering, fallback to interpreted
java.util.concurrent.ExecutionException: java.lang.Exception: failed to
compile: org.codehaus.commons.compiler.CompileException: Line 15, Column 9:
Invalid character input "@" (character code 64)

public SpecificOrdering
generate(org.apache.spark.sql.catalyst.expressions.Expression[] expr) {
  return new SpecificOrdering(expr);
}

class SpecificOrdering extends
org.apache.spark.sql.catalyst.expressions.codegen.BaseOrdering {

  private org.apache.spark.sql.catalyst.expressions.Expression[]
expressions;



  public
SpecificOrdering(org.apache.spark.sql.catalyst.expressions.Expression[]
expr) {
expressions = expr;

  }

  @Override
  public int compare(InternalRow a, InternalRow b) {
InternalRow i = null;  // Holds current row being evaluated.

i = a;
boolean isNullA2;
long primitiveA3;
{
  /* input[2, LongType] */

  boolean isNull0 = i.isNullAt(2);
  long primitive1 = isNull0 ? -1L : (i.getLong(2));

  isNullA2 = isNull0;
  primitiveA3 = primitive1;
}
i = b;
boolean isNullB4;
long primitiveB5;
{
  /* input[2, LongType] */

  boolean isNull0 = i.isNullAt(2);
  long primitive1 = isNull0 ? -1L : (i.getLong(2));

  isNullB4 = isNull0;
  primitiveB5 = primitive1;
}
if (isNullA2 && isNullB4) {
  // Nothing
} else if (isNullA2) {
  return 1;
} else if (isNullB4) {
  return -1;
} else {
  int comp = (primitiveA3 > primitiveB5 ? 1 : primitiveA3 < primitiveB5
? -1 : 0);
  if (comp != 0) {
return -comp;
  }
}

return 0;
  }
}

at
org.spark-project.guava.util.concurrent.AbstractFuture$Sync.getValue(AbstractFuture.java:306)
at
org.spark-project.guava.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:293)
at
org.spark-project.guava.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
at
org.spark-project.guava.util.concurrent.Uninterruptibles.getUninterruptibly(Uninterruptibles.java:135)
at
org.spark-project.guava.cache.LocalCache$Segment.getAndRecordStats(LocalCache.java:2410)
at
org.spark-project.guava.cache.LocalCache$Segment.loadSync(LocalCache.java:2380)
at
org.spark-project.guava.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
at
org.spark-project.guava.cache.LocalCache$Segment.get(LocalCache.java:2257)
at org.spark-project.guava.cache.LocalCache.get(LocalCache.java:4000)
at org.spark-project.guava.cache.LocalCache.getOrLoad(LocalCache.java:4004)
at
org.spark-project.guava.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4874)
at
org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.compile(CodeGenerator.scala:362)
at
org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:139)
at
org.apache.spark.sql.catalyst.expressions.codegen.GenerateOrdering$.create(GenerateOrdering.scala:37)
at
org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:425)
at
org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator.generate(CodeGenerator.scala:422)
at org.apache.spark.sql.execution.SparkPlan.newOrdering(SparkPlan.scala:294)
at org.apache.spark.sql.execution.TungstenSort.org
$apache$spark$sql$execution$TungstenSort$$preparePartition$1(sort.scala:131)
at
org.apache.spark.sql.execution.TungstenSort$$anonfun$doExecute$3.apply(sort.scala:169)
at
org.apache.spark.sql.execution.TungstenSort$$anonfun$doExecute$3.apply(sort.scala:169)
at
org.apache.spark.rdd.MapPartitionsWithPreparationRDD.compute(MapPartitionsWithPreparationRDD.scala:59)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:297)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:264)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
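
For readers decoding the dump: a hand-translated Scala sketch of what the
generated compare() above computes (my reading of the codegen output, not
Spark source) — a descending order on a nullable long column at ordinal 2,
with nulls sorting last:

```scala
// Hand-translated sketch of the generated compare() above (not Spark source):
// nulls compare as greatest, non-null values compare in descending order.
val generatedOrdering: Ordering[Option[Long]] = new Ordering[Option[Long]] {
  def compare(a: Option[Long], b: Option[Long]): Int = (a, b) match {
    case (None, None)       => 0
    case (None, Some(_))    => 1                              // null a sorts after b
    case (Some(_), None)    => -1                             // null b sorts after a
    case (Some(x), Some(y)) => -java.lang.Long.compare(x, y)  // descending
  }
}
```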


2015-10-14 21:00 GMT+02:00 Reynold Xin :

> Can you reply to this email and provide us with reasons why you disabled it?
>
> Thanks.
>
>


Re: If you use Spark 1.5 and disabled Tungsten mode ...

2015-10-27 Thread Josh Rosen
Hi Sjoerd,

Did your job actually *fail* or did it just generate many spurious
exceptions? While the stacktrace that you posted does indicate a bug, I
don't think that it should have stopped query execution because Spark
should have fallen back to an interpreted code path (note the "Failed to
generate ordering, fallback to interpreted" in the error message).
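
An illustrative, self-contained sketch of the compile-or-fall-back pattern
described here (not Spark's actual source — Spark's real path goes through
GenerateOrdering and SparkPlan.newOrdering, as the quoted stack trace shows):

```scala
// Illustrative sketch (not Spark's source) of the fallback described above:
// try the compiled path; if codegen compilation throws, log the error and
// fall back to a slower interpreted implementation instead of failing.
object FallbackDemo {
  def compiledOrdering(): Ordering[Long] =
    throw new Exception("failed to compile: simulated Janino CompileException")

  def interpretedOrdering(): Ordering[Long] = Ordering.Long

  def newOrdering(): Ordering[Long] =
    try compiledOrdering()
    catch {
      case e: Exception =>
        Console.err.println(
          s"Failed to generate ordering, fallback to interpreted: ${e.getMessage}")
        interpretedOrdering()
    }

  def main(args: Array[String]): Unit = {
    val ord = newOrdering()        // logs the ERROR, but the job keeps running
    assert(ord.compare(1L, 2L) < 0)
  }
}
```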

On Tue, Oct 27, 2015 at 12:56 PM Sjoerd Mulder 
wrote:

> I have disabled it because it started generating ERRORs when upgrading
> from Spark 1.4 to 1.5.1
>
> [... ERROR log with the CompileException and generated code snipped;
> identical to the original 2015-10-27 message above ...]

If you use Spark 1.5 and disabled Tungsten mode ...

2015-10-14 Thread Reynold Xin
Can you reply to this email and provide us with reasons why you disabled it?

Thanks.