[jira] [Resolved] (SPARK-29620) UnsafeKVExternalSorterSuite failure on bigendian system

2020-03-27 Thread salamani (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

salamani resolved SPARK-29620.
--
Resolution: Fixed

> UnsafeKVExternalSorterSuite failure on bigendian system
> ---
>
> Key: SPARK-29620
> URL: https://issues.apache.org/jira/browse/SPARK-29620
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Shell
>Affects Versions: 2.4.4
>Reporter: salamani
>Priority: Major
>
> {code}
> spark/sql/core# ../../build/mvn -Dtest=none 
> -DwildcardSuites=org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite 
> test
> {code}
> {code}
> UnsafeKVExternalSorterSuite:
> 12:24:24.305 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> - kv sorting key schema [] and value schema [] *** FAILED ***
>  java.lang.AssertionError: sizeInBytes (4) should be a multiple of 8
>  at 
> org.apache.spark.sql.catalyst.expressions.UnsafeRow.pointTo(UnsafeRow.java:168)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorter$KVSorterIterator.next(UnsafeKVExternalSorter.java:297)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite.org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter(UnsafeKVExternalSorterSuite.scala:145)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply$mcV$sp(UnsafeKVExternalSorterSuite.scala:86)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
>  at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
>  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>  at org.scalatest.Transformer.apply(Transformer.scala:22)
>  at org.scalatest.Transformer.apply(Transformer.scala:20)
>  ...
> - kv sorting key schema [int] and value schema [] *** FAILED ***
>  java.lang.AssertionError: sizeInBytes (20) should be a multiple of 8
>  at 
> org.apache.spark.sql.catalyst.expressions.UnsafeRow.pointTo(UnsafeRow.java:168)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorter$KVSorterIterator.next(UnsafeKVExternalSorter.java:297)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite.org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter(UnsafeKVExternalSorterSuite.scala:145)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply$mcV$sp(UnsafeKVExternalSorterSuite.scala:86)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
>  at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
>  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>  at org.scalatest.Transformer.apply(Transformer.scala:22)
>  at org.scalatest.Transformer.apply(Transformer.scala:20)
>  ...
> - kv sorting key schema [] and value schema [int] *** FAILED ***
>  java.lang.AssertionError: sizeInBytes (20) should be a multiple of 8
>  at 
> org.apache.spark.sql.catalyst.expressions.UnsafeRow.pointTo(UnsafeRow.java:168)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorter$KVSorterIterator.next(UnsafeKVExternalSorter.java:297)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite.org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter(UnsafeKVExternalSorterSuite.scala:145)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply$mcV$sp(UnsafeKVExternalSorterSuite.scala:86)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
>  at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
>  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>  at org.scalates
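
For context, the assertion that fails in each case above is thrown by 
UnsafeRow.pointTo, which requires the byte region backing a row to span a whole 
number of 8-byte words. A minimal standalone sketch of that invariant 
(illustrative only, not Spark's implementation):

{code}
object UnsafeRowSizeCheck {
  // The invariant asserted by UnsafeRow.pointTo: a row's backing region must
  // span a whole number of 8-byte words.
  def isWordAligned(sizeInBytes: Int): Boolean = (sizeInBytes & 0x7) == 0

  def main(args: Array[String]): Unit = {
    // 4 and 20 are the sizes reported in the failures above; neither is a
    // multiple of 8, so the assertion fires. 16 and 24 are examples of sizes
    // that would pass the check.
    Seq(4, 20, 16, 24).foreach { size =>
      println(s"sizeInBytes=$size multipleOf8=${isWordAligned(size)}")
    }
  }
}
{code}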

[jira] [Closed] (SPARK-29620) UnsafeKVExternalSorterSuite failure on bigendian system

2020-03-27 Thread salamani (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

salamani closed SPARK-29620.


Passed on rerun in a new system

> UnsafeKVExternalSorterSuite failure on bigendian system
> ---
>
> Key: SPARK-29620
> URL: https://issues.apache.org/jira/browse/SPARK-29620
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Shell
>Affects Versions: 2.4.4
>Reporter: salamani
>Priority: Major
>
> {code}
> spark/sql/core# ../../build/mvn -Dtest=none 
> -DwildcardSuites=org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite 
> test
> {code}
> {code}
> UnsafeKVExternalSorterSuite:
> 12:24:24.305 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load 
> native-hadoop library for your platform... using builtin-java classes where 
> applicable
> - kv sorting key schema [] and value schema [] *** FAILED ***
>  java.lang.AssertionError: sizeInBytes (4) should be a multiple of 8
>  at 
> org.apache.spark.sql.catalyst.expressions.UnsafeRow.pointTo(UnsafeRow.java:168)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorter$KVSorterIterator.next(UnsafeKVExternalSorter.java:297)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite.org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter(UnsafeKVExternalSorterSuite.scala:145)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply$mcV$sp(UnsafeKVExternalSorterSuite.scala:86)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
>  at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
>  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>  at org.scalatest.Transformer.apply(Transformer.scala:22)
>  at org.scalatest.Transformer.apply(Transformer.scala:20)
>  ...
> - kv sorting key schema [int] and value schema [] *** FAILED ***
>  java.lang.AssertionError: sizeInBytes (20) should be a multiple of 8
>  at 
> org.apache.spark.sql.catalyst.expressions.UnsafeRow.pointTo(UnsafeRow.java:168)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorter$KVSorterIterator.next(UnsafeKVExternalSorter.java:297)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite.org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter(UnsafeKVExternalSorterSuite.scala:145)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply$mcV$sp(UnsafeKVExternalSorterSuite.scala:86)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
>  at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
>  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>  at org.scalatest.Transformer.apply(Transformer.scala:22)
>  at org.scalatest.Transformer.apply(Transformer.scala:20)
>  ...
> - kv sorting key schema [] and value schema [int] *** FAILED ***
>  java.lang.AssertionError: sizeInBytes (20) should be a multiple of 8
>  at 
> org.apache.spark.sql.catalyst.expressions.UnsafeRow.pointTo(UnsafeRow.java:168)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorter$KVSorterIterator.next(UnsafeKVExternalSorter.java:297)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite.org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter(UnsafeKVExternalSorterSuite.scala:145)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply$mcV$sp(UnsafeKVExternalSorterSuite.scala:86)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
>  at 
> org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
>  at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
>  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>  at org.scalat

[jira] [Closed] (SPARK-30078) FlatMapGroupsWithStateSuite failure (big-endian)

2020-03-27 Thread salamani (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

salamani closed SPARK-30078.


> FlatMapGroupsWithStateSuite failure (big-endian)
> 
>
> Key: SPARK-30078
> URL: https://issues.apache.org/jira/browse/SPARK-30078
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Tests
>Affects Versions: 2.4.4
>Reporter: salamani
>Priority: Major
>  Labels: big-endian
> Attachments: FlatMapGroupsWithStateSuite.txt
>
>
> I have built Apache Spark v2.4.4 on a big-endian platform with AdoptOpenJDK 
> OpenJ9 1.8.0_202.
> My build is successful. However, while running the Scala tests of the "Spark 
> Project SQL" module, I am seeing failures in FlatMapGroupsWithStateSuite; the 
> error log is attached.






[jira] [Resolved] (SPARK-30078) FlatMapGroupsWithStateSuite failure (big-endian)

2020-03-27 Thread salamani (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

salamani resolved SPARK-30078.
--
Resolution: Fixed

Passed on rerun in a new system.

> FlatMapGroupsWithStateSuite failure (big-endian)
> 
>
> Key: SPARK-30078
> URL: https://issues.apache.org/jira/browse/SPARK-30078
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Tests
>Affects Versions: 2.4.4
>Reporter: salamani
>Priority: Major
>  Labels: big-endian
> Attachments: FlatMapGroupsWithStateSuite.txt
>
>
> I have built Apache Spark v2.4.4 on a big-endian platform with AdoptOpenJDK 
> OpenJ9 1.8.0_202.
> My build is successful. However, while running the Scala tests of the "Spark 
> Project SQL" module, I am seeing failures in FlatMapGroupsWithStateSuite; the 
> error log is attached.






[jira] [Updated] (SPARK-30078) flatMapGroupsWithState failure

2019-11-29 Thread salamani (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-30078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

salamani updated SPARK-30078:
-
Attachment: FlatMapGroupsWithStateSuite.txt

> flatMapGroupsWithState failure
> --
>
> Key: SPARK-30078
> URL: https://issues.apache.org/jira/browse/SPARK-30078
> Project: Spark
>  Issue Type: Bug
>  Components: SQL, Tests
>Affects Versions: 2.4.4
>Reporter: salamani
>Priority: Major
>  Labels: big-endian
> Attachments: FlatMapGroupsWithStateSuite.txt
>
>
> I have built Apache Spark v2.4.4 on a big-endian platform with AdoptOpenJDK 
> OpenJ9 1.8.0_202.
> My build is successful. However, while running the Scala tests of the "Spark 
> Project SQL" module, I am seeing failures in FlatMapGroupsWithStateSuite; the 
> error log is attached.






[jira] [Created] (SPARK-30078) flatMapGroupsWithState failure

2019-11-29 Thread salamani (Jira)
salamani created SPARK-30078:


 Summary: flatMapGroupsWithState failure
 Key: SPARK-30078
 URL: https://issues.apache.org/jira/browse/SPARK-30078
 Project: Spark
  Issue Type: Bug
  Components: SQL, Tests
Affects Versions: 2.4.4
Reporter: salamani


I have built Apache Spark v2.4.4 on a big-endian platform with AdoptOpenJDK 
OpenJ9 1.8.0_202.

My build is successful. However, while running the Scala tests of the "Spark 
Project SQL" module, I am seeing failures in FlatMapGroupsWithStateSuite; the 
error log is attached.






[jira] [Updated] (SPARK-29620) UnsafeKVExternalSorterSuite failure on bigendian system

2019-10-28 Thread salamani (Jira)


 [ 
https://issues.apache.org/jira/browse/SPARK-29620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

salamani updated SPARK-29620:
-
Description: 
spark/sql/core# ../../build/mvn -Dtest=none 
-DwildcardSuites=org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite test



UnsafeKVExternalSorterSuite:
12:24:24.305 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load 
native-hadoop library for your platform... using builtin-java classes where 
applicable
- kv sorting key schema [] and value schema [] *** FAILED ***
 java.lang.AssertionError: sizeInBytes (4) should be a multiple of 8
 at 
org.apache.spark.sql.catalyst.expressions.UnsafeRow.pointTo(UnsafeRow.java:168)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorter$KVSorterIterator.next(UnsafeKVExternalSorter.java:297)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite.org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter(UnsafeKVExternalSorterSuite.scala:145)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply$mcV$sp(UnsafeKVExternalSorterSuite.scala:86)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
 at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
 at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
 at org.scalatest.Transformer.apply(Transformer.scala:22)
 at org.scalatest.Transformer.apply(Transformer.scala:20)
 ...
- kv sorting key schema [int] and value schema [] *** FAILED ***
 java.lang.AssertionError: sizeInBytes (20) should be a multiple of 8
 at 
org.apache.spark.sql.catalyst.expressions.UnsafeRow.pointTo(UnsafeRow.java:168)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorter$KVSorterIterator.next(UnsafeKVExternalSorter.java:297)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite.org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter(UnsafeKVExternalSorterSuite.scala:145)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply$mcV$sp(UnsafeKVExternalSorterSuite.scala:86)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
 at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
 at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
 at org.scalatest.Transformer.apply(Transformer.scala:22)
 at org.scalatest.Transformer.apply(Transformer.scala:20)
 ...
- kv sorting key schema [] and value schema [int] *** FAILED ***
 java.lang.AssertionError: sizeInBytes (20) should be a multiple of 8
 at 
org.apache.spark.sql.catalyst.expressions.UnsafeRow.pointTo(UnsafeRow.java:168)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorter$KVSorterIterator.next(UnsafeKVExternalSorter.java:297)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite.org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter(UnsafeKVExternalSorterSuite.scala:145)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply$mcV$sp(UnsafeKVExternalSorterSuite.scala:86)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite$$anonfun$org$apache$spark$sql$execution$UnsafeKVExternalSorterSuite$$testKVSorter$1.apply(UnsafeKVExternalSorterSuite.scala:86)
 at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
 at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
 at org.scalatest.Transformer.apply(Transformer.scala:22)
 at org.scalatest.Transformer.apply(Transformer.scala:20)
 ...
- kv sorting key schema [int] and value schema 
[float,float,double,string,float] *** FAILED ***
 java.lang.AssertionError: sizeInBytes (2732) should be a multiple of 8
 at 
org.apache.spark.sql.catalyst.expressions.UnsafeRow.pointTo(UnsafeRow.java:168)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorter$KVSorterIterator.next(UnsafeKVExternalSorter.java:297)
 at 
org.apache.spark.sql.execution.UnsafeKVExternalSorterSuite.org$apache$spark$sql$executi

[jira] [Created] (SPARK-29620) UnsafeKVExternalSorterSuite failure on bigendian system

2019-10-28 Thread salamani (Jira)
salamani created SPARK-29620:


 Summary: UnsafeKVExternalSorterSuite failure on bigendian system
 Key: SPARK-29620
 URL: https://issues.apache.org/jira/browse/SPARK-29620
 Project: Spark
  Issue Type: Bug
  Components: Spark Shell
Affects Versions: 2.4.4
Reporter: salamani









[jira] [Updated] (SPARK-26983) Spark PassThroughSuite,ColumnVectorSuite failure on bigendian

2019-02-25 Thread salamani (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-26983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

salamani updated SPARK-26983:
-
Description: 
The following failures are observed in Spark Project SQL on a big-endian system

PassThroughSuite :
 - PassThrough with FLOAT: empty column for decompress()
 - PassThrough with FLOAT: long random series for decompress() *** FAILED ***
 Expected 0.10990685, but got -6.6357654E14 Wrong 0-th decoded float value 
(PassThroughEncodingSuite.scala:146)
 - PassThrough with FLOAT: simple case with null for decompress() *** FAILED ***
 Expected 2.0, but got 9.0E-44 Wrong 0-th decoded float value 
(PassThroughEncodingSuite.scala:146)
 - PassThrough with DOUBLE: empty column
 - PassThrough with DOUBLE: long random series
 - PassThrough with DOUBLE: empty column for decompress()
 - PassThrough with DOUBLE: long random series for decompress() *** FAILED ***
 Expected 0.20634564007984624, but got 5.902392643940031E-230 Wrong 0-th 
decoded double value (PassThroughEncodingSuite.scala:150)
 - PassThrough with DOUBLE: simple case with null for decompress() *** FAILED 
***
 Expected 2.0, but got 3.16E-322 Wrong 0-th decoded double value 
(PassThroughEncodingSuite.scala:150)
 Run completed in 9 seconds, 72 milliseconds.
 Total number of tests run: 30
 Suites: completed 2, aborted 0
 Tests: succeeded 26, failed 4, canceled 0, ignored 0, pending 0
 ** 
 *** 4 TESTS FAILED ***

 

ColumnVectorSuite:
 - CachedBatch long Apis
 - CachedBatch float Apis *** FAILED ***
 4.6006E-41 did not equal 1.0 (ColumnVectorSuite.scala:378)
 - CachedBatch double Apis *** FAILED ***
 3.03865E-319 did not equal 1.0 (ColumnVectorSuite.scala:402)
 Run completed in 8 seconds, 183 milliseconds.
 Total number of tests run: 21
 Suites: completed 2, aborted 0
 Tests: succeeded 19, failed 2, canceled 0, ignored 0, pending 0
 ** 
 *** 2 TESTS FAILED ***

  was:
Following failures are observed for PassThroughSuite in Spark Project SQL  on 
big endian system
 - PassThrough with FLOAT: empty column for decompress()
 - PassThrough with FLOAT: long random series for decompress() *** FAILED ***
 Expected 0.10990685, but got -6.6357654E14 Wrong 0-th decoded float value 
(PassThroughEncodingSuite.scala:146)
 - PassThrough with FLOAT: simple case with null for decompress() *** FAILED ***
 Expected 2.0, but got 9.0E-44 Wrong 0-th decoded float value 
(PassThroughEncodingSuite.scala:146)
 - PassThrough with DOUBLE: empty column
 - PassThrough with DOUBLE: long random series
 - PassThrough with DOUBLE: empty column for decompress()
 - PassThrough with DOUBLE: long random series for decompress() *** FAILED ***
 Expected 0.20634564007984624, but got 5.902392643940031E-230 Wrong 0-th 
decoded double value (PassThroughEncodingSuite.scala:150)
 - PassThrough with DOUBLE: simple case with null for decompress() *** FAILED 
***
 Expected 2.0, but got 3.16E-322 Wrong 0-th decoded double value 
(PassThroughEncodingSuite.scala:150)
 Run completed in 9 seconds, 72 milliseconds.
 Total number of tests run: 30
 Suites: completed 2, aborted 0
 Tests: succeeded 26, failed 4, canceled 0, ignored 0, pending 0
 ** 
 *** 4 TESTS FAILED ***

Summary: Spark PassThroughSuite,ColumnVectorSuite failure on bigendian  
(was: Spark PassThroughSuite failure on bigendian)

> Spark PassThroughSuite,ColumnVectorSuite failure on bigendian
> -
>
> Key: SPARK-26983
> URL: https://issues.apache.org/jira/browse/SPARK-26983
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.3.2
>Reporter: salamani
>Priority: Major
> Fix For: 2.3.2
>
>
> The following failures are observed in Spark Project SQL on a big-endian system
> PassThroughSuite :
>  - PassThrough with FLOAT: empty column for decompress()
>  - PassThrough with FLOAT: long random series for decompress() *** FAILED ***
>  Expected 0.10990685, but got -6.6357654E14 Wrong 0-th decoded float value 
> (PassThroughEncodingSuite.scala:146)
>  - PassThrough with FLOAT: simple case with null for decompress() *** FAILED 
> ***
>  Expected 2.0, but got 9.0E-44 Wrong 0-th decoded float value 
> (PassThroughEncodingSuite.scala:146)
>  - PassThrough with DOUBLE: empty column
>  - PassThrough with DOUBLE: long random series
>  - PassThrough with DOUBLE: empty column for decompress()
>  - PassThrough with DOUBLE: long random series for decompress() *** FAILED ***
>  Expected 0.20634564007984624, but got 5.902392643940031E-230 Wrong 0-th 
> decoded double value (PassThroughEncodingSuite.scala:150)
>  - PassThrough with DOUBLE: simple case with null for decompress() *** FAILED 
> ***
>  Expected 2.0, but got 3.16E-322 Wrong 0-th decoded double value 
> (PassThroughEncodingSuite.scala:150)
>  Run completed in 9 seconds, 72 milliseconds.
>  Total number of tests run: 30
>  Suites: 
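
The corrupted values reported above are consistent with a byte-order mix-up 
while decoding the column bytes: an IEEE-754 float or double written in one 
byte order and read back in the other decodes to exactly the numbers in the 
logs. A small self-contained demonstration (illustrative only, not the suite's 
code, and only one plausible explanation of the symptom):

{code}
import java.nio.{ByteBuffer, ByteOrder}

object EndianMismatchDemo {
  // Write a float in little-endian order, then read the same bytes back in
  // big-endian order (the kind of mismatch a big-endian JVM would hit if the
  // encoder and decoder disagree on byte order).
  def swappedFloat(f: Float): Float = {
    val buf = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN)
    buf.putFloat(f)
    buf.flip()
    buf.order(ByteOrder.BIG_ENDIAN).getFloat()
  }

  def swappedDouble(d: Double): Double = {
    val buf = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN)
    buf.putDouble(d)
    buf.flip()
    buf.order(ByteOrder.BIG_ENDIAN).getDouble()
  }

  def main(args: Array[String]): Unit = {
    println(swappedFloat(2.0f))  // 9.0E-44      (PassThroughSuite float case)
    println(swappedDouble(2.0))  // 3.16E-322    (PassThroughSuite double case)
    println(swappedFloat(1.0f))  // 4.6006E-41   (ColumnVectorSuite float case)
    println(swappedDouble(1.0))  // 3.03865E-319 (ColumnVectorSuite double case)
  }
}
{code}

The printed values match the "but got ..." numbers in the PassThroughSuite 
failures and the "did not equal 1.0" values in the ColumnVectorSuite failures 
quoted above.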

[jira] [Updated] (SPARK-26983) Spark PassThroughSuite failure on bigendian

2019-02-25 Thread salamani (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-26983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

salamani updated SPARK-26983:
-
Description: 
The following failures are observed for PassThroughSuite in Spark Project SQL


 - PassThrough with FLOAT: empty column for decompress()
 - PassThrough with FLOAT: long random series for decompress() *** FAILED ***
 Expected 0.10990685, but got -6.6357654E14 Wrong 0-th decoded float value 
(PassThroughEncodingSuite.scala:146)
 - PassThrough with FLOAT: simple case with null for decompress() *** FAILED ***
 Expected 2.0, but got 9.0E-44 Wrong 0-th decoded float value 
(PassThroughEncodingSuite.scala:146)
 - PassThrough with DOUBLE: empty column
 - PassThrough with DOUBLE: long random series
 - PassThrough with DOUBLE: empty column for decompress()
 - PassThrough with DOUBLE: long random series for decompress() *** FAILED ***
 Expected 0.20634564007984624, but got 5.902392643940031E-230 Wrong 0-th 
decoded double value (PassThroughEncodingSuite.scala:150)
 - PassThrough with DOUBLE: simple case with null for decompress() *** FAILED 
***
 Expected 2.0, but got 3.16E-322 Wrong 0-th decoded double value 
(PassThroughEncodingSuite.scala:150)
 Run completed in 9 seconds, 72 milliseconds.
 Total number of tests run: 30
 Suites: completed 2, aborted 0
 Tests: succeeded 26, failed 4, canceled 0, ignored 0, pending 0
 ** 
 *** 4 TESTS FAILED ***


  was:
Following failures are observed for PassThroughSuite in Spark Project SQL  

```
 - PassThrough with FLOAT: empty column for decompress()
 - PassThrough with FLOAT: long random series for decompress() *** FAILED ***
 Expected 0.10990685, but got -6.6357654E14 Wrong 0-th decoded float value 
(PassThroughEncodingSuite.scala:146)
 - PassThrough with FLOAT: simple case with null for decompress() *** FAILED ***
 Expected 2.0, but got 9.0E-44 Wrong 0-th decoded float value 
(PassThroughEncodingSuite.scala:146)
 - PassThrough with DOUBLE: empty column
 - PassThrough with DOUBLE: long random series
 - PassThrough with DOUBLE: empty column for decompress()
 - PassThrough with DOUBLE: long random series for decompress() *** FAILED ***
 Expected 0.20634564007984624, but got 5.902392643940031E-230 Wrong 0-th 
decoded double value (PassThroughEncodingSuite.scala:150)
 - PassThrough with DOUBLE: simple case with null for decompress() *** FAILED 
***
 Expected 2.0, but got 3.16E-322 Wrong 0-th decoded double value 
(PassThroughEncodingSuite.scala:150)
 Run completed in 9 seconds, 72 milliseconds.
 Total number of tests run: 30
 Suites: completed 2, aborted 0
 Tests: succeeded 26, failed 4, canceled 0, ignored 0, pending 0
 ** 
 *** 4 TESTS FAILED ***
 ```


> Spark PassThroughSuite failure on bigendian
> ---
>
> Key: SPARK-26983
> URL: https://issues.apache.org/jira/browse/SPARK-26983
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.3.2
>Reporter: salamani
>Priority: Major
> Fix For: 2.3.2
>
>
> The following failures are observed for PassThroughSuite in Spark Project SQL
>  - PassThrough with FLOAT: empty column for decompress()
>  - PassThrough with FLOAT: long random series for decompress() *** FAILED ***
>  Expected 0.10990685, but got -6.6357654E14 Wrong 0-th decoded float value 
> (PassThroughEncodingSuite.scala:146)
>  - PassThrough with FLOAT: simple case with null for decompress() *** FAILED 
> ***
>  Expected 2.0, but got 9.0E-44 Wrong 0-th decoded float value 
> (PassThroughEncodingSuite.scala:146)
>  - PassThrough with DOUBLE: empty column
>  - PassThrough with DOUBLE: long random series
>  - PassThrough with DOUBLE: empty column for decompress()
>  - PassThrough with DOUBLE: long random series for decompress() *** FAILED ***
>  Expected 0.20634564007984624, but got 5.902392643940031E-230 Wrong 0-th 
> decoded double value (PassThroughEncodingSuite.scala:150)
>  - PassThrough with DOUBLE: simple case with null for decompress() *** FAILED 
> ***
>  Expected 2.0, but got 3.16E-322 Wrong 0-th decoded double value 
> (PassThroughEncodingSuite.scala:150)
>  Run completed in 9 seconds, 72 milliseconds.
>  Total number of tests run: 30
>  Suites: completed 2, aborted 0
>  Tests: succeeded 26, failed 4, canceled 0, ignored 0, pending 0
>  ** 
>  *** 4 TESTS FAILED ***






[jira] [Updated] (SPARK-26983) Spark PassThroughSuite failure on bigendian

2019-02-24 Thread salamani (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-26983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

salamani updated SPARK-26983:
-
Description: 
The following failures are observed for PassThroughSuite in Spark Project SQL 
on a big-endian system
 - PassThrough with FLOAT: empty column for decompress()
 - PassThrough with FLOAT: long random series for decompress() *** FAILED ***
 Expected 0.10990685, but got -6.6357654E14 Wrong 0-th decoded float value 
(PassThroughEncodingSuite.scala:146)
 - PassThrough with FLOAT: simple case with null for decompress() *** FAILED ***
 Expected 2.0, but got 9.0E-44 Wrong 0-th decoded float value 
(PassThroughEncodingSuite.scala:146)
 - PassThrough with DOUBLE: empty column
 - PassThrough with DOUBLE: long random series
 - PassThrough with DOUBLE: empty column for decompress()
 - PassThrough with DOUBLE: long random series for decompress() *** FAILED ***
 Expected 0.20634564007984624, but got 5.902392643940031E-230 Wrong 0-th 
decoded double value (PassThroughEncodingSuite.scala:150)
 - PassThrough with DOUBLE: simple case with null for decompress() *** FAILED 
***
 Expected 2.0, but got 3.16E-322 Wrong 0-th decoded double value 
(PassThroughEncodingSuite.scala:150)
 Run completed in 9 seconds, 72 milliseconds.
 Total number of tests run: 30
 Suites: completed 2, aborted 0
 Tests: succeeded 26, failed 4, canceled 0, ignored 0, pending 0
 ** 
 *** 4 TESTS FAILED ***

  was:
Following failures are observed for PassThroughSuite in Spark Project SQL  


 - PassThrough with FLOAT: empty column for decompress()
 - PassThrough with FLOAT: long random series for decompress() *** FAILED ***
 Expected 0.10990685, but got -6.6357654E14 Wrong 0-th decoded float value 
(PassThroughEncodingSuite.scala:146)
 - PassThrough with FLOAT: simple case with null for decompress() *** FAILED ***
 Expected 2.0, but got 9.0E-44 Wrong 0-th decoded float value 
(PassThroughEncodingSuite.scala:146)
 - PassThrough with DOUBLE: empty column
 - PassThrough with DOUBLE: long random series
 - PassThrough with DOUBLE: empty column for decompress()
 - PassThrough with DOUBLE: long random series for decompress() *** FAILED ***
 Expected 0.20634564007984624, but got 5.902392643940031E-230 Wrong 0-th 
decoded double value (PassThroughEncodingSuite.scala:150)
 - PassThrough with DOUBLE: simple case with null for decompress() *** FAILED 
***
 Expected 2.0, but got 3.16E-322 Wrong 0-th decoded double value 
(PassThroughEncodingSuite.scala:150)
 Run completed in 9 seconds, 72 milliseconds.
 Total number of tests run: 30
 Suites: completed 2, aborted 0
 Tests: succeeded 26, failed 4, canceled 0, ignored 0, pending 0
 ** 
 *** 4 TESTS FAILED ***



> Spark PassThroughSuite failure on bigendian
> ---
>
> Key: SPARK-26983
> URL: https://issues.apache.org/jira/browse/SPARK-26983
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Core
>Affects Versions: 2.3.2
>Reporter: salamani
>Priority: Major
> Fix For: 2.3.2
>
>
> The following failures are observed for PassThroughSuite in Spark Project SQL 
> on a big-endian system
>  - PassThrough with FLOAT: empty column for decompress()
>  - PassThrough with FLOAT: long random series for decompress() *** FAILED ***
>  Expected 0.10990685, but got -6.6357654E14 Wrong 0-th decoded float value 
> (PassThroughEncodingSuite.scala:146)
>  - PassThrough with FLOAT: simple case with null for decompress() *** FAILED 
> ***
>  Expected 2.0, but got 9.0E-44 Wrong 0-th decoded float value 
> (PassThroughEncodingSuite.scala:146)
>  - PassThrough with DOUBLE: empty column
>  - PassThrough with DOUBLE: long random series
>  - PassThrough with DOUBLE: empty column for decompress()
>  - PassThrough with DOUBLE: long random series for decompress() *** FAILED ***
>  Expected 0.20634564007984624, but got 5.902392643940031E-230 Wrong 0-th 
> decoded double value (PassThroughEncodingSuite.scala:150)
>  - PassThrough with DOUBLE: simple case with null for decompress() *** FAILED 
> ***
>  Expected 2.0, but got 3.16E-322 Wrong 0-th decoded double value 
> (PassThroughEncodingSuite.scala:150)
>  Run completed in 9 seconds, 72 milliseconds.
>  Total number of tests run: 30
>  Suites: completed 2, aborted 0
>  Tests: succeeded 26, failed 4, canceled 0, ignored 0, pending 0
>  ** 
>  *** 4 TESTS FAILED ***






[jira] [Created] (SPARK-26983) Spark PassThroughSuite failure on bigendian

2019-02-24 Thread salamani (JIRA)
salamani created SPARK-26983:


 Summary: Spark PassThroughSuite failure on bigendian
 Key: SPARK-26983
 URL: https://issues.apache.org/jira/browse/SPARK-26983
 Project: Spark
  Issue Type: Bug
  Components: Spark Core
Affects Versions: 2.3.2
Reporter: salamani
 Fix For: 2.3.2


The following failures are observed for PassThroughSuite in Spark Project SQL

```
 - PassThrough with FLOAT: empty column for decompress()
 - PassThrough with FLOAT: long random series for decompress() *** FAILED ***
 Expected 0.10990685, but got -6.6357654E14 Wrong 0-th decoded float value 
(PassThroughEncodingSuite.scala:146)
 - PassThrough with FLOAT: simple case with null for decompress() *** FAILED ***
 Expected 2.0, but got 9.0E-44 Wrong 0-th decoded float value 
(PassThroughEncodingSuite.scala:146)
 - PassThrough with DOUBLE: empty column
 - PassThrough with DOUBLE: long random series
 - PassThrough with DOUBLE: empty column for decompress()
 - PassThrough with DOUBLE: long random series for decompress() *** FAILED ***
 Expected 0.20634564007984624, but got 5.902392643940031E-230 Wrong 0-th 
decoded double value (PassThroughEncodingSuite.scala:150)
 - PassThrough with DOUBLE: simple case with null for decompress() *** FAILED 
***
 Expected 2.0, but got 3.16E-322 Wrong 0-th decoded double value 
(PassThroughEncodingSuite.scala:150)
 Run completed in 9 seconds, 72 milliseconds.
 Total number of tests run: 30
 Suites: completed 2, aborted 0
 Tests: succeeded 26, failed 4, canceled 0, ignored 0, pending 0
 ** 
 *** 4 TESTS FAILED ***
 ```






[jira] [Updated] (SPARK-26876) Spark repl scala test failure on big-endian system

2019-02-13 Thread salamani (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-26876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

salamani updated SPARK-26876:
-
Attachment: repl_scala_issue.txt

> Spark repl scala test failure on big-endian system
> --
>
> Key: SPARK-26876
> URL: https://issues.apache.org/jira/browse/SPARK-26876
> Project: Spark
>  Issue Type: Bug
>  Components: Spark Shell
>Affects Versions: 2.3.2
>Reporter: salamani
>Priority: Major
> Attachments: repl_scala_issue.txt
>
>
> I have built Spark 2.3.2 from source on a big-endian system and observed the 
> following test failure in the Scala tests of the Spark 2.3.2 repl module. 
> Please find the log attached.
>  
> How should I go about resolving these issues, or are they known issues for 
> the big-endian platform? How important is this failure?






[jira] [Created] (SPARK-26876) Spark repl scala test failure on big-endian system

2019-02-13 Thread salamani (JIRA)
salamani created SPARK-26876:


 Summary: Spark repl scala test failure on big-endian system
 Key: SPARK-26876
 URL: https://issues.apache.org/jira/browse/SPARK-26876
 Project: Spark
  Issue Type: Bug
  Components: Spark Shell
Affects Versions: 2.3.2
Reporter: salamani


I have built Spark 2.3.2 from source on a big-endian system and observed the 
following test failure in the Scala tests of the Spark 2.3.2 repl module. 
Please find the log attached.
 
How should I go about resolving these issues, or are they known issues for the 
big-endian platform? How important is this failure?


