[spark] branch branch-3.0 updated: [SPARK-32688][SQL][TEST] Add special values to LiteralGenerator for float and double

2020-09-15 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a commit to branch branch-3.0
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.0 by this push:
 new cb6a0d0  [SPARK-32688][SQL][TEST] Add special values to 
LiteralGenerator for float and double
cb6a0d0 is described below

commit cb6a0d08cc020d9a2c19173c9023a9f5e565dd6c
Author: Tanel Kiis 
AuthorDate: Wed Sep 16 12:13:15 2020 +0900

[SPARK-32688][SQL][TEST] Add special values to LiteralGenerator for float 
and double

### What changes were proposed in this pull request?

The `LiteralGenerator` for float and double data types was supposed to yield 
special values (NaN, +-inf) among others, but the `Gen.chooseNum` method does 
not yield values outside the defined range. Over a wide range of floats and 
doubles, `Gen.chooseNum` also fails to yield values in the "everyday" range, 
as noted in https://github.com/typelevel/scalacheck/issues/113 .
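The new 50/50 strategy can be sketched outside ScalaCheck with plain 
`scala.util.Random`; the names `specials` and `nextFloatLiteral` are 
illustrative and not part of the patch:

```scala
import scala.util.Random

// Illustrative sketch of the generator strategy using scala.util.Random
// instead of ScalaCheck's Gen; names here are hypothetical.
val specials: Seq[Float] = Seq(
  Float.NaN, Float.PositiveInfinity, Float.NegativeInfinity,
  Float.MinPositiveValue, Float.MaxValue, -Float.MaxValue,
  0.0f, -0.0f, 1.0f, -1.0f)

def nextFloatLiteral(rand: Random): Float =
  if (rand.nextBoolean()) {
    // 50% of the time: a special value, which Gen.chooseNum never produced
    specials(rand.nextInt(specials.length))
  } else {
    // 50% of the time: an arbitrary float decoded from a random bit pattern
    java.lang.Float.intBitsToFloat(rand.nextInt())
  }
```

In the actual patch this is expressed as a `Gen.oneOf` of a special-value 
generator and `Arbitrary.arbFloat.arbitrary`, as the diff below shows.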

There is a similar class, `RandomDataGenerator`, that is used in some other 
tests; `-0.0` and `-0.0f` were added as special values there too.

These changes revealed an inconsistency in the equality check between 
`-0.0` and `0.0`.
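That inconsistency is easy to reproduce on the JVM without any Spark code: 
the two zeros are equal as primitives but distinguishable bitwise and via 
boxed equality, exactly the kind of corner case that can make interpreted 
and code-generated evaluation disagree.

```scala
// -0.0 and 0.0 are equal under primitive comparison ...
val primitivesEqual = -0.0 == 0.0                          // true
// ... but their IEEE 754 bit patterns differ ...
val bitsEqual = java.lang.Double.doubleToRawLongBits(-0.0) ==
  java.lang.Double.doubleToRawLongBits(0.0)                // false
// ... and boxed equality (java.lang.Double.equals) also tells them apart.
val boxedEqual = Double.box(-0.0).equals(Double.box(0.0))  // false
```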

### Why are the changes needed?

The `LiteralGenerator` is mostly used in the 
`checkConsistencyBetweenInterpretedAndCodegen` method in 
`MathExpressionsSuite`. This change would have caught the bug fixed in #29495.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

Locally reverted #29495 and verified that the existing test cases caught 
the bug.

Closes #29515 from tanelk/SPARK-32688.

Authored-by: Tanel Kiis 
Signed-off-by: Takeshi Yamamuro 
(cherry picked from commit 6051755bfe23a0e4564bf19476ec34cd7fd6008d)
Signed-off-by: Takeshi Yamamuro 
---
 .../org/apache/spark/sql/RandomDataGenerator.scala|  4 ++--
 .../sql/catalyst/expressions/LiteralGenerator.scala   | 19 +++
 2 files changed, 17 insertions(+), 6 deletions(-)

diff --git 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/RandomDataGenerator.scala 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/RandomDataGenerator.scala
index 6a5bdc4..3e2dc3f 100644
--- a/sql/catalyst/src/test/scala/org/apache/spark/sql/RandomDataGenerator.scala
+++ b/sql/catalyst/src/test/scala/org/apache/spark/sql/RandomDataGenerator.scala
@@ -260,10 +260,10 @@ object RandomDataGenerator {
   new MathContext(precision)).bigDecimal)
   case DoubleType => randomNumeric[Double](
 rand, r => longBitsToDouble(r.nextLong()), Seq(Double.MinValue, 
Double.MinPositiveValue,
-  Double.MaxValue, Double.PositiveInfinity, Double.NegativeInfinity, 
Double.NaN, 0.0))
+  Double.MaxValue, Double.PositiveInfinity, Double.NegativeInfinity, 
Double.NaN, 0.0, -0.0))
   case FloatType => randomNumeric[Float](
 rand, r => intBitsToFloat(r.nextInt()), Seq(Float.MinValue, 
Float.MinPositiveValue,
-  Float.MaxValue, Float.PositiveInfinity, Float.NegativeInfinity, 
Float.NaN, 0.0f))
+  Float.MaxValue, Float.PositiveInfinity, Float.NegativeInfinity, 
Float.NaN, 0.0f, -0.0f))
   case ByteType => randomNumeric[Byte](
 rand, _.nextInt().toByte, Seq(Byte.MinValue, Byte.MaxValue, 0.toByte))
   case IntegerType => randomNumeric[Int](
diff --git 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/LiteralGenerator.scala
 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/LiteralGenerator.scala
index d92eb01..c8e3b0e 100644
--- 
a/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/LiteralGenerator.scala
+++ 
b/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/expressions/LiteralGenerator.scala
@@ -68,16 +68,27 @@ object LiteralGenerator {
   lazy val longLiteralGen: Gen[Literal] =
 for { l <- Arbitrary.arbLong.arbitrary } yield Literal.create(l, LongType)
 
+  // The floatLiteralGen and doubleLiteralGen will 50% of the time yield 
arbitrary values
+  // and 50% of the time will yield some special values that are more likely 
to reveal
+  // corner cases. This behavior is similar to the integral value generators.
   lazy val floatLiteralGen: Gen[Literal] =
 for {
-  f <- Gen.chooseNum(Float.MinValue / 2, Float.MaxValue / 2,
-Float.NaN, Float.PositiveInfinity, Float.NegativeInfinity)
+  f <- Gen.oneOf(
+Gen.oneOf(
+  Float.NaN, Float.PositiveInfinity, Float.NegativeInfinity, 
Float.MinPositiveValue,
+  Float.MaxValue, -Float.MaxValue, 0.0f, -0.0f, 1.0f, -1.0f),
+Arbitrary.arbFloat.arbitrary
+  )
 } yield Literal.create(f, FloatType)
 
   lazy val doubleLiteralGen: Gen[Literal] =
 for {
-  f <- Gen.chooseNum(Double.MinValue / 2, Double.MaxValue / 2,
-Double.NaN, Double.PositiveInfinity, 

[spark] branch master updated (b46c730 -> 6051755)

2020-09-15 Thread yamamuro
This is an automated email from the ASF dual-hosted git repository.

yamamuro pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from b46c730  [SPARK-32704][SQL][TESTS][FOLLOW-UP] Check any physical rule 
instead of a specific rule in the test
 add 6051755  [SPARK-32688][SQL][TEST] Add special values to 
LiteralGenerator for float and double

No new revisions were added by this update.

Summary of changes:
 .../org/apache/spark/sql/RandomDataGenerator.scala|  4 ++--
 .../sql/catalyst/expressions/LiteralGenerator.scala   | 19 +++
 2 files changed, 17 insertions(+), 6 deletions(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (108c4c8 -> b46c730)

2020-09-15 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 108c4c8  [SPARK-32481][SQL][TESTS][FOLLOW-UP] Skip the test if trash 
directory cannot be created
 add b46c730  [SPARK-32704][SQL][TESTS][FOLLOW-UP] Check any physical rule 
instead of a specific rule in the test

No new revisions were added by this update.

Summary of changes:
 .../test/scala/org/apache/spark/sql/execution/QueryExecutionSuite.scala | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)


-
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (888b343 -> 108c4c8)

2020-09-15 Thread gurwls223
This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 888b343  [SPARK-32827][SQL] Add spark.sql.maxMetadataStringLength config
 add 108c4c8  [SPARK-32481][SQL][TESTS][FOLLOW-UP] Skip the test if trash directory cannot be created

No new revisions were added by this update.

Summary of changes:
 .../test/scala/org/apache/spark/sql/execution/command/DDLSuite.scala   | 3 +++
 1 file changed, 3 insertions(+)


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (6f36db1 -> 888b343)

2020-09-15 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 6f36db1  [SPARK-31448][PYTHON] Fix storage level used in persist() in dataframe.py
 add 888b343  [SPARK-32827][SQL] Add spark.sql.maxMetadataStringLength config

No new revisions were added by this update.

Summary of changes:
 .../org/apache/spark/sql/internal/SQLConf.scala    | 10 ++
 .../spark/sql/execution/DataSourceScanExec.scala   |  2 +-
 .../sql/execution/datasources/v2/FileScan.scala    |  2 +-
 .../spark/sql/FileBasedDataSourceSuite.scala       | 23 ++
 4 files changed, 35 insertions(+), 2 deletions(-)


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (316242b -> 6f36db1)

2020-09-15 Thread srowen
This is an automated email from the ASF dual-hosted git repository.

srowen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 316242b  [SPARK-32874][SQL][TEST] Enhance result set meta data check for execute statement operation with thrift server
 add 6f36db1  [SPARK-31448][PYTHON] Fix storage level used in persist() in dataframe.py

No new revisions were added by this update.

Summary of changes:
 python/pyspark/sql/dataframe.py | 7 ---
 python/pyspark/storagelevel.py  | 1 +
 2 files changed, 5 insertions(+), 3 deletions(-)


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (99384d1 -> 316242b)

2020-09-15 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from 99384d1  [SPARK-32738][CORE] Should reduce the number of active threads if fatal error happens in `Inbox.process`
 add 316242b  [SPARK-32874][SQL][TEST] Enhance result set meta data check for execute statement operation with thrift server

No new revisions were added by this update.

Summary of changes:
 .../SparkThriftServerProtocolVersionsSuite.scala   | 147 -
 1 file changed, 140 insertions(+), 7 deletions(-)


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (c8baab1 -> 99384d1)

2020-09-15 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from c8baab1  [SPARK-32879][SQL] Refactor SparkSession initial options
 add 99384d1  [SPARK-32738][CORE] Should reduce the number of active threads if fatal error happens in `Inbox.process`

No new revisions were added by this update.

Summary of changes:
 .../scala/org/apache/spark/rpc/netty/Inbox.scala | 20 
 .../org/apache/spark/rpc/netty/InboxSuite.scala  | 13 +
 2 files changed, 33 insertions(+)


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org



[spark] branch master updated (d8a0d85 -> c8baab1)

2020-09-15 Thread wenchen
This is an automated email from the ASF dual-hosted git repository.

wenchen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git.


from d8a0d85  [SPARK-32884][TESTS] Mark TPCDSQuery*Suite as ExtendedSQLTest
 add c8baab1  [SPARK-32879][SQL] Refactor SparkSession initial options

No new revisions were added by this update.

Summary of changes:
 project/MimaExcludes.scala                         |  5 ++-
 .../scala/org/apache/spark/sql/SparkSession.scala  | 42 +-
 .../sql/internal/BaseSessionStateBuilder.scala     |  6 +++-
 .../apache/spark/sql/internal/SessionState.scala   |  7 ++--
 .../org/apache/spark/sql/test/TestSQLContext.scala |  9 ++---
 .../spark/sql/hive/HiveSessionStateBuilder.scala   | 12 ---
 .../org/apache/spark/sql/hive/test/TestHive.scala  |  9 ++---
 7 files changed, 55 insertions(+), 35 deletions(-)


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org


