Re: unsubscribe

2020-06-27 Thread Wesley Peng
Please send an empty email to: user-unsubscr...@spark.apache.org to
unsubscribe yourself from the list.



Sri Kris wrote:
Sent from Mail for Windows 10







Spark 3.0 almost 1000 times slower to read json than Spark 2.4

2020-06-27 Thread Sanjeev Mishra
I have a large number of JSON files that Spark 2.4 can read in 36 seconds, but
Spark 3.0 takes almost 33 minutes to read the same files. On closer analysis,
it looks like Spark 3.0 is choosing a different DAG than Spark 2.4. Does anyone
have any idea what is going on? Is there a configuration problem with Spark
3.0?

Here are the details:

*Spark 2.4*

Summary Metrics for 2203 Completed Tasks

Metric     Min      25th percentile   Median   75th percentile   Max
Duration   0.0 ms   0.0 ms            0.0 ms   1.0 ms            62.0 ms
GC Time    0.0 ms   0.0 ms            0.0 ms   0.0 ms            11.0 ms
 Aggregated Metrics by Executor
Executor ID   Address          Task Time   Total Tasks   Failed Tasks   Killed Tasks   Succeeded Tasks   Blacklisted
driver        10.0.0.8:49159   36 s        2203          0              0              2203              false


*Spark 3.0*

Summary Metrics for 8 Completed Tasks

Metric                 Min                25th percentile    Median             75th percentile    Max
Duration               3.8 min            4.0 min            4.1 min            4.4 min            5.0 min
GC Time                3 s                3 s                3 s                4 s                4 s
Input Size / Records   15.6 MiB / 51028   16.2 MiB / 53303   16.8 MiB / 55259   17.8 MiB / 58148   20.2 MiB / 71624
 Aggregated Metrics by Executor
Executor ID   Address          Task Time   Total Tasks   Failed Tasks   Killed Tasks   Succeeded Tasks   Blacklisted   Input Size / Records
driver        10.0.0.8:50224   33 min      8             0              0              8                 false         136.1 MiB / 451999


The DAG is also different.
Spark 2.4 DAG

[image: Screenshot 2020-06-27 16.30.26.png]

Spark 3.0 DAG

[image: Screenshot 2020-06-27 16.32.32.png]
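
The thread does not establish the cause, but one way to narrow it down is to
check whether the extra time goes into JSON schema inference (which scans the
input once before the actual read) rather than into the read itself. A minimal
sketch, assuming the spark-shell; the path and schema below are hypothetical
placeholders, not taken from the report:

import org.apache.spark.sql.types._

val path = "/data/json/*.json"       // hypothetical input location
val schema = new StructType()        // hypothetical schema; use the real one
  .add("id", LongType)
  .add("payload", StringType)

// Full inference: Spark scans the input once just to infer the schema.
val inferred = spark.read.json(path)

// Explicit schema: the inference pass is skipped entirely.
val explicit = spark.read.schema(schema).json(path)

// Sampled inference: infer the schema from roughly 10% of the input rows.
val sampled = spark.read.option("samplingRatio", 0.1).json(path)

If the explicit-schema read is fast on both versions, the regression is in the
inference pass; if it is still slow, it is in the scan itself.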


Re: unsubscribe

2020-06-27 Thread Jeff Evans
That is not how you unsubscribe.  See here for instructions:
https://gist.github.com/jeff303/ba1906bb7bcb2f2501528a8bb1521b8e


On Sat, Jun 27, 2020, 6:08 PM Sri Kris  wrote:

> Sent from Mail for Windows 10

unsubscribe

2020-06-27 Thread Sri Kris


Sent from Mail for Windows 10



Spark 3.0.0 spark.read.json never completes

2020-06-27 Thread Sanjeev Mishra
Hi all,

I have a huge number of JSON files that Spark 2.4 can easily finish reading,
but Spark 3.0.0 never completes. I am running both Spark 2 and Spark 3 on a Mac.


Re: When is a Bigint a long and when is a long a long

2020-06-27 Thread Anwar AliKhan
OK Thanks

On Sat, 27 Jun 2020, 17:36 Sean Owen,  wrote:

> It does not return a DataFrame. It returns Dataset[Long].
> You do not need to collect(). See my email.

Re: When is a Bigint a long and when is a long a long

2020-06-27 Thread Sean Owen
It does not return a DataFrame. It returns Dataset[Long].
You do not need to collect(). See my email.

On Sat, Jun 27, 2020, 11:33 AM Anwar AliKhan 
wrote:

> So the range function actually returns BigInt (Spark SQL type)
> and the fact Dataset[Long] and printSchema are displaying (toString())
> Long instead of BigInt needs looking into.
>
> Putting that to one side
>
> My issue with using collect() to get around the casting of elements
> returned
> by range is,  I read some literature which says the collect() returns all
> the data to the driver
> and so can likely cause Out Of memory error.
>
> Question:
> Is it correct that collect() behaves that way and can cause Out of memory
> error ?
>
> Obviously it will be better to use  .map for casting because then the work
> is being done by workers.
> spark.range(10).map(_.toLong).reduce(_+_)

Re: When is a Bigint a long and when is a long a long

2020-06-27 Thread Anwar AliKhan
So the range function actually returns BigInt (the Spark SQL type),
and the fact that Dataset[Long] and printSchema() display (via toString())
Long instead of BigInt needs looking into.

Putting that to one side,

my issue with using collect() to get around the casting of the elements
returned by range is that, from what I have read, collect() returns all the
data to the driver and so can cause an out-of-memory error.

Question:
Is it correct that collect() behaves that way and can cause an out-of-memory
error?

Obviously it would be better to use .map for the casting, because then the
work is done by the workers:
spark.range(10).map(_.toLong).reduce(_+_)

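For what it is worth, a small sketch of the driver-side concern above, assuming
the spark-shell (spark and its implicits in scope): collect() really does bring
every row back to the driver and can run it out of memory on a large dataset,
whereas reduce() combines rows on the executors and only small per-partition
partial results return to the driver, and the SQL aggregate below likewise runs
on the executors.

import org.apache.spark.sql.functions.sum

// Cast on the executors, then reduce; only a single Long reaches the driver.
spark.range(100).map(_.toLong).reduce(_ + _)          // Long = 4950

// Equivalent SQL-style aggregate, also executed on the executors.
spark.range(100).agg(sum("id")).first().getLong(0)    // Long = 4950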


On Sat, 27 Jun 2020, 15:42 Sean Owen,  wrote:

> There are several confusing things going on here. I think this is part
> of the explanation, not 100% sure:
>
> 'bigint' is the Spark SQL type of an 8-byte long. 'long' is the type
> of a JVM primitive. Both are the same, conceptually, but represented
> differently internally as they are logically somewhat different ideas.
>
> The first thing I'm not sure about is why the toString of
> Dataset[Long] reports a 'bigint' and printSchema() reports 'long'.
> That might be a (cosmetic) bug.
>
> Second, in Scala 2.12, its SAM support causes calls to reduce() and
> other methods, using an Object type, to be ambiguous, because Spark
> has long since had Java-friendly overloads that support a SAM
> interface for Java callers. Those weren't removed to avoid breakage,
> at the cost of having to explicitly tell it what overload you want.
> (They are equivalent)
>
> This is triggered because range() returns java.lang.Longs, not long
> primitives (i.e. scala.Long). I assume that is to make it versatile
> enough to use in Java too, and because it's hard to write an overload
> (would have to rename it)
>
> But that means you trigger the SAM overload issue.
>
> Anything you do that makes this a Dataset[scala.Long] resolves it, as
> it is no longer ambiguous (Java-friendly Object-friendly overload does
> not apply). For example:
>
> spark.range(10).map(_.toLong).reduce(_+_)
>
> If you collect(), you still have an Array[java.lang.Long]. But Scala
> implicits and conversions make .reduce(_+_) work fine on that; there
> is no "Java-friendly" overload in the way.
>
> Normally all of this just works and you can ignore these differences.
> This is a good example of a corner case in which it's inconvenient,
> because of the old Java-friendly overloads. This is by design though.

Re: When is a Bigint a long and when is a long a long

2020-06-27 Thread Sean Owen
There are several confusing things going on here. I think this is part
of the explanation, not 100% sure:

'bigint' is the Spark SQL type of an 8-byte long. 'long' is the type
of a JVM primitive. Both are the same, conceptually, but represented
differently internally as they are logically somewhat different ideas.

The first thing I'm not sure about is why the toString of
Dataset[Long] reports a 'bigint' and printSchema() reports 'long'.
That might be a (cosmetic) bug.

Second, in Scala 2.12, its SAM support causes calls to reduce() and
other methods, using an Object type, to be ambiguous, because Spark
has long since had Java-friendly overloads that support a SAM
interface for Java callers. Those weren't removed to avoid breakage,
at the cost of having to explicitly tell it what overload you want.
(They are equivalent)

This is triggered because range() returns java.lang.Longs, not long
primitives (i.e. scala.Long). I assume that is to make it versatile
enough to use in Java too, and because it's hard to write an overload
(would have to rename it)

But that means you trigger the SAM overload issue.

Anything you do that makes this a Dataset[scala.Long] resolves it, as
it is no longer ambiguous (Java-friendly Object-friendly overload does
not apply). For example:

spark.range(10).map(_.toLong).reduce(_+_)

If you collect(), you still have an Array[java.lang.Long]. But Scala
implicits and conversions make .reduce(_+_) work fine on that; there
is no "Java-friendly" overload in the way.

Normally all of this just works and you can ignore these differences.
This is a good example of a corner case in which it's inconvenient,
because of the old Java-friendly overloads. This is by design though.
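
A short sketch of the two ways around the ambiguity described above, assuming
the spark-shell (spark and its implicits in scope): either make the element
type scala.Long so the Scala overload applies, or pick the Java-friendly
overload explicitly.

import org.apache.spark.api.java.function.ReduceFunction

// 1) Map to scala.Long first; reduce is then unambiguous.
spark.range(10).map(_.toLong).reduce(_ + _)            // Long = 45

// 2) Or name the Java-friendly overload explicitly.
spark.range(10).reduce(new ReduceFunction[java.lang.Long] {
  override def call(a: java.lang.Long, b: java.lang.Long): java.lang.Long = a + b
})                                                     // java.lang.Long = 45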





Distributed Anomaly Detection using MIDAS

2020-06-27 Thread Shivin Srivastava
Hi All,

I have recently been exploring MIDAS, an algorithm for streaming anomaly
detection. A production-level parallel and distributed implementation of
MIDAS should be quite useful to the industry. I feel that Spark is very well
suited for this, as MIDAS deals with streaming data. If anyone is interested
in contributing or collaborating, please let me know. Currently, there exist
C++, Python, Ruby, Rust, R, and Golang implementations.

MIDAS repository: https://github.com/bhatiasiddharth/MIDAS
MIDAS paper: https://www.comp.nus.edu.sg/~sbhatia/assets/pdf/midas.pdf

Thanks!
Shivin


When is a Bigint a long and when is a long a long

2020-06-27 Thread Anwar AliKhan
*As you know, I have been puzzling over this issue:*
*How come spark.range(100).reduce(_+_) worked in an earlier Spark version but
not with the most recent versions?*

*Well,*

*when you first create a dataset, the column "id" datatype is [BigInt] by
default.*
*It is a bit like a coin: Long on one side and bigint on the other.*

scala> val myrange = spark.range(1,100)
myrange: org.apache.spark.sql.Dataset[Long] = [id: bigint]

*The Spark framework error message after parsing the reduce(_+_) method
confirms this, and moreover stresses its constraint of expecting data of type
Long as parameter argument(s).*

scala> myrange.reduce(_+_)
:26: error: overloaded method value reduce with alternatives:
  (func: org.apache.spark.api.java.function.ReduceFunction[java.lang.Long])java.lang.Long
  (func: (java.lang.Long, java.lang.Long) => java.lang.Long)java.lang.Long
 cannot be applied to ((java.lang.Long, java.lang.Long) => scala.Long)
       myrange.reduce(_+_)
       ^


*But if you ask the printSchema method, it disagrees with both of the above
and says the column "id" data is Long.*

scala> range100.printSchema()
root
 |-- id: long (nullable = false)


*If I ask the collect() method, it agrees with printSchema() that the
datatype of column "id" is Long and not BigInt.*

scala> range100.collect()
res10: Array[Long] = Array(0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,
14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32,
33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,
52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70,
71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89,
90, 91, 92, 93, 94, 95, 96, 97, 98, 99)

*To settle the dispute between the methods and get collect() to "show me the
money", I called collect() and passed its return value to reduce(_+_).*


*"Here is the money"*
scala> range100.collect().reduce(_+_)
res11: Long = 4950

*The collect() and printSchema methods could be implying there is no
difference between a Long and a BigInt.*

*Question: Are these return type differences by design, or an oversight/bug?*
*Question: Why the change from the earlier version to the later one?*
*Question: Will you be updating the reduce(_+_) method?*


*When it comes to creating a dataset using toDS there is no dispute:*
*all the methods agree that it is neither a BigInt nor a Long but an Int
(integer).*

scala> val dataset = Seq(1, 2, 3).toDS()
dataset: org.apache.spark.sql.Dataset[Int] = [value: int]

scala> dataset.collect()
res29: Array[Int] = Array(1, 2, 3)

scala> dataset.printSchema()
root
 |-- value: integer (nullable = false)

scala> dataset.show()
+-----+
|value|
+-----+
|    1|
|    2|
|    3|
+-----+

scala> dataset.reduce(_+_)
res7: Int = 6
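
Putting the thread's observations together, one plausible reading (not
confirmed in the thread) is that "bigint" and "long" are just two string
renderings of the same Spark SQL LongType. A small sketch, assuming the
spark-shell, to see both renderings:

val ds = spark.range(5)                 // Dataset[java.lang.Long]; toString shows [id: bigint]
ds.schema("id").dataType                // LongType
ds.schema("id").dataType.simpleString   // "bigint" -- the SQL-facing name
ds.printSchema()                        // |-- id: long (nullable = false) -- the schema-tree name

Either way the column holds 8-byte longs; as Sean's reply explains, the two
labels refer to the same thing.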