[jira] [Commented] (SPARK-21246) Unexpected Data Type conversion from LONG to BIGINT

2017-06-28 Thread Yuming Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16067620#comment-16067620
 ] 

Yuming Wang commented on SPARK-21246:
-

{{Seq(3)}} should be {{Seq(3L)}}. This works for me:
{code:java}
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row
val schemaString = "name"
// Use Long literals so the element type matches the LongType field in the schema.
val lstVals = Seq(3L)
val rowRdd = sc.parallelize(lstVals).map(x => Row(x))
rowRdd.collect()
// Generate the schema based on the schema string
val fields = schemaString.split(" ")
  .map(fieldName => StructField(fieldName, LongType, nullable = true))
val schema = StructType(fields)
print(schema)
val peopleDF = spark.createDataFrame(rowRdd, schema)
peopleDF.show()
{code}
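
For reference, {{bigint}} is simply the SQL name Spark SQL uses when printing {{LongType}} (similarly, {{IntegerType}} prints as {{int}}), so the display does not indicate a lossy conversion. A minimal sketch to check this in the same shell session (it assumes the snippet above has been run so that {{peopleDF}} exists):
{code:java}
import org.apache.spark.sql.types._

// Spark SQL schema strings use SQL type names: LongType prints as "bigint".
println(LongType.simpleString)     // bigint
println(IntegerType.simpleString)  // int
println(StructType(Seq(StructField("name", LongType))).simpleString)  // struct<name:bigint>

// The schema still carries LongType, and the value comes back as a Scala Long.
println(peopleDF.schema)              // StructType(StructField(name,LongType,true))
println(peopleDF.first().getLong(0))  // 3
{code}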


> Unexpected Data Type conversion from LONG to BIGINT
> ---
>
> Key: SPARK-21246
> URL: https://issues.apache.org/jira/browse/SPARK-21246
> Project: Spark
>  Issue Type: Bug
>  Components: SQL
>Affects Versions: 1.6.1
> Environment: Using Zeppelin Notebook or Spark Shell
>Reporter: Monica Raj
>
> The unexpected conversion occurred when creating a data frame out of an 
> existing data collection. The following code can be run in a Zeppelin 
> notebook to reproduce the bug:
> import org.apache.spark.sql.types._
> import org.apache.spark.sql.Row
> val schemaString = "name"
> val lstVals = Seq(3)
> val rowRdd = sc.parallelize(lstVals).map(x => Row( x ))
> rowRdd.collect()
> // Generate the schema based on the string of schema
> val fields = schemaString.split(" ")
> .map(fieldName => StructField(fieldName, LongType, nullable = true))
> val schema = StructType(fields)
> print(schema)
> val peopleDF = sqlContext.createDataFrame(rowRdd, schema)





[jira] [Commented] (SPARK-21246) Unexpected Data Type conversion from LONG to BIGINT

2017-06-29 Thread Monica Raj (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-21246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16068553#comment-16068553
 ] 

Monica Raj commented on SPARK-21246:


Thanks for your response. I had also tried Seq(3L) instead of Seq(3), but changed it 
back while trying other options. I should also mention that we are running 
Zeppelin 0.6.0. I ran the code you provided and still got the following output:

{code}
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row
schemaString: String = name
lstVals: Seq[Long] = List(3)
rowRdd: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[30] at map at <console>:59
res20: Array[org.apache.spark.sql.Row] = Array([3])
fields: Array[org.apache.spark.sql.types.StructField] = Array(StructField(name,LongType,true))
schema: org.apache.spark.sql.types.StructType = StructType(StructField(name,LongType,true))
StructType(StructField(name,LongType,true))peopleDF: org.apache.spark.sql.DataFrame = [name: bigint]
+----+
|name|
+----+
|   3|
+----+
{code}
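
The {{[name: bigint]}} header above is the expected rendering for a {{LongType}} column rather than an actual type change. A small sketch to confirm this, assuming it runs in the same session so {{peopleDF}} is still defined:
{code:java}
import org.apache.spark.sql.types.LongType

// The schema field still reports LongType; only the printed SQL name is "bigint".
println(peopleDF.schema("name").dataType)             // LongType
println(peopleDF.schema("name").dataType == LongType) // true

// The stored value is still a 64-bit long.
val row = peopleDF.collect()(0)
println(row.getLong(0))  // 3
{code}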




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org