My colleagues use Scala and I use Python.

They saved a Hive table which has DoubleType columns. However, there is no
double type in Python.

When I call /pipeline.fit(dataframe)/, an error occurs:

java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to
java.lang.Double......

I guess it's because Python doesn't have a double type, so I cast these
columns to float using the code below:

/dataframe = dataframe.withColumn(dataframe.columns[0], dataframe[dataframe.columns[0]].cast("float"))/
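
For completeness, here is a fuller sketch of what I am doing (the SparkSession
setup and the table name "my_table" are placeholders, not my real code); it
casts every DoubleType column to float:

    from pyspark.sql import SparkSession
    from pyspark.sql.types import DoubleType

    # placeholder session and table name; my real job reads the Hive table my colleagues saved
    spark = SparkSession.builder.enableHiveSupport().getOrCreate()
    dataframe = spark.table("my_table")

    # cast every DoubleType column to float, keeping the original column names
    for field in dataframe.schema.fields:
        if isinstance(field.dataType, DoubleType):
            dataframe = dataframe.withColumn(field.name,
                                             dataframe[field.name].cast("float"))

    dataframe.printSchema()  # after the cast, no DoubleType columns are printed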

Then, when I print the schema of the dataframe, there are no DoubleType
columns left.

However, when the program reaches /pipeline.fit(dataframe)/, the
ClassCastException still occurs.
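
For context, the pipeline I fit is roughly like the sketch below (the stages
and column choices are placeholders; the real pipeline is built by my
colleagues' code):

    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.regression import LinearRegression

    # placeholder stages: assemble all but the last column into a feature vector
    assembler = VectorAssembler(inputCols=dataframe.columns[:-1], outputCol="features")
    lr = LinearRegression(featuresCol="features", labelCol=dataframe.columns[-1])
    pipeline = Pipeline(stages=[assembler, lr])

    model = pipeline.fit(dataframe)  # this is the call that raises the ClassCastException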

Why? How can I actually change the column types of a DataFrame?


