then collect it in
pyspark, the bigints are stored as integers in python.
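Roughly what I'm doing, as a minimal sketch (the database, table and column names below are made up, and this assumes a Spark session with Hive support):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.enableHiveSupport().getOrCreate()

    df = spark.table("mydb.mytable")    # mytable has a bigint column "id"
    print(df.schema["id"].dataType)     # LongType - the dataframe schema still reports bigint

    rows = df.collect()
    print(type(rows[0]["id"]))          # <class 'int'> - just a plain python int after collect()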
(The problem is that when I write it back to another table, I detect the hive
type programmatically from the python type, so those columns get turned into
plain int columns.)
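The type detection I mean is along these lines (hive_type_from_value is just an illustrative helper of mine, not a real API); since collect() hands back plain python ints for both int and bigint columns, the mapping can't tell them apart:

    def hive_type_from_value(value):
        # bool has to be checked before int, since bool is a subclass of int
        if isinstance(value, bool):
            return "boolean"
        if isinstance(value, int):
            return "int"        # bigint values also land here, so the column gets narrowed
        if isinstance(value, float):
            return "double"
        return "string"

    # the dataframe schema itself still knows the original width:
    # df.schema["id"].dataType.simpleString() == "bigint"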
Is this intended behaviour, or is it a bug?
thanks,