Hello,
My ETL uses Spark SQL to generate Parquet files, which are then served
through the Thrift server via HiveQL.
In particular, it defines the schema programmatically, since the schema is
only known at runtime.
With Spark 1.2.1 this worked fine (I followed
https://spark.apache.org/docs/latest/sql-programming-guide.html#programmatically-specifying-the-schema).
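For reference, here is roughly what the working 1.2.1 code looked like,
using the old Java API under org.apache.spark.sql.api.java (a sketch from
memory; "property" and the row RDD stand in for values my ETL computes at
runtime):

import java.util.ArrayList;
import java.util.List;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.api.java.DataType;
import org.apache.spark.sql.api.java.JavaSQLContext;
import org.apache.spark.sql.api.java.JavaSchemaRDD;
import org.apache.spark.sql.api.java.Row;
import org.apache.spark.sql.api.java.StructField;
import org.apache.spark.sql.api.java.StructType;

static JavaSchemaRDD applyRuntimeSchema(JavaSQLContext sqlCtx,
                                        JavaRDD<Row> rowRDD,
                                        String property) {
    // The schema is only known at runtime, so the fields are built in a
    // loop; one representative entry is shown here.
    List<StructField> fields = new ArrayList<StructField>();
    fields.add(DataType.createStructField(property, DataType.IntegerType, true));
    StructType schema = DataType.createStructType(fields);
    return sqlCtx.applySchema(rowRDD, schema);
}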

I am now trying to migrate to Spark 1.3.0, but the API changes are confusing.
I am not sure whether the example at
https://spark.apache.org/docs/latest/sql-programming-guide.html#programmatically-specifying-the-schema
is still valid on Spark 1.3.0.
For example, DataType.StringType is not there any more; instead I found
DataTypes.StringType etc. So I migrated as below, and it builds fine,
but at runtime it throws an exception.

I appreciate any help.
Thanks,
Okehee

== Exception thrown
java.lang.reflect.InvocationTargetException
java.lang.NoSuchMethodError:
scala.reflect.NameTransformer$.LOCAL_SUFFIX_STRING()Ljava/lang/String;

==== My code snippet
import org.apache.spark.sql.types.DataTypes;

DataTypes.createStructField(property, DataTypes.IntegerType, true)
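
For completeness, here is a minimal, self-contained version of the 1.3
pattern I am trying (again a sketch; "property" and the row RDD stand in
for the values my ETL computes at runtime):

import java.util.ArrayList;
import java.util.List;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

static DataFrame applyRuntimeSchema(SQLContext sqlCtx,
                                    JavaRDD<Row> rowRDD,
                                    String property) {
    // Same runtime-built schema as before, now via DataTypes and the
    // StructType/StructField classes from org.apache.spark.sql.types.
    List<StructField> fields = new ArrayList<StructField>();
    fields.add(DataTypes.createStructField(property, DataTypes.IntegerType, true));
    StructType schema = DataTypes.createStructType(fields);
    return sqlCtx.createDataFrame(rowRDD, schema);
}

This compiles, but the exception above is thrown at runtime.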


