Hi, everyone

    I have some questions about creating a data source table.
    In HiveExternalCatalog.createDataSourceTable,
newSparkSQLSpecificMetastoreTable replaces the table schema with
EMPTY_DATA_SCHEMA plus table.partitionSchema before handing the table
to the Hive metastore.
    So why does Spark use EMPTY_DATA_SCHEMA? Why not declare the schema
some other way?
    Also, many data source tables have no partitionSchema at all; does
that mean the schema stored in the metastore for them is just
EMPTY_DATA_SCHEMA?
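    For reference, here is my reading of that code as a simplified,
runnable sketch (my paraphrase of HiveExternalCatalog, not the exact
source; the partition columns year/month are made up for illustration):

    import org.apache.spark.sql.types.StructType

    object EmptyDataSchemaSketch {
      // EMPTY_DATA_SCHEMA is a one-column placeholder schema:
      val EMPTY_DATA_SCHEMA: StructType = new StructType()
        .add("col", "array<string>", nullable = true,
          comment = "from deserializer")

      def main(args: Array[String]): Unit = {
        // Suppose the user's table is partitioned by (year INT, month INT).
        val partitionSchema = new StructType()
          .add("year", "int")
          .add("month", "int")

        // What ends up as the table schema in the Hive metastore:
        val hiveVisibleSchema = StructType(EMPTY_DATA_SCHEMA ++ partitionSchema)
        hiveVisibleSchema.printTreeString()
        // root
        //  |-- col: array (nullable = true)
        //  |    |-- element: string (containsNull = true)
        //  |-- year: integer (nullable = true)
        //  |-- month: integer (nullable = true)
      }
    }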
    Even if Spark itself can still parse the real schema, what happens
when a user views the table information from the Hive side?
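    For context, my understanding (please correct me if I'm wrong) is
that the real schema is serialized as JSON into the table properties
(the spark.sql.sources.schema* keys), so Spark can restore it, while
Hive only sees the placeholder column. A small demo of the two views,
assuming a Spark build with Hive support (the table name demo_ds is
hypothetical):

    import org.apache.spark.sql.SparkSession

    object HiveSideViewSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("datasource-table-demo")
          .enableHiveSupport()
          .getOrCreate()

        // json has no Hive-compatible serde mapping, so this table is
        // stored in the Spark-specific format described above.
        spark.sql("CREATE TABLE demo_ds (id INT, name STRING) USING json")

        // Spark restores the real schema from the table properties:
        spark.table("demo_ds").printSchema()
        // root
        //  |-- id: integer (nullable = true)
        //  |-- name: string (nullable = true)

        // But from the Hive CLI, DESCRIBE demo_ds shows only:
        //   col   array<string>   from deserializer
        // because Hive reads the placeholder schema and does not
        // interpret the spark.sql.sources.schema* properties.

        spark.stop()
      }
    }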

Can anyone help me with this?
Thanks.



