Greetings, Falcon-dev. I have a n00b question about Falcon's support for HCatalog partition import. My (incomplete) understanding is that the implementation copies the data alongside the serialized metadata and resolves the partition schema on the target cluster.
1. I couldn't find the Falcon code that exports the HCat/Hive metadata to HDFS. I expected that org.apache.hadoop.hive.ql.parse.EximUtil might be used for this, but there's no reference to this class in the Falcon master branch. Might I please enquire where/how that's done? A pointer to the code would be ideal, thanks.

2. The table/partition metadata might currently be serialized to HDFS in Thrift (I'll have to check). Does Falcon currently assume that the Hive versions running on the source and target clusters are compatible? (i.e., that the metadata can be imported on the target?)

Thanks,
Mithun
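P.S. To make question 2 concrete: here's a rough sketch of the kind of target-side compatibility check I have in mind, assuming the export writes a JSON `_metadata`-style file with a version field (the field names and version strings here are illustrative guesses on my part, not Falcon's or Hive's actual on-disk format):

```python
import json

# Versions the (hypothetical) target cluster knows how to import.
# These values are placeholders, not real Hive export-dump versions.
SUPPORTED_VERSIONS = {"0.1", "0.2"}

def check_export_compatibility(metadata_json: str) -> dict:
    """Parse an export-dump-style metadata blob and verify that the
    dump version is one the target cluster can import.

    Raises ValueError if the version is missing or unsupported.
    """
    meta = json.loads(metadata_json)
    version = meta.get("version")
    if version not in SUPPORTED_VERSIONS:
        raise ValueError(f"unsupported export version: {version!r}")
    return meta

# Example with a synthetic metadata payload:
sample = json.dumps({"version": "0.1", "table": "<serialized table metadata>"})
meta = check_export_compatibility(sample)
```

If Falcon does something along these lines (or punts entirely and assumes matching Hive versions), that would answer my question.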
