Spark uses HiveContext to access Hive tables on the same Hadoop cluster
on which both Hive and Spark are running.
Let us look at an example of code below that uses Spark as an ETL tool to
get data from an Oracle table through JDBC and store it in a Hive ORC table:
// 1) create the Spark conf and contexts first
val conf = new SparkConf().setAppName("OracleToHiveORC")
val sc = new SparkContext(conf)
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)
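A fuller sketch of such a job might look like the following. This assumes the Spark 1.x-era API (where HiveContext exists); the JDBC URL, table names, and credentials are placeholders, not values from the original example:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object OracleToHiveOrc {
  def main(args: Array[String]): Unit = {
    // Create the Spark conf and contexts.
    val conf = new SparkConf().setAppName("OracleToHiveOrc")
    val sc = new SparkContext(conf)
    val hc = new HiveContext(sc)

    // Read the source table from Oracle over JDBC.
    // URL, table, user, and password below are hypothetical placeholders.
    val oracleDF = hc.read.format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")
      .option("dbtable", "SCOTT.EMP")
      .option("user", "scott")
      .option("password", "tiger")
      .load()

    // Write the result into a Hive-managed table stored as ORC.
    // Spark registers the table in the Hive metastore via HiveContext.
    oracleDF.write.format("orc").saveAsTable("default.emp_orc")

    sc.stop()
  }
}
```

Note that the Oracle JDBC driver jar has to be on the Spark classpath (for example via --jars on spark-submit) for the JDBC read to work.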
HCatalog was built as an interface to allow tools such as Pig and MapReduce
to access Hive tabular data, for both read and write. In more recent
versions of Hive, HCatalog has not been updated to support the newest
features, such as reading or writing transactional data or, in Hive 3.x,
accessing managed (ACID) tables.
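By contrast, Spark SQL does not go through HCatalog at all: HiveContext talks to the Hive metastore directly (using the hive-site.xml found on Spark's classpath) to resolve table locations and schemas, then reads the underlying files itself. A minimal sketch, with a hypothetical table name:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Spark SQL contacts the Hive metastore directly; it does not route
// queries through HCatalog or HiveServer2.
val conf = new SparkConf().setAppName("ReadHiveTable")
val sc = new SparkContext(conf)
val hc = new HiveContext(sc)

// "default.emp_orc" is a placeholder table name.
val df = hc.sql("SELECT * FROM default.emp_orc LIMIT 10")
df.show()
```

This is why Spark only needs metastore connectivity, not a running HiveServer2, to query Hive tables.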
Hi,

We have some confusion about Hive:
1. What is the difference between HCatalog and HiveServer2? Does HiveServer2 rely on HCatalog?
2. Where do HCatalog and HiveServer2 sit in the overall Hive architecture?
3. How does Spark SQL read Hive tables - through HCatalog?