OK, solved it.
I had to specify mapreduce.input.carboninputformat.databaseName as well as
mapreduce.input.carboninputformat.tableName in the SerDe properties. I did
not find this in the documentation but in the code itself
(org/apache/carbondata/hadoop/api/CarbonInputFormat.java).
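For reference, the fix amounts to something like the following Hive DDL. This is a hypothetical sketch, not the exact statement from my setup: the database/table names (mydb, mytable), the column list, and the location path are placeholders, and the input/output format class names are the ones shipped in CarbonData's hive module. The two mapreduce.input.carboninputformat.* properties are the ones that resolved the "Database name is not set" error:

```sql
-- Sketch only: names and paths are placeholders.
CREATE EXTERNAL TABLE mydb.mytable (id INT, name STRING)
ROW FORMAT SERDE 'org.apache.carbondata.hive.CarbonHiveSerDe'
WITH SERDEPROPERTIES (
  'mapreduce.input.carboninputformat.databaseName' = 'mydb',
  'mapreduce.input.carboninputformat.tableName'    = 'mytable'
)
STORED AS
  INPUTFORMAT 'org.apache.carbondata.hive.MapredCarbonInputFormat'
  OUTPUTFORMAT 'org.apache.carbondata.hive.MapredCarbonOutputFormat'
LOCATION '/path/to/carbon/store/mydb/mytable';
```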
Thanks anyway.
Yann
Hello,
I would really appreciate your help on this error:
InvalidConfigurationException: Database name is not set.
Has anyone tried to read CarbonData from Hive on Azure?
I don't know whether this problem comes from CarbonData itself; the SerDe
properties have the dbName defined, however the
Hello,
I have created a CarbonData table from Spark 2.2.1 on Azure (Hive 1.2.1) via
CarbonSession.
The Spark code looks like this:
val carbon = SparkSession.builder().config("spark.sql.warehouse.dir",
warehouse).config("spark.sql.crossJoin.enabled",
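The snippet above is cut off; for context, a minimal sketch of how such a session is typically built on spark-shell. The paths are placeholders (not my actual configuration), and getOrCreateCarbonSession comes from CarbonData's CarbonSession implicits, so the CarbonData assembly jar must be on the classpath:

```scala
// Sketch only: paths are placeholders; requires the CarbonData assembly jar.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.CarbonSession._  // provides getOrCreateCarbonSession

val warehouse = "/path/to/warehouse"
val carbon = SparkSession.builder()
  .config("spark.sql.warehouse.dir", warehouse)
  .config("spark.sql.crossJoin.enabled", "true")
  .getOrCreateCarbonSession("/path/to/carbon/store")
```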
Hi xuchuanyin
Thank you for your reply.
Raghu found the problem with my code.
I had to remove import spark.implicits._ and use import carbon.implicits._
instead.
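In other words, the implicits have to come from the CarbonSession rather than the plain SparkSession. A small sketch of the corrected pattern (the sample data is illustrative, and `carbon` is assumed to be the CarbonSession created earlier):

```scala
// import spark.implicits._   // <- this was the cause of the error
import carbon.implicits._     // <- use the CarbonSession's implicits instead

// toDF now resolves through the CarbonSession's SQL context
val df = Seq((1, "a"), (2, "b")).toDF("id", "name")
```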
Thanks,
Yann
--
Sent from:
http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/
Hi Raghu (Sraghunandan),
Thank you for your answer (use of carbon implicits vs spark implicits).
It is now working as expected.
Yann
Hello,
I am trying to create a CarbonData table from a Spark DataFrame, however I
am getting an error with the automatic CREATE TABLE statement.
I run this code on spark-shell (passing the CarbonData assembly jar file
for 1.4.0 as well as master branch), on an Azure HDInsight cluster with
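For context, the usual way to write a DataFrame out as a CarbonData table looks roughly like this. A hedged sketch only: the table/database names are placeholders, and `df` is assumed to be an existing DataFrame in a CarbonSession-backed shell:

```scala
import org.apache.spark.sql.SaveMode

// Sketch only: "mydb"/"mytable" are placeholders.
// CarbonData issues the CREATE TABLE automatically on first save.
df.write
  .format("carbondata")
  .option("dbName", "mydb")
  .option("tableName", "mytable")
  .mode(SaveMode.Overwrite)
  .save()
```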