Hi All,

I am getting the error below while trying out CarbonData with Spark 1.6.2 /
Hadoop 2.6.5 / CarbonData 1.

./bin/spark-shell --jars carbonlib/carbondata_2.10-1.1.0-incubating-SNAPSHOT-shade-hadoop2.2.0.jar
scala> import org.apache.spark.sql.CarbonContext
scala> val cc = new CarbonContext(sc)
scala> cc.sql("CREATE TABLE IF NOT EXISTS t1 (id string, name string, city string, age Int) STORED BY 'carbondata'")
scala> cc.sql("LOAD DATA INPATH '/home/cduser/spark/sample.csv' INTO TABLE t1")
INFO  05-02 14:57:22,346 - main Query [LOAD DATA INPATH '/HOME/CDUSER/SPARK/SAMPLE.CSV' INTO TABLE T1]
INFO  05-02 14:57:37,411 - Table MetaData Unlocked Successfully after data load
java.lang.RuntimeException: Table is locked for updation. Please try after some time
        at scala.sys.package$.error(package.scala:27)
        at org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:360)
        at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
        at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
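
Since the message says "Please try after some time", one workaround I have been experimenting with is simply retrying the load when this lock exception surfaces. This is only a sketch in plain Scala; `withRetries` and its parameters are hypothetical helpers of mine, not part of the CarbonData API:

```scala
// Hypothetical helper (not part of CarbonData): retry an action that
// throws the "Table is locked for updation" RuntimeException, on the
// assumption that the lock is transient and will be released shortly.
object RetryLoad {
  def withRetries[T](maxAttempts: Int, delayMs: Long)(action: => T): T =
    try action catch {
      case e: RuntimeException
          if maxAttempts > 1 && e.getMessage != null && e.getMessage.contains("locked") =>
        // Wait a bit, then try again with one fewer attempt remaining.
        Thread.sleep(delayMs)
        withRetries(maxAttempts - 1, delayMs)(action)
    }
}
```

Usage would look like `RetryLoad.withRetries(3, 5000)(cc.sql("LOAD DATA INPATH '/home/cduser/spark/sample.csv' INTO TABLE t1"))` — though this only helps if the lock really is transient.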


I followed the docs at
https://github.com/apache/incubator-carbondata/blob/master/docs/installation-guide.md#installing-and-configuring-carbondata-on-standalone-spark-cluster
and
https://github.com/apache/incubator-carbondata/blob/master/docs/quick-start-guide.md
to install CarbonData.

While creating the table, I observed the WARN message below in the log:

main Query [CREATE TABLE DEFAULT.T1 USING CARBONDATA OPTIONS (TABLENAME "DEFAULT.T1", TABLEPATH "/HOME/CDUSER/CARBON.STORE/DEFAULT/T1") ]

WARN  05-02 14:34:30,656 - Couldn't find corresponding Hive SerDe for data source provider carbondata. Persisting data source relation `default`.`t1` into Hive metastore in Spark SQL specific format, which is NOT compatible with Hive.
INFO  05-02 14:34:30,755 - 0: create_table: Table(tableName:t1,
dbName:default, owner:cduser, createTime:1486290870, lastAccessTime:0,
retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:col,
type:array<string>, comment:from deserializer)], location:null,
inputFormat:org.apache.hadoop.mapred.SequenceFileInputFormat,
outputFormat:org.apache.hadoop.hive.ql.io.HiveSequenceFileOutputFormat,
compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null,
serializationLib:org.apache.hadoop.hive.serde2.MetadataTypedColumnsetSerDe,
parameters:{tablePath=/home/cduser/carbon.store/default/t1,
serialization.format=1, tableName=default.t1}), bucketCols:[], sortCols:[],
parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[],
skewedColValueLocationMaps:{})), partitionKeys:[],
parameters:{EXTERNAL=TRUE, spark.sql.sources.provider=carbondata},
viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE,
privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null,
rolePrivileges:null))
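
In case it helps anyone reproduce or diagnose this: from what I can tell, this error can be caused by a stale lock file left under the store path (tablePath above is /home/cduser/carbon.store/default/t1) by an earlier load that was killed. Below is a sketch in plain Scala to list such files; the ".lock" suffix is my assumption and may differ across CarbonData versions, so inspect before deleting anything:

```scala
import java.io.File

// Recursively list "*.lock" files under a directory such as the Carbon
// store path (e.g. /home/cduser/carbon.store). The ".lock" suffix is an
// assumption about the lock-file naming; verify what your version
// actually writes before removing any file.
def findLockFiles(dir: File): Seq[File] = {
  val children = Option(dir.listFiles).map(_.toSeq).getOrElse(Seq.empty[File])
  children.filter(f => f.isFile && f.getName.endsWith(".lock")) ++
    children.filter(_.isDirectory).flatMap(findLockFiles)
}
```

If one of the listed files is clearly stale (no load currently running), deleting it and retrying the load may clear the "Table is locked for updation" error.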


Appreciate any help in resolving this.

Thanks,
Sanoj
