Re: [Discussion] Support String Trim For Table Level or Col Level

2016-10-19 Thread Qingqing Zhou
On Mon, Oct 17, 2016 at 1:59 AM, 向志强  wrote:
> trim the data when data loading.
>

This looks like query processor (e.g., Spark) functionality rather than
Carbon's. Users can even request other transformations, such as a complicated
SELECT statement, while loading the data. One way to do this is with an INSERT
statement. For example:

INSERT INTO target_table (SELECT trim(a), trim(b) FROM
your_mapped_csv_table);
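
From the Spark shell the same approach works as well; a minimal sketch,
assuming a CarbonContext named cc and that the CSV is already mapped as
your_mapped_csv_table:

    // Sketch only: trim the columns while inserting into the Carbon table.
    // Assumes `cc` is a CarbonContext and both tables already exist.
    cc.sql(
      "INSERT INTO target_table " +
      "SELECT trim(a), trim(b) FROM your_mapped_csv_table")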

Regards,
Qingqing


load data error

2016-10-19 Thread 仲景武

When running this command (Thrift server):

jdbc:hive2://taonongyuan.com:10099/default> load data 
inpath 'hdfs://name001:9000/carbondata/sample.csv' into table test_table3;


it throws this exception:

Driver stacktrace: (state=,code=0)
0: jdbc:hive2://taonongyuan.com:10099/default> load 
data inpath 'hdfs:///name001:9000/carbondata/sample.csv' into table test_table3;
Error: java.lang.IllegalArgumentException: Pathname 
/name001:9000/carbondata/sample.csv from 
hdfs:/name001:9000/carbondata/sample.csv is not a valid DFS filename. 
(state=,code=0)
0: jdbc:hive2://taonongyuan.com:10099/default> load 
data inpath 'hdfs://name001:9000/carbondata/sample.csv' into table test_table3;
Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 
0 in stage 5.0 failed 4 times, most recent failure: Lost task 0.3 in stage 5.0 
(TID 18, data002): java.lang.IllegalArgumentException: Wrong FS: 
hdfs://name001:9000/user/hive/warehouse/carbon.store/default/test_table3/Metadata/fdd8c8c4-5cdd-4542-aab1-785be20b9f36.dictmeta,
 expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
at 
org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80)
at 
org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
at 
org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
at 
org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.(ChecksumFileSystem.java:140)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:341)
at 
org.apache.carbondata.core.datastorage.store.impl.FileFactory.getDataInputStream(FileFactory.java:146)
at org.apache.carbondata.core.reader.ThriftReader.open(ThriftReader.java:79)
at 
org.apache.carbondata.core.reader.CarbonDictionaryMetadataReaderImpl.openThriftReader(CarbonDictionaryMetadataReaderImpl.java:181)
at 
org.apache.carbondata.core.reader.CarbonDictionaryMetadataReaderImpl.readLastEntryOfDictionaryMetaChunk(CarbonDictionaryMetadataReaderImpl.java:128)
at 
org.apache.carbondata.core.cache.dictionary.AbstractDictionaryCache.readLastChunkFromDictionaryMetadataFile(AbstractDictionaryCache.java:129)
at 
org.apache.carbondata.core.cache.dictionary.AbstractDictionaryCache.checkAndLoadDictionaryData(AbstractDictionaryCache.java:204)
at 
org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.getDictionary(ReverseDictionaryCache.java:181)
at 
org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.get(ReverseDictionaryCache.java:69)
at 
org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.get(ReverseDictionaryCache.java:40)
at 
org.apache.carbondata.spark.load.CarbonLoaderUtil.getDictionary(CarbonLoaderUtil.java:508)
at 
org.apache.carbondata.spark.load.CarbonLoaderUtil.getDictionary(CarbonLoaderUtil.java:514)
at 
org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.(CarbonGlobalDictionaryRDD.scala:362)
at 
org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:293)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
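
The "Wrong FS ... expected: file:///" part of the trace suggests the Carbon
store path is being resolved against the local file system on the executors.
A minimal sketch of creating the context with an HDFS store path (assuming the
0.x-era CarbonContext(sc, storePath) constructor; the path is illustrative):

    import org.apache.spark.sql.CarbonContext

    // Sketch only: an hdfs:// store path so dictionary metadata files are
    // read through HDFS rather than the local RawLocalFileSystem.
    val cc = new CarbonContext(sc, "hdfs://name001:9000/user/hive/warehouse/carbon.store")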



On Oct 19, 2016, at 4:55 PM, 仲景武 <zhongjin...@shhxzq.com> wrote:


hi, all

I have installed CarbonData successfully by following the document
https://cwiki.apache.org/confluence/display/CARBONDATA/

but loading data into a CarbonData table throws an exception:


run command:
cc.sql("load data local inpath '../carbondata/sample.csv' into table 
test_table")

errors:

org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does 
not exist: /home/bigdata/bigdata/carbondata/sample.csv
at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:321)
at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:120)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD

[GitHub] incubator-carbondata pull request #251: [CARBONDATA-302]Added Writer process...

2016-10-19 Thread ravipesala
GitHub user ravipesala opened a pull request:

https://github.com/apache/incubator-carbondata/pull/251

[CARBONDATA-302] Added Writer processor step for data loading.

Adds DataWriterProcessorStep, which reads data from the sort processor step,
applies the MDK generator on the key, and creates CarbonData files.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ravipesala/incubator-carbondata 
datawriter-step

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-carbondata/pull/251.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #251


commit 9edbfdd48a9f35c0296703475a01f8cc7b02f8fc
Author: ravipesala 
Date:   2016-10-20T02:41:46Z

Added Writer processor step for dataloading.






[jira] [Created] (CARBONDATA-330) Fix compiler warnings - Java related

2016-10-19 Thread Aniket Adnaik (JIRA)
Aniket Adnaik created CARBONDATA-330:


 Summary: Fix compiler warnings - Java related
 Key: CARBONDATA-330
 URL: https://issues.apache.org/jira/browse/CARBONDATA-330
 Project: CarbonData
  Issue Type: Improvement
  Components: build, core
Affects Versions: 0.2.0-incubating
Reporter: Aniket Adnaik
Priority: Trivial
 Fix For: 0.2.0-incubating


Fix Java compiler warnings and clean up code.





[GitHub] incubator-carbondata pull request #249: [CARBONDATA-329] constant final clas...

2016-10-19 Thread abhishekgiri38
GitHub user abhishekgiri38 opened a pull request:

https://github.com/apache/incubator-carbondata/pull/249

[CARBONDATA-329] constant final class changed to interface

Be sure to do all of the following to help us incorporate your contribution
quickly and easily:

 - [x] Make sure the PR title is formatted like:
   `[CARBONDATA-<Jira issue #>] Description of pull request`
 - [x] Make sure tests pass via `mvn clean verify`. (Even better, enable
   Travis-CI on your fork and ensure the whole test matrix passes).
 - [x] Replace `<Jira issue #>` in the title with the actual Jira issue
   number, if there is one.
 - [x] If this contribution is large, please file an Apache
   [Individual Contributor License 
Agreement](https://www.apache.org/licenses/icla.txt).
 - [ ] Testing done
 
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- What manual testing you have done?
- Any additional information to help reviewers in testing this 
change.
 
 - [ ] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 
 
---



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/abhishekgiri38/incubator-carbondata master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-carbondata/pull/249.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #249


commit e38b01929b16871294effc6eeab37f787e136f00
Author: abhishekgiri38 
Date:   2016-10-19T18:55:21Z

constant file's final class changed to interface






[jira] [Created] (CARBONDATA-329) constant final class changed to interface

2016-10-19 Thread abhishek (JIRA)
abhishek  created CARBONDATA-329:


 Summary: constant final class changed to interface
 Key: CARBONDATA-329
 URL: https://issues.apache.org/jira/browse/CARBONDATA-329
 Project: CarbonData
  Issue Type: Improvement
  Components: core
Reporter: abhishek 


The constants file was a final class; it has now been changed to an interface,
where the fields are implicitly static and final.





[GitHub] incubator-carbondata pull request #248: [CARBONDATA-328] Improve Code and Fi...

2016-10-19 Thread PKOfficial
GitHub user PKOfficial opened a pull request:

https://github.com/apache/incubator-carbondata/pull/248

[CARBONDATA-328] Improve Code and Fix Warnings [Spark]

Removed some compilation warnings.
Replaced pattern matching on booleans with if-else.
Improved the code according to Scala standards.
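
For illustration, a minimal sketch of the boolean pattern-match-to-if-else
rewrite (hypothetical code, not taken from this PR):

    // Before: pattern matching on a Boolean (discouraged by Scala style guides).
    def describeMatch(flag: Boolean): String = flag match {
      case true  => "enabled"
      case false => "disabled"
    }

    // After: a plain if-else expresses the same choice more directly.
    def describeIf(flag: Boolean): String =
      if (flag) "enabled" else "disabled"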


**Please provide details on** 
**- Whether new unit test cases have been added or why no new tests are 
required?**
Not required because no change in functionality.
**- What manual testing you have done?**
Ran basic commands in Beeline.
**- Any additional information to help reviewers in testing this change.**
No

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/PKOfficial/incubator-carbondata 
improved-code-spark

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-carbondata/pull/248.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #248


commit 64f169a324e6210fd156b6b5a6acc0916c201616
Author: Prabhat Kashyap 
Date:   2016-10-19T16:54:47Z

Improved Spark module code.
* Removed some compilation warnings.
* Replaced pattern matching on booleans with if-else.
* Improved code according to Scala standards.






Re: carbondata org.apache.thrift.TBaseHelper.hashCode(segment_id); issue

2016-10-19 Thread 仲景武
When running this command (Thrift server):

jdbc:hive2://taonongyuan.com:10099/default> load data 
inpath 'hdfs://name001:9000/carbondata/sample.csv' into table test_table3;


it throws this exception:

Driver stacktrace: (state=,code=0)
0: jdbc:hive2://taonongyuan.com:10099/default> load 
data inpath 'hdfs:///name001:9000/carbondata/sample.csv' into table test_table3;
Error: java.lang.IllegalArgumentException: Pathname 
/name001:9000/carbondata/sample.csv from 
hdfs:/name001:9000/carbondata/sample.csv is not a valid DFS filename. 
(state=,code=0)
0: jdbc:hive2://taonongyuan.com:10099/default> load 
data inpath 'hdfs://name001:9000/carbondata/sample.csv' into table test_table3;
Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 
0 in stage 5.0 failed 4 times, most recent failure: Lost task 0.3 in stage 5.0 
(TID 18, data002): java.lang.IllegalArgumentException: Wrong FS: 
hdfs://name001:9000/user/hive/warehouse/carbon.store/default/test_table3/Metadata/fdd8c8c4-5cdd-4542-aab1-785be20b9f36.dictmeta,
 expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
at 
org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80)
at 
org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:529)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:747)
at 
org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:524)
at 
org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:409)
at 
org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.(ChecksumFileSystem.java:140)
at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:341)
at 
org.apache.carbondata.core.datastorage.store.impl.FileFactory.getDataInputStream(FileFactory.java:146)
at org.apache.carbondata.core.reader.ThriftReader.open(ThriftReader.java:79)
at 
org.apache.carbondata.core.reader.CarbonDictionaryMetadataReaderImpl.openThriftReader(CarbonDictionaryMetadataReaderImpl.java:181)
at 
org.apache.carbondata.core.reader.CarbonDictionaryMetadataReaderImpl.readLastEntryOfDictionaryMetaChunk(CarbonDictionaryMetadataReaderImpl.java:128)
at 
org.apache.carbondata.core.cache.dictionary.AbstractDictionaryCache.readLastChunkFromDictionaryMetadataFile(AbstractDictionaryCache.java:129)
at 
org.apache.carbondata.core.cache.dictionary.AbstractDictionaryCache.checkAndLoadDictionaryData(AbstractDictionaryCache.java:204)
at 
org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.getDictionary(ReverseDictionaryCache.java:181)
at 
org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.get(ReverseDictionaryCache.java:69)
at 
org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCache.get(ReverseDictionaryCache.java:40)
at 
org.apache.carbondata.spark.load.CarbonLoaderUtil.getDictionary(CarbonLoaderUtil.java:508)
at 
org.apache.carbondata.spark.load.CarbonLoaderUtil.getDictionary(CarbonLoaderUtil.java:514)
at 
org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD$$anon$1.(CarbonGlobalDictionaryRDD.scala:362)
at 
org.apache.carbondata.spark.rdd.CarbonGlobalDictionaryGenerateRDD.compute(CarbonGlobalDictionaryRDD.scala:293)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)



On Oct 19, 2016, at 4:55 PM, 仲景武 <zhongjin...@shhxzq.com> wrote:


hi, all

I have installed CarbonData successfully by following the document
https://cwiki.apache.org/confluence/display/CARBONDATA/

but loading data into a CarbonData table throws an exception:


run command:
cc.sql("load data local inpath '../carbondata/sample.csv' into table 
test_table")

errors:

org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does 
not exist: /home/bigdata/bigdata/carbondata/sample.csv
at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:321)
at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:120)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.sca

[GitHub] incubator-carbondata pull request #233: [CARBONDATA-296]1.Add CSVInputFormat...

2016-10-19 Thread QiangCai
Github user QiangCai commented on a diff in the pull request:

https://github.com/apache/incubator-carbondata/pull/233#discussion_r84068389
  
--- Diff: 
hadoop/src/test/java/org/apache/carbondata/hadoop/csv/CSVInputFormatTest.java 
---
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+package org.apache.carbondata.hadoop.csv;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileOutputStream;
+import java.io.IOException;
+
+import org.apache.carbondata.hadoop.io.StringArrayWritable;
+
+import junit.framework.TestCase;
+import org.junit.Assert;
+import org.junit.Test;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.io.NullWritable;
+import org.apache.hadoop.io.compress.BZip2Codec;
+import org.apache.hadoop.io.compress.CompressionOutputStream;
+import org.apache.hadoop.io.compress.GzipCodec;
+import org.apache.hadoop.io.compress.Lz4Codec;
+import org.apache.hadoop.io.compress.SnappyCodec;
+import org.apache.hadoop.mapreduce.Job;
+import org.apache.hadoop.mapreduce.Mapper;
+import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
+import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
+
+public class CSVInputFormatTest extends TestCase {
+
+  /**
+   * generate compressed files, no need to call this method.
+   * @throws Exception
+   */
+  public void testGenerateCompressFiles() throws Exception {
+    String pwd = new File("src/test/resources").getCanonicalPath();
+    String inputFile = pwd + "/data.csv";
+    FileInputStream input = new FileInputStream(inputFile);
+    Configuration conf = new Configuration();
+
+    // .gz
+    String outputFile = pwd + "/data.csv.gz";
+    FileOutputStream output = new FileOutputStream(outputFile);
+    GzipCodec gzip = new GzipCodec();
+    gzip.setConf(conf);
+    CompressionOutputStream outputStream = gzip.createOutputStream(output);
+    int i = -1;
+    while ((i = input.read()) != -1) {
+      outputStream.write(i);
+    }
+    outputStream.close();
+    input.close();
+
+    // .bz2
+    input = new FileInputStream(inputFile);
+    outputFile = pwd + "/data.csv.bz2";
+    output = new FileOutputStream(outputFile);
+    BZip2Codec bzip2 = new BZip2Codec();
+    bzip2.setConf(conf);
+    outputStream = bzip2.createOutputStream(output);
+    i = -1;
+    while ((i = input.read()) != -1) {
+      outputStream.write(i);
+    }
+    outputStream.close();
+    input.close();
+
+    // .snappy
+    input = new FileInputStream(inputFile);
+    outputFile = pwd + "/data.csv.snappy";
+    output = new FileOutputStream(outputFile);
+    SnappyCodec snappy = new SnappyCodec();
+    snappy.setConf(conf);
+    outputStream = snappy.createOutputStream(output);
+    i = -1;
+    while ((i = input.read()) != -1) {
+      outputStream.write(i);
+    }
+    outputStream.close();
+    input.close();
+
+    // .lz4
+    input = new FileInputStream(inputFile);
+    outputFile = pwd + "/data.csv.lz4";
+    output = new FileOutputStream(outputFile);
+    Lz4Codec lz4 = new Lz4Codec();
+    lz4.setConf(conf);
+    outputStream = lz4.createOutputStream(output);
+    i = -1;
+    while ((i = input.read()) != -1) {
+      outputStream.write(i);
+    }
+    outputStream.close();
+    input.close();
+
+  }
+
+  /**
+   * CSVCheckMapper check the content of csv files.
+   */
+  public static class CSVCheckMapper
+      extends Mapper<NullWritable, StringArrayWritable, NullWritable, NullWritable> {
+    @Override
+    protected void map(NullWritable key, StringArrayWritable value, Context context)
+        throws IOException, InterruptedException {
+      String[] columns = value.get();
+      int id = Integer.parseInt(columns[0]);
+      int salary = Integer.parseInt(columns[6]);
+      Assert

Create table with reserved keywords created successfully

2016-10-19 Thread Harmeet Singh
Hey Team,

I am trying to create a table in *carbondata* using reserved keywords such as
int, string, etc. I expect an error after executing the query, but the table
is created successfully. Below are the details:

0: jdbc:hive2://127.0.0.1:1> create table int (int int, string string)
stored by 'carbondata';
+-+--+
| Result  |
+-+--+
+-+--+
No rows selected (0.726 seconds)
0: jdbc:hive2://127.0.0.1:1> desc int;
+---++--+--+
| col_name  | data_type  | comment  |
+---++--+--+
| string| string |  |
| int   | bigint |  |
+---++--+--+

But if I use Hive, it throws an exception, as below:

hive> create table measure(age int, name string);
OK
Time taken: 10.923 seconds
hive> create table int (int int, string string);
FailedPredicateException(identifier,{useSQL11ReservedKeywordsForIdentifier()}?)
at
org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.identifier(HiveParser_IdentifiersParser.java:10924)
at
org.apache.hadoop.hive.ql.parse.HiveParser.identifier(HiveParser.java:45850)
at
org.apache.hadoop.hive.ql.parse.HiveParser_FromClauseParser.tableName(HiveParser_FromClauseParser.java:4813)
at
org.apache.hadoop.hive.ql.parse.HiveParser.tableName(HiveParser.java:45886)
at
org.apache.hadoop.hive.ql.parse.HiveParser.createTableStatement(HiveParser.java:5029)
at
org.apache.hadoop.hive.ql.parse.HiveParser.ddlStatement(HiveParser.java:2640)
at
org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1650)
at
org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1109)
at 
org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:202)
at 
org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:166)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:396)
at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1122)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1170)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
at 
org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
at 
org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
FAILED: ParseException line 1:13 Failed to recognize predicate 'int'. Failed
rule: 'identifier' in table name

This looks like a CarbonData issue, because reserved words should not be
allowed here.

I have already tried this with both table names and column names. In both
cases the table and column are created.
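
For comparison, Hive normally requires reserved words to be escaped with
backquotes before they can be used as identifiers; a minimal sketch of that
escape (hypothetical, issued through a CarbonContext named cc):

    // Sketch only: backquoted identifiers are the standard HiveQL escape
    // for reserved words used as table or column names.
    cc.sql("CREATE TABLE `int` (`int` int, `string` string) STORED BY 'carbondata'")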





[jira] [Created] (CARBONDATA-328) Improve Code and Fix Warnings

2016-10-19 Thread Prabhat Kashyap (JIRA)
Prabhat Kashyap created CARBONDATA-328:
--

 Summary: Improve Code and Fix Warnings
 Key: CARBONDATA-328
 URL: https://issues.apache.org/jira/browse/CARBONDATA-328
 Project: CarbonData
  Issue Type: Improvement
Reporter: Prabhat Kashyap
Priority: Trivial


Remove compiler warnings and improve the existing code according to the
standards.





[jira] [Created] (CARBONDATA-327) Drop Database unexpected behaviour.

2016-10-19 Thread Harmeet Singh (JIRA)
Harmeet Singh created CARBONDATA-327:


 Summary: Drop Database unexpected behaviour. 
 Key: CARBONDATA-327
 URL: https://issues.apache.org/jira/browse/CARBONDATA-327
 Project: CarbonData
  Issue Type: Bug
Reporter: Harmeet Singh


Hey team, I am creating a database as below:

0: jdbc:hive2://127.0.0.1:1> create database Test;
+-+--+
| result  |
+-+--+
+-+--+

After creating the database, I switch to it using the command below:

0: jdbc:hive2://127.0.0.1:1> use Test;
+-+--+
| result  |
+-+--+
+-+--+

After that, I drop the database as below:

0: jdbc:hive2://127.0.0.1:1> drop database test;
+-+--+
| result  |
+-+--+
+-+--+

The database is dropped successfully. I expect CarbonData to then switch
automatically to the "default" database. But when I try to execute "show
tables", the result returns nothing, as below:

0: jdbc:hive2://127.0.0.1:1> show tables;
++--+--+
| tableName  | isTemporary  |
++--+--+
++--+--+
No rows selected (0.019 seconds)

But my default database contains some tables, as below:
0: jdbc:hive2://127.0.0.1:1> use default;
+-+--+
| result  |
+-+--+
+-+--+
No rows selected (0.024 seconds)
0: jdbc:hive2://127.0.0.1:1> show tables;
++--+--+
| tableName  | isTemporary  |
++--+--+
| one| false|
| two| false|
++--+--+
2 rows selected (0.013 seconds)

If I follow all the above steps in Hive, it gives an error on "show tables"
after dropping the database, as below:

hive> drop database test;
OK
Time taken: 0.628 seconds
hive> show databases;
OK
default
Time taken: 0.022 seconds, Fetched: 1 row(s)
hive> show tables;
FAILED: SemanticException [Error 10072]: Database does not exist: test 





Re: Create table with columns containing spaces in name.

2016-10-19 Thread Harmeet Singh
Hey Liang, 

Sure, I will fix that issue.





[jira] [Created] (CARBONDATA-326) Creates wrong table on 'create table like'

2016-10-19 Thread Prabhat Kashyap (JIRA)
Prabhat Kashyap created CARBONDATA-326:
--

 Summary: Creates wrong table on 'create table like' 
 Key: CARBONDATA-326
 URL: https://issues.apache.org/jira/browse/CARBONDATA-326
 Project: CarbonData
  Issue Type: Bug
Reporter: Prabhat Kashyap


I'm trying to create a table like my old table, but it is not created as
expected.

0: jdbc:hive2://localhost:1> CREATE TABLE mainTable(id INT, name STRING) 
STORED BY 'carbondata';
+-+--+
| Result  |
+-+--+
+-+--+
No rows selected (0.206 seconds)
0: jdbc:hive2://localhost:1> DESC mainTable;
+---++--+--+
| col_name  | data_type  | comment  |
+---++--+--+
| name  | string |  |
| id| bigint |  |
+---++--+--+
2 rows selected (0.056 seconds)


The above is my mainTable, and I want to create copiedTable from it, but every
time it shows something like:

0: jdbc:hive2://localhost:1> CREATE TABLE copiedTable LIKE mainTable;
+-+--+
| result  |
+-+--+
+-+--+
No rows selected (0.101 seconds)
0: jdbc:hive2://localhost:1> DESC copiedTable;
+---+++--+
| col_name  |   data_type|  comment   |
+---+++--+
| col   | array  | from deserializer  |
+---+++--+
1 row selected (0.022 seconds)

0: jdbc:hive2://localhost:1> LOAD DATA LOCAL INPATH 
'hdfs://localhost:54310/user/hduser/datafiles/data.csv' INTO TABLE copiedTable 
OPTIONS('DELIMITER'=',');
Error: java.lang.RuntimeException: Data loading failed. table not found: 
knoldus.copiedtable (state=,code=0)

0: jdbc:hive2://localhost:1> select * from copiedTable;
+--+--+
| col  |
+--+--+
+--+--+
No rows selected (0.11 seconds)






Re: Create table like old table.

2016-10-19 Thread ravipesala
Carbon has not implemented the 'create table like' feature. But it should not
create a wrong table when the 'like' command is used; it should either throw
an error saying the command is not supported, or create the right table. It's
an issue, please raise a JIRA.





Re: carbondata org.apache.thrift.TBaseHelper.hashCode(segment_id); issue

2016-10-19 Thread 向志强
Hi Jingwu,

Currently, Carbon does not support loading data from a local path; please put
the file into HDFS and test it again.
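
A minimal sketch of that workflow (the paths and table name below are taken
from this thread for illustration, assuming a CarbonContext named cc):

    // Copy the CSV into HDFS first, e.g. from a shell:
    //   hdfs dfs -put /home/bigdata/bigdata/carbondata/sample.csv hdfs://name001:9000/carbondata/
    // Then load it by its HDFS path instead of a local one.
    cc.sql("LOAD DATA INPATH 'hdfs://name001:9000/carbondata/sample.csv' INTO TABLE test_table")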

Lionx

2016-10-19 16:55 GMT+08:00 仲景武 :

>
> hi, all
>
> I have installed CarbonData successfully by following the document
> https://cwiki.apache.org/confluence/display/CARBONDATA/
>
> but loading data into a CarbonData table throws an exception:
>
>
> run command:
> cc.sql("load data local inpath '../carbondata/sample.csv' into table
> test_table")
>
> errors:
>
> org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path
> does not exist: /home/bigdata/bigdata/carbondata/sample.csv
> at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.
> singleThreadedListStatus(FileInputFormat.java:321)
> at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.
> listStatus(FileInputFormat.java:264)
> at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.
> getSplits(FileInputFormat.java:385)
> at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:120)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
> at scala.Option.getOrElse(Option.scala:120)
> at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
> at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(
> MapPartitionsRDD.scala:35)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
> at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
> at scala.Option.getOrElse(Option.scala:120)
> at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
> at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1307)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(
> RDDOperationScope.scala:150)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(
> RDDOperationScope.scala:111)
> at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
> at org.apache.spark.rdd.RDD.take(RDD.scala:1302)
> at com.databricks.spark.csv.CarbonCsvRelation.firstLine$
> lzycompute(CarbonCsvRelation.scala:181)
> at com.databricks.spark.csv.CarbonCsvRelation.firstLine(
> CarbonCsvRelation.scala:176)
> at com.databricks.spark.csv.CarbonCsvRelation.inferSchema(
> CarbonCsvRelation.scala:144)
> at com.databricks.spark.csv.CarbonCsvRelation.(
> CarbonCsvRelation.scala:74)
> at com.databricks.spark.csv.newapi.DefaultSource.
> createRelation(DefaultSource.scala:142)
> at com.databricks.spark.csv.newapi.DefaultSource.
> createRelation(DefaultSource.scala:44)
> at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(
> ResolvedDataSource.scala:158)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
> at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
> at org.apache.carbondata.spark.util.GlobalDictionaryUtil$.loadDataFrame(
> GlobalDictionaryUtil.scala:386)
> at org.apache.carbondata.spark.util.GlobalDictionaryUtil$.
> generateGlobalDictionary(GlobalDictionaryUtil.scala:767)
> at org.apache.spark.sql.execution.command.LoadTable.
> run(carbonTableSchema.scala:1170)
> at org.apache.spark.sql.execution.ExecutedCommand.
> sideEffectResult$lzycompute(commands.scala:58)
> at org.apache.spark.sql.execution.ExecutedCommand.
> sideEffectResult(commands.scala:56)
> at org.apache.spark.sql.execution.ExecutedCommand.
> doExecute(commands.scala:70)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$
> execute$5.apply(SparkPlan.scala:132)
> at org.apache.spark.sql.execution.SparkPlan$$anonfun$
> execute$5.apply(SparkPlan.scala:130)
> at org.apache.spark.rdd.RDDOperationScope$.withScope(
> RDDOperationScope.scala:150)
> at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
> at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(
> QueryExecution.scala:55)
> at org.apache.spark.sql.execution.QueryExecution.
> toRdd(QueryExecution.scala:55)
> at org.apache.spark.sql.DataFrame.(DataFrame.scala:145)
> at org.apache.spark.sql.DataFrame.(DataFrame.scala:130)
> at org.apache.carbondata.spark.rdd.CarbonDataFrameRDD.(
> CarbonDataFrameRDD.scala:23)
> at org.apache.spark.sql.CarbonContext.sql(CarbonContext.scala:137)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.
> (:42)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<
> init>(:47)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:49)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:51)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:53)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:55)
> at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:57)
> at $iwC$$iwC$$iwC$$iwC$$iwC.(:59)
> at $iwC$$iwC$$iwC$$iwC.(:61)
> at $iwC$$iwC$$iwC.(:63)
> at $iwC$$iwC.(:65)
> at $iwC.(:67)
> at (:69)
> at .(:73)
> at .()
> at .(:7)
> at .()
> at $print()
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflec

Re: Create table with columns containing spaces in name.

2016-10-19 Thread Liang Chen
Hi Harmeet

Thank you for reporting this issue.

Would you like to fix it?

Regards
Liang


Harmeet Singh wrote
> Thanks Ravi, I will raise it on JIRA.







Re: carbondata org.apache.thrift.TBaseHelper.hashCode(segment_id); issue

2016-10-19 Thread 仲景武

hi, all

I have installed CarbonData successfully by following the document
https://cwiki.apache.org/confluence/display/CARBONDATA/

but loading data into a CarbonData table throws an exception:


run command:
cc.sql("load data local inpath '../carbondata/sample.csv' into table 
test_table")

errors:

org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does 
not exist: /home/bigdata/bigdata/carbondata/sample.csv
at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:321)
at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:264)
at 
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:385)
at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:120)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at 
org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:239)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:237)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:237)
at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1307)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.take(RDD.scala:1302)
at 
com.databricks.spark.csv.CarbonCsvRelation.firstLine$lzycompute(CarbonCsvRelation.scala:181)
at 
com.databricks.spark.csv.CarbonCsvRelation.firstLine(CarbonCsvRelation.scala:176)
at 
com.databricks.spark.csv.CarbonCsvRelation.inferSchema(CarbonCsvRelation.scala:144)
at com.databricks.spark.csv.CarbonCsvRelation.(CarbonCsvRelation.scala:74)
at 
com.databricks.spark.csv.newapi.DefaultSource.createRelation(DefaultSource.scala:142)
at 
com.databricks.spark.csv.newapi.DefaultSource.createRelation(DefaultSource.scala:44)
at 
org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:109)
at 
org.apache.carbondata.spark.util.GlobalDictionaryUtil$.loadDataFrame(GlobalDictionaryUtil.scala:386)
at 
org.apache.carbondata.spark.util.GlobalDictionaryUtil$.generateGlobalDictionary(GlobalDictionaryUtil.scala:767)
at 
org.apache.spark.sql.execution.command.LoadTable.run(carbonTableSchema.scala:1170)
at 
org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at 
org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at 
org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at 
org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at 
org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.DataFrame.(DataFrame.scala:145)
at org.apache.spark.sql.DataFrame.(DataFrame.scala:130)
at 
org.apache.carbondata.spark.rdd.CarbonDataFrameRDD.(CarbonDataFrameRDD.scala:23)
at org.apache.spark.sql.CarbonContext.sql(CarbonContext.scala:137)
at 
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:42)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:47)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:49)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:51)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:53)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:55)
at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.(:57)
at $iwC$$iwC$$iwC$$iwC$$iwC.(:59)
at $iwC$$iwC$$iwC$$iwC.(:61)
at $iwC$$iwC$$iwC.(:63)
at $iwC$$iwC.(:65)
at $iwC.(:67)
at (:69)
at .(:73)
at .()
at .(:7)
at .()
at $print()
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
at org.apache.spark.repl.SparkIMain.interpret

Create table like old table.

2016-10-19 Thread prabhatkashyap
Hello, I'm trying to create a table like my old table, but it is not created
as expected.



The above is my *mainTable*, and I want to create *copiedTable* from it, but
every time it shows something like:








Drop Database seems unexpected behaviour.

2016-10-19 Thread Harmeet Singh
Hey team, I am creating a database as below: 

0: jdbc:hive2://127.0.0.1:1> create database Test;
+-+--+
| result  |
+-+--+
+-+--+

After creating the database, I switch to it using the command below:

0: jdbc:hive2://127.0.0.1:1> use Test;
+-+--+
| result  |
+-+--+
+-+--+

After that, I drop the database as below:

0: jdbc:hive2://127.0.0.1:1> drop database test;
+-+--+
| result  |
+-+--+
+-+--+

The database is dropped successfully. I expect CarbonData to then switch
automatically to the "default" database. But when I try to execute "show
tables", the result returns nothing, as below:

0: jdbc:hive2://127.0.0.1:1> show tables;
++--+--+
| tableName  | isTemporary  |
++--+--+
++--+--+
No rows selected (0.019 seconds)

But my default database contains some tables, as below:
0: jdbc:hive2://127.0.0.1:1> use default;
+-+--+
| result  |
+-+--+
+-+--+
No rows selected (0.024 seconds)
0: jdbc:hive2://127.0.0.1:1> show tables;
++--+--+
| tableName  | isTemporary  |
++--+--+
| one| false|
| two| false|
++--+--+
2 rows selected (0.013 seconds)

If I follow all the above steps in Hive, it gives an error on "show tables"
after dropping the database, as below:

hive> drop database test;
OK
Time taken: 0.628 seconds
hive> show databases;
OK
default
Time taken: 0.022 seconds, Fetched: 1 row(s)
hive> show tables;
FAILED: SemanticException [Error 10072]: Database does not exist: test

Please confirm whether this is an issue or expected CarbonData behavior.




