[ https://issues.apache.org/jira/browse/CARBONDATA-1142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

吴志龙 updated CARBONDATA-1142:
----------------------------
    Description: 
1、CREATE TABLE IF NOT EXISTS dp_tmp.order_detail (
  id BIGINT, order_code STRING, sales_area_id INT, sales_id INT,
  order_inputer INT, pro_type STRING, currency INT, exchange_rate DECIMAL,
  unit_cost_price DECIMAL, unit_selling_price DECIMAL, order_num INTEGER,
  order_amount DECIMAL, order_discount DOUBLE, order_account_amount DECIMAL,
  order_time TIMESTAMP, delivery_channel INT, delivery_address STRING,
  recipients STRING, contact STRING, delivery_date DATE, comments STRING
) STORED BY 'carbondata'
TBLPROPERTIES (
  'COLUMN_GROUPS' = '(recipients,contact)',
  'DICTIONARY_EXCLUDE' = 'comments',
  'DICTIONARY_INCLUDE' = 'sales_area_id,sales_id',
  'NO_INVERTED_INDEX' = 'id,order_code'
)
2、load data inpath 'hdfs://hacluster/data/carbondata/csv/order_detail_1.csv' 
into table dp_tmp.order_detail OPTIONS 
('DELIMITER'=',','fileheader'='id,order_code,sales_area_id,sales_id,order_inputer,pro_type,currency,exchange_rate,unit_cost_price,unit_selling_price,order_num,order_amount,order_discount,order_account_amount,order_time,delivery_channel,delivery_address,recipients,contact,delivery_date,comments')
3、spark-sql> SHOW SEGMENTS FOR TABLE dp_tmp.order_detail LIMIT 4;
Error in query: 
missing 'FUNCTIONS' at 'FOR'(line 1, pos 14)

== SQL ==
SHOW SEGMENTS FOR TABLE dp_tmp.order_detail LIMIT 4
--------------^^^

spark-sql> SHOW SEGMENTS FOR TABLE dp_tmp.order_detail LIMIT 1;
Error in query: 
missing 'FUNCTIONS' at 'FOR'(line 1, pos 14)

== SQL ==
SHOW SEGMENTS FOR TABLE dp_tmp.order_detail LIMIT 1
--------------^^^

spark-sql> DELETE SEGMENTS FROM TABLE dp_tmp.order_detail WHERE STARTTIME 
BEFORE '2017-06-08 12:05:06';
Usage: delete [FILE|JAR|ARCHIVE] <value> [<value>]*
17/06/08 10:36:27 ERROR DeleteResourceProcessor: Usage: delete 
[FILE|JAR|ARCHIVE] <value> [<value>]*
4、spark-sql> select count(1) from dp_tmp.order_detail;
14937665  
5、spark-sql> insert overwrite table dp_tmp.order_detail select * from 
dp_tmp.order_detail_orc;
6、spark-sql> select count(1) from dp_tmp.order_detail;
34937665 
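
For reference, the two counts above differ by exactly 20,000,000 (34,937,665 - 14,937,665), which suggests the rows from dp_tmp.order_detail_orc were appended to the existing 14,937,665 rows rather than replacing them. A minimal spark-shell sketch of the expected overwrite semantics; "spark" stands for whatever session is used for the queries above, and the variable names are illustrative:

    // Hedged sketch: after a correct INSERT OVERWRITE the target row count should equal the source's.
    val src = spark.sql("select count(1) from dp_tmp.order_detail_orc").first.getLong(0)
    spark.sql("insert overwrite table dp_tmp.order_detail select * from dp_tmp.order_detail_orc")
    val dst = spark.sql("select count(1) from dp_tmp.order_detail").first.getLong(0)
    // Expected: dst == src (about 20,000,000 here); the appended result above instead gives dst = 14,937,665 + src.
    println(s"source=$src, target=$dst")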

Problem:

1、SHOW SEGMENTS and DELETE SEGMENTS: both statements are rejected by the spark-sql parser, so this grammar cannot be used from the spark-sql CLI (see the sketch below).
2、INSERT OVERWRITE: the existing data should be overwritten, not appended to; after the insert the table should contain only the rows selected from dp_tmp.order_detail_orc (see the note under step 6 above).
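
For problem 1, here is a minimal sketch of issuing the same segment-management statements through a CarbonSession in spark-shell (per the CarbonData quick-start). The assembly-jar path and store path are illustrative placeholders, and the DELETE SEGMENTS statement simply mirrors the one from the report, so its exact syntax may need to be adjusted for the CarbonData version in use:

    // Start the shell with the CarbonData assembly on the classpath, e.g.:
    //   ./bin/spark-shell --jars /path/to/carbondata-assembly.jar   (path is illustrative)
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.CarbonSession._   // brings in getOrCreateCarbonSession

    // The store path below is an assumption; use the store configured for this cluster.
    val carbon = SparkSession.builder()
      .appName("SegmentManagementCheck")
      .getOrCreateCarbonSession("hdfs://hacluster/user/carbon/store")

    carbon.sql("SHOW SEGMENTS FOR TABLE dp_tmp.order_detail LIMIT 4").show()
    carbon.sql("DELETE SEGMENTS FROM TABLE dp_tmp.order_detail WHERE STARTTIME BEFORE '2017-06-08 12:05:06'")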




> carbondata spark-sql grammar problem
> ------------------------------------
>
>                 Key: CARBONDATA-1142
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1142
>             Project: CarbonData
>          Issue Type: Bug
>          Components: sql
>    Affects Versions: 1.1.0
>         Environment: spark 2.1 + hadoop 2.6
>            Reporter: 吴志龙
>         Attachments: a620e2b7-245e-456c-9c2d-b00bd1a29400.png
>



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
