[jira] [Created] (CARBONDATA-4342) Desc Column Shows new Column added, even though alter add column operation failed

2022-06-21 Thread Indhumathi (Jira)
Indhumathi created CARBONDATA-4342:
--

 Summary: Desc Column Shows new Column added, even though alter add 
column operation failed
 Key: CARBONDATA-4342
 URL: https://issues.apache.org/jira/browse/CARBONDATA-4342
 Project: CarbonData
  Issue Type: Bug
Reporter: Indhumathi






--
This message was sent by Atlassian Jira
(v8.20.7#820007)


[jira] [Updated] (CARBONDATA-4342) Desc Column Shows new Column added, even though alter add column operation failed

2022-06-21 Thread Indhumathi (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-4342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Indhumathi updated CARBONDATA-4342:
---
Description: 
# Create a table and add a new column.
# If the ALTER ADD COLUMN operation fails in the final step, the revert operation does not succeed, so DESC still shows the new column.

 

> Desc Column Shows new Column added, even though alter add column operation 
> failed
> ------------------------------------------------------------------------------
>
> Key: CARBONDATA-4342
> URL: https://issues.apache.org/jira/browse/CARBONDATA-4342
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Indhumathi
>Priority: Minor
>
> # Create a table and add a new column.
> # If the ALTER ADD COLUMN operation fails in the final step, the revert operation does not succeed, so DESC still shows the new column.
>  
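The failure mode described above can be illustrated with a minimal, purely hypothetical sketch (all names are invented for illustration; this is not CarbonData's actual alter-table code path): if the revert step after a failed final commit is skipped or itself fails, the half-added column remains in the schema metadata, which is why DESC still lists it.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not CarbonData's implementation: demonstrates how a
// failed revert leaves a half-added column visible in the table schema.
public class AlterAddColumnSketch {
    static List<String> schema = new ArrayList<>(List.of("id", "name"));

    static void addColumn(String col, boolean finalStepFails) {
        schema.add(col);                  // step 1: update schema metadata
        try {
            if (finalStepFails) {         // step 2: the final commit fails
                throw new RuntimeException("final step failed");
            }
        } catch (RuntimeException e) {
            // Revert path: if this removal is skipped or itself fails,
            // reading `schema` (analogous to DESC) still shows the column.
            boolean revertSucceeded = false; // simulate the reported bug
            if (revertSucceeded) {
                schema.remove(col);
            }
        }
    }

    public static void main(String[] args) {
        addColumn("age", true);
        System.out.println(schema.contains("age")); // prints true: the stale column survives
    }
}
```

A correct revert would remove the column from the metadata (and persist that removal) whenever the final step fails, so that a subsequent DESC reflects the pre-alter schema.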





[jira] [Closed] (CARBONDATA-4340) Load & Insert Overwrite Fails after executing Clean files on Partition Table.

2022-06-21 Thread PURUJIT CHAUGULE (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

PURUJIT CHAUGULE closed CARBONDATA-4340.

Resolution: Duplicate

> Load & Insert Overwrite Fails after executing Clean files on Partition Table.
> -----------------------------------------------------------------------------
>
> Key: CARBONDATA-4340
> URL: https://issues.apache.org/jira/browse/CARBONDATA-4340
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 2.1.0, 2.2.0, 2.1.1
>Reporter: PURUJIT CHAUGULE
>Priority: Minor
>
> *Scenario 1: (LOAD OVERWRITE)*
> _*Load Overwrite Fails after execution of Clean Files on partition table.*_
> *Steps:*
> drop table if exists uniqdata_part;
> CREATE TABLE uniqdata_part(CUST_NAME string,ACTIVE_EMUI_VERSION string, DOB 
> timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
> bigint,DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 
> int) PARTITIONED BY(CUST_ID int) STORED AS carbondata;
> LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
> uniqdata_part PARTITION (CUST_ID='9001') 
> OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, 
> BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, 
> Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');
> LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
> uniqdata_part PARTITION (CUST_ID='9001') 
> OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, 
> BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, 
> Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');
> LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
> uniqdata_part PARTITION (CUST_ID='9001') 
> OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, 
> BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, 
> Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');
> LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
> uniqdata_part PARTITION (CUST_ID='9001') 
> OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, 
> BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, 
> Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');
> LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
> uniqdata_part PARTITION (CUST_ID='9001') 
> OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, 
> BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, 
> Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');
> LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
> uniqdata_part PARTITION (CUST_ID='9001') 
> OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, 
> BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, 
> Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');
> LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
> uniqdata_part PARTITION (CUST_ID='9001') 
> OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, 
> BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, 
> Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');
> LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
> uniqdata_part PARTITION (CUST_ID='9001') 
> OPTIONS('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, 
> BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, 
> Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');
> delete from table uniqdata_part where SEGMENT.ID IN(0,4);
> clean files for table uniqdata_part options('force'='true');
> LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' OVERWRITE into 
> table uniqdata_part PARTITION (CUST_ID='9001') OPTIONS 
> ('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, 
> BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, 
> Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');
> *ERROR:*
> Error: org.apache.hive.service.cli.HiveSQLException: Error running query: 
> java.lang.RuntimeException: DataLoad failure: null
>         at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:387)
>         at 
> org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$3(SparkExecuteStatementOperation.scala:276)
>         at 
> scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)

[jira] [Resolved] (CARBONDATA-4341) Drop Index Fails after TABLE RENAME

2022-06-21 Thread Indhumathi (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-4341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Indhumathi resolved CARBONDATA-4341.

Fix Version/s: 2.3.1
   Resolution: Fixed

> Drop Index Fails after TABLE RENAME
> -----------------------------------
>
> Key: CARBONDATA-4341
> URL: https://issues.apache.org/jira/browse/CARBONDATA-4341
> Project: CarbonData
>  Issue Type: Bug
>Reporter: SHREELEKHYA GAMPA
>Priority: Major
> Fix For: 2.3.1
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Drop Index Fails after TABLE RENAME
> [Steps] :-
> The queries are executed from Spark beeline.
> drop table if exists uniqdata;
> CREATE TABLE uniqdata(CUST_ID int, CUST_NAME string, ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, BIGINT_COLUMN1 bigint, BIGINT_COLUMN2 bigint, DECIMAL_COLUMN1 decimal(30,10), DECIMAL_COLUMN2 decimal(36,10), Double_COLUMN1 double, Double_COLUMN2 double, INTEGER_COLUMN1 int) STORED AS carbondata;
> LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table uniqdata OPTIONS ('FILEHEADER'='CUST_ID,CUST_NAME ,ACTIVE_EMUI_VERSION,DOB,DOJ, BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1, Double_COLUMN2,INTEGER_COLUMN1','BAD_RECORDS_ACTION'='FORCE');
> create index uniq2_index on table uniqdata(CUST_NAME) as 'carbondata';
> alter table uniqdata rename to uniqdata_i;
> drop index if exists uniq2_index on uniqdata_i;
> [Expected Result] :- Drop Index should succeed after TABLE RENAME.
> [Actual Issue]:- Drop Index Fails after TABLE RENAME
> Error message: Table or view 'uniqdata_i' not found in database 'default';





[GitHub] [carbondata] zhangboren93 opened a new issue, #4281: [SDK Optimization] Multiple SimpleDateFormat initialization in CarbonReader

2022-06-21 Thread GitBox


zhangboren93 opened a new issue, #4281:
URL: https://github.com/apache/carbondata/issues/4281

   I found that reading carbon files through CarbonReader spends a long time in 
SimpleDateFormat.<init>; see the attached file for the profiling output.
   
   
https://github.com/apache/carbondata/blob/4b8846d1e6737e7db8a96014818c067c8c253d1f/sdk/sdk/src/main/java/org/apache/carbondata/sdk/file/CarbonReader.java#L207
   
   I wonder whether we could add lazy initialization for SimpleDateFormat in 
this class, and if so, whether it would need to support multi-threading.
   
   [profile.zip](https://github.com/apache/carbondata/files/8954954/profile.zip)
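One common way to combine lazy initialization with thread safety here is a `ThreadLocal`, since `SimpleDateFormat` is documented as not thread-safe. The sketch below is only an illustration of that pattern (the class name, format pattern, and API are assumptions, not CarbonData's actual code): each reader thread lazily creates and then reuses its own instance instead of constructing one per call.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

// Hypothetical sketch of lazy, per-thread SimpleDateFormat reuse.
// SimpleDateFormat is not thread-safe, so a single shared lazily-created
// instance would need external locking; ThreadLocal.withInitial gives each
// thread its own instance, created once on first use.
public class DateFormatCache {
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
        ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static Date parse(String value) throws ParseException {
        // Reuses the calling thread's cached instance instead of
        // constructing a new SimpleDateFormat on every call.
        return FORMAT.get().parse(value);
    }

    public static void main(String[] args) throws ParseException {
        System.out.println(parse("2022-06-21").getTime() > 0); // prints true
    }
}
```

On Java 8+, `java.time.format.DateTimeFormatter` is another option, as it is immutable and thread-safe and avoids the per-thread caching entirely.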


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@carbondata.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org