[jira] [Resolved] (CARBONDATA-4213) Prepriming for update operation fails with Index server

2021-06-18 Thread Indhumathi Muthu Murugesh (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-4213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Indhumathi Muthu Murugesh resolved CARBONDATA-4213.
---
Fix Version/s: 2.2.0
   Resolution: Fixed

> Prepriming for update operation fails with Index server
> ---
>
> Key: CARBONDATA-4213
> URL: https://issues.apache.org/jira/browse/CARBONDATA-4213
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Vikram Ahuja
>Priority: Major
> Fix For: 2.2.0
>
>
> sql("DROP TABLE IF EXISTS source111")
> sql("create table source111(a int, b string) stored as carbondata")
> sql("insert into source111 select 1, 2")
> sql("update source111 set (a) = (a+9) where b!= 70")
>  
> Cache is not updated in the index server after the update command.
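> 
> A quick way to observe the stale cache (a sketch; assumes the index server is
> enabled via carbon.enable.index.server=true and prepriming via
> carbon.indexserver.enable.prepriming=true, per the CarbonData index server docs):
> // expected: cache entries for the updated segments; observed: stale cache
> sql("SHOW METACACHE ON TABLE source111").show(false)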



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (CARBONDATA-4205) MINOR compaction getting triggered by itself while inserting data into a table

2021-06-18 Thread SHREELEKHYA GAMPA (Jira)


[ 
https://issues.apache.org/jira/browse/CARBONDATA-4205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17365535#comment-17365535
 ] 

SHREELEKHYA GAMPA commented on CARBONDATA-4205:
---

Hi, can you share the carbon configuration settings?

Please check the carbon.enable.auto.load.merge and
carbon.compaction.level.threshold properties. When
carbon.enable.auto.load.merge is set to true, compaction is triggered
automatically once a data load completes.
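
For example, to stop the automatic compaction, or to change when it kicks in
(a sketch; both are standard CarbonData properties, the values below are
illustrative, and they can also be set in carbon.properties if SET is not
supported for them in your version):

set carbon.enable.auto.load.merge=false;
set carbon.compaction.level.threshold=6,5;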

 

> MINOR compaction getting triggered by itself while inserting data into a table
> -
>
> Key: CARBONDATA-4205
> URL: https://issues.apache.org/jira/browse/CARBONDATA-4205
> Project: CarbonData
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.0.1
> Environment: apache carbondata 2.0.1, hadoop 2.7.2, spark 2.4.5
>Reporter: suyash yadav
>Priority: Major
>
> Hi Team, we have created a table and also created a timeseries MV on it. Later 
> we tried to insert some data from another table into this newly created table, 
> but we observed that while inserting, MINOR compaction on the MV is getting 
> triggered by itself. It doesn't happen for every insert, but whenever we insert 
> the 6th to 7th hour data and then the 14th to 15th hour data, the MINOR 
> compaction gets triggered. Could you tell us why the MINOR compaction is 
> getting triggered by itself?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (CARBONDATA-4212) Update Fails with Unsupported Complex types exception, even if table doesn't have complex column

2021-06-18 Thread Akash R Nilugal (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal resolved CARBONDATA-4212.
-
Fix Version/s: 2.2.0
   Resolution: Fixed

> Update Fails with Unsupported Complex types exception, even if table doesn't 
> have complex column
> ---
>
> Key: CARBONDATA-4212
> URL: https://issues.apache.org/jira/browse/CARBONDATA-4212
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Indhumathi Muthu Murugesh
>Priority: Minor
> Fix For: 2.2.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> drop table if exists iud.zerorows;
> create table iud.zerorows (c1 string,c2 int,c3 string,c5 string) STORED AS 
> carbondata;
> insert into iud.zerorows select 'a',1,'aa','b';
> update iud.zerorows up_TAble set(up_table.c1)=('abc') where up_TABLE.c2=1;
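> For reference, the same update without the table alias parses fine (a sketch
> using the documented carbon update syntax); only the aliased form below fails:
> update iud.zerorows set (c1)=('abc') where c2=1;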
>  
> Exception:
> ANTLR Tool version 4.7 used for code generation does not match the current runtime version 4.8
> ANTLR Runtime version 4.7 used for parser compilation does not match the current runtime version 4.8
> ANTLR Tool version 4.7 used for code generation does not match the current runtime version 4.8
> ANTLR Runtime version 4.7 used for parser compilation does not match the current runtime version 4.8
> org.apache.spark.sql.catalyst.parser.ParseException:
> mismatched input 'update' expecting {'(', 'SELECT', 'FROM', 'ADD', 'DESC', 
> 'WITH', 'VALUES', 'CREATE', 'TABLE', 'INSERT', 'DELETE', 'DESCRIBE', 
> 'EXPLAIN', 'SHOW', 'USE', 'DROP', 'ALTER', 'MAP', 'SET', 'RESET', 'START', 
> 'COMMIT', 'ROLLBACK', 'REDUCE', 'REFRESH', 'CLEAR', 'CACHE', 'UNCACHE', 
> 'DFS', 'TRUNCATE', 'ANALYZE', 'LIST', 'REVOKE', 'GRANT', 'LOCK', 'UNLOCK', 
> 'MSCK', 'EXPORT', 'IMPORT', 'LOAD'}(line 1, pos 0)
> == SQL ==
> update iud.zerorows up_TAble set(up_table.c1)=('abc') where up_TABLE.c2=1
> ^^^
> at 
> org.apache.spark.sql.catalyst.parser.ParseException.withCommand(ParseDriver.scala:239)
>  at 
> org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:115)
>  at 
> org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:48)
>  at 
> org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:69)
>  at 
> org.apache.spark.sql.parser.CarbonExtensionSqlParser.parsePlan(CarbonExtensionSqlParser.scala:60)
>  at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:642)
>  at 
> org.apache.spark.sql.test.SparkTestQueryExecutor.sql(SparkTestQueryExecutor.scala:37)
>  at org.apache.spark.sql.test.util.QueryTest.sql(QueryTest.scala:121)
>  at 
> org.apache.carbondata.spark.testsuite.iud.UpdateCarbonTableTestCase$$anonfun$61.apply$mcV$sp(UpdateCarbonTableTestCase.scala:1185)
>  at 
> org.apache.carbondata.spark.testsuite.iud.UpdateCarbonTableTestCase$$anonfun$61.apply(UpdateCarbonTableTestCase.scala:1181)
>  at 
> org.apache.carbondata.spark.testsuite.iud.UpdateCarbonTableTestCase$$anonfun$61.apply(UpdateCarbonTableTestCase.scala:1181)
>  at 
> org.scalatest.Transformer$$anonfun$apply$1.apply$mcV$sp(Transformer.scala:22)
>  at org.scalatest.OutcomeOf$class.outcomeOf(OutcomeOf.scala:85)
>  at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>  at org.scalatest.Transformer.apply(Transformer.scala:22)
>  at org.scalatest.Transformer.apply(Transformer.scala:20)
>  at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:166)
>  at 
> org.apache.spark.sql.test.util.CarbonFunSuite.withFixture(CarbonFunSuite.scala:41)
>  at 
> org.scalatest.FunSuiteLike$class.invokeWithFixture$1(FunSuiteLike.scala:163)
>  at 
> org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
>  at 
> org.scalatest.FunSuiteLike$$anonfun$runTest$1.apply(FunSuiteLike.scala:175)
>  at org.scalatest.SuperEngine.runTestImpl(Engine.scala:306)
>  at org.scalatest.FunSuiteLike$class.runTest(FunSuiteLike.scala:175)
>  at org.scalatest.FunSuite.runTest(FunSuite.scala:1555)
>  at 
> org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
>  at 
> org.scalatest.FunSuiteLike$$anonfun$runTests$1.apply(FunSuiteLike.scala:208)
>  at 
> org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:413)
>  at 
> org.scalatest.SuperEngine$$anonfun$traverseSubNodes$1$1.apply(Engine.scala:401)
>  at scala.collection.immutable.List.foreach(List.scala:381)
>  at org.scalatest.SuperEngine.traverseSubNodes$1(Engine.scala:401)
>  at 
> org.scalatest.SuperEngine.org$scalatest$SuperEngine$$runTestsInBranch(Engine.scala:396)
>  at org.scalatest.SuperEngine.runTestsImpl(Engine.scala:483)
>  at org.scalatest.FunSuiteLike$class.runTests(FunSuiteLike.scala:208)
>  at org.scalatest.FunSuite.runTests(FunSuite.scala:1555)
>  at org.scalatest.Suite$class.run(Suite.scala:1424)
>  at 
> org.scalatest.FunSuite.org$sc

[jira] [Created] (CARBONDATA-4216) Exception during alter add struct with local dictionary enabled

2021-06-18 Thread Akshay (Jira)
Akshay created CARBONDATA-4216:
--

 Summary: Exception during alter add struct with local 
dictionary enabled
 Key: CARBONDATA-4216
 URL: https://issues.apache.org/jira/browse/CARBONDATA-4216
 Project: CarbonData
  Issue Type: Bug
  Components: spark-integration
Reporter: Akshay


ArrayIndexOutOfBoundsException while adding a struct column when the local 
dictionary is enabled.
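
A minimal sequence that may reproduce it (a sketch; the table and column names
are illustrative, using the documented local_dictionary_enable table property
and alter add columns syntax):

create table alter_struct (id int, name string) stored as carbondata
tblproperties('local_dictionary_enable'='true');
alter table alter_struct add columns (item struct<a:int, b:string>);
-- ArrayIndexOutOfBoundsException observed here when local dictionary is on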



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (CARBONDATA-4215) When carbon.enable.vector.reader=false and a parquet segment is added through alter add segment in a carbon table, count(*) throws an error

2021-06-18 Thread Prasanna Ravichandran (Jira)
Prasanna Ravichandran created CARBONDATA-4215:
-

 Summary: When carbon.enable.vector.reader=false and a parquet 
segment is added through alter add segment in a carbon table, count(*) 
throws an error
 Key: CARBONDATA-4215
 URL: https://issues.apache.org/jira/browse/CARBONDATA-4215
 Project: CarbonData
  Issue Type: Bug
Affects Versions: 2.1.1
 Environment: 3 node FI
Reporter: Prasanna Ravichandran


When carbon.enable.vector.reader=false and a parquet segment is added through 
alter add segment in a carbon table, count(*) throws an error.

 

Test queries:

--set carbon.enable.vector.reader=false in carbon.properties;
use default;
drop table if exists uniqdata;
CREATE TABLE uniqdata (cust_id int,cust_name String,active_emui_version string, 
dob timestamp, doj timestamp, bigint_column1 bigint,bigint_column2 
bigint,decimal_column1 decimal(30,10), decimal_column2 
decimal(36,36),double_column1 double, double_column2 double,integer_column1 
int) stored as carbondata;
load data inpath 'hdfs://hacluster/user/prasanna/2000_UniqData.csv' into table 
uniqdata 
options('fileheader'='cust_id,cust_name,active_emui_version,dob,doj,bigint_column1,bigint_column2,decimal_column1,decimal_column2,double_column1,double_column2,integer_column1','bad_records_action'='force');

drop table if exists uniqdata_parquet;
CREATE TABLE uniqdata_parquet (cust_id int,cust_name String,active_emui_version 
string, dob timestamp, doj timestamp, bigint_column1 bigint,bigint_column2 
bigint,decimal_column1 decimal(30,10), decimal_column2 
decimal(36,36),double_column1 double, double_column2 double,integer_column1 
int) stored as parquet;
insert into uniqdata_parquet select * from uniqdata;
create database if not exists test;
use test;

CREATE TABLE uniqdata (cust_id int,cust_name String,active_emui_version string, 
dob timestamp, doj timestamp, bigint_column1 bigint,bigint_column2 
bigint,decimal_column1 decimal(30,10), decimal_column2 
decimal(36,36),double_column1 double, double_column2 double,integer_column1 
int) stored as carbondata;
load data inpath 'hdfs://hacluster/user/prasanna/2000_UniqData.csv' into table 
uniqdata 
options('fileheader'='cust_id,cust_name,active_emui_version,dob,doj,bigint_column1,bigint_column2,decimal_column1,decimal_column2,double_column1,double_column2,integer_column1','bad_records_action'='force');

Alter table uniqdata add segment options 
('path'='hdfs://hacluster/user/hive/warehouse/uniqdata_parquet','format'='parquet');
 select count(*) from uniqdata; -- throws ClassCastException;

 

Error Log traces:

java.lang.ClassCastException: org.apache.spark.sql.vectorized.ColumnarBatch 
cannot be cast to org.apache.spark.sql.catalyst.InternalRow
 at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithoutKey_0$(Unknown
 Source)
 at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown
 Source)
 at 
org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
 at 
org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scala:584)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
 at 
org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:132)
 at 
org.apache.spark.shuffle.ShuffleWriteProcessor.write(ShuffleWriteProcessor.scala:58)
 at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:102)
 at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
 at org.apache.spark.scheduler.Task.run(Task.scala:123)
 at 
org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:413)
 at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1551)
 at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:419)
 at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
 at java.lang.Thread.run(Thread.java:748)
2021-06-19 13:50:59,035 | WARN | task-result-getter-2 | Lost task 0.0 in stage 
4.0 (TID 28, localhost, executor driver): java.lang.ClassCastException: 
org.apache.spark.sql.vectorized.ColumnarBatch cannot be cast to 
org.apache.spark.sql.catalyst.InternalRow
 at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.agg_doAggregateWithoutKey_0$(Unknown
 Source)
 at 
org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown
 Source)
 at 
org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
 at 
org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$13$$anon$1.hasNext(WholeStageCodegenExec.scal

[jira] [Created] (CARBONDATA-4214) NULL value inserted when timestamp value is received from FROM_UNIXTIME(0)

2021-06-18 Thread Mahesh Raju Somalaraju (Jira)
Mahesh Raju Somalaraju created CARBONDATA-4214:
--

 Summary: NULL value inserted when timestamp value is received 
from FROM_UNIXTIME(0)
 Key: CARBONDATA-4214
 URL: https://issues.apache.org/jira/browse/CARBONDATA-4214
 Project: CarbonData
  Issue Type: Bug
Reporter: Mahesh Raju Somalaraju


A NULL value is inserted when the timestamp value is received from 
FROM_UNIXTIME(0).

Steps to reproduce the issue:

create table if not exists time_carbon1(time1 timestamp) stored as carbondata
insert into time_carbon1 select from_unixtime(0)
select count(*) from time_carbon1
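
A quick check of the wrongly inserted value (a sketch; from_unixtime(0) should
map to the epoch timestamp, not NULL):

select time1, time1 is null from time_carbon1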



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (CARBONDATA-4208) Wrong Exception received for complex child long string columns

2021-06-18 Thread Akash R Nilugal (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-4208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akash R Nilugal resolved CARBONDATA-4208.
-
Fix Version/s: 2.2.0
   Resolution: Fixed

> Wrong Exception received for complex child long string columns
> --
>
> Key: CARBONDATA-4208
> URL: https://issues.apache.org/jira/browse/CARBONDATA-4208
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Mahesh Raju Somalaraju
>Priority: Minor
> Fix For: 2.2.0
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> Wrong Exception received for complex child long string columns
>  
> reproduce steps:
> sql("create table complex2 (a int, arr1 array) " +
>  "stored as carbondata TBLPROPERTIES('LONG_STRING_COLUMNS'='arr1.val')")
>  
> In this case, we should receive an exception saying that complex child string 
> columns do not support long strings, but instead a column-not-found-in-table 
> error is received.
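> For comparison, the same property on a plain top-level string column is
> accepted (a sketch; LONG_STRING_COLUMNS is documented for string columns):
> sql("create table longstr_ok (a int, b string) stored as carbondata " +
>  "TBLPROPERTIES('LONG_STRING_COLUMNS'='b')")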



--
This message was sent by Atlassian Jira
(v8.3.4#803005)