[GitHub] [carbondata] Karan980 opened a new pull request #4093: [CARBONDATA-4126] Concurrent compaction failed with load on table.

2021-02-10 Thread GitBox


Karan980 opened a new pull request #4093:
URL: https://github.com/apache/carbondata/pull/4093


### Why is this PR needed?
   Concurrent compaction was failing when run in parallel with a load. During 
load we acquire a SegmentLock for the segment being written; when compaction 
later tried to acquire the same lock, the acquisition failed and the whole 
compaction was aborted.
 
### What changes were proposed in this PR?
   Skip compaction for segments whose SegmentLock cannot be acquired, instead 
of throwing an exception (see the sketch below).
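
   A minimal sketch of the idea, using hypothetical names rather than 
CarbonData's actual lock API or compaction code path:

// Hypothetical sketch: skip segments whose lock is held by a concurrent load,
// instead of failing the whole compaction. SegmentLock is a stand-in type,
// not CarbonData's real lock class.
object CompactionSegmentSelection {

  trait SegmentLock {
    def tryLock(): Boolean
    def unlock(): Unit
  }

  def selectSegmentsForCompaction(
      segmentIds: Seq[String],
      lockFor: String => SegmentLock): Seq[(String, SegmentLock)] = {
    segmentIds.flatMap { segmentId =>
      val lock = lockFor(segmentId)
      if (lock.tryLock()) {
        // Lock acquired: include the segment in this compaction run.
        Some(segmentId -> lock)
      } else {
        // Lock held by a concurrent load: skip the segment instead of throwing.
        None
      }
    }
  }
}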
   
   
### Does this PR introduce any user interface change?
- No
   
### Is any new testcase added?
- No
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (CARBONDATA-4129) Class cast exception when array, struct, binary and string type data is merged using the merge SQL command

2021-02-10 Thread Chetan Bhat (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-4129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chetan Bhat updated CARBONDATA-4129:

Summary: Class cast exception when array, struct, binary and string type data 
is merged using the merge SQL command  (was: Class cast exception when array, 
struct, binary and string type data is merged)

> Class cast exception when array, struct, binary and string type data is 
> merged using the merge SQL command
> 
>
> Key: CARBONDATA-4129
> URL: https://issues.apache.org/jira/browse/CARBONDATA-4129
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-query
>Affects Versions: 2.1.0
> Environment: Spark 2.4.5
>Reporter: Chetan Bhat
>Priority: Major
>
>  
> *Scenario 1: a MERGE command that inserts into a string column using an 
> expression throws an error. Inserting into a binary column using an 
> expression also throws an error.*
> drop table if exists A;
> drop table if exists B;
> CREATE TABLE A(id Int, name string, description string,address string, note 
> string) stored as carbondata 
> tblproperties('long_string_columns'='description,note','table_blocksize'='1','SORT_SCOPE'='global_sort','table_page_size_inmb'='1');
>  
> CREATE TABLE B(id Int, name string, description string,address string, note 
> string) stored as carbondata 
> tblproperties('long_string_columns'='description,note','table_blocksize'='1','SORT_SCOPE'='global_sort','table_page_size_inmb'='1');
>  
> insert into A select 
> 1,"name1A","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
> insert into A select 
> 2,"name2A","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
> insert into A select 
> 3,"name3A","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
> insert into A select 
> 4,"name4A","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
> insert into A select 
> 5,"name5A","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
> insert into B select 
> 1,"name1B","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
> insert into B select 
> 2,"name2B","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
> insert into B select 
> 3,"name3B","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
> insert into B select 
> 6,"name4B","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
> insert into B select 
> 7,"name5B","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
> MERGE INTO A USING B ON A.ID=B.ID WHEN NOT MATCHED AND B.ID=7 THEN INSERT 
> (A.ID,A.name,A.description ,A.address, A.note) VALUES 
> (B.ID,B.name+'10',B.description ,B.address,'test-string');
> 0: jdbc:hive2://linux-63:22550/> MERGE INTO A USING B ON A.ID=B.ID WHEN NOT 
> MATCHED AND B.ID=7 THEN INSERT (A.ID,A.name,A.description ,A.address, A.note) 
> VALUES (B.ID,B.name+'10',B.description ,B.address,'test-string');
> Error: org.apache.spark.SparkException: Job aborted due to stage failure: 
> Task 4 in stage 3813.0 failed 4 times, most recent failure: Lost task 4.3 in 
> stage 3813.0 (TID 23528, linux-63, executor 5): java.lang.ClassCastException: 
> org.apache.spark.sql.types.StringType$ cannot be cast to 
> org.apache.spark.sql.types.NumericType
>  at 
> org.apache.spark.sql.catalyst.util.TypeUtils$.getNumeric(TypeUtils.scala:58)
>  at 
> org.apache.spark.sql.catalyst.expressions.Add.numeric$lzycompute(arithmetic.scala:166)
>  at 
> org.apache.spark.sql.catalyst.expressions.Add.numeric(arithmetic.scala:166)
>  at 
> org.apache.spark.sql.catalyst.expressions.Add.nullSafeEval(arithmetic.scala:172)
>  at 
> org.apache.spark.sql.catalyst.expressions.BinaryExpression.eval(Expression.scala:486)
>  at 
> org.apache.spark.sql.catalyst.expressions.InterpretedMutableProjection.apply(Projection.scala:92)
>  at 
> org.apache.spark.sql.catalyst.expressions.InterpretedMutableProjection.apply(Projection.scala:66)
>  at 
> org.apache.spark.sql.execution.command.mutation.merge.MergeProjection.apply(MergeProjection.scala:54)
>  at 
> org.apache.spark.sql.execution.command.mutation.merge.CarbonMergeDataSetCommand$$anonfun$processIUD$1$$anon$1$$anonfun$next$1.apply(CarbonMergeDataSetCommand.scala:341)
>  at 
> org.apache.spark.sql.execution.command.mutation.merge.CarbonMergeDataSetCommand$$anonfun$processIUD$1$$anon$1$$anonfun$next$1.apply(CarbonMergeDataSetCommand.scala:338)
>  at scala.collection.immutable.List.foreach(List.scala:392)
>  at 
> org.apache.spark.sql.execution.command.mutation.merge.CarbonMergeDataSetCommand$$anonfun$processIUD$1$$anon$1.next(CarbonMergeD

[jira] [Created] (CARBONDATA-4129) Class cast exception when array, struct, binary and string type data is merged

2021-02-10 Thread Chetan Bhat (Jira)
Chetan Bhat created CARBONDATA-4129:
---

 Summary: Class cast exception when array, struct, binary and 
string type data is merged
 Key: CARBONDATA-4129
 URL: https://issues.apache.org/jira/browse/CARBONDATA-4129
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 2.1.0
 Environment: Spark 2.4.5
Reporter: Chetan Bhat


 

*Scenario 1: a MERGE command that inserts into a string column using an 
expression throws an error. Inserting into a binary column using an expression 
also throws an error.*

drop table if exists A;
drop table if exists B;
CREATE TABLE A(id Int, name string, description string,address string, note 
string) stored as carbondata 
tblproperties('long_string_columns'='description,note','table_blocksize'='1','SORT_SCOPE'='global_sort','table_page_size_inmb'='1');
 
CREATE TABLE B(id Int, name string, description string,address string, note 
string) stored as carbondata 
tblproperties('long_string_columns'='description,note','table_blocksize'='1','SORT_SCOPE'='global_sort','table_page_size_inmb'='1');
 

insert into A select 
1,"name1A","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into A select 
2,"name2A","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into A select 
3,"name3A","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into A select 
4,"name4A","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into A select 
5,"name5A","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";

insert into B select 
1,"name1B","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into B select 
2,"name2B","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into B select 
3,"name3B","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into B select 
6,"name4B","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into B select 
7,"name5B","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";

MERGE INTO A USING B ON A.ID=B.ID WHEN NOT MATCHED AND B.ID=7 THEN INSERT 
(A.ID,A.name,A.description ,A.address, A.note) VALUES 
(B.ID,B.name+'10',B.description ,B.address,'test-string');

0: jdbc:hive2://linux-63:22550/> MERGE INTO A USING B ON A.ID=B.ID WHEN NOT 
MATCHED AND B.ID=7 THEN INSERT (A.ID,A.name,A.description ,A.address, A.note) 
VALUES (B.ID,B.name+'10',B.description ,B.address,'test-string');
Error: org.apache.spark.SparkException: Job aborted due to stage failure: Task 
4 in stage 3813.0 failed 4 times, most recent failure: Lost task 4.3 in stage 
3813.0 (TID 23528, linux-63, executor 5): java.lang.ClassCastException: 
org.apache.spark.sql.types.StringType$ cannot be cast to 
org.apache.spark.sql.types.NumericType
 at org.apache.spark.sql.catalyst.util.TypeUtils$.getNumeric(TypeUtils.scala:58)
 at 
org.apache.spark.sql.catalyst.expressions.Add.numeric$lzycompute(arithmetic.scala:166)
 at org.apache.spark.sql.catalyst.expressions.Add.numeric(arithmetic.scala:166)
 at 
org.apache.spark.sql.catalyst.expressions.Add.nullSafeEval(arithmetic.scala:172)
 at 
org.apache.spark.sql.catalyst.expressions.BinaryExpression.eval(Expression.scala:486)
 at 
org.apache.spark.sql.catalyst.expressions.InterpretedMutableProjection.apply(Projection.scala:92)
 at 
org.apache.spark.sql.catalyst.expressions.InterpretedMutableProjection.apply(Projection.scala:66)
 at 
org.apache.spark.sql.execution.command.mutation.merge.MergeProjection.apply(MergeProjection.scala:54)
 at 
org.apache.spark.sql.execution.command.mutation.merge.CarbonMergeDataSetCommand$$anonfun$processIUD$1$$anon$1$$anonfun$next$1.apply(CarbonMergeDataSetCommand.scala:341)
 at 
org.apache.spark.sql.execution.command.mutation.merge.CarbonMergeDataSetCommand$$anonfun$processIUD$1$$anon$1$$anonfun$next$1.apply(CarbonMergeDataSetCommand.scala:338)
 at scala.collection.immutable.List.foreach(List.scala:392)
 at 
org.apache.spark.sql.execution.command.mutation.merge.CarbonMergeDataSetCommand$$anonfun$processIUD$1$$anon$1.next(CarbonMergeDataSetCommand.scala:338)
 at 
org.apache.spark.sql.execution.command.mutation.merge.CarbonMergeDataSetCommand$$anonfun$processIUD$1$$anon$1.next(CarbonMergeDataSetCommand.scala:319)
 at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:463)
 at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
 at 
org.apache.spark.sql.execution.columnar.CachedRDDBuilder$$anonfun$1$$anon$1.hasNext(InMemoryRelation.scala:125)
 at 
org.apache.spark.storage.memory.MemoryStore.putIterator(MemoryStore.scala:221)
 at 
org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:299)
 at 
org.apache.spark.storage.BlockMa
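
The trace above shows the merge projection evaluating Spark's Add expression on 
StringType operands (coming from the expression B.name+'10'); Add resolves its 
evaluator through TypeUtils.getNumeric, which only accepts NumericType, hence 
the ClassCastException. A minimal illustration of that distinction, assuming a 
local SparkSession and purely illustrative data (not taken from this report):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, concat, lit}

object AddVsConcat {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("add-vs-concat")
      .getOrCreate()
    import spark.implicits._

    // Data shaped like the row of table B that the MERGE tries to insert.
    val b = Seq((7, "name5B")).toDF("ID", "name")

    // "name + '10'" is parsed as the arithmetic Add expression, whose evaluator
    // requires NumericType operands (TypeUtils.getNumeric in the trace above).
    // String concatenation is written with concat() instead:
    b.select(col("ID"), concat(col("name"), lit("10")).as("name")).show()

    spark.stop()
  }
}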

[jira] [Created] (CARBONDATA-4128) Merge SQL command fails with different case for column name

2021-02-10 Thread Chetan Bhat (Jira)
Chetan Bhat created CARBONDATA-4128:
---

 Summary: Merge SQL command fails with different case for column 
name
 Key: CARBONDATA-4128
 URL: https://issues.apache.org/jira/browse/CARBONDATA-4128
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 2.1.0
 Environment: Spark 2.4.5
Reporter: Chetan Bhat


Steps:-

drop table if exists A;
drop table if exists B;
CREATE TABLE A(id Int, name string, description string,address string, note 
string) stored as carbondata 
tblproperties('long_string_columns'='description,note','table_blocksize'='1','SORT_SCOPE'='global_sort','table_page_size_inmb'='1');
 
insert into A select 
1,"name1","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into A select 
2,"name2","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into A select 
3,"name3","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into A select 
4,"name4","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into A select 
5,"name5","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";


CREATE TABLE B(id Int, name string, description string,address string, note 
string) stored as carbondata 
tblproperties('long_string_columns'='description,note','table_blocksize'='1','SORT_SCOPE'='global_sort','table_page_size_inmb'='1');
 
insert into B select 
1,"name1","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into B select 
2,"name2","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into B select 
3,"name3","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into B select 
6,"name4","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into B select 
7,"name5","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
--merge
MERGE INTO A USING B ON A.id=B.id WHEN MATCHED THEN DELETE;

 

Issue: the MERGE SQL command fails when the column name is written in a different case.

0: jdbc:hive2://linux-63:22550/> MERGE INTO A USING B ON A.id=B.id WHEN MATCHED 
THEN DELETE;
Error: org.apache.spark.sql.AnalysisException: == Spark Parser: 
org.apache.spark.sql.hive.FISqlParser ==

mismatched input 'MERGE' expecting {'(', 'SELECT', 'FROM', 'ADD', 'DESC', 
'EMPOWER', 'WITH', 'VALUES', 'CREATE', 'TABLE', 'INSERT', 'DELETE', 'DESCRIBE', 
'EXPLAIN', 'SHOW', 'USE', 'DROP', 'ALTER', 'MAP', 'SET', 'RESET', 'START', 
'COMMIT', 'ROLLBACK', 'REDUCE', 'REFRESH', 'CLEAR', 'CACHE', 'UNCACHE', 'DFS', 
'TRUNCATE', 'ANALYZE', 'LIST', 'REVOKE', 'GRANT', 'LOCK', 'UNLOCK', 'MSCK', 
'EXPORT', 'IMPORT', 'LOAD', 'HEALTHCHECK'}(line 1, pos 0)

== SQL ==
MERGE INTO A USING B ON A.id=B.id WHEN MATCHED THEN DELETE
^^^

== Carbon Parser: org.apache.spark.sql.parser.CarbonExtensionSpark2SqlParser ==
[1.1] failure: identifier matching regex (?i)EXPLAIN expected

MERGE INTO A USING B ON A.id=B.id WHEN MATCHED THEN DELETE
^;
== Antlr Parser: org.apache.spark.sql.parser.CarbonAntlrParser ==
org.apache.spark.sql.parser.CarbonSqlBaseParser$ValueExpressionDefaultContext 
cannot be cast to 
org.apache.spark.sql.parser.CarbonSqlBaseParser$ComparisonContext; 
(state=,code=0)
0: jdbc:hive2://linux-63:22550/>



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (CARBONDATA-4127) Merge SQL command not working with different table names

2021-02-10 Thread Chetan Bhat (Jira)
Chetan Bhat created CARBONDATA-4127:
---

 Summary: Merge SQL command not working with different table names
 Key: CARBONDATA-4127
 URL: https://issues.apache.org/jira/browse/CARBONDATA-4127
 Project: CarbonData
  Issue Type: Bug
  Components: data-query
Affects Versions: 2.1.0
 Environment: Spark 2.4.5
Reporter: Chetan Bhat


Steps:-

CREATE TABLE lsc1(id Int, name string, description string,address string, note 
string) stored as carbondata 
tblproperties('long_string_columns'='description,note','table_blocksize'='1','SORT_SCOPE'='global_sort','table_page_size_inmb'='1');
 
insert into lsc1 select 
1,"name1","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into lsc1 select 
2,"name2","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into lsc1 select 
3,"name3","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into lsc1 select 
4,"name4","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into lsc1 select 
5,"name5","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";


CREATE TABLE lsc2(id Int, name string, description string,address string, note 
string) stored as carbondata 
tblproperties('long_string_columns'='description,note','table_blocksize'='1','SORT_SCOPE'='global_sort','table_page_size_inmb'='1');
 
insert into lsc2 select 
1,"name1","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into lsc2 select 
2,"name2","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into lsc2 select 
3,"name3","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into lsc2 select 
6,"name4","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";
insert into lsc2 select 
7,"name5","asasfdfdfdsf","tutyutyuty","6867898980909099-0-0-0878676565454545465768798";

 

Issue: the MERGE command fails with a parse error.

0: jdbc:hive2://linux-63:22550/> MERGE INTO lsc1 USING lsc2 ON lsc1.ID=lsc2.ID 
WHEN MATCHED THEN DELETE;
*Error: 
org.apache.carbondata.common.exceptions.sql.MalformedCarbonCommandException: 
Parse failed! (state=,code=0)*
*0: jdbc:hive2://linux-63:22550/>*





[jira] [Created] (CARBONDATA-4126) Concurrent Compaction fails with Load on table with SI

2021-02-10 Thread Chetan Bhat (Jira)
Chetan Bhat created CARBONDATA-4126:
---

 Summary: Concurrent Compaction fails with Load on table with SI
 Key: CARBONDATA-4126
 URL: https://issues.apache.org/jira/browse/CARBONDATA-4126
 Project: CarbonData
  Issue Type: Bug
  Components: data-load
Affects Versions: 2.1.0
 Environment: Spark 2.4.5
Reporter: Chetan Bhat


[Steps] :-

Create table, load data and create SI.

create table brinjal (imei string,AMSize string,channelsId string,ActiveCountry 
string, Activecity string,gamePointId double,deviceInformationId 
double,productionDate Timestamp,deliveryDate timestamp,deliverycharge double) 
stored as carbondata TBLPROPERTIES('table_blocksize'='1');

LOAD DATA INPATH 'hdfs://hacluster/chetan/vardhandaterestruct.csv' INTO TABLE 
brinjal OPTIONS('DELIMITER'=',', 'QUOTECHAR'= 
'"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'= 
'imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge');

create index indextable1 ON TABLE brinjal (AMSize) AS 'carbondata';

 

From one terminal, load data into the table; from another terminal, concurrently 
run minor and major compaction on it for some time (a sketch of driving the two 
sessions from code follows the statements below).

LOAD DATA INPATH 'hdfs://hacluster/chetan/vardhandaterestruct.csv' INTO TABLE 
brinjal OPTIONS('DELIMITER'=',', 'QUOTECHAR'= 
'"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'= 
'imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge');

alter table brinjal compact 'minor';

alter table brinjal compact 'major';
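
A sketch of driving the load and the compactions concurrently from code, 
assuming a SparkSession already configured for CarbonData and the table and 
CSV path from the steps above (the Future wiring and names are illustrative, 
not part of this report):

import org.apache.spark.sql.SparkSession
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration

object ConcurrentLoadAndCompact {
  // Mirrors the two terminals above: one session loads while the other compacts.
  def run(spark: SparkSession): Unit = {
    val load = Future {
      spark.sql(
        """LOAD DATA INPATH 'hdfs://hacluster/chetan/vardhandaterestruct.csv' INTO TABLE brinjal
          |OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"', 'BAD_RECORDS_ACTION'='FORCE',
          |'FILEHEADER'='imei,deviceInformationId,AMSize,channelsId,ActiveCountry,Activecity,gamePointId,productionDate,deliveryDate,deliverycharge')""".stripMargin)
    }
    val compact = Future {
      spark.sql("alter table brinjal compact 'minor'")
      spark.sql("alter table brinjal compact 'major'")
    }
    Await.result(load, Duration.Inf)
    Await.result(compact, Duration.Inf)
  }
}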

 

[Expected Result]: Concurrent compaction should succeed while a load is running 
on a table with SI.

 

[Actual Issue]: Concurrent compaction fails while a load is running on a table with SI.

*0: jdbc:hive2://linux-32:22550/> alter table brinjal compact 'major';*

*Error: org.apache.spark.sql.AnalysisException: Compaction failed. Please check 
logs for more info. Exception in compaction Failed to acquire lock on segment 
2, during compaction of table test.brinjal; (state=,code=0)*





[jira] [Resolved] (CARBONDATA-4122) Support Writing Flink Stage data into Hdfs file system

2021-02-10 Thread Ajantha Bhat (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajantha Bhat resolved CARBONDATA-4122.
--
Fix Version/s: 2.2.0
   Resolution: Fixed

> Support Writing Flink Stage data into Hdfs file system
> --
>
> Key: CARBONDATA-4122
> URL: https://issues.apache.org/jira/browse/CARBONDATA-4122
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Indhumathi Muthu Murugesh
>Priority: Major
> Fix For: 2.2.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>






[GitHub] [carbondata] asfgit closed pull request #4090: [CARBONDATA-4122] Use CarbonFile API instead of java File API for Flink CarbonLocalWriter

2021-02-10 Thread GitBox


asfgit closed pull request #4090:
URL: https://github.com/apache/carbondata/pull/4090


   







[GitHub] [carbondata] ajantha-bhat commented on pull request #4090: [CARBONDATA-4122] Use CarbonFile API instead of java File API for Flink CarbonLocalWriter

2021-02-10 Thread GitBox


ajantha-bhat commented on pull request #4090:
URL: https://github.com/apache/carbondata/pull/4090#issuecomment-776719317


   LGTM







[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4087: [CARBONDATA-4125] SI compatability issue fix

2021-02-10 Thread GitBox


CarbonDataQA2 commented on pull request #4087:
URL: https://github.com/apache/carbondata/pull/4087#issuecomment-776648606


   Build Success with Spark 2.4.5, Please check CI 
http://121.244.95.60:12444/job/ApacheCarbon_PR_Builder_2.4.5/3691/
   







[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4087: [CARBONDATA-4125] SI compatability issue fix

2021-02-10 Thread GitBox


CarbonDataQA2 commented on pull request #4087:
URL: https://github.com/apache/carbondata/pull/4087#issuecomment-776646101


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12444/job/ApacheCarbonPRBuilder2.3/5452/
   







[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4081: [WIP]Secondary Index based pruning without spark query plan modification

2021-02-10 Thread GitBox


CarbonDataQA2 commented on pull request #4081:
URL: https://github.com/apache/carbondata/pull/4081#issuecomment-776642321


   Build Failed  with Spark 2.3.4, Please check CI 
http://121.244.95.60:12444/job/ApacheCarbonPRBuilder2.3/5451/
   







[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4081: [WIP]Secondary Index based pruning without spark query plan modification

2021-02-10 Thread GitBox


CarbonDataQA2 commented on pull request #4081:
URL: https://github.com/apache/carbondata/pull/4081#issuecomment-776634120


   Build Failed  with Spark 2.4.5, Please check CI 
http://121.244.95.60:12444/job/ApacheCarbon_PR_Builder_2.4.5/3690/
   







[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4090: [CARBONDATA-4122] Use CarbonFile API instead of java File API for Flink CarbonLocalWriter

2021-02-10 Thread GitBox


CarbonDataQA2 commented on pull request #4090:
URL: https://github.com/apache/carbondata/pull/4090#issuecomment-776532783


   Build Success with Spark 2.4.5, Please check CI 
http://121.244.95.60:12444/job/ApacheCarbon_PR_Builder_2.4.5/3689/
   







[GitHub] [carbondata] CarbonDataQA2 commented on pull request #4090: [CARBONDATA-4122] Use CarbonFile API instead of java File API for Flink CarbonLocalWriter

2021-02-10 Thread GitBox


CarbonDataQA2 commented on pull request #4090:
URL: https://github.com/apache/carbondata/pull/4090#issuecomment-776532399


   Build Success with Spark 2.3.4, Please check CI 
http://121.244.95.60:12444/job/ApacheCarbonPRBuilder2.3/5450/
   







[jira] [Updated] (CARBONDATA-4122) Support Writing Flink Stage data into Hdfs file system

2021-02-10 Thread Indhumathi Muthu Murugesh (Jira)


 [ 
https://issues.apache.org/jira/browse/CARBONDATA-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Indhumathi Muthu Murugesh updated CARBONDATA-4122:
--
Summary: Support Writing Flink Stage data into Hdfs file system  (was: 
Support HDFS Carbon writer for Flink Carbon Streaming)

> Support Writing Flink Stage data into Hdfs file system
> --
>
> Key: CARBONDATA-4122
> URL: https://issues.apache.org/jira/browse/CARBONDATA-4122
> Project: CarbonData
>  Issue Type: New Feature
>Reporter: Indhumathi Muthu Murugesh
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>



