[GitHub] carbondata issue #1825: [CARBONDATA-2032][DataLoad] directly write carbon da...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1825 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3579/ ---
[GitHub] carbondata issue #1825: [CARBONDATA-2032][DataLoad] directly write carbon da...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1825 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2341/ ---
[GitHub] carbondata issue #1949: [CARBONDATA2144] Optimize preaggregate table documen...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1949 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2340/ ---
[GitHub] carbondata pull request #1952: [HotFix][CheckStyle] Fix import related check...
GitHub user xuchuanyin opened a pull request: https://github.com/apache/carbondata/pull/1952

[HotFix][CheckStyle] Fix import related checkstyle

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:
- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done. Please provide details on:
  - Whether new unit test cases have been added or why no new tests are required?
  - How it is tested? Please attach test report.
  - Is it a performance related change? Please attach the performance test report.
  - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/xuchuanyin/carbondata hot_fix_checkstyle

Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/carbondata/pull/1952.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1952

commit 210437a9099b29d264f356714d23d942ecd9c20e
Author: xuchuanyin
Date: 2018-02-08T07:39:45Z

    Fix import related checkstyle

---
[GitHub] carbondata issue #1949: [CARBONDATA2144] Optimize preaggregate table documen...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1949 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3578/ ---
[GitHub] carbondata issue #1951: [CARBONDATA-1763] Dropped table if exception thrown ...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1951 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3424/ ---
[GitHub] carbondata pull request #1878: [CARBONDATA-2094] Filter DataMap Tables in Sh...
Github user xubo245 commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1878#discussion_r166848593

--- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggCreateCommand.scala ---
@@ -216,6 +216,20 @@ class TestPreAggCreateCommand extends QueryTest with BeforeAndAfterAll {
   }

   val timeSeries = TIMESERIES.toString

+  test("remove agg tables from show table command") {
+    sql("DROP TABLE IF EXISTS tbl_1")
+    sql("DROP TABLE IF EXISTS sparktable")
+    sql("create table if not exists tbl_1(imei string,age int,mac string ,prodate timestamp,update timestamp,gamepoint double,contrid double) stored by 'carbondata' ")
+    sql("create table if not exists sparktable(a int,b string)")
+    sql(
+      s"""create datamap preagg_sum on table tbl_1 using 'preaggregate' as select mac,avg(age) from tbl_1 group by mac"""
+        .stripMargin)
+    sql(
+      "create datamap agg2 on table tbl_1 using 'preaggregate' DMPROPERTIES ('timeseries" +
+      ".eventTime'='prodate', 'timeseries.hierarchy'='hour=1,day=1,month=1,year=1') as select prodate," +
--- End diff --

I fixed this in https://github.com/apache/carbondata/pull/1929; please review it @BJangir @jackylk @ravipesala @sraghunandan ---
[GitHub] carbondata issue #1825: [CARBONDATA-2032][DataLoad] directly write carbon da...
Github user xuchuanyin commented on the issue: https://github.com/apache/carbondata/pull/1825 retest this please ---
[GitHub] carbondata pull request #1878: [CARBONDATA-2094] Filter DataMap Tables in Sh...
Github user xubo245 commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1878#discussion_r166847806

--- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateDrop.scala ---
@@ -46,8 +46,9 @@ class TestPreAggregateDrop extends QueryTest with BeforeAndAfterAll {
       " a,sum(c) from maintable group by a")
     sql("drop datamap if exists preagg2 on table maintable")
     val showTables = sql("show tables")
+    val showdatamaps = sql("show datamap on table maintable")
     checkExistence(showTables, false, "maintable_preagg2")
--- End diff --

For example:

    test("dropping 1 aggregate table should not drop others") {
      sql(
        "create datamap preagg1 on table maintable using 'preaggregate' as select" +
        " a,sum(b) from maintable group by a")
      sql(
        "create datamap preagg2 on table maintable using 'preaggregate' as select" +
        " a,sum(c) from maintable group by a")
      sql("drop datamap if exists preagg2 on table maintable")
      val showTables = sql("show tables")
      checkExistence(showTables, false, "maintable_preagg2")
    }

After this change, showTables will never contain maintable_preagg2, so the test needs to be adapted to the new show tables behavior. Some other test cases need to be checked again as well. @BJangir @jackylk @chenliang613 @sraghunandan @ravipesala ---
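The behavior change under discussion is that `show tables` now hides datamap child tables such as `maintable_preagg2`. A minimal Java sketch of that filtering idea (the class and method names here are hypothetical stand-ins, not CarbonData APIs):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of hiding datamap child tables from a
// "show tables" listing, as PR #1878 does inside CarbonData.
public class ShowTablesFilter {
    // Keep only the tables that are not registered as datamap child tables.
    public static List<String> filter(List<String> allTables, Set<String> dataMapChildTables) {
        List<String> visible = new ArrayList<>();
        for (String name : allTables) {
            if (!dataMapChildTables.contains(name)) {
                visible.add(name);
            }
        }
        return visible;
    }
}
```

With such a filter in place, an existence check against the `show tables` output can no longer observe child table names, which is why the tests above need to be rewritten against a separate `show datamap` listing.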
[GitHub] carbondata issue #1878: [CARBONDATA-2094] Filter DataMap Tables in Show Tabl...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/1878 Many test cases become useless/invalid after this change to the behavior of "show tables". ---
[GitHub] carbondata pull request #1878: [CARBONDATA-2094] Filter DataMap Tables in Sh...
Github user xubo245 commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1878#discussion_r166847284

--- Diff: integration/spark-common-test/src/test/scala/org/apache/carbondata/integration/spark/testsuite/preaggregate/TestPreAggregateDrop.scala ---
@@ -46,8 +46,9 @@ class TestPreAggregateDrop extends QueryTest with BeforeAndAfterAll {
       " a,sum(c) from maintable group by a")
     sql("drop datamap if exists preagg2 on table maintable")
     val showTables = sql("show tables")
+    val showdatamaps = sql("show datamap on table maintable")
     checkExistence(showTables, false, "maintable_preagg2")
--- End diff --

This test case becomes invalid after the change to the behavior of "show tables". ---
[GitHub] carbondata issue #1941: [CARBONDATA1506] fix SDV error in PushUP_FILTER_uniq...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/1941 retest sdv please ---
[GitHub] carbondata issue #1949: [CARBONDATA2144] Optimize preaggregate table documen...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/1949 retest this please ---
[GitHub] carbondata issue #1948: [CARBONDATA-2143] Fixed query memory leak issue for ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1948 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3575/ ---
[GitHub] carbondata issue #1948: [CARBONDATA-2143] Fixed query memory leak issue for ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1948 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2337/ ---
[GitHub] carbondata issue #1808: [CARBONDATA-2023][DataLoad] Add size base block allo...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1808 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2339/ ---
[GitHub] carbondata issue #1792: [CARBONDATA-2018][DataLoad] Optimization in reading/...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1792 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2338/ ---
[GitHub] carbondata issue #1951: [CARBONDATA-1763] Dropped table if exception thrown ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1951 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2336/ ---
[GitHub] carbondata issue #1941: [CARBONDATA1506] fix SDV error in PushUP_FILTER_uniq...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1941 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3423/ ---
[GitHub] carbondata issue #1792: [CARBONDATA-2018][DataLoad] Optimization in reading/...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1792 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3576/ ---
[GitHub] carbondata issue #1808: [CARBONDATA-2023][DataLoad] Add size base block allo...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1808 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3577/ ---
[GitHub] carbondata pull request #1951: [CARBONDATA-1763] Dropped table if exception ...
Github user ravipesala commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1951#discussion_r166845879

--- Diff: integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/package.scala ---
@@ -87,8 +87,8 @@ abstract class AtomicRunnableCommand extends RunnableCommand with MetadataProcessOpeation with DataProcessOperation {
   override def run(sparkSession: SparkSession): Seq[Row] = {
-    processMetadata(sparkSession)
--- End diff --

This change may impact other commands. Better not to include it in the try block; let each individual command handle it. ---
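The control flow being debated can be sketched as: keep `processMetadata` outside the try block, and undo metadata only when the data phase fails, leaving metadata-phase failures for each concrete command to handle itself. This is a simplified Java illustration of that pattern under stated assumptions, not the actual Spark/CarbonData command classes:

```java
// Simplified stand-in for an atomic "metadata then data" command,
// illustrating the reviewed control flow: processMetadata runs outside
// the try block, and only the data phase is undone on failure.
public abstract class AtomicCommandSketch {
    protected abstract void processMetadata();
    protected abstract void processData();
    protected abstract void undoMetadata();

    public final void run() {
        // Metadata failures propagate directly; each concrete command
        // is expected to clean up its own metadata state.
        processMetadata();
        try {
            processData();
        } catch (RuntimeException e) {
            // Roll back metadata only when the data phase fails.
            undoMetadata();
            throw e;
        }
    }
}
```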
[GitHub] carbondata issue #1951: [CARBONDATA-1763] Dropped table if exception thrown ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1951 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3574/ ---
[GitHub] carbondata pull request #1935: [CARBONDATA-2134] Prevent implicit column fil...
Github user ravipesala commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1935#discussion_r166845170

--- Diff: hadoop/src/main/java/org/apache/carbondata/hadoop/api/CarbonTableInputFormat.java ---
@@ -1003,4 +1004,13 @@ public static String getTableName(Configuration configuration)
     }
     return tableName;
   }
+
+  /**
+   * Method to remove InExpression node from filter expression
+   *
+   * @param expression
+   */
+  public void removeInExpressionFromFilterExpression(Expression expression) {
--- End diff --

This method does not belong here. Better to do it in the scan RDD only. ---
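The method under review walks a filter expression tree and removes `InExpression` nodes from it. A self-contained Java sketch of that kind of tree pruning (the `Expr` class here is an invented stand-in, not CarbonData's `Expression` API):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative expression tree: composite nodes over leaf predicates,
// with a marker for "IN" predicates that should be pruned out.
public class InExpressionPruner {
    public static class Expr {
        final String name;
        final boolean isIn;
        final List<Expr> children = new ArrayList<>();
        public Expr(String name, boolean isIn) { this.name = name; this.isIn = isIn; }
    }

    // Remove every IN node from the tree; returns the pruned tree,
    // or null if the root itself is an IN node.
    public static Expr prune(Expr node) {
        if (node == null || node.isIn) return null;
        List<Expr> kept = new ArrayList<>();
        for (Expr child : node.children) {
            Expr pruned = prune(child);
            if (pruned != null) kept.add(pruned);
        }
        node.children.clear();
        node.children.addAll(kept);
        return node;
    }
}
```

The reviewer's point is about placement, not mechanics: traversal logic like this belongs next to the scan that consumes the filter, not in the input format.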
[jira] [Resolved] (CARBONDATA-2131) Alter table adding long datatype is failing but Create table with long type is successful, in Spark 2.1
[ https://issues.apache.org/jira/browse/CARBONDATA-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Manish Gupta resolved CARBONDATA-2131.
Resolution: Fixed
Fix Version/s: 1.3.0

> Alter table adding long datatype is failing but Create table with long type is successful, in Spark 2.1
> Key: CARBONDATA-2131
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2131
> Project: CarbonData
> Issue Type: Bug
> Reporter: dhatchayani
> Assignee: dhatchayani
> Priority: Minor
> Fix For: 1.3.0
> Time Spent: 2h 10m
> Remaining Estimate: 0h
>
> create table test4(a1 int) stored by 'carbondata';
> +---------+
> | Result  |
> +---------+
> +---------+
> No rows selected (1.757 seconds)
>
> alter table test4 add columns (a6 long);
> Error: java.lang.RuntimeException:
> BaseSqlParser
> == Parse1 ==
> Operation not allowed: alter table add columns(line 1, pos 0)
>
> == SQL ==
> alter table test4 add columns (a6 long)
> ^^^
>
> == Parse2 ==
> [1.35] failure: identifier matching regex (?i)VARCHAR expected
> alter table test4 add columns (a6 long)
>                                   ^
> CarbonSqlParser [1.35] failure: identifier matching regex (?i)VARCHAR expected
> alter table test4 add columns (a6 long)
>                                   ^ (state=,code=0)

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] carbondata issue #1808: [CARBONDATA-2023][DataLoad] Add size base block allo...
Github user xuchuanyin commented on the issue: https://github.com/apache/carbondata/pull/1808 retest this please ---
[GitHub] carbondata pull request #1932: [CARBONDATA-2131] Alter table adding long dat...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/1932 ---
[GitHub] carbondata issue #1792: [CARBONDATA-2018][DataLoad] Optimization in reading/...
Github user xuchuanyin commented on the issue: https://github.com/apache/carbondata/pull/1792 retest this please ---
[GitHub] carbondata issue #1932: [CARBONDATA-2131] Alter table adding long datatype i...
Github user manishgupta88 commented on the issue: https://github.com/apache/carbondata/pull/1932 LGTM ---
[GitHub] carbondata pull request #1932: [CARBONDATA-2131] Alter table adding long dat...
Github user manishgupta88 commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1932#discussion_r165971754

--- Diff: integration/spark2/src/test/scala/org/apache/spark/carbondata/restructure/AlterTableValidationTestCase.scala ---
@@ -128,6 +128,24 @@ class AlterTableValidationTestCase extends Spark2QueryTest with BeforeAndAfterAll {
       Row(new BigDecimal("123.45").setScale(2, RoundingMode.HALF_UP)))
   }

+  test("test add long column before load") {
+    sql("drop table if exists alterLong")
+    sql("create table alterLong (name string) stored by 'carbondata'")
+    sql("alter table alterLong add columns(newCol long)")
+    sql("insert into alterLong select 'a',6")
+    checkAnswer(sql("select * from alterLong"), Row("a", 6))
+    sql("drop table if exists alterLong")
+  }
+
+  test("test add long column after load") {
+    sql("drop table if exists alterLong1")
+    sql("create table alterLong1 (name string) stored by 'carbondata'")
+    sql("insert into alterLong1 select 'a'")
+    sql("alter table alterLong1 add columns(newCol long)")
--- End diff --

do one more load after alter ---
[GitHub] carbondata pull request #1948: [CARBONDATA-2143] Fixed query memory leak iss...
Github user manishgupta88 commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1948#discussion_r166842293

--- Diff: core/src/main/java/org/apache/carbondata/core/scan/executor/impl/AbstractQueryExecutor.java ---
@@ -586,16 +586,27 @@ private int getKeySize(List queryDimension,
    */
   @Override
   public void finish() throws QueryExecutionException {
     CarbonUtil.clearBlockCache(queryProperties.dataBlocks);
+    UnsafeMemoryManager.INSTANCE.freeMemoryAll(ThreadLocalTaskInfo.getCarbonTaskInfo().getTaskId());
--- End diff --

ok ---
[jira] [Created] (CARBONDATA-2146) Preaggregate table is not dropped from metastore if creation fails
Kunal Kapoor created CARBONDATA-2146:
Summary: Preaggregate table is not dropped from metastore if creation fails
Key: CARBONDATA-2146
URL: https://issues.apache.org/jira/browse/CARBONDATA-2146
Project: CarbonData
Issue Type: Bug
Affects Versions: 1.3.0
Reporter: Kunal Kapoor
Assignee: Kunal Kapoor

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] carbondata issue #1950: [CARBONDATA-2145] Refactored PreAggregate functional...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1950 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3422/ ---
[GitHub] carbondata pull request #1951: [CARBONDATA-1763] Dropped table if exception ...
GitHub user kunal642 opened a pull request: https://github.com/apache/carbondata/pull/1951

[CARBONDATA-1763] Dropped table if exception thrown while creation

The pre-aggregate table is not getting dropped when creation fails because:
1. Exceptions from undo metadata are not handled.
2. If the pre-aggregate table is not registered with the main table (the main table update fails), it is not dropped from the metastore.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:
- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done. Please provide details on:
  - Whether new unit test cases have been added or why no new tests are required?
  - How it is tested? Please attach test report.
  - Is it a performance related change? Please attach the performance test report.
  - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/kunal642/carbondata drop_fix

Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/carbondata/pull/1951.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1951

commit d9628fc31c02dce51dabb8a329626f489b431358
Author: kunal642
Date: 2018-02-08T06:20:23Z

    dropped table if exception thrown while creation

---
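The fix described in this PR amounts to a cleanup-on-failure pattern: if registering the pre-aggregate table with the main table throws, the partially created table is dropped from the metastore instead of being left behind as an orphan. A hedged Java sketch of that pattern (the `Metastore` interface is invented for illustration and is not a CarbonData API):

```java
// Illustrative cleanup-on-failure pattern for child table creation:
// if registration with the main table fails, drop the child table so
// the metastore is not left with an orphan entry.
public class ChildTableCreator {
    public interface Metastore {
        void createTable(String name);
        void dropTable(String name);
        void registerWithMainTable(String child, String main);
    }

    public static void createChildTable(Metastore store, String child, String main) {
        store.createTable(child);
        try {
            store.registerWithMainTable(child, main);
        } catch (RuntimeException e) {
            // Undo the partial creation before propagating the failure.
            store.dropTable(child);
            throw e;
        }
    }
}
```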
[jira] [Closed] (CARBONDATA-2146) Preaggregate table is not dropped from metastore if creation fails
[ https://issues.apache.org/jira/browse/CARBONDATA-2146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kunal Kapoor closed CARBONDATA-2146.
Resolution: Duplicate

> Preaggregate table is not dropped from metastore if creation fails
> Key: CARBONDATA-2146
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2146
> Project: CarbonData
> Issue Type: Bug
> Affects Versions: 1.3.0
> Reporter: Kunal Kapoor
> Assignee: Kunal Kapoor
> Priority: Minor

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] carbondata pull request #1948: [CARBONDATA-2143] Fixed query memory leak iss...
Github user ravipesala commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1948#discussion_r166841331

--- Diff: core/src/main/java/org/apache/carbondata/core/scan/executor/impl/AbstractQueryExecutor.java ---
@@ -586,16 +586,27 @@ private int getKeySize(List queryDimension,
    */
   @Override
   public void finish() throws QueryExecutionException {
     CarbonUtil.clearBlockCache(queryProperties.dataBlocks);
+    UnsafeMemoryManager.INSTANCE.freeMemoryAll(ThreadLocalTaskInfo.getCarbonTaskInfo().getTaskId());
--- End diff --

Better to move it down to after the queryIterator is closed. ---
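The suggestion here is an ordering concern: the query iterator may still reference pages of the task's unsafe memory, so the iterator should be closed before that memory is freed. A tiny Java illustration of the safe release order (class and method names are illustrative, not the CarbonData ones):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrates the ordering asked for in the review: close the iterator
// that may still read unsafe pages first, then free the task's memory.
public class QueryFinishOrder {
    public final List<String> events = new ArrayList<>();

    void closeQueryIterator() { events.add("close-iterator"); }
    void freeTaskMemory()    { events.add("free-memory"); }

    // finish() releases resources in the safe order.
    public void finish() {
        closeQueryIterator();
        freeTaskMemory();
    }
}
```

Freeing in the opposite order risks the iterator touching memory that has already been returned to the pool, which is exactly the leak/use-after-free class of bug PR #1948 addresses.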
[GitHub] carbondata pull request #1946: [WIP] Refresh fix
Github user kunal642 closed the pull request at: https://github.com/apache/carbondata/pull/1946 ---
[GitHub] carbondata issue #1941: [CARBONDATA1506] fix SDV error in PushUP_FILTER_uniq...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/1941 retest sdv please ---
[GitHub] carbondata issue #1950: [CARBONDATA-2145] Refactored PreAggregate functional...
Github user SangeetaGulia commented on the issue: https://github.com/apache/carbondata/pull/1950 @kumarvishal09 please review. ---
[GitHub] carbondata issue #1950: [CARBONDATA-2145] Refactored PreAggregate functional...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1950 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2335/ ---
[GitHub] carbondata issue #1950: [CARBONDATA-2145] Refactored PreAggregate functional...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1950 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3573/ ---
[GitHub] carbondata issue #1947: [CARBONDATA-2119]deserialization issue for carbonloa...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1947 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3421/ ---
[GitHub] carbondata pull request #1950: [CARBONDATA-2145] Refactored PreAggregate fun...
GitHub user SangeetaGulia opened a pull request: https://github.com/apache/carbondata/pull/1950

[CARBONDATA-2145] Refactored PreAggregate functionality for dictionary include

Description: Add the count to the measure column only when the column is a dictionary column in the main table.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:
- [x] Any interfaces changed? No
- [x] Any backward compatibility impacted? No
- [x] Document update required? No
- [x] Testing done. Ran the already written unit test cases to test the functionality, and added a unit test case to check count on a string type column.
- [x] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. (N/A)

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/SangeetaGulia/incubator-carbondata refactoringPreAgg

Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/carbondata/pull/1950.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1950

commit 576eb2e8997420125e90afbf85b37ca9cf9429ef
Author: SangeetaGulia
Date: 2018-02-07T08:15:29Z

    Refactored code for encodings when dictionary_include is present

commit 71ab2208aa0a45b9504f6dd9245b23d5f996
Author: SangeetaGulia
Date: 2018-02-07T08:16:11Z

    Added test case on preAggregate for string type with count as aggregate function

---
[GitHub] carbondata issue #1947: [CARBONDATA-2119]deserialization issue for carbonloa...
Github user akashrn5 commented on the issue: https://github.com/apache/carbondata/pull/1947 retest sdv please ---
[jira] [Created] (CARBONDATA-2145) Refactor PreAggregate functionality for dictionary include.
Sangeeta Gulia created CARBONDATA-2145:
Summary: Refactor PreAggregate functionality for dictionary include.
Key: CARBONDATA-2145
URL: https://issues.apache.org/jira/browse/CARBONDATA-2145
Project: CarbonData
Issue Type: Improvement
Reporter: Sangeeta Gulia

Add the count to the measure column only when the column in the main table is a dictionary column.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] carbondata issue #1941: [CARBONDATA1506] fix SDV error in PushUP_FILTER_uniq...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1941 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3420/ ---
[GitHub] carbondata issue #1949: [CARBONDATA2144] Optimize preaggregate table documen...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1949 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3419/ ---
[GitHub] carbondata issue #1949: [CARBONDATA2144] Optimize preaggregate table documen...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1949 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2334/ ---
[GitHub] carbondata issue #1949: [CARBONDATA2144] Optimize preaggregate table documen...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1949 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3572/ ---
[GitHub] carbondata issue #1941: [CARBONDATA1506] fix SDV error in PushUP_FILTER_uniq...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/1941 retest sdv please ---
[GitHub] carbondata pull request #1949: [CARBONDATA2144] Optimize preaggregate table ...
GitHub user xubo245 opened a pull request: https://github.com/apache/carbondata/pull/1949

[CARBONDATA2144] Optimize preaggregate table documentation, include timeseries

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:
- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done. Please provide details on:
  - Whether new unit test cases have been added or why no new tests are required?
  - How it is tested? Please attach test report.
  - Is it a performance related change? Please attach the performance test report.
  - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/xubo245/carbondata CARBONDATA2144_OptimizePreAggDoc

Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/carbondata/pull/1949.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1949

commit bfbe05614183c33e1a3810c85bcc351fb22a76d4
Author: xubo245 <601450868@...>
Date: 2018-02-08T03:28:53Z

    [CARBONDATA2144] Optimize preaggregate table documentation, include timeseries

---
[GitHub] carbondata issue #1941: [CARBONDATA1506] fix SDV error in PushUP_FILTER_uniq...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1941 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3418/ ---
[GitHub] carbondata issue #1941: [CARBONDATA1506] fix SDV error in PushUP_FILTER_uniq...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/1941 retest sdv please ---
[GitHub] carbondata issue #1941: [CARBONDATA1506] fix SDV error in PushUP_FILTER_uniq...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1941 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3417/ ---
[jira] [Created] (CARBONDATA-2144) Optimize pre-aggregate documentation
xubo245 created CARBONDATA-2144:
Summary: Optimize pre-aggregate documentation
Key: CARBONDATA-2144
URL: https://issues.apache.org/jira/browse/CARBONDATA-2144
Project: CarbonData
Issue Type: Improvement
Components: docs
Reporter: xubo245
Assignee: xubo245

Optimize pre-aggregate documentation:
* add blank space
* upper case

like: "Carbondata supports pre aggregating of data so that OLAP kind of queries can fetch data much faster.Aggregate tables are created as datamaps so that the handling is as efficient as other indexing support.Users can create as many aggregate tables they require as datamaps to improve their query performance,provided the storage requirements and loading speeds are acceptable."

For main table called sales which is defined as

    CREATE TABLE sales (
      order_time timestamp,
      user_id string,
      sex string,
      country string,
      quantity int,
      price bigint)
    STORED BY 'carbondata')

need to

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] carbondata issue #1941: [CARBONDATA1506] fix SDV error in PushUP_FILTER_uniq...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/1941 retest sdv please ---
[GitHub] carbondata issue #1928: [MINOR]Remove dependency of Java 1.8
Github user zzcclp commented on the issue: https://github.com/apache/carbondata/pull/1928 @jackylk please review, thanks. ---
[GitHub] carbondata issue #1941: [CARBONDATA1506] fix SDV error in PushUP_FILTER_uniq...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1941 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3416/ ---
[GitHub] carbondata issue #1941: [CARBONDATA1506] fix SDV error in PushUP_FILTER_uniq...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/1941 retest sdv please ---
[GitHub] carbondata issue #1867: [CARBONDATA-2055][Streaming][WIP]Support integrating...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1867 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3415/ ---
[GitHub] carbondata issue #1867: [CARBONDATA-2055][Streaming][WIP]Support integrating...
Github user zzcclp commented on the issue: https://github.com/apache/carbondata/pull/1867 retest sdv please ---
[GitHub] carbondata issue #1867: [CARBONDATA-2055][Streaming][WIP]Support integrating...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1867 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3414/ ---
[GitHub] carbondata issue #1948: [CARBONDATA-2143] Fixed query memory leak issue for ...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1948 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3413/ ---
[GitHub] carbondata issue #1937: [CARBONDATA-2137] Delete query performance improved
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1937 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3412/ ---
[GitHub] carbondata issue #1947: [CARBONDATA-2119]deserialization issue for carbonloa...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1947 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3411/ ---
[GitHub] carbondata issue #1946: [WIP] Refresh fix
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1946 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3410/ ---
[GitHub] carbondata issue #1946: [WIP] Refresh fix
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1946 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3409/ ---
[GitHub] carbondata issue #1945: [HOTFIX] Fix documentation errors.Add examples for p...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1945 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3408/ ---
[GitHub] carbondata issue #1945: [HOTFIX] Fix documentation errors.Add examples for p...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1945 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3407/ ---
[GitHub] carbondata issue #1942: [CARBONDATA-2136] Fixed bug related to data load for...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1942 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3406/ ---
[GitHub] carbondata issue #1904: [CARBONDATA-2059] - Changes to support compaction fo...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1904 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3405/ ---
[GitHub] carbondata issue #1941: [CARBONDATA1506] fix SDV error in PushUP_FILTER_uniq...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1941 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3404/ ---
[GitHub] carbondata issue #1904: [CARBONDATA-2059] - Changes to support compaction fo...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1904 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3403/ ---
[GitHub] carbondata issue #1941: [CARBONDATA1506] fix SDV error in PushUP_FILTER_uniq...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1941 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3402/ ---
[GitHub] carbondata issue #1867: [CARBONDATA-2055][Streaming][WIP] Support integrating...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1867 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2333/ ---
[GitHub] carbondata issue #1867: [CARBONDATA-2055][Streaming][WIP] Support integrating...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1867 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3571/ ---
[GitHub] carbondata issue #1948: [CARBONDATA-2143] Fixed query memory leak issue for ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1948 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3570/ ---
[GitHub] carbondata issue #1948: [CARBONDATA-2143] Fixed query memory leak issue for ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1948 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2332/ ---
[GitHub] carbondata issue #1943: [CARBONDATA-2142] Fixed Pre-Aggregate datamap creati...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1943 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3401/ ---
[GitHub] carbondata issue #1937: [CARBONDATA-2137] Delete query performance improved
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1937 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2331/ ---
[GitHub] carbondata issue #1937: [CARBONDATA-2137] Delete query performance improved
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1937 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3569/ ---
[jira] [Updated] (CARBONDATA-2143) Fixed query memory leak issue for task failure during initialization of record reader
[ https://issues.apache.org/jira/browse/CARBONDATA-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manish Gupta updated CARBONDATA-2143: - Description: *Problem:* Whenever a query is executed, the record reader is initialized in the internalCompute method of the CarbonScanRdd class. A task completion listener is attached to each task after the record reader has been initialized. During record reader initialization, the queryResultIterator is initialized and one blocklet is processed. The processed blocklet uses the available unsafe memory. Let's say there are 100 columns: 80 of them get space, but there is no space left to store the remaining columns in unsafe memory. This results in a memory exception, record reader initialization fails, and the query fails. In this case the unsafe memory allocated for the 80 columns is never freed and remains occupied for as long as the JVM process lives. *Impact* This is a memory leak in the system and can cause failures for queries executed after one query has failed for the above reason.
*Exception Trace*
```
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: org.apache.carbondata.core.memory.MemoryException: Not enough memory
	at org.apache.carbondata.core.scan.processor.AbstractDataBlockIterator.updateScanner(AbstractDataBlockIterator.java:136)
	at org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.next(DataBlockIteratorImpl.java:50)
	at org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.next(DataBlockIteratorImpl.java:32)
	at org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator.getBatchResult(DetailQueryResultIterator.java:49)
	at org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator.next(DetailQueryResultIterator.java:41)
	at org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator.next(DetailQueryResultIterator.java:31)
	at org.apache.carbondata.core.scan.result.iterator.ChunkRowIterator.<init>(ChunkRowIterator.java:41)
	at org.apache.carbondata.hadoop.CarbonRecordReader.initialize(CarbonRecordReader.java:84)
	at org.apache.carbondata.spark.rdd.CarbonScanRDD.internalCompute(CarbonScanRDD.scala:378)
	at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:60)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
```
was: **Problem:** Whenever a query is executed, in the internalCompute method of CarbonScanRdd class record reader is initialized. A task completion listener is attached to each task after initialization of the record reader. During record reader initialization, queryResultIterator is initialized and one blocklet is processed. The blocklet processed will use available unsafe memory. Lets say there are 100 columns and 80 columns get the space but there is no space left for the remaining columns to be stored in the unsafe memory. This will result is memory exception and record reader initialization will fail leading to failure in query. In the above case the unsafe memory allocated for 80 columns will not be freed and will always remain occupied till the JVM process persists. **Impact** It is memory leak
[jira] [Created] (CARBONDATA-2143) Fixed query memory leak issue for task failure during initialization of record reader
Manish Gupta created CARBONDATA-2143: Summary: Fixed query memory leak issue for task failure during initialization of record reader Key: CARBONDATA-2143 URL: https://issues.apache.org/jira/browse/CARBONDATA-2143 Project: CarbonData Issue Type: Bug Reporter: Manish Gupta Assignee: Manish Gupta **Problem:** Whenever a query is executed, the record reader is initialized in the internalCompute method of the CarbonScanRdd class. A task completion listener is attached to each task after the record reader has been initialized. During record reader initialization, the queryResultIterator is initialized and one blocklet is processed. The processed blocklet uses the available unsafe memory. Let's say there are 100 columns: 80 of them get space, but there is no space left to store the remaining columns in unsafe memory. This results in a memory exception, record reader initialization fails, and the query fails. In this case the unsafe memory allocated for the 80 columns is never freed and remains occupied for as long as the JVM process lives. **Impact** This is a memory leak in the system and can cause failures for queries executed after one query has failed for the above reason.
*Exception Trace*
```
java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.RuntimeException: org.apache.carbondata.core.memory.MemoryException: Not enough memory
	at org.apache.carbondata.core.scan.processor.AbstractDataBlockIterator.updateScanner(AbstractDataBlockIterator.java:136)
	at org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.next(DataBlockIteratorImpl.java:50)
	at org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.next(DataBlockIteratorImpl.java:32)
	at org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator.getBatchResult(DetailQueryResultIterator.java:49)
	at org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator.next(DetailQueryResultIterator.java:41)
	at org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator.next(DetailQueryResultIterator.java:31)
	at org.apache.carbondata.core.scan.result.iterator.ChunkRowIterator.<init>(ChunkRowIterator.java:41)
	at org.apache.carbondata.hadoop.CarbonRecordReader.initialize(CarbonRecordReader.java:84)
	at org.apache.carbondata.spark.rdd.CarbonScanRDD.internalCompute(CarbonScanRDD.scala:378)
	at org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:60)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
```
-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[GitHub] carbondata pull request #1948: [CARBONDATA-2143] Fixed query memory leak iss...
GitHub user manishgupta88 opened a pull request: https://github.com/apache/carbondata/pull/1948 [CARBONDATA-2143] Fixed query memory leak issue for task failure during initialization of record reader **Problem:** Whenever a query is executed, the record reader is initialized in the internalCompute method of the CarbonScanRdd class. A task completion listener is attached to each task after the record reader has been initialized. During record reader initialization, the queryResultIterator is initialized and one blocklet is processed. The processed blocklet uses the available unsafe memory. Let's say there are 100 columns: 80 of them get space, but there is no space left to store the remaining columns in unsafe memory. This results in a memory exception, record reader initialization fails, and the query fails. In this case the unsafe memory allocated for the 80 columns is never freed and remains occupied for as long as the JVM process lives. **Impact** This is a memory leak in the system and can cause failures for queries executed after one query has failed for the above reason. **Solution:** Attach the task completion listener before record reader initialization, so that even if the query fails at the very first step after using unsafe memory, that memory is still cleared. - [ ] Any interfaces changed? No - [ ] Any backward compatibility impacted? No - [ ] Document update required? No - [ ] Testing done Manually tested - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. 
NA You can merge this pull request into a Git repository by running: $ git pull https://github.com/manishgupta88/carbondata query_memory_leak_fix Alternatively you can review and apply these changes as the patch at: https://github.com/apache/carbondata/pull/1948.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1948 commit 2575db6757b21881fe8d7c706f6283f561f43555 Author: m00258959 Date: 2018-02-07T06:37:33Z Fixed memory leak issue. In case of any task failure, the unsafe memory for that task is not getting cleared from the executor if the task fails during initialization of the record reader ---
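The ordering fix described in #1948 (register the cleanup listener before initializing the record reader) can be sketched in plain Java. This is an illustrative simulation, not CarbonData's or Spark's actual API: `MemoryTracker`, `FailingReader`, and the listener list are hypothetical stand-ins for the unsafe memory manager, the record reader, and Spark's task completion listeners.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the memory-leak fix: register the cleanup callback *before*
// initializing the record reader, so memory allocated during a failed
// initialization is still freed. All names here are hypothetical.
public class CleanupOrderingSketch {

    /** Stand-in for the unsafe memory manager: tracks allocated blocks for one task. */
    static class MemoryTracker {
        final List<String> allocated = new ArrayList<>();
        void allocate(String block) { allocated.add(block); }
        void freeAll() { allocated.clear(); }
    }

    /** Stand-in reader whose initialize() allocates memory and then fails. */
    static class FailingReader {
        void initialize(MemoryTracker tracker) {
            tracker.allocate("column-batch-1"); // memory grabbed before the failure
            throw new RuntimeException("Not enough memory");
        }
    }

    /** Buggy order: the listener is attached only after initialize() succeeds. */
    static boolean runBuggy(MemoryTracker tracker) {
        List<Runnable> taskCompletionListeners = new ArrayList<>();
        try {
            new FailingReader().initialize(tracker);
            taskCompletionListeners.add(tracker::freeAll); // never reached
        } catch (RuntimeException e) {
            // task fails; only listeners registered so far will run
        }
        taskCompletionListeners.forEach(Runnable::run);
        return tracker.allocated.isEmpty(); // false: the allocation leaked
    }

    /** Fixed order: the listener is attached before initialize(). */
    static boolean runFixed(MemoryTracker tracker) {
        List<Runnable> taskCompletionListeners = new ArrayList<>();
        taskCompletionListeners.add(tracker::freeAll); // registered up front
        try {
            new FailingReader().initialize(tracker);
        } catch (RuntimeException e) {
            // task fails, but the cleanup listener is already registered
        }
        taskCompletionListeners.forEach(Runnable::run);
        return tracker.allocated.isEmpty(); // true: memory reclaimed
    }

    public static void main(String[] args) {
        System.out.println("buggy order leaks: " + !runBuggy(new MemoryTracker()));
        System.out.println("fixed order leaks: " + !runFixed(new MemoryTracker()));
    }
}
```

The only difference between the two methods is where the listener is added; that one-line move is what guarantees cleanup even when initialization itself throws.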
[GitHub] carbondata issue #1942: [CARBONDATA-2136] Fixed bug related to data load for...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1942 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3400/ ---
[GitHub] carbondata issue #1947: [CARBONDATA-2119]deserialization issue for carbonloa...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1947 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2330/ ---
[GitHub] carbondata issue #1947: [CARBONDATA-2119]deserialization issue for carbonloa...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1947 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3568/ ---
[GitHub] carbondata issue #1946: [WIP] Refresh fix
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1946 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2329/ ---
[GitHub] carbondata issue #1946: [WIP] Refresh fix
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1946 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3567/ ---
[GitHub] carbondata issue #1942: [CARBONDATA-2136] Fixed bug related to data load for...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1942 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3399/ ---
[GitHub] carbondata pull request #1947: [CARBONDATA-2119]deserialization issue for ca...
GitHub user akashrn5 opened a pull request: https://github.com/apache/carbondata/pull/1947 [CARBONDATA-2119]deserialization issue for carbonloadmodel Problem: The load model was not getting deserialized on the executor, due to which two different carbon table objects were being created. Solution: Reconstruct the carbonTable from tableInfo if it has not already been created. Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily: - [x] Any interfaces changed? NA - [x] Any backward compatibility impacted? NA - [x] Document update required? NA - [x] Testing done Please provide details on - Whether new unit test cases have been added or why no new tests are required? - How it is tested? Please attach test report. - Is it a performance related change? Please attach the performance test report. - Any additional information to help reviewers in testing this change. - [x] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. You can merge this pull request into a Git repository by running: $ git pull https://github.com/akashrn5/incubator-carbondata dat Alternatively you can review and apply these changes as the patch at: https://github.com/apache/carbondata/pull/1947.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1947 commit 62a33b4245ccf3999a8d94121b0f15b37a711a76 Author: akashrn5 Date: 2018-02-07T13:14:33Z deserialization issue for carbonloadmodel ---
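The "reconstruct from tableInfo if not already created" fix in #1947 follows a common lazy-reinitialization pattern for transient fields after Java deserialization. A minimal sketch, with illustrative names rather than CarbonData's actual classes:

```java
import java.io.Serializable;

// Sketch of the deserialization fix: the heavy table object is transient,
// so it arrives null on the executor after deserialization. Rebuilding it
// once from the serializable tableInfo (instead of letting each caller
// build its own copy) ensures every caller sees the same instance.
// All names here are hypothetical.
public class LoadModelSketch implements Serializable {
    private final String tableInfo;       // serializable description of the table
    private transient Object carbonTable; // heavy runtime object; not serialized

    public LoadModelSketch(String tableInfo) {
        this.tableInfo = tableInfo;
    }

    /** Rebuild the table from tableInfo only if it has not been created yet. */
    public synchronized Object getOrCreateTable() {
        if (carbonTable == null) {
            carbonTable = buildFromTableInfo(tableInfo);
        }
        return carbonTable; // same instance for every caller on this executor
    }

    private static Object buildFromTableInfo(String info) {
        // Placeholder for the real table construction from its metadata.
        return new Object() {
            @Override public String toString() { return "table:" + info; }
        };
    }
}
```

Without the null check, two call sites deserializing and rebuilding independently would each hold a distinct table object, which is the mismatch the PR describes.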
[GitHub] carbondata issue #1946: [WIP] Refresh fix
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1946 Build Failed with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2328/ ---
[GitHub] carbondata issue #1946: [WIP] Refresh fix
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1946 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3566/ ---
[GitHub] carbondata pull request #1946: [WIP] Refresh fix
GitHub user kunal642 opened a pull request: https://github.com/apache/carbondata/pull/1946 [WIP] Refresh fix Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily: - [ ] Any interfaces changed? - [ ] Any backward compatibility impacted? - [ ] Document update required? - [ ] Testing done Please provide details on - Whether new unit test cases have been added or why no new tests are required? - How it is tested? Please attach test report. - Is it a performance related change? Please attach the performance test report. - Any additional information to help reviewers in testing this change. - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA. You can merge this pull request into a Git repository by running: $ git pull https://github.com/kunal642/carbondata refresh_fix Alternatively you can review and apply these changes as the patch at: https://github.com/apache/carbondata/pull/1946.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1946 commit cb9fca5063b9a7882d09760ef777a9926ddffea0 Author: kunal642 Date: 2018-02-07T06:46:14Z refresh fix ---
[GitHub] carbondata issue #1904: [CARBONDATA-2059] - Changes to support compaction fo...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1904 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/3398/ ---
[GitHub] carbondata issue #1945: [HOTFIX] Fix documentation errors. Add examples for p...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1945 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2327/ ---
[GitHub] carbondata issue #1942: [CARBONDATA-2136] Fixed bug related to data load for...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1942 Build Success with Spark 2.2.1, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2326/ ---
[GitHub] carbondata issue #1945: [HOTFIX] Fix documentation errors. Add examples for p...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1945 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3565/ ---
[GitHub] carbondata issue #1942: [CARBONDATA-2136] Fixed bug related to data load for...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1942 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/3563/ ---
[GitHub] carbondata pull request #1944: [Documentation] Added a FAQ for executor retu...
Github user sraghunandan commented on a diff in the pull request: https://github.com/apache/carbondata/pull/1944#discussion_r166603209 --- Diff: docs/faq.md --- @@ -178,4 +179,9 @@ create datamap ag1 on table gdp21 using 'preaggregate' as select cntry, sum(gdp) select cntry,sum(gdp) from gdp21,pop1 where cntry=ctry group by cntry; ``` +## Why all executors are returning success even after some of the query failed? --- End diff -- language is not correct. its not a query, it a command.We need to say why executor does not retry and why the signal sent to driver is not interpreted as failed ---