[GitHub] carbondata issue #1291: [CARBONDATA-1343] Hive can't query data when the car...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1291 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/906/ ---
[GitHub] carbondata issue #1291: [CARBONDATA-1343] Hive can't query data when the car...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1291 Build Success with Spark 1.6, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/150/ ---
[GitHub] carbondata issue #1291: [CARBONDATA-1343] Hive can't query data when the car...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1291 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/274/ ---
[GitHub] carbondata issue #1372: [WIP] Support object storage by S3 interface
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1372 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/273/ ---
[GitHub] carbondata issue #1372: [WIP] Support object storage by S3 interface
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1372 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/905/ ---
[GitHub] carbondata issue #1372: [WIP] Support object storage by S3 interface
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1372 Build Success with Spark 1.6, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/149/ ---
[GitHub] carbondata issue #1367: [CARBONDATA-1398] Support query from specified segme...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1367 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/904/ ---
[GitHub] carbondata issue #1367: [CARBONDATA-1398] Support query from specified segme...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1367 Build Success with Spark 1.6, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/148/ ---
[GitHub] carbondata issue #1367: [CARBONDATA-1398] Support query from specified segme...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1367 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/272/ ---
[GitHub] carbondata issue #1367: [CARBONDATA-1398] Support query from specified segme...
Github user rahulforallp commented on the issue: https://github.com/apache/carbondata/pull/1367 retest this please ---
[GitHub] carbondata pull request #1367: [CARBONDATA-1398] Support query from specifie...
Github user rahulforallp closed the pull request at: https://github.com/apache/carbondata/pull/1367 ---
[GitHub] carbondata pull request #1367: [CARBONDATA-1398] Support query from specifie...
GitHub user rahulforallp reopened a pull request: https://github.com/apache/carbondata/pull/1367 [CARBONDATA-1398] Support query from specified segments **1. Objective:** Support query from specified segments. **2. Proposed Solution:** A new property will be introduced to set the segment ids. The user sets the property (carbon.input.segments.<database_name>.<table_name>) to specify segment ids. During CarbonScan, data will be read from the specified segments only. If the property is not set, all segments will be considered (the default behavior). **3. Syntax Used:** **To show all the segments.** It will display one new column at the end that shows the new segment id of a compacted segment. > Syntax : SHOW SEGMENTS FOR TABLE <table_name>; `e.g. => show segments for table carbon_table;` **To set the segment ids.** The segment ids to query can be set via the new property carbon.input.segments. The following syntax can be used to set segment ids from the client (BEELINE): > Syntax : SET carbon.input.segments.<database_name>.<table_name> = <list of segment ids>; `e.g => SET carbon.input.segments.default.carbontable=1,4,5;` **To reset the segment ids.** The above property can be restored to the default behavior as follows. The following syntax can be used to reset segment ids from the client (BEELINE): > Syntax : SET carbon.input.segments.<database_name>.<table_name> = *; `e.g => SET carbon.input.segments.default.carbontable=*` **To reset all properties.** RESET resets all properties to their default values, so it is recommended only when you want to reset all the different properties at once.
> Syntax : RESET; `e.g => reset;` You can merge this pull request into a Git repository by running: $ git pull https://github.com/rahulforallp/incubator-carbondata CARBONDATA-1398 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/carbondata/pull/1367.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1367 commit a630de7de54342c83ee2f294653ca7e285efb7c0 Author: rahulforallp Date: 2017-09-14T13:14:09Z [CARBONDATA-1398] support query from specified segments ---
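The property-driven segment filtering described above can be sketched as follows. This is a minimal illustration only: `SegmentFilter` and `segmentsToScan` are hypothetical names, showing how a value like `1,4,5` or `*` for `carbon.input.segments.<database_name>.<table_name>` might be interpreted, not CarbonData's actual CarbonScan integration.

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

/** Hypothetical helper: interprets a carbon.input.segments.<db>.<table> value. */
public class SegmentFilter {

    /** Returns the segment ids to scan; "*" (or an unset property) means all segments. */
    public static List<String> segmentsToScan(String propertyValue, List<String> allSegments) {
        if (propertyValue == null || propertyValue.trim().equals("*")) {
            return allSegments;                       // default behavior: read every segment
        }
        List<String> requested = Arrays.stream(propertyValue.split(","))
            .map(String::trim)
            .filter(s -> !s.isEmpty())
            .collect(Collectors.toList());
        // keep only the requested ids that actually exist in the table
        return allSegments.stream()
            .filter(requested::contains)
            .collect(Collectors.toList());
    }
}
```

With segments 0 through 5 loaded and the property set to `1,4,5`, only those three segments would be scanned; setting the property back to `*` restores the default of scanning everything.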
[GitHub] carbondata issue #1361: [CARBONDATA-1481] Compaction support global sort
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1361 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/903/ ---
[GitHub] carbondata issue #1361: [CARBONDATA-1481] Compaction support global sort
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1361 Build Success with Spark 1.6, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/147/ ---
[GitHub] carbondata issue #1361: [CARBONDATA-1481] Compaction support global sort
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1361 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/271/ ---
[GitHub] carbondata issue #1291: [CARBONDATA-1343] Hive can't query data when the car...
Github user anubhav100 commented on the issue: https://github.com/apache/carbondata/pull/1291 @cenyuhai I tried running your pr and got this error Caused by: java.lang.NullPointerException at org.apache.carbondata.hive.MapredCarbonInputFormat.populateCarbonTable(MapredCarbonInputFormat.java:106) at org.apache.carbondata.hive.MapredCarbonInputFormat.getSplits(MapredCarbonInputFormat.java:61) at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextSplits(FetchOperator.java:362) at org.apache.hadoop.hive.ql.exec.FetchOperator.getRecordReader(FetchOperator.java:294) at org.apache.hadoop.hive.ql.exec.FetchOperator.getNextRow(FetchOperator.java:445) ... 29 more ---
[jira] [Resolved] (CARBONDATA-1509) Fixed bug for maintaining compatibility of decimal type with older releases of Carbondata
[ https://issues.apache.org/jira/browse/CARBONDATA-1509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravindra Pesala resolved CARBONDATA-1509. - Resolution: Fixed > Fixed bug for maintaining compatibility of decimal type with older releases > of Carbondata > - > > Key: CARBONDATA-1509 > URL: https://issues.apache.org/jira/browse/CARBONDATA-1509 > Project: CarbonData > Issue Type: Bug >Reporter: Manish Gupta >Assignee: Manish Gupta > Fix For: 1.2.0 > > Time Spent: 1h > Remaining Estimate: 0h > > In old Carbondata releases, precision and scale is not stored for decimal > data type and both values are initialized to -1. In TableSpec.ColumnSpec > default values for precision and scale are initialized to 0 because of which > exception is thrown while reading the old store with decimal column. > Exception trace > Caused by: java.lang.ClassCastException: [B cannot be cast to java.lang.Long > at > org.apache.carbondata.core.metadata.datatype.DecimalConverterFactory$DecimalIntConverter.getDecimal(DecimalConverterFactory.java:98) > at > org.apache.carbondata.core.datastore.page.UnsafeDecimalColumnPage.getDecimal(UnsafeDecimalColumnPage.java:260) > at > org.apache.carbondata.core.datastore.page.LazyColumnPage.getDecimal(LazyColumnPage.java:111) > at > org.apache.carbondata.core.scan.result.vector.MeasureDataVectorProcessor$DecimalMeasureVectorFiller.fillMeasureVector(MeasureDataVectorProcessor.java:215) > at > org.apache.carbondata.core.scan.result.AbstractScannedResult.fillColumnarMeasureBatch(AbstractScannedResult.java:257) > at > org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.scanAndFillResult(DictionaryBasedVectorResultCollector.java:164) > at > org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.collectVectorBatch(DictionaryBasedVectorResultCollector.java:155) > at > 
org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.processNextBatch(DataBlockIteratorImpl.java:65) > at > org.apache.carbondata.core.scan.result.iterator.VectorDetailQueryResultIterator.processNextBatch(VectorDetailQueryResultIterator.java:46) > at > org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.nextBatch(VectorizedCarbonRecordReader.java:258) > at > org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.nextKeyValue(VectorizedCarbonRecordReader.java:145) > at > org.apache.carbondata.spark.rdd.CarbonScanRDD$$anon$1.hasNext(CarbonScanRDD.scala:246) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] carbondata pull request #1378: [CARBONDATA-1509] Fixed bug for maintaining c...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/1378 ---
[GitHub] carbondata issue #1378: [CARBONDATA-1509] Fixed bug for maintaining compatib...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1378 LGTM ---
[GitHub] carbondata issue #1372: [WIP] Support object storage by S3 interface
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1372 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/270/ ---
[GitHub] carbondata issue #1372: [WIP] Support object storage by S3 interface
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1372 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/902/ ---
[GitHub] carbondata issue #1372: [WIP] Support object storage by S3 interface
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1372 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/901/ ---
[GitHub] carbondata issue #1350: [CARBONDATA-1475] fix default maven dependencies for...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1350 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/900/ ---
[GitHub] carbondata issue #1372: [WIP] Support object storage by S3 interface
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1372 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/269/ ---
[GitHub] carbondata issue #1372: [WIP] Support object storage by S3 interface
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1372 Build Failed with Spark 1.6, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/145/ ---
[GitHub] carbondata issue #1378: [CARBONDATA-1509] Fixed bug for maintaining compatib...
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1378 SDV Build Success , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/899/ ---
[GitHub] carbondata issue #1350: [CARBONDATA-1475] fix default maven dependencies for...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1350 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/268/ ---
[GitHub] carbondata issue #1350: [CARBONDATA-1475] fix default maven dependencies for...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1350 Build Success with Spark 1.6, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/144/ ---
[jira] [Created] (CARBONDATA-1510) Load, query, filter, NULL values, UDFs, Describe support
Rahul Kumar created CARBONDATA-1510: --- Summary: Load, query, filter, NULL values, UDFs, Describe support Key: CARBONDATA-1510 URL: https://issues.apache.org/jira/browse/CARBONDATA-1510 Project: CarbonData Issue Type: Sub-task Reporter: Rahul Kumar Assignee: Rahul Kumar Implementation is in place; test cases and bug fixes need to be added -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (CARBONDATA-1494) Load, query, filter, NULL values, UDFs, Describe support
[ https://issues.apache.org/jira/browse/CARBONDATA-1494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rahul Kumar reassigned CARBONDATA-1494: --- Assignee: Rahul Kumar > Load, query, filter, NULL values, UDFs, Describe support > > > Key: CARBONDATA-1494 > URL: https://issues.apache.org/jira/browse/CARBONDATA-1494 > Project: CarbonData > Issue Type: Sub-task > Components: core, sql >Reporter: Venkata Ramana G >Assignee: Rahul Kumar > Fix For: 1.3.0 > > > Implementation is in place; test cases and bug fixes need to be added -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] carbondata issue #1378: [CARBONDATA-1509] Fixed bug for maintaining compatib...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1378 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/267/ ---
[GitHub] carbondata issue #1378: [CARBONDATA-1509] Fixed bug for maintaining compatib...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1378 Build Success with Spark 1.6, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/143/ ---
[GitHub] carbondata issue #1367: [CARBONDATA-1398] Support query from specified segme...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1367 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/266/ ---
[GitHub] carbondata issue #1367: [CARBONDATA-1398] Support query from specified segme...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/1367 Build Failed with Spark 1.6, Please check CI http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/142/ ---
[jira] [Created] (CARBONDATA-1508) Fixed bug for maintaining compatibility of decimal type with older releases of Carbondata
Manish Gupta created CARBONDATA-1508: Summary: Fixed bug for maintaining compatibility of decimal type with older releases of Carbondata Key: CARBONDATA-1508 URL: https://issues.apache.org/jira/browse/CARBONDATA-1508 Project: CarbonData Issue Type: Bug Reporter: Manish Gupta Assignee: Manish Gupta Fix For: 1.2.0 In old Carbondata releases, precision and scale is not stored for decimal data type and both values are initialized to -1. In TableSpec.ColumnSpec default values for precision and scale are initialized to 0 because of which exception is thrown while reading the old store with decimal column. Exception trace Caused by: java.lang.ClassCastException: [B cannot be cast to java.lang.Long at org.apache.carbondata.core.metadata.datatype.DecimalConverterFactory$DecimalIntConverter.getDecimal(DecimalConverterFactory.java:98) at org.apache.carbondata.core.datastore.page.UnsafeDecimalColumnPage.getDecimal(UnsafeDecimalColumnPage.java:260) at org.apache.carbondata.core.datastore.page.LazyColumnPage.getDecimal(LazyColumnPage.java:111) at org.apache.carbondata.core.scan.result.vector.MeasureDataVectorProcessor$DecimalMeasureVectorFiller.fillMeasureVector(MeasureDataVectorProcessor.java:215) at org.apache.carbondata.core.scan.result.AbstractScannedResult.fillColumnarMeasureBatch(AbstractScannedResult.java:257) at org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.scanAndFillResult(DictionaryBasedVectorResultCollector.java:164) at org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.collectVectorBatch(DictionaryBasedVectorResultCollector.java:155) at org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.processNextBatch(DataBlockIteratorImpl.java:65) at org.apache.carbondata.core.scan.result.iterator.VectorDetailQueryResultIterator.processNextBatch(VectorDetailQueryResultIterator.java:46) at 
org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.nextBatch(VectorizedCarbonRecordReader.java:258) at org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.nextKeyValue(VectorizedCarbonRecordReader.java:145) at org.apache.carbondata.spark.rdd.CarbonScanRDD$$anon$1.hasNext(CarbonScanRDD.scala:246) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] carbondata pull request #1378: [CARBONDATA-1509] Fixed bug for maintaining c...
GitHub user manishgupta88 opened a pull request: https://github.com/apache/carbondata/pull/1378 [CARBONDATA-1509] Fixed bug for maintaining compatibility of decimal type with older releases of Carbondata In old Carbondata releases, precision and scale is not stored for decimal data type and both values are initialized to -1. In TableSpec.ColumnSpec default values for precision and scale are initialized to 0 because of which exception is thrown while reading the old store with decimal column. Both precision and scale should be initialized to -1. You can merge this pull request into a Git repository by running: $ git pull https://github.com/manishgupta88/carbondata decimal_backward_compatibility Alternatively you can review and apply these changes as the patch at: https://github.com/apache/carbondata/pull/1378.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #1378 commit bd7c4787ae8e0e0baad6218c9c6e5c58cd99db91 Author: manishgupta88 Date: 2017-09-22T06:52:39Z In old Carbondata releases, precision and scale is not stored for decimal data type and both values are initialized to -1. In TableSpec.ColumnSpec default values for precision and scale are initialized to 0 because of which exception is thrown while reading the old store with decimal column. Both precision and scale should be initialized to -1. ---
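The compatibility issue behind this fix can be illustrated with a minimal sketch (class and field names here are hypothetical, not CarbonData's actual TableSpec API): old stores persist no precision/scale and deliver both as -1, so a reader must treat -1, not 0, as the "legacy decimal" marker when choosing a decode path.

```java
/** Hypothetical sketch of the legacy-decimal check behind CARBONDATA-1509. */
public class DecimalSpec {
    // Old Carbondata stores did not persist precision/scale; both arrive as -1.
    static final int LEGACY_UNSET = -1;

    final int precision;
    final int scale;

    DecimalSpec(int precision, int scale) {
        this.precision = precision;
        this.scale = scale;
    }

    /**
     * Defaulting these fields to 0 instead of -1 made legacy columns look like
     * valid zero-precision decimals, sending the reader down the wrong decode
     * path (hence the ClassCastException seen in DecimalIntConverter above).
     */
    boolean isLegacyStore() {
        return precision == LEGACY_UNSET && scale == LEGACY_UNSET;
    }
}
```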
[jira] [Created] (CARBONDATA-1509) Fixed bug for maintaining compatibility of decimal type with older releases of Carbondata
Manish Gupta created CARBONDATA-1509: Summary: Fixed bug for maintaining compatibility of decimal type with older releases of Carbondata Key: CARBONDATA-1509 URL: https://issues.apache.org/jira/browse/CARBONDATA-1509 Project: CarbonData Issue Type: Bug Reporter: Manish Gupta Assignee: Manish Gupta Fix For: 1.2.0 In old Carbondata releases, precision and scale is not stored for decimal data type and both values are initialized to -1. In TableSpec.ColumnSpec default values for precision and scale are initialized to 0 because of which exception is thrown while reading the old store with decimal column. Exception trace Caused by: java.lang.ClassCastException: [B cannot be cast to java.lang.Long at org.apache.carbondata.core.metadata.datatype.DecimalConverterFactory$DecimalIntConverter.getDecimal(DecimalConverterFactory.java:98) at org.apache.carbondata.core.datastore.page.UnsafeDecimalColumnPage.getDecimal(UnsafeDecimalColumnPage.java:260) at org.apache.carbondata.core.datastore.page.LazyColumnPage.getDecimal(LazyColumnPage.java:111) at org.apache.carbondata.core.scan.result.vector.MeasureDataVectorProcessor$DecimalMeasureVectorFiller.fillMeasureVector(MeasureDataVectorProcessor.java:215) at org.apache.carbondata.core.scan.result.AbstractScannedResult.fillColumnarMeasureBatch(AbstractScannedResult.java:257) at org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.scanAndFillResult(DictionaryBasedVectorResultCollector.java:164) at org.apache.carbondata.core.scan.collector.impl.DictionaryBasedVectorResultCollector.collectVectorBatch(DictionaryBasedVectorResultCollector.java:155) at org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.processNextBatch(DataBlockIteratorImpl.java:65) at org.apache.carbondata.core.scan.result.iterator.VectorDetailQueryResultIterator.processNextBatch(VectorDetailQueryResultIterator.java:46) at 
org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.nextBatch(VectorizedCarbonRecordReader.java:258) at org.apache.carbondata.spark.vectorreader.VectorizedCarbonRecordReader.nextKeyValue(VectorizedCarbonRecordReader.java:145) at org.apache.carbondata.spark.rdd.CarbonScanRDD$$anon$1.hasNext(CarbonScanRDD.scala:246) -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] carbondata issue #1367: [CARBONDATA-1398] Support query from specified segme...
Github user rahulforallp commented on the issue: https://github.com/apache/carbondata/pull/1367 retest this please ---
[jira] [Issue Comment Deleted] (CARBONDATA-1507) Dataload global sort fails with RpcTimeOutException: Futures timed out after [120 seconds]
[ https://issues.apache.org/jira/browse/CARBONDATA-1507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mohammad Shahid Khan updated CARBONDATA-1507: - Comment: was deleted (was: This is happening due to the RDD unpersist blocking call. Sometimes the RDD unpersist does not complete within the default "spark.rpc.askTimeout" or "spark.network.timeout" time. Solution: we may make the unpersist call non-blocking. Doing this will not have any functional impact, as Spark automatically monitors cache usage on each node and drops out old data.) > Dataload global sort fails with RpcTimeOutException: Futures timed out after > [120 seconds] > -- > > Key: CARBONDATA-1507 > URL: https://issues.apache.org/jira/browse/CARBONDATA-1507 > Project: CarbonData > Issue Type: Bug >Reporter: Mohammad Shahid Khan >Assignee: Mohammad Shahid Khan > > Dataload global sort fails with RpcTimeOutException: Futures timed out after > [120 seconds] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] carbondata issue #1003: [CARBONDATA-988] Added Presto benchmarking
Github user ravipesala commented on the issue: https://github.com/apache/carbondata/pull/1003 SDV Build Fail , Please check CI http://144.76.159.231:8080/job/ApacheSDVTests/897/ ---
[jira] [Commented] (CARBONDATA-1507) Dataload global sort fails with RpcTimeOutException: Futures timed out after [120 seconds]
[ https://issues.apache.org/jira/browse/CARBONDATA-1507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16175990#comment-16175990 ] Mohammad Shahid Khan commented on CARBONDATA-1507: -- This is happening due to the RDD unpersist blocking call. Sometimes the RDD unpersist does not complete within the default "spark.rpc.askTimeout" or "spark.network.timeout" time. Solution: we may make the unpersist call non-blocking. Doing this will not have any functional impact, as Spark automatically monitors cache usage on each node and drops out old data. > Dataload global sort fails with RpcTimeOutException: Futures timed out after > [120 seconds] > -- > > Key: CARBONDATA-1507 > URL: https://issues.apache.org/jira/browse/CARBONDATA-1507 > Project: CarbonData > Issue Type: Bug >Reporter: Mohammad Shahid Khan >Assignee: Mohammad Shahid Khan > > Dataload global sort fails with RpcTimeOutException: Futures timed out after > [120 seconds] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
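In Spark terms, the proposal amounts to calling `RDD.unpersist` with `blocking = false` so the driver does not wait on the cleanup. The general fire-and-forget pattern can be sketched in plain Java without a Spark dependency; `releaseCache` here is a hypothetical stand-in for the (potentially slow) unpersist work, and this is an illustration of the idea, not the actual CarbonData change.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

/** Sketch of turning a blocking cleanup call into a fire-and-forget one. */
public class NonBlockingCleanup {
    static final AtomicBoolean released = new AtomicBoolean(false);

    /** Hypothetical stand-in for the RDD unpersist work, which can be slow. */
    static void releaseCache() {
        released.set(true);
    }

    /**
     * The caller returns immediately; the cleanup proceeds in the background,
     * so a slow release can no longer trip the 120-second RPC ask timeout.
     */
    static CompletableFuture<Void> unpersistNonBlocking() {
        return CompletableFuture.runAsync(NonBlockingCleanup::releaseCache);
    }
}
```

As the comment notes, the trade-off is benign here: the caller never needed the result of the cleanup, and Spark itself evicts unused cached blocks under memory pressure.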
[jira] [Created] (CARBONDATA-1507) Dataload global sort fails with RpcTimeOutException: Futures timed out after [120 seconds]
Mohammad Shahid Khan created CARBONDATA-1507: Summary: Dataload global sort fails with RpcTimeOutException: Futures timed out after [120 seconds] Key: CARBONDATA-1507 URL: https://issues.apache.org/jira/browse/CARBONDATA-1507 Project: CarbonData Issue Type: Bug Reporter: Mohammad Shahid Khan Assignee: Mohammad Shahid Khan Dataload global sort fails with RpcTimeOutException: Futures timed out after [120 seconds] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Resolved] (CARBONDATA-1448) PartitionInfo is null in CarbonTable
[ https://issues.apache.org/jira/browse/CARBONDATA-1448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ravindra Pesala resolved CARBONDATA-1448. - Resolution: Fixed Fix Version/s: 1.2.0 > PartitionInfo is null in CarbonTable > > > Key: CARBONDATA-1448 > URL: https://issues.apache.org/jira/browse/CARBONDATA-1448 > Project: CarbonData > Issue Type: Bug > Components: core >Reporter: Cao, Lionel >Assignee: Cao, Lionel > Fix For: 1.2.0 > > Time Spent: 1.5h > Remaining Estimate: 0h > > PartitionInfo is null in CarbonTable -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[GitHub] carbondata pull request #1369: [CARBONDATA-1448] fix partitionInfo null issu...
Github user asfgit closed the pull request at: https://github.com/apache/carbondata/pull/1369 ---