[GitHub] carbondata issue #2972: [CARBONDATA-3143] Fixed local dictionary in presto
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2972 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1840/ ---
[GitHub] carbondata issue #2972: [CARBONDATA-3143] Fixed local dictionary in presto
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2972 Build Failed with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9889/ ---
[GitHub] carbondata issue #2966: [WIP] test and check no sort by default
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2966 Build Failed with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9888/ ---
[GitHub] carbondata issue #2966: [WIP] test and check no sort by default
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2966 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1839/ ---
[jira] [Created] (CARBONDATA-3145) Read improvement for complex column pages while querying
dhatchayani created CARBONDATA-3145:
---
Summary: Read improvement for complex column pages while querying
Key: CARBONDATA-3145
URL: https://issues.apache.org/jira/browse/CARBONDATA-3145
Project: CarbonData
Issue Type: Sub-task
Reporter: dhatchayani
Assignee: dhatchayani

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[GitHub] carbondata issue #2972: [CARBONDATA-3143] Fixed local dictionary in presto
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2972 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1629/ ---
[GitHub] carbondata issue #2966: [WIP] test and check no sort by default
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2966 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1628/ ---
[jira] [Closed] (CARBONDATA-3137) Update the Project List
[ https://issues.apache.org/jira/browse/CARBONDATA-3137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Indhumathi Muthumurugesh closed CARBONDATA-3137.
Resolution: Fixed

> Update the Project List
> ---
>
> Key: CARBONDATA-3137
> URL: https://issues.apache.org/jira/browse/CARBONDATA-3137
> Project: CarbonData
> Issue Type: Sub-task
> Reporter: Indhumathi Muthumurugesh
> Assignee: Indhumathi Muthumurugesh
> Priority: Minor

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[GitHub] carbondata issue #2899: [CARBONDATA-3073][CARBONDATA-3044] Support configure...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/2899 @KanakaKumar @jackylk @QiangCai @ajantha-bhat @kunal642 Rebased, Please review it. ---
[GitHub] carbondata issue #2919: [CARBONDATA-3097] Support folder path in getVersionD...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/2919 Rebased and CI passed. @KanakaKumar @jackylk @QiangCai @ajantha-bhat @kunal642 Please review it. ---
[GitHub] carbondata issue #2899: [CARBONDATA-3073][CARBONDATA-3044] Support configure...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2899 Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1837/ ---
[GitHub] carbondata issue #2971: [TEST] Test loading performance of range_sort
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2971 Build Success with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9887/ ---
[GitHub] carbondata issue #2971: [TEST] Test loading performance of range_sort
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2971 Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1838/ ---
[GitHub] carbondata issue #2899: [CARBONDATA-3073][CARBONDATA-3044] Support configure...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2899 Build Success with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9886/ ---
[jira] [Created] (CARBONDATA-3144) CarbonData support spark-2.4.0
xubo245 created CARBONDATA-3144:
---
Summary: CarbonData support spark-2.4.0
Key: CARBONDATA-3144
URL: https://issues.apache.org/jira/browse/CARBONDATA-3144
Project: CarbonData
Issue Type: New Feature
Reporter: xubo245
Assignee: xubo245
Fix For: 1.5.2

Spark released spark-2.4 more than one month ago. CarbonData should start to support spark-2.4.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[GitHub] carbondata issue #2919: [CARBONDATA-3097] Support folder path in getVersionD...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2919 Build Success with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9885/ ---
[GitHub] carbondata issue #2919: [CARBONDATA-3097] Support folder path in getVersionD...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2919 Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1836/ ---
[GitHub] carbondata issue #2971: [TEST] Test loading performance of range_sort
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2971 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1627/ ---
[GitHub] carbondata issue #2899: [CARBONDATA-3073][CARBONDATA-3044] Support configure...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2899 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1626/ ---
[GitHub] carbondata pull request #2971: [TEST] Test loading performance of range_sort
Github user QiangCai commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2971#discussion_r238507037

--- Diff: integration/spark-common/src/main/scala/org/apache/carbondata/spark/load/DataLoadProcessBuilderOnSpark.scala ---
@@ -156,4 +158,132 @@ object DataLoadProcessBuilderOnSpark {
     Array((uniqueLoadStatusId, (loadMetadataDetails, executionErrors)))
   }
 }
+
+/**
+ * 1. range partition the whole input data
+ * 2. for each range, sort the data and write it to CarbonData files
+ */
+def loadDataUsingRangeSort(
+    sparkSession: SparkSession,
+    dataFrame: Option[DataFrame],
+    model: CarbonLoadModel,
+    hadoopConf: Configuration): Array[(String, (LoadMetadataDetails, ExecutionErrors))] = {
+  val originRDD = if (dataFrame.isDefined) {
--- End diff --

Better, but after refactoring the code logic is not clear. Now these two flows already reuse the process steps.

---
[GitHub] carbondata pull request #2971: [TEST] Test loading performance of range_sort
Github user QiangCai commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2971#discussion_r238505807

--- Diff: integration/spark-common/src/main/scala/org/apache/carbondata/spark/load/DataLoadProcessBuilderOnSpark.scala ---
@@ -156,4 +158,132 @@ object DataLoadProcessBuilderOnSpark {
     Array((uniqueLoadStatusId, (loadMetadataDetails, executionErrors)))
   }
 }
+
+/**
+ * 1. range partition the whole input data
+ * 2. for each range, sort the data and write it to CarbonData files
+ */
+def loadDataUsingRangeSort(
+    sparkSession: SparkSession,
+    dataFrame: Option[DataFrame],
+    model: CarbonLoadModel,
+    hadoopConf: Configuration): Array[(String, (LoadMetadataDetails, ExecutionErrors))] = {
+  val originRDD = if (dataFrame.isDefined) {
+    dataFrame.get.rdd
+  } else {
+    // input data from files
+    val columnCount = model.getCsvHeaderColumns.length
+    CsvRDDHelper.csvFileScanRDD(sparkSession, model, hadoopConf)
+      .map(DataLoadProcessorStepOnSpark.toStringArrayRow(_, columnCount))
+  }
+  val sc = sparkSession.sparkContext
+  val modelBroadcast = sc.broadcast(model)
+  val partialSuccessAccum = sc.accumulator(0, "Partial Success Accumulator")
+  val inputStepRowCounter = sc.accumulator(0, "Input Processor Accumulator")
+  val convertStepRowCounter = sc.accumulator(0, "Convert Processor Accumulator")
+  val sortStepRowCounter = sc.accumulator(0, "Sort Processor Accumulator")
+  val writeStepRowCounter = sc.accumulator(0, "Write Processor Accumulator")
+  hadoopConf
+    .set(CarbonCommonConstants.CARBON_WRITTEN_BY_APPNAME, sparkSession.sparkContext.appName)
+  val conf = SparkSQLUtil.broadCastHadoopConf(sc, hadoopConf)
+  // 1. Input
+  val inputRDD = originRDD
+    .mapPartitions(rows => DataLoadProcessorStepOnSpark.toRDDIterator(rows, modelBroadcast))
+    .mapPartitionsWithIndex { case (index, rows) =>
+      DataLoadProcessorStepOnSpark.inputFunc(rows, index, modelBroadcast, inputStepRowCounter)
+    }
+  // 2. Convert
+  val convertRDD = inputRDD.mapPartitionsWithIndex { case (index, rows) =>
+    ThreadLocalSessionInfo.setConfigurationToCurrentThread(conf.value.value)
+    DataLoadProcessorStepOnSpark.convertFunc(rows, index, modelBroadcast, partialSuccessAccum,
+      convertStepRowCounter)
+  }.filter(_ != null)
+  // 3. Range partition
+  val configuration = DataLoadProcessBuilder.createConfiguration(model)
+  val objectOrdering: Ordering[Object] = createOrderingForColumn(model.getRangePartitionColumn)
+  var numPartitions = CarbonDataProcessorUtil.getGlobalSortPartitions(
+    configuration.getDataLoadProperty(CarbonCommonConstants.LOAD_GLOBAL_SORT_PARTITIONS))
+  if (numPartitions <= 0) {
+    if (model.getTotalSize <= 0) {
+      numPartitions = convertRDD.partitions.length
+    } else {
+      // calculate the number of partitions
+      // better to generate a CarbonData file for each partition
+      val totalSize = model.getTotalSize.toDouble
+      val table = model.getCarbonDataLoadSchema.getCarbonTable
+      val blockSize = 1024L * 1024 * table.getBlockSizeInMB
+      val blockletSize = 1024L * 1024 * table.getBlockletSizeInMB
+      // here it assumes the compression ratio of CarbonData is about 33%,
+      // so it multiplies by 3 to get the split size of CSV files.
+      val splitSize = Math.max(blockletSize, (blockSize - blockletSize)) * 3
+      numPartitions = Math.ceil(totalSize / splitSize).toInt
--- End diff --

Yes, insert will use global sort.

---
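The partition-count heuristic debated in this hunk can be sketched on its own. A minimal Java sketch; the 10 GB load size and the 1024 MB / 64 MB block and blocklet sizes below are assumed example values, not figures from the PR:

```java
public class RangePartitionCount {

    // Mirrors the logic in the quoted diff: choose a CSV split size roughly
    // three times the target CarbonData file size, on the stated assumption
    // that CarbonData compresses to about 33% of the input.
    static int numPartitions(double totalSizeBytes, long blockSizeMb, long blockletSizeMb) {
        long blockSize = 1024L * 1024 * blockSizeMb;
        long blockletSize = 1024L * 1024 * blockletSizeMb;
        long splitSize = Math.max(blockletSize, blockSize - blockletSize) * 3;
        return (int) Math.ceil(totalSizeBytes / splitSize);
    }

    public static void main(String[] args) {
        // e.g. a 10 GB CSV load into a table with 1024 MB blocks, 64 MB blocklets
        System.out.println(numPartitions(10.0 * 1024 * 1024 * 1024, 1024, 64));
    }
}
```

This also shows why the reviewers ask about `totalSize <= 0`: when the input is a DataFrame no file size is known, so the heuristic is skipped and the existing partition count is reused.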
[GitHub] carbondata pull request #2971: [TEST] Test loading performance of range_sort
Github user QiangCai commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2971#discussion_r238505859

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/model/LoadOption.java ---
@@ -188,6 +188,8 @@
     optionsFinal.put(CarbonCommonConstants.CARBON_LOAD_MIN_SIZE_INMB,
         Maps.getOrDefault(options, CarbonCommonConstants.CARBON_LOAD_MIN_SIZE_INMB,
             CarbonCommonConstants.CARBON_LOAD_MIN_SIZE_INMB_DEFAULT));
+
+    optionsFinal.put("range_column", Maps.getOrDefault(options, "range_column", null));
--- End diff --

Now it only tries to support the load data command.

---
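The option-defaulting pattern in the quoted diff can be sketched with plain `java.util.Map.getOrDefault` standing in for CarbonData's `Maps` helper; the string keys below are hypothetical simplifications of the real constants:

```java
import java.util.HashMap;
import java.util.Map;

public class LoadOptionSketch {

    // Build the finalized option map: each option falls back to its default
    // when the user did not set it in the LOAD DATA OPTIONS clause.
    static Map<String, String> finalizeOptions(Map<String, String> options) {
        Map<String, String> optionsFinal = new HashMap<>();
        optionsFinal.put("min_size_inmb", options.getOrDefault("min_size_inmb", "0"));
        // 'range_column' has no default: an absent key means range sort is off
        optionsFinal.put("range_column", options.getOrDefault("range_column", null));
        return optionsFinal;
    }
}
```

A null default, as in the diff, lets downstream code distinguish "user enabled range sort on column X" from "option never given", which is what the review question about the create-table path is probing.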
[GitHub] carbondata issue #2899: [CARBONDATA-3073][CARBONDATA-3044] Support configure...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/2899 retest this please ---
[GitHub] carbondata issue #2919: [CARBONDATA-3097] Support folder path in getVersionD...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2919 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1625/ ---
[GitHub] carbondata issue #2919: [CARBONDATA-3097] Support folder path in getVersionD...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/2919 retest this please ---
[GitHub] carbondata issue #2940: [CARBONDATA-3116] Support set carbon.query.directQue...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/2940 CI passed, please review it. @jackylk @manishgupta88 @kunal642 ---
[GitHub] carbondata issue #2970: [CARBONDATA-3142]Add timestamp with thread name whic...
Github user qiuchenjian commented on the issue: https://github.com/apache/carbondata/pull/2970 > Please add more description, why we need this PR. Thanks. Done ---
[GitHub] carbondata issue #2966: [WIP] test and check no sort by default
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2966 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1835/ ---
[GitHub] carbondata issue #2966: [WIP] test and check no sort by default
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2966 Build Failed with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9884/ ---
[GitHub] carbondata pull request #2971: [TEST] Test loading performance of range_sort
Github user qiuchenjian commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2971#discussion_r238297683

--- Diff: processing/src/main/java/org/apache/carbondata/processing/loading/model/LoadOption.java ---
@@ -188,6 +188,8 @@
     optionsFinal.put(CarbonCommonConstants.CARBON_LOAD_MIN_SIZE_INMB,
         Maps.getOrDefault(options, CarbonCommonConstants.CARBON_LOAD_MIN_SIZE_INMB,
             CarbonCommonConstants.CARBON_LOAD_MIN_SIZE_INMB_DEFAULT));
+
+    optionsFinal.put("range_column", Maps.getOrDefault(options, "range_column", null));
--- End diff --

Does makeCreateTableString need to add "range_column"?

---
[GitHub] carbondata pull request #2971: [TEST] Test loading performance of range_sort
Github user qiuchenjian commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2971#discussion_r238292207

--- Diff: integration/spark-common/src/main/scala/org/apache/carbondata/spark/load/DataLoadProcessBuilderOnSpark.scala ---
@@ -156,4 +158,132 @@ object DataLoadProcessBuilderOnSpark {
     Array((uniqueLoadStatusId, (loadMetadataDetails, executionErrors)))
   }
 }
+
+/**
+ * 1. range partition the whole input data
+ * 2. for each range, sort the data and write it to CarbonData files
+ */
+def loadDataUsingRangeSort(
+    sparkSession: SparkSession,
+    dataFrame: Option[DataFrame],
+    model: CarbonLoadModel,
+    hadoopConf: Configuration): Array[(String, (LoadMetadataDetails, ExecutionErrors))] = {
+  val originRDD = if (dataFrame.isDefined) {
+    dataFrame.get.rdd
+  } else {
+    // input data from files
+    val columnCount = model.getCsvHeaderColumns.length
+    CsvRDDHelper.csvFileScanRDD(sparkSession, model, hadoopConf)
+      .map(DataLoadProcessorStepOnSpark.toStringArrayRow(_, columnCount))
+  }
+  val sc = sparkSession.sparkContext
+  val modelBroadcast = sc.broadcast(model)
+  val partialSuccessAccum = sc.accumulator(0, "Partial Success Accumulator")
+  val inputStepRowCounter = sc.accumulator(0, "Input Processor Accumulator")
+  val convertStepRowCounter = sc.accumulator(0, "Convert Processor Accumulator")
+  val sortStepRowCounter = sc.accumulator(0, "Sort Processor Accumulator")
+  val writeStepRowCounter = sc.accumulator(0, "Write Processor Accumulator")
+  hadoopConf
+    .set(CarbonCommonConstants.CARBON_WRITTEN_BY_APPNAME, sparkSession.sparkContext.appName)
+  val conf = SparkSQLUtil.broadCastHadoopConf(sc, hadoopConf)
+  // 1. Input
+  val inputRDD = originRDD
+    .mapPartitions(rows => DataLoadProcessorStepOnSpark.toRDDIterator(rows, modelBroadcast))
+    .mapPartitionsWithIndex { case (index, rows) =>
+      DataLoadProcessorStepOnSpark.inputFunc(rows, index, modelBroadcast, inputStepRowCounter)
+    }
+  // 2. Convert
+  val convertRDD = inputRDD.mapPartitionsWithIndex { case (index, rows) =>
+    ThreadLocalSessionInfo.setConfigurationToCurrentThread(conf.value.value)
+    DataLoadProcessorStepOnSpark.convertFunc(rows, index, modelBroadcast, partialSuccessAccum,
+      convertStepRowCounter)
+  }.filter(_ != null)
+  // 3. Range partition
+  val configuration = DataLoadProcessBuilder.createConfiguration(model)
+  val objectOrdering: Ordering[Object] = createOrderingForColumn(model.getRangePartitionColumn)
+  var numPartitions = CarbonDataProcessorUtil.getGlobalSortPartitions(
+    configuration.getDataLoadProperty(CarbonCommonConstants.LOAD_GLOBAL_SORT_PARTITIONS))
+  if (numPartitions <= 0) {
+    if (model.getTotalSize <= 0) {
+      numPartitions = convertRDD.partitions.length
+    } else {
+      // calculate the number of partitions
+      // better to generate a CarbonData file for each partition
+      val totalSize = model.getTotalSize.toDouble
+      val table = model.getCarbonDataLoadSchema.getCarbonTable
+      val blockSize = 1024L * 1024 * table.getBlockSizeInMB
+      val blockletSize = 1024L * 1024 * table.getBlockletSizeInMB
+      // here it assumes the compression ratio of CarbonData is about 33%,
+      // so it multiplies by 3 to get the split size of CSV files.
+      val splitSize = Math.max(blockletSize, (blockSize - blockletSize)) * 3
+      numPartitions = Math.ceil(totalSize / splitSize).toInt
--- End diff --

If inserting from a dataframe, I think totalSize will be 0.

---
[GitHub] carbondata pull request #2971: [TEST] Test loading performance of range_sort
Github user qiuchenjian commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2971#discussion_r238290309

--- Diff: integration/spark-common/src/main/scala/org/apache/carbondata/spark/load/DataLoadProcessBuilderOnSpark.scala ---
@@ -156,4 +158,132 @@ object DataLoadProcessBuilderOnSpark {
     Array((uniqueLoadStatusId, (loadMetadataDetails, executionErrors)))
   }
 }
+
+/**
+ * 1. range partition the whole input data
+ * 2. for each range, sort the data and write it to CarbonData files
+ */
+def loadDataUsingRangeSort(
+    sparkSession: SparkSession,
+    dataFrame: Option[DataFrame],
+    model: CarbonLoadModel,
+    hadoopConf: Configuration): Array[(String, (LoadMetadataDetails, ExecutionErrors))] = {
+  val originRDD = if (dataFrame.isDefined) {
--- End diff --

This method has too much of the same code as loadDataUsingGlobalSort; I recommend refactoring these two methods.

---
[GitHub] carbondata issue #2972: [CARBONDATA-3143] Fixed local dictionary in presto
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2972 Build Failed with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9883/ ---
[GitHub] carbondata issue #2966: [WIP] test and check no sort by default
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2966 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1624/ ---
[GitHub] carbondata issue #2972: [CARBONDATA-3143] Fixed local dictionary in presto
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2972 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1834/ ---
[GitHub] carbondata issue #2972: [CARBONDATA-3143] Fixed local dictionary in presto
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2972 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1623/ ---
[GitHub] carbondata pull request #2972: [CARBONDATA-3143] Fixed local dictionary in p...
GitHub user ravipesala opened a pull request: https://github.com/apache/carbondata/pull/2972

[CARBONDATA-3143] Fixed local dictionary in presto

Problem: Currently, local dictionary columns are not working for presto as they are not handled in the integration layer.
Solution: Add local dictionary support to the presto integration layer.

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:
- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done
      Please provide details on
      - Whether new unit test cases have been added or why no new tests are required?
      - How it is tested? Please attach test report.
      - Is it a performance related change? Please attach the performance test report.
      - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ravipesala/incubator-carbondata presto-ditionary-fix

Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/carbondata/pull/2972.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2972

commit df64b977c74e9b6e82441899223a2a8c6d9b748d
Author: ravipesala
Date: 2018-12-03T12:57:33Z

    Fixed local dictionary in presto

---
[jira] [Created] (CARBONDATA-3143) Fix local dictionary issue for presto
Ravindra Pesala created CARBONDATA-3143:
---
Summary: Fix local dictionary issue for presto
Key: CARBONDATA-3143
URL: https://issues.apache.org/jira/browse/CARBONDATA-3143
Project: CarbonData
Issue Type: Bug
Reporter: Ravindra Pesala

Fix local dictionary issue for presto

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
[GitHub] carbondata issue #2161: [CARBONDATA-2218] AlluxioCarbonFile while trying to ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2161 Build Failed with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9881/ ---
[GitHub] carbondata issue #2161: [CARBONDATA-2218] AlluxioCarbonFile while trying to ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2161 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1832/ ---
[GitHub] carbondata issue #2971: [TEST] Test loading performance of range_sort
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2971 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1833/ ---
[GitHub] carbondata issue #2971: [TEST] Test loading performance of range_sort
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2971 Build Failed with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9882/ ---
[GitHub] carbondata issue #2971: [TEST] Test loading performance of range_sort
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2971 Build Failed with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1622/ ---
[GitHub] carbondata issue #2899: [CARBONDATA-3073][CARBONDATA-3044] Support configure...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2899 Build Success with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9880/ ---
[GitHub] carbondata issue #2899: [CARBONDATA-3073][CARBONDATA-3044] Support configure...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2899 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1831/ ---
[GitHub] carbondata issue #2940: [CARBONDATA-3116] Support set carbon.query.directQue...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2940 Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1829/ ---
[GitHub] carbondata issue #2919: [CARBONDATA-3097] Support folder path in getVersionD...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2919 Build Failed with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1830/ ---
[GitHub] carbondata issue #2919: [CARBONDATA-3097] Support folder path in getVersionD...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2919 Build Success with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9879/ ---
[GitHub] carbondata issue #2940: [CARBONDATA-3116] Support set carbon.query.directQue...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2940 Build Success with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9878/ ---
[GitHub] carbondata issue #2161: [CARBONDATA-2218] AlluxioCarbonFile while trying to ...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2161 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1621/ ---
[GitHub] carbondata pull request #2161: [CARBONDATA-2218] AlluxioCarbonFile while try...
Github user chandrasaripaka commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2161#discussion_r238211698

--- Diff: core/src/test/java/org/apache/carbondata/core/datastore/filesystem/AlluxioCarbonFileTest.java ---
@@ -108,12 +121,12 @@ public void testListFilesForNullListStatus() {
     alluxioCarbonFile = new AlluxioCarbonFile(fileStatusWithOutDirectoryPermission);
     new MockUp() {
       @Mock
-      public FileSystem getFileSystem(Configuration conf) throws IOException {
-        return new DistributedFileSystem();
+      public FileSystem get(FileSystemContext context) throws IOException {
--- End diff --

Fixed the test in the latest push

---
[GitHub] carbondata issue #2971: [TEST] Test loading performance of range_sort
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2971 Build Success with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9877/ ---
[GitHub] carbondata issue #2971: [TEST] Test loading performance of range_sort
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2971 Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1828/ ---
[GitHub] carbondata issue #2899: [CARBONDATA-3073][CARBONDATA-3044] Support configure...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2899 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1620/ ---
[GitHub] carbondata issue #2914: [CARBONDATA-3093] Provide property builder for carbo...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/2914 @jackylk @ravipesala @kunal642 @KanakaKumar @chenliang613 @sraghunandan CI pass, please review. ---
[GitHub] carbondata issue #2890: [CARBONDATA-3002] Fix some spell error
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/2890 @jackylk @chenliang613 CI pass, please handle it. ---
[GitHub] carbondata issue #2919: [CARBONDATA-3097] Support folder path in getVersionD...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2919 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1619/ ---
[GitHub] carbondata issue #2940: [CARBONDATA-3116] Support set carbon.query.directQue...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2940 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1618/ ---
[GitHub] carbondata issue #2930: [CARBONDATA-3109] Support get length from CarbonRow ...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/2930 @KanakaKumar @jackylk @QiangCai @ajantha-bhat @kunal642 Please review it. ---
[GitHub] carbondata issue #2925: [CARBONDATA-3102] Fix NoClassDefFoundError when use ...
Github user xubo245 commented on the issue: https://github.com/apache/carbondata/pull/2925 @kunal642 @jackylk Please review it. ---
[GitHub] carbondata issue #2940: [CARBONDATA-3116] Support set carbon.query.directQue...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2940 Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1827/ ---
[GitHub] carbondata pull request #2940: [CARBONDATA-3116] Support set carbon.query.di...
Github user xubo245 commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2940#discussion_r238188148

--- Diff: integration/spark2/pom.xml ---
@@ -105,6 +105,11 @@
+
+      org.apache.httpcomponents
--- End diff --

Please handle it soon; without this dependency the carbonThriftServer will be affected.

---
[GitHub] carbondata issue #2971: [TEST] Test loading performance of range_sort
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2971 Build Success with Spark 2.1.0, Please check CI http://136.243.101.176:8080/job/ApacheCarbonPRBuilder2.1/1617/ ---
[GitHub] carbondata pull request #2971: [TEST] Test loading performance of range_sort
Github user Indhumathi27 commented on a diff in the pull request: https://github.com/apache/carbondata/pull/2971#discussion_r238184022

--- Diff: integration/spark-common/src/main/scala/org/apache/carbondata/spark/load/DataLoadProcessorStepOnSpark.scala ---
@@ -305,4 +307,107 @@ object DataLoadProcessorStepOnSpark {
       e)
   }
 }
+
+def sortAdnWriteFunc(
--- End diff --

Please change the method name from sortAdnWriteFunc to sortAndWriteFunc

---
[GitHub] carbondata issue #2940: [CARBONDATA-3116] Support set carbon.query.directQue...
Github user kumarvishal09 commented on the issue: https://github.com/apache/carbondata/pull/2940 @xubo245 this property was added for internal purposes, to restrict users from directly querying a pre-aggregate datamap, as it will show aggregated output ---
[GitHub] carbondata pull request #2971: [TEST] Test loading performance of range_sort
GitHub user QiangCai opened a pull request: https://github.com/apache/carbondata/pull/2971

[TEST] Test loading performance of range_sort

For global_sort, add an option 'range_column':

    LOAD DATA LOCAL INPATH 'xxx' INTO TABLE xxx OPTIONS('range_column'='a column name')

During data loading:
1. range partition the input data by range_column
2. for each range, execute the local sort step to load the data

Be sure to do all of the following checklist to help us incorporate your contribution quickly and easily:
- [ ] Any interfaces changed?
- [ ] Any backward compatibility impacted?
- [ ] Document update required?
- [ ] Testing done
      Please provide details on
      - Whether new unit test cases have been added or why no new tests are required?
      - How it is tested? Please attach test report.
      - Is it a performance related change? Please attach the performance test report.
      - Any additional information to help reviewers in testing this change.
- [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/QiangCai/carbondata range_sort

Alternatively you can review and apply these changes as the patch at:
https://github.com/apache/carbondata/pull/2971.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

    This closes #2971

commit 9d855bc65aa85efb84ad3e54f3188a16e1a58d3b
Author: QiangCai
Date: 2018-12-03T08:37:57Z

    support range sort

---
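The two loading steps in the PR description can be sketched outside Spark as a toy illustration: bucket rows into ranges by the range_column value, then sort each range independently. This is not CarbonData code; the integer rows and the boundary values are hypothetical.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class RangeSortSketch {

    // Step 1 helper: assign a value to a range bucket by boundary search.
    // Bucket i holds values in [boundaries[i-1], boundaries[i]).
    static int rangeOf(int value, int[] boundaries) {
        int i = 0;
        while (i < boundaries.length && value >= boundaries[i]) i++;
        return i;
    }

    static List<List<Integer>> rangeSort(List<Integer> rows, int[] boundaries) {
        // Step 1: range partition the input by the range column
        Map<Integer, List<Integer>> ranges = new TreeMap<>();
        for (int row : rows) {
            ranges.computeIfAbsent(rangeOf(row, boundaries), k -> new ArrayList<>()).add(row);
        }
        // Step 2: sort each range independently (the per-partition local
        // sort step in the PR's description)
        ranges.values().forEach(Collections::sort);
        return new ArrayList<>(ranges.values());
    }

    public static void main(String[] args) {
        // three ranges: (<5), [5,10), (>=10), each sorted on its own
        System.out.println(rangeSort(Arrays.asList(7, 3, 15, 1, 9, 12), new int[]{5, 10}));
    }
}
```

Because ranges are disjoint and internally sorted, the output files are globally ordered across partitions without a single global shuffle-and-sort, which is the performance point this PR is testing.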
[GitHub] carbondata issue #2970: [CARBONDATA-3142]Add timestamp with thread name whic...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2970 Build Success with Spark 2.2.1, Please check CI http://95.216.28.178:8080/job/ApacheCarbonPRBuilder1/1826/ ---
[GitHub] carbondata issue #2970: [CARBONDATA-3142]Add timestamp with thread name whic...
Github user CarbonDataQA commented on the issue: https://github.com/apache/carbondata/pull/2970 Build Success with Spark 2.3.1, Please check CI http://136.243.101.176:8080/job/carbondataprbuilder2.3/9875/ ---
[GitHub] carbondata issue #2970: [CARBONDATA-3142]Add timestamp with thread name whic...
Github user zzcclp commented on the issue: https://github.com/apache/carbondata/pull/2970 Please add more description , why we need this pr. thanks. ---