Jenkins build is back to normal : carbondata-master-spark-2.2 #874

2018-08-07 Thread Apache Jenkins Server
See

Jenkins build is back to stable : carbondata-master-spark-2.2 » Apache CarbonData :: Spark Common Test #874

2018-08-07 Thread Apache Jenkins Server
See

Jenkins build is back to normal : carbondata-master-spark-2.1 #2781

2018-08-07 Thread Apache Jenkins Server
See

Jenkins build is back to normal : carbondata-master-spark-2.1 » Apache CarbonData :: Spark2 Examples #2781

2018-08-07 Thread Apache Jenkins Server
See

Jenkins build became unstable: carbondata-master-spark-2.1 #2782

2018-08-07 Thread Apache Jenkins Server
See

Jenkins build is unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark Common Test #2782

2018-08-07 Thread Apache Jenkins Server
See

Jenkins build became unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark2 #2782

2018-08-07 Thread Apache Jenkins Server
See

Jenkins build is unstable: carbondata-master-spark-2.2 #872

2018-08-07 Thread Apache Jenkins Server
See

Jenkins build became unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark Common Test #872

2018-08-07 Thread Apache Jenkins Server
See

Jenkins build is back to normal : carbondata-master-spark-2.2 » Apache CarbonData :: Spark2 Examples #872

2018-08-07 Thread Apache Jenkins Server
See

Build failed in Jenkins: carbondata-master-spark-2.2 » Apache CarbonData :: Spark2 Examples #871

2018-08-07 Thread Apache Jenkins Server
See -- [...truncated 902.19 KB...] at

Build failed in Jenkins: carbondata-master-spark-2.2 #871

2018-08-07 Thread Apache Jenkins Server
See Changes: [jacky.likun] [CARBONDATA-2539]Fix mv classcast exception issue -- [...truncated 83.78 MB...] at

Build failed in Jenkins: carbondata-master-spark-2.1 » Apache CarbonData :: Spark2 Examples #2779

2018-08-07 Thread Apache Jenkins Server
See -- [...truncated 504.92 KB...] at org.scalatest.Suite$$anonfun$runNestedSuites$1.apply(Suite.scala:1526)

Build failed in Jenkins: carbondata-master-spark-2.1 #2779

2018-08-07 Thread Apache Jenkins Server
See Changes: [jacky.likun] [CARBONDATA-2539]Fix mv classcast exception issue -- [...truncated 69.74 MB...] at

Jenkins build is back to normal : carbondata-master-spark-2.2 #870

2018-08-07 Thread Apache Jenkins Server
See

Build failed in Jenkins: carbondata-master-spark-2.2 #873

2018-08-07 Thread Apache Jenkins Server
See -- Started by an SCM change [EnvInject] - Loading node environment variables. Building remotely on H26 (ubuntu xenial) in workspace

carbondata git commit: [CARBONDATA-2807] Fixed data load performance issue in Intermediate merger When number of records are high

2018-08-07 Thread ravipesala
Repository: carbondata Updated Branches: refs/heads/master ed225085e -> 8e54f1e45 [CARBONDATA-2807] Fixed data load performance issue in intermediate merger when the number of records is high Problem: Data loading is taking more time when the number of records is high. Root cause: As the number of

Build failed in Jenkins: carbondata-master-spark-2.1 #2780

2018-08-07 Thread Apache Jenkins Server
See -- Started by an SCM change [EnvInject] - Loading node environment variables. Building remotely on H27 (ubuntu xenial) in workspace

carbondata git commit: [CARBONDATA-2831] Added Support Merge index files read from non transactional table

2018-08-07 Thread ravipesala
Repository: carbondata Updated Branches: refs/heads/master 3d7fa1276 -> ed225085e [CARBONDATA-2831] Added support for merge index files read from non-transactional table Problem: Currently SDK read / non-transactional table read from an external table gives null output when the carbonMergeindex file is

[45/50] [abbrv] carbondata git commit: [CARBONDATA-2823] Support streaming property with datamap

2018-08-07 Thread jackylk
[CARBONDATA-2823] Support streaming property with datamap Since during query, CarbonData gets splits from streaming segments and columnar segments respectively, we can support streaming with index datamap. For the preaggregate datamap, it already supported streaming table, so here we will remove the

[39/50] [abbrv] carbondata git commit: [Documentation] Editorial review comment fixed

2018-08-07 Thread jackylk
[Documentation] Editorial review comment fixed Minor issues fixed (spelling, syntax, and missing info) This closes #2603 Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/12725b75 Tree:

[43/50] [abbrv] carbondata git commit: [CARBONDATA-2829][CARBONDATA-2832] Fix creating merge index on older V1 V2 store

2018-08-07 Thread jackylk
[CARBONDATA-2829][CARBONDATA-2832] Fix creating merge index on older V1 V2 store Block merge index creation for the old store V1 V2 versions This closes #2608 Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit:

[46/50] [abbrv] carbondata git commit: [CARBONDATA-2836]Fixed data loading performance issue

2018-08-07 Thread jackylk
[CARBONDATA-2836] Fixed data loading performance issue Problem: Data loading is taking more time when the number of records is high (3.5 billion records). Root cause: In the case of the final merge, sort temp row conversion is done in the main thread; because of this, final-step processing became slower.

[50/50] [abbrv] carbondata git commit: [CARBONDATA-2768][CarbonStore] Fix error in tests for external csv format

2018-08-07 Thread jackylk
[CARBONDATA-2768][CarbonStore] Fix error in tests for external csv format In the previous implementation, earlier than PR2495, we only supported csv as an external format for carbondata, and we validated the restriction while creating the table. PR2495 added kafka support, so it removed the validation,

[35/50] [abbrv] carbondata git commit: [CARBONDATA-2812] Implement freeMemory for complex pages

2018-08-07 Thread jackylk
[CARBONDATA-2812] Implement freeMemory for complex pages Problem: The memory used by the ColumnPageWrapper (for complex data types) is not cleared and so it requires more memory to Load and Query. Solution: Clear the used memory in the freeMemory method. This closes #2599 Project:

[20/50] [abbrv] carbondata git commit: [CARBONDATA-2625] While BlockletDataMap loading, avoid multiple times listing of files

2018-08-07 Thread jackylk
[CARBONDATA-2625] While BlockletDataMap loading, avoid multiple times listing of files CarbonReader is very slow for many files as blockletDataMap lists the files of the folder while loading each segment. This optimization lists once across segment loads. This closes #2441 Project:

[47/50] [abbrv] carbondata git commit: [CARBONDATA-2585] Fix local dictionary for both table level and system level property based on priority

2018-08-07 Thread jackylk
[CARBONDATA-2585] Fix local dictionary for both table level and system level property based on priority Added a system-level property for local dictionary support. The property 'carbon.local.dictionary.enable' can be set to true/false to enable/disable the local dictionary at system level. If table
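To make the two levels concrete, a hedged DDL sketch follows. Only the system-level key 'carbon.local.dictionary.enable' is named in this entry; the table name, columns, and the table-level key 'local_dictionary_enable' are illustrative assumptions.

```sql
-- Hypothetical table-level override; the entry names only the system-level
-- property 'carbon.local.dictionary.enable', so this TBLPROPERTIES key,
-- the table name, and the columns are assumptions for illustration.
CREATE TABLE sales (id BIGINT, city STRING)
STORED BY 'carbondata'
TBLPROPERTIES ('local_dictionary_enable'='true');
```

Per the entry, the table-level setting would take priority over the system-level default when both are present.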

[22/50] [abbrv] carbondata git commit: Problem: Insert into select is failing as both are running as single task, both are sharing the same taskcontext and resources are cleared once if any one of the

2018-08-07 Thread jackylk
Problem: Insert into select is failing because both are running as a single task and sharing the same TaskContext; resources are cleared once either RDD (the select query's ScanRDD) completes, so the other RDD (LoadRDD) crashes as it tries to access the cleared memory.

[38/50] [abbrv] carbondata git commit: [CARBONDATA-2804] fix the bug when bloom filter or preaggregate datamap tried to be created on older V1-V2 version stores

2018-08-07 Thread jackylk
[CARBONDATA-2804] fix the bug when bloom filter or preaggregate datamap tried to be created on older V1-V2 version stores This PR changes reading the carbon file version from the carbondata file header to the carbonindex file header, because the version field of the carbondata file header is not compatible with

[36/50] [abbrv] carbondata git commit: [CARBONDATA-2813] Fixed code to get data size from LoadDetails if size is written there

2018-08-07 Thread jackylk
[CARBONDATA-2813] Fixed code to get data size from LoadDetails if size is written there Problem: In 1.3.x, when index files are merged to form a mergeindex file, a mapping of which index files are merged into which mergeindex is kept in the segments file. In 1.4.x both the index and merge index files

[17/50] [abbrv] carbondata git commit: [CARBONDATA-2798] Fix Dictionary_Include for ComplexDataType

2018-08-07 Thread jackylk
[CARBONDATA-2798] Fix Dictionary_Include for ComplexDataType Problem 1: Select filter is throwing a BufferUnderflowException as cardinality is filled for non-dictionary columns. Solution: Check if a complex column has Encoding => Dictionary and fill cardinality for that column only. Problem 2:

[40/50] [abbrv] carbondata git commit: [CARBONDATA-2815][Doc] Add documentation for spilling memory and datamap rebuild

2018-08-07 Thread jackylk
[CARBONDATA-2815][Doc] Add documentation for spilling memory and datamap rebuild Add documentation for: 1. spilling unsafe memory for data loading, 2. datamap rebuild for index datamap. This closes #2604 Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit:

[16/50] [abbrv] carbondata git commit: [HotFix][CARBONDATA-2788][BloomDataMap] Fix bugs in incorrect query result with bloom datamap

2018-08-07 Thread jackylk
[HotFix][CARBONDATA-2788][BloomDataMap] Fix bugs in incorrect query result with bloom datamap This PR solves two problems which affect the correctness of queries on bloom. Revert PR2539: after reviewing the code, we found that the modification in PR2539 is not needed, so we revert that PR.

[28/50] [abbrv] carbondata git commit: [CARBONDATA-2753][Compatibility] Merge Index file not getting created with blocklet information for old store

2018-08-07 Thread jackylk
[CARBONDATA-2753][Compatibility] Merge Index file not getting created with blocklet information for old store Problem: Merge index file is not getting created with blocklet information for the old store. Analysis: In the legacy store (store <= 1.1 version), blocklet information is not written in the carbon

[15/50] [abbrv] carbondata git commit: [CARBONDATA-2585]disable local dictionary by default

2018-08-07 Thread jackylk
[CARBONDATA-2585] Disable local dictionary by default Make local dictionary false by default. This closes #2570 Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/34ca0214 Tree:

[09/50] [abbrv] carbondata git commit: [HOTFIX] Removed file existence check to improve dataMap loading performance

2018-08-07 Thread jackylk
[HOTFIX] Removed file existence check to improve dataMap loading performance Problem: DataMap loading performance degraded after adding a file existence check. Analysis: When the carbonIndex file is read and the carbondata file path to metadata info map is prepared, the file's physical existence is getting

[24/50] [abbrv] carbondata git commit: [CARBONDATA-2800][Doc] Add useful tips about bloomfilter datamap

2018-08-07 Thread jackylk
[CARBONDATA-2800][Doc] Add useful tips about bloomfilter datamap add useful tips about bloomfilter datamap This closes #2581 Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/a302cd1c Tree:

[25/50] [abbrv] carbondata git commit: [CARBONDATA-2806] Delete delete delta files upon clean files for flat folder

2018-08-07 Thread jackylk
[CARBONDATA-2806] Delete delete delta files upon clean files for flat folder Problem: Delete delta files are not removed after the clean files operation. Solution: Get the delta files using the Segment Status Manager and remove them during the clean operation. This closes #2587 Project:

[13/50] [abbrv] carbondata git commit: [CARBONDATA-2801]Added documentation for flat folder

2018-08-07 Thread jackylk
[CARBONDATA-2801]Added documentation for flat folder This closes #2582 Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/790cde87 Tree:

[32/50] [abbrv] carbondata git commit: [CARBONDATA-2799][BloomDataMap] Fix bugs in querying with bloom datamap on preagg with dictionary column

2018-08-07 Thread jackylk
[CARBONDATA-2799][BloomDataMap] Fix bugs in querying with bloom datamap on preagg with dictionary column For a preaggregate table, if the group-by column is a dictionary column in the parent table, the preaggregate table will inherit the dictionary encoding as well as the dictionary file from the parent

[33/50] [abbrv] carbondata git commit: [CARBONDATA-2803]fix wrong datasize calculation and Refactoring for better readability and handle local dictionary for older tables

2018-08-07 Thread jackylk
[CARBONDATA-2803] Fix wrong datasize calculation and refactoring for better readability and handle local dictionary for older tables Changes in this PR: 1. data size was calculated wrongly; the indexmap contains duplicate paths as it stores all blocklets, so remove duplicates and maintain unique block

[37/50] [abbrv] carbondata git commit: [CARBONDATA-2802][BloomDataMap] Remove clearing cache after rebuilding index datamap

2018-08-07 Thread jackylk
[CARBONDATA-2802][BloomDataMap] Remove clearing cache after rebuilding index datamap There is no need to clear the cache after rebuilding an index datamap, due to the following reasons: 1. currently it would clear all the caches for all index datamaps, not only for the current rebuilding one; 2. the life

[12/50] [abbrv] carbondata git commit: [CARBONDATA-2789] Support Hadoop 2.8.3 eco-system integration

2018-08-07 Thread jackylk
[CARBONDATA-2789] Support Hadoop 2.8.3 eco-system integration Add hadoop 2.8.3 profile and passed the compile This closes #2566 Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/7b538906 Tree:

[26/50] [abbrv] carbondata git commit: [CARBONDATA-2796][32K]Fix data loading problem when table has complex column and long string column

2018-08-07 Thread jackylk
[CARBONDATA-2796][32K] Fix data loading problem when table has complex column and long string column Currently both the varchar column and the complex column believe they are the last member in the noDictionary group when converting a carbon row from raw format to 3-parted format. Since they need to be

[03/50] [abbrv] carbondata git commit: [CARBONDATA-2775] Adaptive encoding fails for unsafe on-heap if target datatype is SHORT_INT

2018-08-07 Thread jackylk
[CARBONDATA-2775] Adaptive encoding fails for unsafe on-heap if target datatype is SHORT_INT problem: Adaptive encoding fails for unsafe on-heap sort if the target data type is SHORT_INT. solution: If ENABLE_OFFHEAP_SORT = false in carbon properties, UnsafeFixLengthColumnPage.java

[01/50] [abbrv] carbondata git commit: [CARBONDATA-2782]delete dead code in class 'CarbonCleanFilesCommand' [Forced Update!]

2018-08-07 Thread jackylk
Repository: carbondata Updated Branches: refs/heads/external-format ccf64ce5a -> 12ab57992 (forced update) [CARBONDATA-2782] delete dead code in class 'CarbonCleanFilesCommand' The variables (dms, indexDms) in the function processMetadata are never used. This closes #2557 Project:

[19/50] [abbrv] carbondata git commit: [CARBONDATA-2790][BloomDataMap]Optimize default parameter for bloomfilter datamap

2018-08-07 Thread jackylk
[CARBONDATA-2790][BloomDataMap]Optimize default parameter for bloomfilter datamap To provide better query performance for bloomfilter datamap by default, we optimize bloom_size from 32000 to 64 and optimize bloom_fpp from 0.01 to 0.1. This closes #2567 Project:

[10/50] [abbrv] carbondata git commit: [CARBONDATA-2749][dataload] In HDFS Empty tablestatus file is written during dataload, IUD or compaction when disk is full.

2018-08-07 Thread jackylk
[CARBONDATA-2749][dataload] In HDFS Empty tablestatus file is written during dataload, IUD or compaction when disk is full. Problem: When a failure happens due to disk full during load, IUD or compaction, then while updating the tablestatus file, the tablestatus.tmp file during atomic file

[27/50] [abbrv] carbondata git commit: [CARBONDATA-2478] Added datamap-developer-guide.md file to Readme.md

2018-08-07 Thread jackylk
[CARBONDATA-2478] Added datamap-developer-guide.md file to Readme.md This closes #2305 Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/77642cff

[05/50] [abbrv] carbondata git commit: [HOTFIX] CreateDataMapPost Event was skipped in case of preaggregate datamap

2018-08-07 Thread jackylk
[HOTFIX] CreateDataMapPost Event was skipped in case of preaggregate datamap This closes #2562 Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit:

[08/50] [abbrv] carbondata git commit: [CARBONDATA-2794]Distinct count fails on ArrayOfStruct

2018-08-07 Thread jackylk
[CARBONDATA-2794] Distinct count fails on ArrayOfStruct This PR fixes a Code Generator Error thrown when a select filter contains more than one count(distinct) of ArrayOfStruct with a group by clause. This closes #2573 Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit:

[14/50] [abbrv] carbondata git commit: [CARBONDATA-2606][Complex DataType Enhancements]Fix Null result if projection column have null primitive column and struct

2018-08-07 Thread jackylk
[CARBONDATA-2606][Complex DataType Enhancements] Fix null result if projection columns have a null primitive column and struct Problem: If the actual value of the primitive data type is null, by PR#2489 we are moving all the null values to the end of the collected row without considering

[48/50] [abbrv] carbondata git commit: [CARBONDATA-2613] Support csv based carbon table

2018-08-07 Thread jackylk
http://git-wip-us.apache.org/repos/asf/carbondata/blob/1a26ac16/integration/spark2/src/main/scala/org/apache/spark/sql/execution/command/management/CarbonAddSegmentCommand.scala -- diff --git

[23/50] [abbrv] carbondata git commit: [CARBONDATA-2793][32k][Doc] Add 32k support in document

2018-08-07 Thread jackylk
[CARBONDATA-2793][32k][Doc] Add 32k support in document This closes #2572 Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/f9b02a5c Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/f9b02a5c Diff:

[41/50] [abbrv] carbondata git commit: [CARBONDATA-2795] Add documentation for S3

2018-08-07 Thread jackylk
[CARBONDATA-2795] Add documentation for S3 This closes #2576 Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/e26a742c Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/e26a742c Diff:

[11/50] [abbrv] carbondata git commit: Fixed Spelling

2018-08-07 Thread jackylk
Fixed Spelling This closes #2584 Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/1cf3f398 Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/1cf3f398 Diff:

[34/50] [abbrv] carbondata git commit: [Documentation] [Unsafe Configuration] Added carbon.unsafe.driver.working.memory.in.mb parameter to differentiate between driver and executor unsafe memory

2018-08-07 Thread jackylk
[Documentation] [Unsafe Configuration] Added carbon.unsafe.driver.working.memory.in.mb parameter to differentiate between driver and executor unsafe memory Usually in

[07/50] [abbrv] carbondata git commit: [CARBONDATA-2791]Fix Encoding for Double if exceeds LONG.Max_value

2018-08-07 Thread jackylk
[CARBONDATA-2791]Fix Encoding for Double if exceeds LONG.Max_value If Factor(decimalcount) * absMaxValue exceeds LONG.MAX_VALUE, then go for direct compression. This closes #2569 Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit:

[42/50] [abbrv] carbondata git commit: [CARBONDATA-2750] Updated documentation on Local Dictionary Support

2018-08-07 Thread jackylk
[CARBONDATA-2750] Updated documentation on Local Dictionary Support Updated documentation on Local Dictionary Support. Changed the default scenario for local dictionary to false. This closes #2590 Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit:

[04/50] [abbrv] carbondata git commit: [HOTFIX] Fixed random test failure

2018-08-07 Thread jackylk
[HOTFIX] Fixed random test failure This closes #2553 Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/f5d3c17b Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/f5d3c17b Diff:

[02/50] [abbrv] carbondata git commit: [CARBONDATA-2753][Compatibility] Row count of page is calculated wrong for old store(V2 store)

2018-08-07 Thread jackylk
[CARBONDATA-2753][Compatibility] Row count of page is calculated wrong for old store (V2 store). Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/8d3e8b82 Tree:

[49/50] [abbrv] carbondata git commit: [CARBONDATA-2613] Support csv based carbon table

2018-08-07 Thread jackylk
[CARBONDATA-2613] Support csv based carbon table 1. create csv based carbon table using CREATE TABLE fact_table (col1 bigint, col2 string, ..., col100 string) STORED BY 'CarbonData' TBLPROPERTIES( 'foramt'='csv', 'csv.delimiter'=',', 'csv.header'='col1,col2,col100') 2. Load data to this
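The DDL quoted in this entry, laid out for readability. Note that 'foramt' appears verbatim in the commit message and is presumably a typo for 'format', and the elided columns (col3 through col99) are abbreviated here as in the original.

```sql
-- Sketch of the csv-based carbon table DDL quoted above.
-- 'format' is assumed to be the intended spelling of the commit's 'foramt'.
CREATE TABLE fact_table (
  col1 BIGINT,
  col2 STRING,
  -- ... col3 through col99 elided, as in the original ...
  col100 STRING
) STORED BY 'CarbonData'
TBLPROPERTIES (
  'format'='csv',
  'csv.delimiter'=',',
  'csv.header'='col1,col2,col100'
);
```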

[21/50] [abbrv] carbondata git commit: [CARBONDATA-2781] Added fix for Null Pointer Exception when create datamap killed from UI

2018-08-07 Thread jackylk
[CARBONDATA-2781] Added fix for Null Pointer Exception when create datamap killed from UI What was the issue? In undo meta, the datamap was not being dropped. In the case of a pre-aggregate table or timeseries table, the datamap was not being dropped from the schema as the undo meta method was not handling the

[44/50] [abbrv] carbondata git commit: [CARBONDATA-2809][DataMap] Block rebuilding for bloom/lucene and preagg datamap

2018-08-07 Thread jackylk
[CARBONDATA-2809][DataMap] Block rebuilding for bloom/lucene and preagg datamap As manual refresh currently only works fine for MV and has some bugs with other types of datamap such as preaggregate, timeseries, lucene and bloomfilter, we will block 'deferred rebuild' for them as well as block

[06/50] [abbrv] carbondata git commit: [CARBONDATA-2784][CARBONDATA-2786][SDK writer] Fixed:Forever blocking wait with more than 21 batch of data

2018-08-07 Thread jackylk
[CARBONDATA-2784][CARBONDATA-2786][SDK writer] Fixed: Forever blocking wait with more than 21 batches of data problem: [CARBONDATA-2784] [SDK writer] Forever blocking wait with more than 21 batches of data, when the consumer is dead due to a data loading exception (bad record / out of memory). root cause:

[30/50] [abbrv] carbondata git commit: [HOTFIX][PR 2575] Fixed modular plan creation only if valid datamaps are available

2018-08-07 Thread jackylk
[HOTFIX][PR 2575] Fixed modular plan creation only if valid datamaps are available The update query fails in a Spark 2.2 cluster when MV jars are available, because the catalogs are not empty if datamaps are created for other tables too, and isValidPlan() inside MVAnalyzerRule returns true. This

Build failed in Jenkins: carbondata-master-spark-2.2 » Apache CarbonData :: presto #866

2018-08-07 Thread Apache Jenkins Server
See -- [INFO] [INFO]

Jenkins build became unstable: carbondata-master-spark-2.2 » Apache CarbonData :: Spark Common Test #866

2018-08-07 Thread Apache Jenkins Server
See

carbondata git commit: [CARBONDATA-2539]Fix mv classcast exception issue

2018-08-07 Thread jackylk
Repository: carbondata Updated Branches: refs/heads/master 78438451b -> 3d7fa1276 [CARBONDATA-2539] Fix mv classcast exception issue A class cast exception happens during the min aggregate function. It is corrected in this PR. This closes #2602 Project:

carbondata git commit: [CARBONDATA-2585] Fix local dictionary for both table level and system level property based on priority

2018-08-07 Thread jackylk
Repository: carbondata Updated Branches: refs/heads/master f27efb3e3 -> 78438451b [CARBONDATA-2585] Fix local dictionary for both table level and system level property based on priority Added a system-level property for local dictionary support. The property 'carbon.local.dictionary.enable' can

Jenkins build is back to normal : carbondata-master-spark-2.1 #2776

2018-08-07 Thread Apache Jenkins Server
See

Build failed in Jenkins: carbondata-master-spark-2.2 #869

2018-08-07 Thread Apache Jenkins Server
See -- Started by an SCM change [EnvInject] - Loading node environment variables. Building remotely on H33 (ubuntu xenial) in workspace

Build failed in Jenkins: carbondata-master-spark-2.2 #868

2018-08-07 Thread Apache Jenkins Server
See -- Started by an SCM change [EnvInject] - Loading node environment variables. Building remotely on H33 (ubuntu xenial) in workspace

Jenkins build is back to normal : carbondata-master-spark-2.2 #865

2018-08-07 Thread Apache Jenkins Server
See

carbondata git commit: [CARBONDATA-2836]Fixed data loading performance issue

2018-08-07 Thread ravipesala
Repository: carbondata Updated Branches: refs/heads/master b9e510640 -> f27efb3e3 [CARBONDATA-2836] Fixed data loading performance issue Problem: Data loading is taking more time when the number of records is high (3.5 billion records). Root cause: In the case of the final merge, sort temp row conversion

Jenkins build is back to stable : carbondata-master-spark-2.1 » Apache CarbonData :: Spark Common Test #2774

2018-08-07 Thread Apache Jenkins Server
See

Jenkins build is back to normal : carbondata-master-spark-2.1 #2774

2018-08-07 Thread Apache Jenkins Server
See

carbondata git commit: [CARBONDATA-2823] Support streaming property with datamap

2018-08-07 Thread jackylk
Repository: carbondata Updated Branches: refs/heads/master abcd4f6e2 -> b9e510640 [CARBONDATA-2823] Support streaming property with datamap Since during query, CarbonData gets splits from streaming segments and columnar segments respectively, we can support streaming with index datamap. For

Build failed in Jenkins: carbondata-master-spark-2.1 #2775

2018-08-07 Thread Apache Jenkins Server
See -- Started by an SCM change [EnvInject] - Loading node environment variables. Building remotely on H27 (ubuntu xenial) in workspace

carbondata git commit: [CARBONDATA-2809][DataMap] Block rebuilding for bloom/lucene and preagg datamap

2018-08-07 Thread jackylk
Repository: carbondata Updated Branches: refs/heads/master b702a1b01 -> abcd4f6e2 [CARBONDATA-2809][DataMap] Block rebuilding for bloom/lucene and preagg datamap As manual refresh currently only works fine for MV, it has some bugs with other types of datamap such as preaggregate, timeseries,

Jenkins build is still unstable: carbondata-master-spark-2.1 #2772

2018-08-07 Thread Apache Jenkins Server
See

Jenkins build is still unstable: carbondata-master-spark-2.1 » Apache CarbonData :: Spark Common Test #2772

2018-08-07 Thread Apache Jenkins Server
See

Jenkins build is back to stable : carbondata-master-spark-2.1 » Apache CarbonData :: Processing #2772

2018-08-07 Thread Apache Jenkins Server
See

Build failed in Jenkins: carbondata-master-spark-2.2 #864

2018-08-07 Thread Apache Jenkins Server
See -- Started by an SCM change [EnvInject] - Loading node environment variables. Building remotely on H33 (ubuntu xenial) in workspace

Build failed in Jenkins: carbondata-master-spark-2.1 #2773

2018-08-07 Thread Apache Jenkins Server
See -- Started by an SCM change [EnvInject] - Loading node environment variables. Building remotely on H27 (ubuntu xenial) in workspace

carbondata git commit: [CARBONDATA-2829][CARBONDATA-2832] Fix creating merge index on older V1 V2 store

2018-08-07 Thread manishgupta88
Repository: carbondata Updated Branches: refs/heads/master 40571b846 -> b702a1b01 [CARBONDATA-2829][CARBONDATA-2832] Fix creating merge index on older V1 V2 store Block merge index creation for the old store V1 V2 versions This closes #2608 Project:

Build failed in Jenkins: carbondata-master-spark-2.2 #863

2018-08-07 Thread Apache Jenkins Server
See -- Started by timer [EnvInject] - Loading node environment variables. Building remotely on H33 (ubuntu xenial) in workspace