[GitHub] carbondata issue #1920: [CARBONDATA-2110] Remove tempCsv option in test case...

2018-03-04 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1920
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3772/



---


[GitHub] carbondata issue #2030: Merging datamap branch into master

2018-03-04 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/2030
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3771/



---


[GitHub] carbondata issue #1857: [CARBONDATA-2073][CARBONDATA-1516][Tests] Add test c...

2018-03-04 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1857
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3770/



---


[GitHub] carbondata issue #1929: [CARBONDATA-2129][CARBONDATA-2094][CARBONDATA-1516] ...

2018-03-04 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1929
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3769/



---


[GitHub] carbondata issue #1939: [CARBONDATA-2139] Optimize CTAS documentation and te...

2018-03-04 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1939
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3768/



---


[GitHub] carbondata issue #1920: [CARBONDATA-2110] Remove tempCsv option in test case...

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1920
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4074/



---


[GitHub] carbondata issue #1920: [CARBONDATA-2110] Remove tempCsv option in test case...

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1920
  
Build Success with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2828/



---


[GitHub] carbondata issue #1990: [CARBONDATA-2195] Add new test case for partition fe...

2018-03-04 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1990
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3767/



---


[GitHub] carbondata issue #2030: Merging datamap branch into master

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2030
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4073/



---


[GitHub] carbondata issue #2030: Merging datamap branch into master

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2030
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2827/



---


[GitHub] carbondata pull request #2030: Merging datamap branch into master

2018-03-04 Thread jackylk
GitHub user jackylk opened a pull request:

https://github.com/apache/carbondata/pull/2030

Merging datamap branch into master

This PR includes all commits from the datamap branch:
[CARBONDATA-2172][Lucene] Add text_columns property for Lucene DataMap (5 
minutes ago) 
[REBASE] Fix style after rebasing master (12 hours ago) 
[CARBONDATA-2216][Test] Fix bugs in sdv tests (12 hours ago) 
[CARBONDATA-2213][DataMap] Fixed wrong version for module datamap-example 
(12 hours ago) 
[CARBONDATA-2206] Fixed lucene datamap evaluation issue in executor (12 
hours ago) 
[CARBONDATA-2206] support lucene index datamap (12 hours ago) 
[HOTFIX] Fix timestamp issue in TestSortColumnsWithUnsafe (12 hours ago) 

[HOTFIX] Add Java doc for datamap interface (12 hours ago) 
[CARBONDATA-2189] Add DataMapProvider developer interface (12 hours ago) 

[CARBONDATA-1543] Supported DataMap chooser and expression for supporting 
multiple datamaps in single query (13 hours ago) 
[REBASE] Solve conflict after merging master (14 hours ago) 
[CARBONDATA-1114][Tests] Fix bugs in tests in windows env (14 hours ago) 

[CARBONDATA-2091][DataLoad] Support specifying sort column bounds in data 
loading (14 hours ago) 
[CARBONDATA-2186] Add InterfaceAudience.Internal to annotate internal 
interface (14 hours ago) 
[HOTFIX] Support generating assembling JAR for store-sdk module (14 hours 
ago) 
[CARBONDATA-2023][DataLoad] Add size base block allocation in data loading 
(14 hours ago) 
[CARBONDATA-2018][DataLoad] Optimization in reading/writing for sort temp 
row (14 hours ago) 
[CARBONDATA-2159] Remove carbon-spark dependency in store-sdk module (14 
hours ago) 
[CARBONDATA-1997] Add CarbonWriter SDK API (14 hours ago) 
[CARBONDATA-2156] Add interface annotation (14 hours ago) 
[REBASE] resolve conflict after rebasing to master (14 hours ago) 
[REBASE] Solve conflict after rebasing master (14 hours ago) 
[HotFix][CheckStyle] Fix import related checkstyle (14 hours ago) 

[CARBONDATA-1544][Datamap] Datamap FineGrain implementation (14 hours ago) 

[CARBONDATA-1480]Min Max Index Example for DataMap (14 hours ago) 
[CARBONDATA-2080] [S3-Implementation] Propagated hadoopConf from driver to 
executor for s3 implementation in clustermode. (14 hours ago) 
[CARBONDATA-2025] Unify all path construction through CarbonTablePath 
static method (14 hours ago) 
[CARBONDATA-2099] Refactor query scan process to improve readability (14 
hours ago) 
[REBASE] Solve conflict after rebasing master (14 hours ago) 
[CARBONDATA-1827] S3 Carbon Implementation (14 hours ago) 
[CARBONDATA-1968] Add external table support (14 hours ago) 
[CARBONDATA-1992] Remove partitionId in CarbonTablePath (14 hours ago) 


 - [X] Any interfaces changed?
 
 - [X] Any backward compatibility impacted?
 
 - [X] Document update required?

 - [X] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [X] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jackylk/incubator-carbondata datamap-rebase1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2030.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2030


commit 9086a1b9f2cd6cf1d4d42290a4e3678b01472714
Author: SangeetaGulia 
Date:   2017-09-21T09:26:26Z

[CARBONDATA-1827] S3 Carbon Implementation

1.Provide support for s3 in carbondata.
2.Added S3Example to create carbon table on s3.
3.Added S3CSVExample to load carbon table using csv from s3.

This closes #1805

commit 0c75ab7359ad89a16f749e84bd42416523d5255a
Author: Jacky Li 
Date:   2018-01-02T15:46:14Z

[CARBONDATA-1968] Add external table support

This PR adds support for creating external table with existing carbondata 
files, using Hive syntax.
CREATE EXTERNAL TABLE tableName STORED BY 'carbondata' LOCATION 'path'

This closes #1749
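
For illustration, a minimal sketch of the Hive-style syntax described above, assuming a
CarbonSession named carbon (as in the streaming script later in this archive); the table
name and location are hypothetical, and the directory is expected to already contain
carbondata files:

    // Hypothetical names: register existing carbondata files as an external table
    carbon.sql(
      "CREATE EXTERNAL TABLE ext_sales STORED BY 'carbondata' " +
        "LOCATION '/user/warehouse/existing_carbon_files'")
    // Query it like any other table
    carbon.sql("SELECT count(*) FROM ext_sales").show()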

commit 5663e916fe906675ce8efa320de1ed550315dc00
Author: Jacky Li 
Date:   2018-01-06T12:28:44Z

[CARBONDATA-1992] Remove partitionId in CarbonTablePath

In CarbonTablePath, there is a deprecated partition id which is always 0, 
it should be removed to avoid confusion.

This closes #1765

commit bd40a0d73d2a7086caaa6773a2c6a1a45e24334c
Author: Jacky Li 
Date:   2018-01-31T

[GitHub] carbondata issue #1857: [CARBONDATA-2073][CARBONDATA-1516][Tests] Add test c...

2018-03-04 Thread xubo245
Github user xubo245 commented on the issue:

https://github.com/apache/carbondata/pull/1857
  
retest sdv please


---


[GitHub] carbondata issue #1939: [CARBONDATA-2139] Optimize CTAS documentation and te...

2018-03-04 Thread xubo245
Github user xubo245 commented on the issue:

https://github.com/apache/carbondata/pull/1939
  
retest sdv please


---


[GitHub] carbondata issue #1929: [CARBONDATA-2129][CARBONDATA-2094][CARBONDATA-1516] ...

2018-03-04 Thread xubo245
Github user xubo245 commented on the issue:

https://github.com/apache/carbondata/pull/1929
  
retest sdv please


---


[GitHub] carbondata issue #1990: [CARBONDATA-2195] Add new test case for partition fe...

2018-03-04 Thread xubo245
Github user xubo245 commented on the issue:

https://github.com/apache/carbondata/pull/1990
  
retest sdv please


---


[GitHub] carbondata issue #2019: [CARBONDATA-2172][Lucene] Add text_columns property ...

2018-03-04 Thread jackylk
Github user jackylk commented on the issue:

https://github.com/apache/carbondata/pull/2019
  
LGTM


---


[GitHub] carbondata issue #1571: [CARBONDATA-1811] Use StructType as schema when crea...

2018-03-04 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1571
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3766/



---


[GitHub] carbondata issue #1772: [CARBONDATA-1995] Unify all writer steps and make te...

2018-03-04 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1772
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3765/



---


[GitHub] carbondata issue #1772: [CARBONDATA-1995] Unify all writer steps and make te...

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1772
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4071/



---


[GitHub] carbondata issue #1772: [CARBONDATA-1995] Unify all writer steps and make te...

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1772
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2825/



---


[GitHub] carbondata issue #1798: [CARBONDATA-1995][CARBONDATA-1996] Support file leve...

2018-03-04 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1798
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3764/



---


[GitHub] carbondata issue #1825: [CARBONDATA-2032][DataLoad] directly write carbon da...

2018-03-04 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1825
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3763/



---


[GitHub] carbondata issue #1571: [CARBONDATA-1811] Use StructType as schema when crea...

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1571
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4072/



---


[GitHub] carbondata issue #1798: [CARBONDATA-1995][CARBONDATA-1996] Support file leve...

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1798
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4070/



---


[GitHub] carbondata issue #1571: [CARBONDATA-1811] Use StructType as schema when crea...

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1571
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2826/



---


[GitHub] carbondata issue #1825: [CARBONDATA-2032][DataLoad] directly write carbon da...

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1825
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4069/



---


[GitHub] carbondata issue #1973: [CARBONDATA-2163][CARBONDATA-2164] Remove spark depe...

2018-03-04 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1973
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3762/



---


[GitHub] carbondata issue #1798: [CARBONDATA-1995][CARBONDATA-1996] Support file leve...

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1798
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2824/



---


[GitHub] carbondata issue #1825: [CARBONDATA-2032][DataLoad] directly write carbon da...

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1825
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2823/



---


[jira] [Updated] (CARBONDATA-2139) Optimize CTAS documentation and test case

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2139:

Fix Version/s: 1.4.0

> Optimize CTAS documentation and test case
> -
>
> Key: CARBONDATA-2139
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2139
> Project: CarbonData
>  Issue Type: Improvement
>  Components: docs, test
>Affects Versions: 1.3.1
>Reporter: xubo245
>Assignee: xubo245
>Priority: Trivial
> Fix For: 1.4.0
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> Optimize CTAS:
> * optimize documentation 
> * add test case
> * drop the table after finishing the test case and remove the table's files 
> from disk
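
For illustration, a minimal CTAS sketch with hypothetical table names, assuming a
CarbonSession named carbon as in the streaming example later in this archive; the exact
DDL is a sketch, not the test code from the PR:

    // Hypothetical names: create a source table, a CTAS copy, then clean up
    carbon.sql("CREATE TABLE ctas_src (id INT, name STRING) STORED BY 'carbondata'")
    carbon.sql("CREATE TABLE ctas_dst STORED BY 'carbondata' AS SELECT id, name FROM ctas_src")
    // Dropping the tables at the end of the test removes their files from disk
    carbon.sql("DROP TABLE IF EXISTS ctas_dst")
    carbon.sql("DROP TABLE IF EXISTS ctas_src")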



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2139) Optimize CTAS documentation and test case

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2139:

Fix Version/s: (was: 1.3.1)

> Optimize CTAS documentation and test case
> -
>
> Key: CARBONDATA-2139
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2139
> Project: CarbonData
>  Issue Type: Improvement
>  Components: docs, test
>Affects Versions: 1.3.1
>Reporter: xubo245
>Assignee: xubo245
>Priority: Trivial
> Fix For: 1.4.0
>
>  Time Spent: 5.5h
>  Remaining Estimate: 0h
>
> Optimize CTAS:
> * optimize documentation 
> * add test case
> * drop the table after finishing the test case and remove the table's files 
> from disk



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2221) Drop table should throw exception when metastore operation failed

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2221:

Fix Version/s: (was: 1.3.1)
   1.4.0

> Drop table should throw exception when metastore operation failed
> -
>
> Key: CARBONDATA-2221
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2221
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Jacky Li
>Priority: Major
> Fix For: 1.4.0
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2169) Conflicting classes cause NoSuchMethodError, when our project using org.apache.carbondata:carbondata-hive:1.3.0

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2169:

Fix Version/s: (was: 1.3.1)
   1.4.0

> Conflicting classes cause NoSuchMethodError, when our project using 
> org.apache.carbondata:carbondata-hive:1.3.0
> ---
>
> Key: CARBONDATA-2169
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2169
> Project: CarbonData
>  Issue Type: Bug
>  Components: hive-integration
>Affects Versions: 1.3.0
>Reporter: PandaMonkey
>Priority: Major
> Fix For: 1.4.0
>
> Attachments: carbondata conflicts.txt
>
>
> Hi, when using org.apache.carbondata:carbondata-hive:1.3.0, we got a 
> *NoSuchMethodError*. By analyzing the source code, we found the root cause 
> is conflicting classes in different JARs: duplicate classes exist in 
> different JARs but have different features, so the classes actually loaded 
> are not the ones our project requires. (The JVM only loads the classes that 
> appear first on the classpath and shadows the other duplicates with the same 
> name.) Such conflicting problems exist in several JAR pairs that 
> carbondata-hive:1.3.0 depends on. The detailed conflicting info is listed in 
> the attachment.
> Conflicting Jar-pairs: see the attached carbondata conflicts.txt for the 
> full list of conflicting JAR pairs.


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2130) Find some Spelling error in CarbonData

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2130:

Fix Version/s: (was: 1.3.1)
   1.4.0

> Find some Spelling error in CarbonData
> --
>
> Key: CARBONDATA-2130
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2130
> Project: CarbonData
>  Issue Type: Bug
>  Components: core
>Affects Versions: 1.3.1
>Reporter: xubo245
>Assignee: xubo245
>Priority: Minor
> Fix For: 1.4.0
>
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Find some Spelling error in CarbonData:
> like:
> realtion
> cloumn



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2159) Remove carbon-spark dependency for sdk module

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2159:

Fix Version/s: (was: 1.3.1)
   1.4.0

> Remove carbon-spark dependency for sdk module
> -
>
> Key: CARBONDATA-2159
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2159
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Jacky Li
>Priority: Major
> Fix For: 1.4.0
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> store-sdk module should not depend on carbon-spark module



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2162) Remove spark dependency in carbon-core and carbon-processing module

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2162:

Fix Version/s: (was: 1.3.1)
   1.4.0

> Remove spark dependency in carbon-core and carbon-processing module
> ---
>
> Key: CARBONDATA-2162
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2162
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Jacky Li
>Priority: Major
> Fix For: 1.4.0
>
>
> The assembly JAR of the store-sdk module should be small, but currently it 
> includes the Spark JARs because the carbon-core, carbon-processing and 
> carbon-hadoop modules depend on Spark.
> This dependency should be removed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata issue #1973: [CARBONDATA-2163][CARBONDATA-2164] Remove spark depe...

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1973
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2822/



---


[GitHub] carbondata issue #1973: [CARBONDATA-2163][CARBONDATA-2164] Remove spark depe...

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1973
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4068/



---


[GitHub] carbondata issue #2029: [CARBONDATA-2222] Update the FAQ doc for some mistak...

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2029
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4066/



---


[GitHub] carbondata issue #1995: [WIP] File Format Reader

2018-03-04 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/1995
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3761/



---


[GitHub] carbondata issue #2029: [CARBONDATA-2222] Update the FAQ doc for some mistak...

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/2029
  
Build Success with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2820/



---


[GitHub] carbondata issue #2029: [CARBONDATA-2222] Update the FAQ doc for some mistak...

2018-03-04 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/2029
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3760/



---


[GitHub] carbondata issue #1995: [WIP] File Format Reader

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1995
  
Build Failed with Spark 2.2.1, Please check CI 
http://88.99.58.216:8080/job/ApacheCarbonPRBuilder/2821/



---


[GitHub] carbondata issue #1995: [WIP] File Format Reader

2018-03-04 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1995
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder1/4067/



---


[GitHub] carbondata pull request #2029: [CARBONDATA-2222] Update the FAQ doc for some...

2018-03-04 Thread chenerlu
GitHub user chenerlu opened a pull request:

https://github.com/apache/carbondata/pull/2029

[CARBONDATA-2222] Update the FAQ doc for some mistakes

Be sure to do all of the following checklist to help us incorporate 
your contribution quickly and easily:

 - [No] Any interfaces changed?
 
 - [No] Any backward compatibility impacted?
 
 - [Yes ] Document update required?

 - [NA ] Testing done
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
- How it is tested? Please attach test report.
- Is it a performance related change? Please attach the performance 
test report.
- Any additional information to help reviewers in testing this 
change.
   
 - [ NA] For large changes, please consider breaking it into sub-tasks 
under an umbrella JIRA. 



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chenerlu/incubator-carbondata updatedoc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/2029.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #2029


commit e10ed2c0d96070112bb5701ba85b896fb6fe1f18
Author: chenerlu 
Date:   2018-03-04T15:39:40Z

update the FAQ doc for some mistakes




---


[jira] [Created] (CARBONDATA-2222) Update the FAQ doc for some mistakes

2018-03-04 Thread chenerlu (JIRA)
chenerlu created CARBONDATA-2222:


 Summary: Update the FAQ doc for some mistakes
 Key: CARBONDATA-2222
 URL: https://issues.apache.org/jira/browse/CARBONDATA-2222
 Project: CarbonData
  Issue Type: Bug
Reporter: chenerlu
Assignee: chenerlu






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata pull request #2027: [WIP] Add more queries in CompareTest

2018-03-04 Thread jackylk
Github user jackylk closed the pull request at:

https://github.com/apache/carbondata/pull/2027


---


[jira] [Updated] (CARBONDATA-2131) Alter table adding long datatype is failing but Create table with long type is successful, in Spark 2.1

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2131:

Fix Version/s: 1.3.1

> Alter table adding long datatype is failing but Create table with long type 
> is successful, in Spark 2.1
> ---
>
> Key: CARBONDATA-2131
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2131
> Project: CarbonData
>  Issue Type: Bug
>Reporter: dhatchayani
>Assignee: dhatchayani
>Priority: Minor
> Fix For: 1.3.0, 1.3.1
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> create table test4(a1 int) stored by 'carbondata';
>  +-+--+
>  | Result  |
>  +-+--+
>  +-+--+
>  No rows selected (1.757 seconds)
> **
>  
> *alter table test4 add columns (a6 long);*
>  *Error: java.lang.RuntimeException*:
>  BaseSqlParser == Parse1 ==
>  
> Operation not allowed: alter table add columns(line 1, pos 0)
>  
> == SQL ==
>  alter table test4 add columns (a6 long)
>  ^^^
>  
> == Parse2 ==
>  [1.35] failure: identifier matching regex (?i)VARCHAR expected
>  
> alter table test4 add columns (a6 long)
>    ^;
>  CarbonSqlParser [1.35] failure: identifier matching regex (?i)VARCHAR 
> expected
>  
> alter table test4 add columns (a6 long)
>    ^ (state=,code=0)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2119) CarbonDataWriterException thrown when loading using global_sort

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2119?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2119:

Fix Version/s: 1.3.1

> CarbonDataWriterException thrown when loading using global_sort
> ---
>
> Key: CARBONDATA-2119
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2119
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Kunal Kapoor
>Assignee: Kunal Kapoor
>Priority: Major
> Fix For: 1.3.0, 1.3.1
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> CREATE TABLE uniqdata_globalsort1 (CUST_ID int,CUST_NAME 
> String,ACTIVE_EMUI_VERSION string, DOB timestamp, DOJ timestamp, 
> BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 bigint,DECIMAL_COLUMN1 decimal(30,10), 
> DECIMAL_COLUMN2 decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 
> double,INTEGER_COLUMN1 int) STORED BY 'carbondata' 
> TBLPROPERTIES('SORT_SCOPE'='GLOBAL_SORT')
> LOAD DATA INPATH 'hdfs://hacluster/chetan/2000_UniqData.csv' into table 
> uniqdata_globalsort1 OPTIONS('DELIMITER'=',' , 
> 'QUOTECHAR'='"','BAD_RECORDS_ACTION'='FORCE','FILEHEADER'='CUST_ID,CUST_NAME,ACTIVE_EMUI_VERSION,DOB,DOJ,BIGINT_COLUMN1,BIGINT_COLUMN2,DECIMAL_COLUMN1,DECIMAL_COLUMN2,Double_COLUMN1,Double_COLUMN2,INTEGER_COLUMN1');
>  
> *EXCEPTION*
> There is an unexpected error: unable to generate the mdkey 
> org.apache.carbondata.spark.load.DataLoadProcessorStepOnSpark$.writeFunc(DataLoadProcessorStepOnSpark.scala:222)org.apache.carbondata.spark.load.DataLoadProcessBuilderOnSpark$$anonfun$loadDataUsingGlobalSort$1.apply(DataLoadProcessBuilderOnSpark.scala:136)
>  
> org.apache.carbondata.spark.load.DataLoadProcessBuilderOnSpark$$anonfun$loadDataUsingGlobalSort$1.apply(DataLoadProcessBuilderOnSpark.scala:135)
>  
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) 
> org.apache.spark.scheduler.Task.run(Task.scala:99) 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)   
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> java.lang.Thread.run(Thread.java:748)
> at org.apache.spark.TaskContextImpl.invokeListeners(TaskContextImpl.scala:138)
> at org.apache.spark.TaskContextImpl.markTaskFailed(TaskContextImpl.scala:106)
> at org.apache.spark.scheduler.Task.run(Task.scala:104)
> ... 4 more
> Caused by: 
> org.apache.carbondata.processing.loading.exception.CarbonDataLoadingException:
>  unable to generate the mdkey
> at 
> org.apache.carbondata.processing.loading.steps.DataWriterProcessorStepImpl.processRow(DataWriterProcessorStepImpl.java:189)
> at 
> org.apache.carbondata.spark.load.DataLoadProcessorStepOnSpark$.writeFunc(DataLoadProcessorStepOnSpark.scala:207)
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (CARBONDATA-2143) Fixed query memory leak issue for task failure during initialization of record reader

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala resolved CARBONDATA-2143.
-
   Resolution: Fixed
Fix Version/s: 1.3.1

> Fixed query memory leak issue for task failure during initialization of 
> record reader
> -
>
> Key: CARBONDATA-2143
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2143
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Manish Gupta
>Assignee: Manish Gupta
>Priority: Major
> Fix For: 1.3.1
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> *Problem:*
>  Whenever a query is executed, the record reader is initialized in the 
> internalCompute method of the CarbonScanRdd class. A task completion listener 
> is attached to each task after the record reader is initialized.
>  During record reader initialization, queryResultIterator is initialized and 
> one blocklet is processed. The processed blocklet uses available unsafe 
> memory.
>  Let's say there are 100 columns and 80 columns get space, but there is no 
> space left in unsafe memory for the remaining columns. This results in a 
> memory exception, record reader initialization fails, and the query fails.
>  In this case the unsafe memory allocated for the 80 columns is never freed 
> and remains occupied for as long as the JVM process persists.
> *Impact*
>  This is a memory leak in the system and can lead to failures for queries 
> executed after one query fails for the above reason.
> *Exception Trace*
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
> java.lang.RuntimeException: java.lang.RuntimeException: 
> org.apache.carbondata.core.memory.MemoryException: Not enough memory
>    at 
> org.apache.carbondata.core.scan.processor.AbstractDataBlockIterator.updateScanner(AbstractDataBlockIterator.java:136)
>    at 
> org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.next(DataBlockIteratorImpl.java:50)
>    at 
> org.apache.carbondata.core.scan.processor.impl.DataBlockIteratorImpl.next(DataBlockIteratorImpl.java:32)
>    at 
> org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator.getBatchResult(DetailQueryResultIterator.java:49)
>    at 
> org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator.next(DetailQueryResultIterator.java:41)
>    at 
> org.apache.carbondata.core.scan.result.iterator.DetailQueryResultIterator.next(DetailQueryResultIterator.java:31)
>    at 
> org.apache.carbondata.core.scan.result.iterator.ChunkRowIterator.(ChunkRowIterator.java:41)
>    at 
> org.apache.carbondata.hadoop.CarbonRecordReader.initialize(CarbonRecordReader.java:84)
>    at 
> org.apache.carbondata.spark.rdd.CarbonScanRDD.internalCompute(CarbonScanRDD.scala:378)
>    at 
> org.apache.carbondata.spark.rdd.CarbonRDD.compute(CarbonRDD.scala:60)
>    at 
> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
>    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
>    at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>    at 
> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
>    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
>    at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>    at 
> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
>    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
>    at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>    at 
> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
>    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
>    at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>    at 
> org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
>    at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
>    at 
> org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>    at org.apache.spark.scheduler.Task.run(Task.scala:99)
>    at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
>    at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>    at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (CARBONDATA-2134) Prevent implicit column filter list from getting serialized while submitting task to executor

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala resolved CARBONDATA-2134.
-
   Resolution: Fixed
Fix Version/s: 1.3.1

> Prevent implicit column filter list from getting serialized while submitting 
> task to executor
> -
>
> Key: CARBONDATA-2134
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2134
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Manish Gupta
>Assignee: Manish Gupta
>Priority: Major
> Fix For: 1.3.1
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> **Problem**
> In the current store, blocklet pruning happens in the driver and no further 
> pruning takes place on the executor side, yet the implicit column filter 
> list is still sent to the executor. As the size of the list grows, the cost 
> of serializing and deserializing it increases, which can impact query 
> performance.
> **Solution**
> Remove the list from the filter expression before submitting the task to the 
> executor.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (CARBONDATA-2137) Delete query is taking more time while processing the carbondata.

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala resolved CARBONDATA-2137.
-
   Resolution: Fixed
Fix Version/s: 1.3.1

> Delete query is taking more time while processing the carbondata.
> -
>
> Key: CARBONDATA-2137
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2137
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Rahul Kumar
>Assignee: Rahul Kumar
>Priority: Major
> Fix For: 1.3.1
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> *Expected Output* : Delete query should take less time
> *Actual Output* : Delete Query is taking 20min
> *Following the steps to reproduce :* 
>  * create table and load 500 million records
>  * create hive table with 10% of data
>  * delete the data in main-table using hive table
>  * check the performance
> *Following is the configuration used :*
>  * SPARK_EXECUTOR_MEMORY : 200G
>  * SPARK_DRIVER_MEMORY : 20G
>  * SPARK_EXECUTOR_CORES : 32
>  * SPARK_EXECUTOR_INSTANCEs : 3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2150) Unwanted updatetable status files are being generated for the delete operation where no records are deleted

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2150:

Fix Version/s: 1.3.1

> Unwanted updatetable status files are being generated for the delete 
> operation where no records are deleted
> ---
>
> Key: CARBONDATA-2150
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2150
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Akash R Nilugal
>Assignee: Akash R Nilugal
>Priority: Minor
> Fix For: 1.3.0, 1.3.1
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Unwanted updatetable status files are being generated for the delete 
> operation where no records are deleted



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2185) add InputMetrics for Streaming Reader

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2185:

Fix Version/s: 1.3.1

> add InputMetrics for Streaming Reader
> -
>
> Key: CARBONDATA-2185
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2185
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Babulal
>Priority: Minor
> Fix For: 1.3.1
>
> Attachments: image-2018-02-19-22-14-15-190.png
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Run a select query on a streaming table.
>  
> Result: the record count in InputMetrics is always 0.
>  
> !image-2018-02-19-22-14-15-190.png!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (CARBONDATA-2201) firing the LoadTablePreExecutionEvent before streaming causes NPE

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala resolved CARBONDATA-2201.
-
   Resolution: Fixed
Fix Version/s: 1.3.1

> firing the LoadTablePreExecutionEvent before streaming causes NPE
> -
>
> Key: CARBONDATA-2201
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2201
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Rahul Kumar
>Assignee: Rahul Kumar
>Priority: Major
> Fix For: 1.3.1
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2187) Restructure the partition folders as per the standard hive folders

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2187:

Fix Version/s: 1.4.0

> Restructure the partition folders as per the standard hive folders
> --
>
> Key: CARBONDATA-2187
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2187
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Ravindra Pesala
>Assignee: Ravindra Pesala
>Priority: Major
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 21h 20m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2168) Support global sort on partition tables

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2168:

Fix Version/s: 1.3.1
   1.4.0

> Support global sort on partition tables
> ---
>
> Key: CARBONDATA-2168
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2168
> Project: CarbonData
>  Issue Type: Improvement
>Affects Versions: 1.3.1
>Reporter: Ravindra Pesala
>Assignee: Ravindra Pesala
>Priority: Major
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> Currently, users cannot use global sort on standard Hive partitioned tables. 
> Support global sort on partitioned tables to get better resource utilization 
> while loading and better concurrent performance while querying.
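
For illustration, a hedged sketch of the intended usage, combining standard Hive
partitioning with the GLOBAL_SORT scope shown elsewhere in this archive; the table name,
columns and CSV path are hypothetical, and a CarbonSession named carbon is assumed:

    // Hypothetical schema: a Hive-partitioned carbon table with global sort scope
    carbon.sql(
      "CREATE TABLE sales_part (id INT, amount DOUBLE) " +
        "PARTITIONED BY (country STRING) " +
        "STORED BY 'carbondata' " +
        "TBLPROPERTIES ('SORT_SCOPE'='GLOBAL_SORT', 'SORT_COLUMNS'='id')")
    // Load data into the partitioned table; rows are globally sorted during load
    carbon.sql("LOAD DATA INPATH '/tmp/sales.csv' INTO TABLE sales_part")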



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2207) TestCase Fails using Hive Metastore

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2207:

Fix Version/s: 1.3.1

> TestCase Fails using Hive Metastore
> ---
>
> Key: CARBONDATA-2207
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2207
> Project: CarbonData
>  Issue Type: Bug
>Affects Versions: 1.4.0
>Reporter: Jatin
>Assignee: Jatin
>Priority: Major
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Ran all the Carbon test cases using the Hive metastore; some test cases were 
> failing because the carbon table could not be retrieved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2219) Add validation for external partition location to use same schema

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2219:

Fix Version/s: 1.4.0

> Add validation for external partition location to use same schema
> -
>
> Key: CARBONDATA-2219
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2219
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Ravindra Pesala
>Assignee: Ravindra Pesala
>Priority: Major
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2209) Rename table with partitions not working issue and batch_sort and no_sort with partition table issue

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2209:

Fix Version/s: 1.4.0

> Rename table with partitions not working issue and batch_sort and no_sort 
> with partition table issue
> 
>
> Key: CARBONDATA-2209
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2209
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Ravindra Pesala
>Assignee: Ravindra Pesala
>Priority: Major
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> # After renaming a partitioned table, queries return empty data.
>  # Batch sort and no sort loading are not working on partition tables



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2138) Documentation for HEADER option

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2138?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2138:

Fix Version/s: 1.3.1

> Documentation for HEADER option
> ---
>
> Key: CARBONDATA-2138
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2138
> Project: CarbonData
>  Issue Type: Task
>Reporter: Gururaj Shetty
>Assignee: Gururaj Shetty
>Priority: Major
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Add documentation for HEADER option as per the discussion in the below 
> mailing list.
> http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/Discussion-Add-HEADER-option-to-load-data-sql-td17080.html#a17138



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2135) Documentation for Table Comment and Column Comment

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2135:

Fix Version/s: 1.3.1

> Documentation for Table Comment and Column Comment
> --
>
> Key: CARBONDATA-2135
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2135
> Project: CarbonData
>  Issue Type: Task
>Reporter: Gururaj Shetty
>Assignee: Gururaj Shetty
>Priority: Major
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Add documentation for Table Comment and Column Comment



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2214) Remove config 'spark.sql.hive.thriftServer.singleSession' from installation-guide.md

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2214:

Fix Version/s: 1.4.0

> Remove config 'spark.sql.hive.thriftServer.singleSession' from 
> installation-guide.md
> 
>
> Key: CARBONDATA-2214
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2214
> Project: CarbonData
>  Issue Type: Task
>  Components: docs
>Reporter: Zhichao  Zhang
>Assignee: Zhichao  Zhang
>Priority: Trivial
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Remove config 'spark.sql.hive.thriftServer.singleSession' from 
> installation-guide.md



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2215) Add the description of Carbon Stream Parser into streaming-guide.md

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2215:

Fix Version/s: 1.3.1
   1.4.0

> Add the description of Carbon Stream Parser into streaming-guide.md
> ---
>
> Key: CARBONDATA-2215
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2215
> Project: CarbonData
>  Issue Type: Task
>  Components: docs
>Reporter: Zhichao  Zhang
>Assignee: Zhichao  Zhang
>Priority: Trivial
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Add the description of Carbon Stream Parser into streaming-guide.md



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2161) Compacted Segment of Streaming Table should update "mergeTo" column

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2161:

Fix Version/s: 1.3.1
   1.4.0

> Compacted Segment of Streaming Table should update "mergeTo" column
> ---
>
> Key: CARBONDATA-2161
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2161
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Babulal
>Assignee: Babulal
>Priority: Minor
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> When handoff is triggered, the ROW file format is converted into COLUMNAR 
> and that segment's status is updated to "Compacted", but the "Merged To" 
> column is not updated.
> Actual:
> |SegmentSequenceId|Status   |Load Start Time        |Load End Time          |Merged To|File Format|
> |2                |Success  |2018-02-11 18:17:24.157|2018-02-11 18:17:25.899|NA       |COLUMNAR_V3|
> |1                |Streaming|2018-02-11 18:17:24.137|null                   |NA       |ROW_V1     |
> |0                |Compacted|2018-02-11 18:15:54.262|2018-02-11 18:17:24.137|NA       |ROW_V1     |
> Expected:
> |SegmentSequenceId|Status   |Load Start Time        |Load End Time          |Merged To|File Format|
> |2                |Success  |2018-02-11 18:17:24.157|2018-02-11 18:17:25.899|NA       |COLUMNAR_V3|
> |1                |Streaming|2018-02-11 18:17:24.137|null                   |NA       |ROW_V1     |
> |0                |Compacted|2018-02-11 18:15:54.262|2018-02-11 18:17:24.137|2        |ROW_V1     |



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (CARBONDATA-2147) Exception displays while loading data with streaming

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala resolved CARBONDATA-2147.
-
   Resolution: Fixed
Fix Version/s: 1.3.1
   1.4.0

> Exception displays while loading data with streaming
> 
>
> Key: CARBONDATA-2147
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2147
> Project: CarbonData
>  Issue Type: Bug
>  Components: data-load
>Affects Versions: 1.3.0
> Environment: spark 2.1, spark 2.2.1
>Reporter: Vandana Yadav
>Priority: Minor
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Exception displays while loading data with streaming
> Steps to reproduce:
> 1) start spark-shell:
> ./spark-shell --jars 
> /opt/spark/spark-2.2.1/carbonlib/carbondata_2.11-1.3.0-SNAPSHOT-shade-hadoop2.7.2.jar
> 2) Execute following script:
> import org.apache.spark.sql.SparkSession
> import org.apache.spark.sql.CarbonSession._
> import org.apache.carbondata.core.util.CarbonProperties
> import org.apache.spark.sql.streaming.\{ProcessingTime, StreamingQuery}
> val carbon = SparkSession.builder().config(sc.getConf) 
> .getOrCreateCarbonSession("hdfs://localhost:54310/newCarbonStore","/tmp")
> import org.apache.carbondata.core.constants.CarbonCommonConstants
> import org.apache.carbondata.core.util.CarbonProperties
> CarbonProperties.getInstance().addProperty(CarbonCommonConstants.CARBON_BAD_RECORDS_ACTION,
>  "FORCE")
> carbon.sql("drop table if exists uniqdata_stream")
> carbon.sql("create table uniqdata_stream(CUST_ID int,CUST_NAME String,DOB 
> timestamp,DOJ timestamp, BIGINT_COLUMN1 bigint,BIGINT_COLUMN2 
> bigint,DECIMAL_COLUMN1 decimal(30,10),DECIMAL_COLUMN2 
> decimal(36,10),Double_COLUMN1 double, Double_COLUMN2 double,INTEGER_COLUMN1 
> int) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES 
> ('TABLE_BLOCKSIZE'= '256 MB', 'streaming'='true')");
> import carbon.sqlContext.implicits._
> import org.apache.spark.sql.types._
> val uniqdataSch = StructType(
> Array(StructField("CUST_ID", IntegerType),StructField("CUST_NAME", 
> StringType),StructField("DOB", TimestampType), StructField("DOJ", 
> TimestampType), StructField("BIGINT_COLUMN1", LongType), 
> StructField("BIGINT_COLUMN2", LongType), StructField("DECIMAL_COLUMN1", 
> org.apache.spark.sql.types.DecimalType(30, 10)), 
> StructField("DECIMAL_COLUMN2", 
> org.apache.spark.sql.types.DecimalType(36,10)), StructField("Double_COLUMN1", 
> DoubleType), StructField("Double_COLUMN2", DoubleType), 
> StructField("INTEGER_COLUMN1", IntegerType)))
> val streamDf = carbon.readStream
> .schema(uniqdataSch)
> .option("sep", ",")
> .csv("file:///home/knoldus/Documents/uniqdata")
> val qry = streamDf.writeStream.format("carbondata").trigger(ProcessingTime("5 
> seconds"))
>  .option("checkpointLocation","/stream/uniq")
>  .option("dbName", "default")
>  .option("tableName", "uniqdata_stream")
>  .start()
>  
> 3) Error logs:
> warning: there was one deprecation warning; re-run with -deprecation for 
> details
> uniqdataSch: org.apache.spark.sql.types.StructType = 
> StructType(StructField(CUST_ID,IntegerType,true), 
> StructField(CUST_NAME,StringType,true), StructField(DOB,TimestampType,true), 
> StructField(DOJ,TimestampType,true), 
> StructField(BIGINT_COLUMN1,LongType,true), 
> StructField(BIGINT_COLUMN2,LongType,true), 
> StructField(DECIMAL_COLUMN1,DecimalType(30,10),true), 
> StructField(DECIMAL_COLUMN2,DecimalType(36,10),true), 
> StructField(Double_COLUMN1,DoubleType,true), 
> StructField(Double_COLUMN2,DoubleType,true), 
> StructField(INTEGER_COLUMN1,IntegerType,true))
> streamDf: org.apache.spark.sql.DataFrame = [CUST_ID: int, CUST_NAME: string 
> ... 9 more fields]
> qry: org.apache.spark.sql.streaming.StreamingQuery = 
> org.apache.spark.sql.execution.streaming.StreamingQueryWrapper@d0e155c
> scala> 18/02/08 16:38:53 ERROR StreamSegment: Executor task launch worker for 
> task 5 Failed to append batch data to stream segment: 
> hdfs://localhost:54310/newCarbonStore/default/uniqdata_stream1/Fact/Part0/Segment_0
> java.lang.NullPointerException
>  at org.apache.spark.sql.catalyst.InternalRow.getString(InternalRow.scala:32)
>  at 
> org.apache.carbondata.streaming.parser.CSVStreamParserImp.parserRow(CSVStreamParserImp.java:40)
>  at 
> org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$InputIterator.next(CarbonAppendableStreamSink.scala:337)
>  at 
> org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$InputIterator.next(CarbonAppendableStreamSink.scala:331)
>  at 
> org.apache.carbondata.streaming.segment.StreamSegment.appendBatchData(StreamSegment.java:244)
>  at 
> org.apache.spark.sql.execution.streaming.CarbonAppendableStreamSink$$anonfun$writeDataFileTask$1.apply$mcV$sp(Carbon

[jira] [Resolved] (CARBONDATA-2148) Use Row parser to replace current default parser:CSVStreamParserImp

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala resolved CARBONDATA-2148.
-
   Resolution: Fixed
Fix Version/s: 1.4.0

> Use Row parser to replace current default parser:CSVStreamParserImp
> ---
>
> Key: CARBONDATA-2148
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2148
> Project: CarbonData
>  Issue Type: Improvement
>  Components: data-load, spark-integration
>Affects Versions: 1.3.1
>Reporter: Zhichao  Zhang
>Assignee: Zhichao  Zhang
>Priority: Minor
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 10h
>  Remaining Estimate: 0h
>
> Currently the default value of 'carbon.stream.parser' is CSVStreamParserImp, 
> which transforms InternalRow(0) into Array[Object]; InternalRow(0) 
> represents the value of one line received from a socket. When data is 
> received from Kafka, the schema of the InternalRow changes, so either the 
> fields of the Kafka data Row must be assembled into a String and stored as 
> InternalRow(0), or a new parser must be defined to convert the Kafka data 
> Row to Array[Object]. The same work is needed for every table.
> *Solution:*
> Use a new parser called RowStreamParserImpl as the default parser instead of 
> CSVStreamParserImpl. This new parser automatically converts InternalRow to 
> Array[Object] according to the schema. In general, we transform source data 
> into a structured Row object, so we do not need to define a parser for every 
> table.
>  
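
For illustration, a hedged sketch of selecting a stream parser through the
'carbon.stream.parser' property named above. It reuses the streamDf from the
CARBONDATA-2147 script earlier in this archive; the fully qualified class name for
RowStreamParserImpl is an assumption based on the package of CSVStreamParserImp seen in
that stack trace, and the dbName/tableName/checkpoint values are hypothetical:

    // Write the stream with an explicit parser class instead of the default
    val qry = streamDf.writeStream
      .format("carbondata")
      .option("carbon.stream.parser",
        "org.apache.carbondata.streaming.parser.RowStreamParserImpl")
      .option("dbName", "default")
      .option("tableName", "uniqdata_stream")
      .option("checkpointLocation", "/stream/uniq")
      .start()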



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2208) Pre aggregate datamap creation is failing when count(*) present in query

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2208:

Fix Version/s: 1.4.0

> Pre aggregate datamap creation is failing when count(*) present in query
> 
>
> Key: CARBONDATA-2208
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2208
> Project: CarbonData
>  Issue Type: Bug
>Reporter: kumar vishal
>Assignee: kumar vishal
>Priority: Major
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Pre aggregate data map creation is failing with parsing error 
> create datamap agg on table maintable using 'preaggregate' as select name, 
> count(*) from maintable group by name
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2211) Alter Table Streaming DDL should blocking DDL like other DDL ( All DDL are blocking DDL)

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2211:

Fix Version/s: 1.4.0

> Alter Table Streaming DDL should blocking DDL like other DDL ( All DDL are 
> blocking DDL)
> 
>
> Key: CARBONDATA-2211
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2211
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Rahul Kumar
>Assignee: Rahul Kumar
>Priority: Major
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> The DDL returns output immediately, but the compaction takes more time.
> It should be blocking so that, if any issue occurs during the Alter 
> (compaction), the DDL result can be an error (exception) and the user can 
> check why the compaction failed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2055) Support integrating Streaming table with Spark Streaming

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2055:

Fix Version/s: 1.4.0

> Support integrating Streaming table with Spark Streaming
> 
>
> Key: CARBONDATA-2055
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2055
> Project: CarbonData
>  Issue Type: New Feature
>  Components: spark-integration
>Reporter: Zhichao  Zhang
>Assignee: Zhichao  Zhang
>Priority: Minor
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 15h 40m
>  Remaining Estimate: 0h
>
> Currently CarbonData only supports integration with Spark Structured 
> Streaming, which requires Kafka version >= 0.10. But there are still many 
> users integrating Spark Streaming with Kafka 0.8, and the cost of upgrading 
> Kafka is too high. So CarbonData needs to integrate with Spark Streaming too.
> Please see the discussion in mailing list:
> [http://apache-carbondata-dev-mailing-list-archive.1130556.n5.nabble.com/Should-CarbonData-need-to-integrate-with-Spark-Streaming-too-td35341.html]
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (CARBONDATA-2204) Access tablestatus file too many times during query

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala reassigned CARBONDATA-2204:
---

Assignee: Ravindra Pesala

> Access tablestatus file too many times during query
> ---
>
> Key: CARBONDATA-2204
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2204
> Project: CarbonData
>  Issue Type: Improvement
>  Components: data-query
>Affects Versions: 1.3.0
>Reporter: xuchuanyin
>Assignee: Ravindra Pesala
>Priority: Major
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> * Problems
> Currently in carbondata, a single query accesses the tablestatus file 7 
> times, which slows down query performance, especially when this file is in a 
> remote cluster, since reading this file is a purely client-side 
> operation.
>  
>  *  Steps to reproduce
> 1. Add logger in `AtomicFileOperationsImpl.openForRead` and printout the file 
> name to read.
> 2. Run a query on carbondata table. Here I ran 
> `TestLoadDataGeneral.test("test data loading CSV file without extension 
> name")`.
> 3. Observe the output log and search the keyword 'tablestatus'.  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2204) Access tablestatus file too many times during query

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2204:

Fix Version/s: 1.4.0

> Access tablestatus file too many times during query
> ---
>
> Key: CARBONDATA-2204
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2204
> Project: CarbonData
>  Issue Type: Improvement
>  Components: data-query
>Affects Versions: 1.3.0
>Reporter: xuchuanyin
>Priority: Major
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> * Problems
> Currently in carbondata, a single query accesses the tablestatus file 7 
> times, which slows down query performance, especially when this file is in a 
> remote cluster, since reading this file is a purely client-side 
> operation.
>  
>  *  Steps to reproduce
> 1. Add logger in `AtomicFileOperationsImpl.openForRead` and printout the file 
> name to read.
> 2. Run a query on carbondata table. Here I ran 
> `TestLoadDataGeneral.test("test data loading CSV file without extension 
> name")`.
> 3. Observe the output log and search the keyword 'tablestatus'.  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2196) during stream sometime carbontable is null in executor side

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2196:

Fix Version/s: 1.4.0

> during stream sometime carbontable is null in executor side
> ---
>
> Key: CARBONDATA-2196
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2196
> Project: CarbonData
>  Issue Type: Improvement
>Reporter: Rahul Kumar
>Assignee: Rahul Kumar
>Priority: Major
> Fix For: 1.4.0, 1.3.1
>
>  Time Spent: 5h 50m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (CARBONDATA-2098) Add documentation for pre-aggregate tables

2018-03-04 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-2098?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-2098:

Fix Version/s: 1.3.1

> Add documentation for pre-aggregate tables
> --
>
> Key: CARBONDATA-2098
> URL: https://issues.apache.org/jira/browse/CARBONDATA-2098
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Raghunandan
>Assignee: Raghunandan S
>Priority: Minor
> Fix For: 1.3.0, 1.3.1
>
>  Time Spent: 8h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] carbondata pull request #2028: [HOTFIX] Fixed sdv tests

2018-03-04 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/2028


---


[GitHub] carbondata issue #2028: [HOTFIX] Fixed sdv tests

2018-03-04 Thread jackylk
Github user jackylk commented on the issue:

https://github.com/apache/carbondata/pull/2028
  
LGTM


---


[GitHub] carbondata issue #2028: [HOTFIX] Fixed sdv tests

2018-03-04 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/2028
  
SDV Build Success , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3759/



---


[GitHub] carbondata issue #2028: [HOTFIX] Fixed sdv tests

2018-03-04 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/2028
  
retest sdv please


---


[GitHub] carbondata issue #2028: [HOTFIX] Fixed sdv tests

2018-03-04 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/2028
  
SDV Build Fail , Please check CI 
http://144.76.159.231:8080/job/ApacheSDVTests/3758/



---