[GitHub] carbondata issue #1103: [WIP] Implement range interval partition

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1103
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/723/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #1105: [WIP] Implement range interval partition

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1105
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/724/





[GitHub] carbondata issue #1112: [CARBONDATA-1244] Rewrite README.md of presto integr...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1112
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/722/





[GitHub] carbondata issue #1105: [WIP] Implement range interval partition

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1105
  
Build Failed with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2789/





[GitHub] carbondata issue #1103: [WIP] Implement range interval partition

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1103
  
Build Failed with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/209/





[GitHub] carbondata issue #1103: [WIP] Implement range interval partition

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1103
  
Build Failed with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2788/





[GitHub] carbondata issue #1002: [CARBONDATA-1136] Fix compaction bug for the partiti...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1002
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/721/





[GitHub] carbondata pull request #1002: [CARBONDATA-1136] Fix compaction bug for the ...

2017-06-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1002




[GitHub] carbondata issue #1002: [CARBONDATA-1136] Fix compaction bug for the partiti...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1002
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2787/





[GitHub] carbondata issue #1002: [CARBONDATA-1136] Fix compaction bug for the partiti...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1002
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/208/





[GitHub] carbondata issue #1002: [CARBONDATA-1136] Fix compaction bug for the partiti...

2017-06-28 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/1002
  
LGTM




[GitHub] carbondata pull request #1112: [CARBONDATA-1244] Rewrite README.md of presto...

2017-06-28 Thread HoneyQuery
Github user HoneyQuery commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1112#discussion_r124720966
  
--- Diff: 
integration/presto/src/main/java/org/apache/carbondata/presto/impl/CarbonTableReader.java
 ---
@@ -72,25 +72,54 @@
  * 2:FileFactory, (physic table file)
  * 3:CarbonCommonFactory, (offer some )
  * 4:DictionaryFactory, (parse dictionary util)
+ *
+ * Currently, it is mainly used to parse metadata of tables under
+ * the configured carbondata-store path and filter the relevant
+ * input splits with given query predicates.
  */
 public class CarbonTableReader {
 
   private CarbonTableConfig config;
+
+  /**
+   * The names of the tables under the schema (this.carbonFileList).
+   */
   private List tableList;
+
+  /**
+   * carbonFileList represents the store path of the schema, which is 
configured as carbondata-store
+   * in the CarbonData catalog file 
($PRESTO_HOME$/etc/catalog/carbondata.properties).
+   * Under the schema store path, there should be a directory named as the 
schema name.
+   * And under each schema directory, there are directories named as the 
table names.
+   * For example, the schema is named 'default' and there is two table 
named 'foo' and 'bar' in it, then the
--- End diff --

I have simplified this note as:
```
   carbonFileList represents the store path of the schema, which is 
configured as carbondata-store
   in the CarbonData catalog file 
($PRESTO_HOME$/etc/catalog/carbondata.properties).
```





[GitHub] carbondata issue #1002: [CARBONDATA-1136] Fix compaction bug for the partiti...

2017-06-28 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/1002
  
retest this please




[GitHub] carbondata pull request #1112: [CARBONDATA-1244] Rewrite README.md of presto...

2017-06-28 Thread HoneyQuery
Github user HoneyQuery commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1112#discussion_r124720145
  
--- Diff: integration/presto/README.md ---
@@ -59,28 +55,50 @@ Please follow the below steps to query carbondata in 
presto
   ```
 * config carbondata-connector for presto
   
-  First:compile carbondata-presto integration module
+  Firstly: Compile carbondata, including carbondata-presto integration 
module
   ```
   $ git clone https://github.com/apache/carbondata
-  $ cd carbondata/integration/presto
-  $ mvn clean package
+  $ cd carbondata
+  $ mvn -DskipTests -P{spark-version} 
-Dspark.version={spark-version-number} -Dhadoop.version={hadoop-version-number} 
clean package
+  ```
+  Replace the spark and hadoop version with you the version you used in 
your cluster.
--- End diff --

OK. The two 'you's have been removed.




[GitHub] carbondata pull request #1112: [CARBONDATA-1244] Rewrite README.md of presto...

2017-06-28 Thread HoneyQuery
Github user HoneyQuery commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1112#discussion_r124719922
  
--- Diff: integration/presto/README.md ---
@@ -59,28 +55,50 @@ Please follow the below steps to query carbondata in 
presto
   ```
 * config carbondata-connector for presto
   
-  First:compile carbondata-presto integration module
+  Firstly: Compile carbondata, including carbondata-presto integration 
module
   ```
   $ git clone https://github.com/apache/carbondata
-  $ cd carbondata/integration/presto
-  $ mvn clean package
+  $ cd carbondata
+  $ mvn -DskipTests -P{spark-version} 
-Dspark.version={spark-version-number} -Dhadoop.version={hadoop-version-number} 
clean package
+  ```
+  Replace the spark and hadoop version with you the version you used in 
your cluster.
+  For example, if you use Spark2.1.0 and Hadoop 2.7.3, you would like to 
compile using:
+  ```
+  mvn -DskipTests -Pspark-2.1 -Dspark.version=2.1.0 -Dhadoop.version=2.7.3 
clean package
+  ```
+
+  Secondly: Create a folder named 'carbondata' under $PRESTO_HOME$/plugin 
and
+  copy all jar from 
carbondata/integration/presto/target/carbondata-presto-x.x.x-SNAPSHOT
+to $PRESTO_HOME$/plugin/carbondata
+
+  Thirdly: Create a carbondata.properties file under 
$PRESTO_HOME$/etc/catalog/ containing the following contents:
   ```
-  Second:create one folder "carbondata" under ./presto-server-0.166/plugin
-  Third:copy all jar from 
./carbondata/integration/presto/target/carbondata-presto-x.x.x-SNAPSHOT
-to ./presto-server-0.166/plugin/carbondata
+  connector.name=carbondata
+  carbondata-store={schema-store-path}
+  ```
+  Replace the schema-store-path with the absolute path the directory which 
is the parent of the schema.
+  For example, if you have a schema named 'default' stored under 
hdfs://namenode:9000/test/carbondata/,
+  Then set carbondata-store=hdfs://namenode:9000/test/carbondata
+
+  If you changed the jar balls or configuration files, make sure you have 
dispatch the new jar balls
+  and configuration file to all the presto nodes and restart the nodes in 
the cluster. A modification of the
+  carbondata connector will not take an effect automatically.
   
 ### Generate CarbonData file
 
-Please refer to quick start : 
https://github.com/apache/carbondata/blob/master/docs/quick-start-guide.md
+Please refer to quick start: 
https://github.com/apache/carbondata/blob/master/docs/quick-start-guide.md
+Load data statement in Spark can be used to create carbondata tables. And 
you can easily find the creaed
--- End diff --

OK.




[GitHub] carbondata pull request #1112: [CARBONDATA-1244] Rewrite README.md of presto...

2017-06-28 Thread HoneyQuery
Github user HoneyQuery commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1112#discussion_r124719863
  
--- Diff: integration/presto/README.md ---
@@ -59,28 +55,50 @@ Please follow the below steps to query carbondata in 
presto
   ```
 * config carbondata-connector for presto
   
-  First:compile carbondata-presto integration module
+  Firstly: Compile carbondata, including carbondata-presto integration 
module
   ```
   $ git clone https://github.com/apache/carbondata
-  $ cd carbondata/integration/presto
-  $ mvn clean package
+  $ cd carbondata
+  $ mvn -DskipTests -P{spark-version} 
-Dspark.version={spark-version-number} -Dhadoop.version={hadoop-version-number} 
clean package
+  ```
+  Replace the spark and hadoop version with you the version you used in 
your cluster.
+  For example, if you use Spark2.1.0 and Hadoop 2.7.3, you would like to 
compile using:
+  ```
+  mvn -DskipTests -Pspark-2.1 -Dspark.version=2.1.0 -Dhadoop.version=2.7.3 
clean package
+  ```
+
+  Secondly: Create a folder named 'carbondata' under $PRESTO_HOME$/plugin 
and
+  copy all jar from 
carbondata/integration/presto/target/carbondata-presto-x.x.x-SNAPSHOT
--- End diff --

OK.




[jira] [Resolved] (CARBONDATA-1179) Improve the Object Size calculation for Objects added to LRU cache

2017-06-28 Thread Venkata Ramana G (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata Ramana G resolved CARBONDATA-1179.
--
   Resolution: Fixed
 Assignee: Raghunandan S
Fix Version/s: 1.2.0

> Improve the Object Size calculation for Objects added to LRU cache
> --
>
> Key: CARBONDATA-1179
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1179
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Raghunandan
>Assignee: Raghunandan S
>Priority: Minor
> Fix For: 1.2.0
>
>  Time Spent: 5h
>  Remaining Estimate: 0h
>
> Java's object model has a bigger overhead when loading objects into memory.
> The current way of estimating the object size by looking at the file size is
> not correct and gives wrong results. Moreover, due to this calculation, we are
> storing more than the configured size for the LRU cache.
> Improve the object size calculation by using the Spark SizeEstimator utility.
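As a rough illustration of why on-disk file size underestimates heap footprint (the actual fix uses Spark's `org.apache.spark.util.SizeEstimator.estimate`, which walks the object graph), here is a back-of-envelope sketch. The numbers are typical 64-bit HotSpot assumptions (12-byte object headers, 8-byte alignment), not measured values from CarbonData:

```java
// Illustrative only: why file size underestimates in-memory object size.
public class HeapOverheadSketch {
  // Rough shallow size of a boxed Integer: 12-byte header + 4-byte int,
  // padded up to an 8-byte boundary = 16 bytes, vs 4 bytes in a data file.
  static long shallowBoxedIntBytes() {
    long header = 12, payload = 4, align = 8;
    long raw = header + payload;
    return ((raw + align - 1) / align) * align;  // round up to alignment
  }

  public static void main(String[] args) {
    long onDisk = 4;                      // a 32-bit value in a column file
    long onHeap = shallowBoxedIntBytes(); // the same value boxed on the heap
    System.out.println(onHeap / onDisk);  // prints 4: 4x before references
  }
}
```

This is why sizing cache entries by file size lets the LRU cache hold far more heap than configured; a graph-walking estimator accounts for headers, padding, and references.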



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata pull request #1038: [CARBONDATA-1179] Improve the Size calculatio...

2017-06-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/1038




[GitHub] carbondata issue #1038: [CARBONDATA-1179] Improve the Size calculation of Ob...

2017-06-28 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/1038
  
LGTM




[GitHub] carbondata issue #1113: [CARBONDATA-1246] fix null pointer exception by chan...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1113
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/720/





[GitHub] carbondata issue #1113: [CARBONDATA-1246] fix null pointer exception by chan...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1113
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/719/

Failed Tests: 4 (carbondata-pr-spark-1.6 / org.apache.carbondata:carbondata-core):
  org.apache.carbondata.core.datastore.filesystem.AlluxioCarbonFileTest.testListFilesWithOutDirectoryPermission
  org.apache.carbondata.core.datastore.filesystem.HDFSCarbonFileTest.testListFilesWithOutDirectoryPermission
  org.apache.carbondata.core.datastore.filesystem.LocalCarbonFileTest.testListFilesWithOutDirPermission
  org.apache.carbondata.core.datastore.filesystem.ViewFsCarbonFileTest.testListFilesWithOutDirectoryPermission





[GitHub] carbondata issue #1108: [CARBONDATA-1242] performance issue resolved

2017-06-28 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/1108
  
@rahulforallp Please give new performance timings after the fix.




[GitHub] carbondata issue #1113: [CARBONDATA-1246] fix null pointer exception by chan...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1113
  
Can one of the admins verify this patch?






[GitHub] carbondata issue #1113: [CARBONDATA-1246] fix null pointer exception by chan...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1113
  
Can one of the admins verify this patch?




[GitHub] carbondata pull request #1113: [CARBONDATA-1246] fix null pointer exception ...

2017-06-28 Thread ray6080
GitHub user ray6080 opened a pull request:

https://github.com/apache/carbondata/pull/1113

[CARBONDATA-1246] fix null pointer exception by changing null to empty array

In presto integration, `CarbonFile.listFiles()` function will return null 
when the specified `fileStatus` is not a directory or is null. This will incur 
a `NullPointerException` when called by `CarbonTableReader.updateTableList()` 
function.
Change the `listFiles()` function to return an empty array, instead of a 
null value.
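The change described above can be sketched as follows. This is a minimal illustrative model, not the actual CarbonData `CarbonFile` implementation; the method and class names below are hypothetical:

```java
// Minimal sketch of the null-to-empty-array fix described in this PR.
public class ListFilesFix {
  // Before the fix: returning null forces every caller to null-check,
  // and a caller like CarbonTableReader.updateTableList() that iterates
  // the result directly hits a NullPointerException.
  static String[] listFilesReturningNull(boolean isDirectory, String[] children) {
    if (!isDirectory || children == null) {
      return null;
    }
    return children;
  }

  // After the fix: an empty array is safe to iterate with no null check.
  static String[] listFilesReturningEmpty(boolean isDirectory, String[] children) {
    if (!isDirectory || children == null) {
      return new String[0];
    }
    return children;
  }

  public static void main(String[] args) {
    // Safe even when the path is not a directory: the loop body never runs.
    for (String name : listFilesReturningEmpty(false, null)) {
      System.out.println(name);
    }
    System.out.println(listFilesReturningEmpty(false, null).length); // prints 0
  }
}
```

Returning an empty collection instead of null is the conventional fix here: callers can iterate unconditionally, and the "nothing found" and "found some" cases share one code path.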

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ray6080/carbondata master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1113.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1113


commit e9e1fa7ad40f38103a7a0572ce58c49a1c1f6365
Author: Jin Guodong 
Date:   2017-06-29T04:43:51Z

fix null pointer exception by changing null to empty array






[jira] [Closed] (CARBONDATA-1245) NullPointerException invoked by CarbonFile.listFiles() function which returns null

2017-06-28 Thread Jelly King (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jelly King closed CARBONDATA-1245.
--
Resolution: Fixed

I thought this one was not submitted successfully, so I submitted another one 
[#CARBONDATA-1246]

> NullPointerException invoked by CarbonFile.listFiles() function which returns 
> null
> --
>
> Key: CARBONDATA-1245
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1245
> Project: CarbonData
>  Issue Type: Bug
>  Components: presto-integration
>Affects Versions: 1.1.0
>Reporter: Jelly King
>  Labels: beginner
> Fix For: 1.2.0
>
>
> In the implementation classes of _CarbonFile_, the _listFiles()_ function can 
> return null, which incurs _NullPointerException_ when called by 
> _CarbonTableReader.updateTableList()_ function.





[jira] [Created] (CARBONDATA-1246) NullPointerException in Presto Integration

2017-06-28 Thread Jelly King (JIRA)
Jelly King created CARBONDATA-1246:
--

 Summary: NullPointerException in Presto Integration
 Key: CARBONDATA-1246
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1246
 Project: CarbonData
  Issue Type: Bug
  Components: presto-integration
Affects Versions: 1.1.0
Reporter: Jelly King
 Fix For: 1.2.0


In presto integration, _CarbonFile.listFiles()_ function will return null when 
the specified _fileStatus_ is not a directory or is null. This will incur a 
NullPointerException when called by _CarbonTableReader.updateTableList()_ 
function.
The _listFiles()_ function should return an empty array, or throw an exception 
to inform callers, instead of returning a null value.





[jira] [Created] (CARBONDATA-1245) NullPointerException invoked by CarbonFile.listFiles() function which returns null

2017-06-28 Thread Jelly King (JIRA)
Jelly King created CARBONDATA-1245:
--

 Summary: NullPointerException invoked by CarbonFile.listFiles() 
function which returns null
 Key: CARBONDATA-1245
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1245
 Project: CarbonData
  Issue Type: Bug
  Components: presto-integration
Affects Versions: 1.1.0
Reporter: Jelly King
 Fix For: 1.2.0


In the implementation classes of _CarbonFile_, the _listFiles()_ function can 
return null, which incurs _NullPointerException_ when called by 
_CarbonTableReader.updateTableList()_ function.





[GitHub] carbondata pull request #1112: [CARBONDATA-1244] Rewrite README.md of presto...

2017-06-28 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1112#discussion_r124708591
  
--- Diff: 
integration/presto/src/main/java/org/apache/carbondata/presto/impl/CarbonTableReader.java
 ---
@@ -72,25 +72,54 @@
  * 2:FileFactory, (physic table file)
  * 3:CarbonCommonFactory, (offer some )
  * 4:DictionaryFactory, (parse dictionary util)
+ *
+ * Currently, it is mainly used to parse metadata of tables under
+ * the configured carbondata-store path and filter the relevant
+ * input splits with given query predicates.
  */
 public class CarbonTableReader {
 
   private CarbonTableConfig config;
+
+  /**
+   * The names of the tables under the schema (this.carbonFileList).
+   */
   private List tableList;
+
+  /**
+   * carbonFileList represents the store path of the schema, which is 
configured as carbondata-store
+   * in the CarbonData catalog file 
($PRESTO_HOME$/etc/catalog/carbondata.properties).
+   * Under the schema store path, there should be a directory named as the 
schema name.
+   * And under each schema directory, there are directories named as the 
table names.
+   * For example, the schema is named 'default' and there is two table 
named 'foo' and 'bar' in it, then the
--- End diff --

I think notes like this are not necessary. We can discuss.




[GitHub] carbondata pull request #1112: [CARBONDATA-1244] Rewrite README.md of presto...

2017-06-28 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1112#discussion_r124708068
  
--- Diff: integration/presto/README.md ---
@@ -59,28 +55,50 @@ Please follow the below steps to query carbondata in 
presto
   ```
 * config carbondata-connector for presto
   
-  First:compile carbondata-presto integration module
+  Firstly: Compile carbondata, including carbondata-presto integration 
module
   ```
   $ git clone https://github.com/apache/carbondata
-  $ cd carbondata/integration/presto
-  $ mvn clean package
+  $ cd carbondata
+  $ mvn -DskipTests -P{spark-version} 
-Dspark.version={spark-version-number} -Dhadoop.version={hadoop-version-number} 
clean package
+  ```
+  Replace the spark and hadoop version with you the version you used in 
your cluster.
+  For example, if you use Spark2.1.0 and Hadoop 2.7.3, you would like to 
compile using:
+  ```
+  mvn -DskipTests -Pspark-2.1 -Dspark.version=2.1.0 -Dhadoop.version=2.7.3 
clean package
+  ```
+
+  Secondly: Create a folder named 'carbondata' under $PRESTO_HOME$/plugin 
and
+  copy all jar from 
carbondata/integration/presto/target/carbondata-presto-x.x.x-SNAPSHOT
+to $PRESTO_HOME$/plugin/carbondata
+
+  Thirdly: Create a carbondata.properties file under 
$PRESTO_HOME$/etc/catalog/ containing the following contents:
   ```
-  Second:create one folder "carbondata" under ./presto-server-0.166/plugin
-  Third:copy all jar from 
./carbondata/integration/presto/target/carbondata-presto-x.x.x-SNAPSHOT
-to ./presto-server-0.166/plugin/carbondata
+  connector.name=carbondata
+  carbondata-store={schema-store-path}
+  ```
+  Replace the schema-store-path with the absolute path the directory which 
is the parent of the schema.
+  For example, if you have a schema named 'default' stored under 
hdfs://namenode:9000/test/carbondata/,
+  Then set carbondata-store=hdfs://namenode:9000/test/carbondata
+
+  If you changed the jar balls or configuration files, make sure you have 
dispatch the new jar balls
+  and configuration file to all the presto nodes and restart the nodes in 
the cluster. A modification of the
+  carbondata connector will not take an effect automatically.
   
 ### Generate CarbonData file
 
-Please refer to quick start : 
https://github.com/apache/carbondata/blob/master/docs/quick-start-guide.md
+Please refer to quick start: 
https://github.com/apache/carbondata/blob/master/docs/quick-start-guide.md
+Load data statement in Spark can be used to create carbondata tables. And 
you can easily find the creaed
--- End diff --

creaed -> created




[GitHub] carbondata pull request #1112: [CARBONDATA-1244] Rewrite README.md of presto...

2017-06-28 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1112#discussion_r124707906
  
--- Diff: integration/presto/README.md ---
@@ -59,28 +55,50 @@ Please follow the below steps to query carbondata in 
presto
   ```
 * config carbondata-connector for presto
   
-  First:compile carbondata-presto integration module
+  Firstly: Compile carbondata, including carbondata-presto integration 
module
   ```
   $ git clone https://github.com/apache/carbondata
-  $ cd carbondata/integration/presto
-  $ mvn clean package
+  $ cd carbondata
+  $ mvn -DskipTests -P{spark-version} 
-Dspark.version={spark-version-number} -Dhadoop.version={hadoop-version-number} 
clean package
+  ```
+  Replace the spark and hadoop version with you the version you used in 
your cluster.
+  For example, if you use Spark2.1.0 and Hadoop 2.7.3, you would like to 
compile using:
+  ```
+  mvn -DskipTests -Pspark-2.1 -Dspark.version=2.1.0 -Dhadoop.version=2.7.3 
clean package
+  ```
+
+  Secondly: Create a folder named 'carbondata' under $PRESTO_HOME$/plugin 
and
+  copy all jar from 
carbondata/integration/presto/target/carbondata-presto-x.x.x-SNAPSHOT
--- End diff --

jar -> jars
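For readers following along, the folder-and-copy step under review can be sketched as a few shell commands. This is a hedged sketch only: PRESTO_HOME, the snapshot version, and the jar source directory below are placeholder assumptions, not documented project values.

```shell
# Sketch of the plugin-install step under review. PRESTO_HOME, the
# snapshot version, and the jar directory are placeholders -- point
# them at your actual cluster layout and build output.
PRESTO_HOME="${PRESTO_HOME:-/tmp/presto-demo}"
JAR_SRC="${JAR_SRC:-carbondata/integration/presto/target/carbondata-presto-1.2.0-SNAPSHOT}"

# Create the 'carbondata' folder Presto scans under plugin/ at startup.
mkdir -p "$PRESTO_HOME/plugin/carbondata"

# Copy all jars (plural) from the integration build output, if present.
if [ -d "$JAR_SRC" ]; then
  cp "$JAR_SRC"/*.jar "$PRESTO_HOME/plugin/carbondata/"
fi
echo "plugin dir ready: $PRESTO_HOME/plugin/carbondata"
```

After this, a restart of the Presto server is needed so the new plugin directory is picked up.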




[GitHub] carbondata pull request #1112: [CARBONDATA-1244] Rewrite README.md of presto...

2017-06-28 Thread chenerlu
Github user chenerlu commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1112#discussion_r124707828
  
--- Diff: integration/presto/README.md ---
@@ -59,28 +55,50 @@ Please follow the below steps to query carbondata in 
presto
   ```
 * config carbondata-connector for presto
   
-  First:compile carbondata-presto integration module
+  Firstly: Compile carbondata, including carbondata-presto integration 
module
   ```
   $ git clone https://github.com/apache/carbondata
-  $ cd carbondata/integration/presto
-  $ mvn clean package
+  $ cd carbondata
+  $ mvn -DskipTests -P{spark-version} 
-Dspark.version={spark-version-number} -Dhadoop.version={hadoop-version-number} 
clean package
+  ```
+  Replace the spark and hadoop version with you the version you used in 
your cluster.
--- End diff --

Maybe it will be better to delete these two "you".




[jira] [Updated] (CARBONDATA-1244) Rewrite README.md of presto integration and add/rewrite some comments to presto integration.

2017-06-28 Thread Haoqiong Bian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haoqiong Bian updated CARBONDATA-1244:
--
Fix Version/s: (was: 1.1.1)
   1.2.0

> Rewrite README.md of presto integration and add/rewrite some comments to 
> presto integration.
> 
>
> Key: CARBONDATA-1244
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1244
> Project: CarbonData
>  Issue Type: Improvement
>  Components: presto-integration
>Reporter: Haoqiong Bian
> Fix For: 1.2.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Rewrite README.md of presto integration and add/rewrite some comments to 
> presto integration.
> Make the README easier for starters to play with. Write more comments for the 
> source code of  presto integration to make the code more readable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (CARBONDATA-1244) Rewrite README.md of presto integration and add/rewrite some comments to presto integration.

2017-06-28 Thread Haoqiong Bian (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1244?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haoqiong Bian updated CARBONDATA-1244:
--
Fix Version/s: (was: 1.2.0)
   1.1.1

> Rewrite README.md of presto integration and add/rewrite some comments to 
> presto integration.
> 
>
> Key: CARBONDATA-1244
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1244
> Project: CarbonData
>  Issue Type: Improvement
>  Components: presto-integration
>Reporter: Haoqiong Bian
> Fix For: 1.2.0
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Rewrite README.md of presto integration and add/rewrite some comments to 
> presto integration.
> Make the README easier for starters to play with. Write more comments for the 
> source code of  presto integration to make the code more readable.





[GitHub] carbondata issue #1064: [CARBONDATA-1173] Stream ingestion - write path fram...

2017-06-28 Thread aniketadnaik
Github user aniketadnaik commented on the issue:

https://github.com/apache/carbondata/pull/1064
  
@jackylk, @chenliang613 - All review comments are addressed.




[GitHub] carbondata issue #1064: [CARBONDATA-<1173>] Stream ingestion - write path fr...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1064
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/718/





[GitHub] carbondata issue #1112: [CARBONDATA-1244] Rewrite README.md of presto integr...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1112
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/717/





[GitHub] carbondata issue #1112: [CARBONDATA-1244] Rewrite README.md of presto integr...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1112
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #1112: [CARBONDATA-1244] Rewrite README.md of presto integr...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1112
  
Can one of the admins verify this patch?




[GitHub] carbondata pull request #1112: [CARBONDATA-1244] Rewrite README.md of presto...

2017-06-28 Thread HoneyQuery
GitHub user HoneyQuery opened a pull request:

https://github.com/apache/carbondata/pull/1112

[CARBONDATA-1244] Rewrite README.md of presto integration and add/rewrite 
some comments to presto integration.

Be sure to do all of the following to help us incorporate your contribution
quickly and easily:

 - [x] Make sure the PR title is formatted like:
   `[CARBONDATA-] Description of pull request`
 - [x] Make sure tests pass via `mvn clean verify`. (Even better, enable
   Travis-CI on your fork and ensure the whole test matrix passes).
 - [x] Replace `` in the title with the actual Jira issue
   number, if there is one.
 - [x] If this contribution is large, please file an Apache
   [Individual Contributor License 
Agreement](https://www.apache.org/licenses/icla.txt).
 - [x] Testing done
 
Please provide details on 
- Whether new unit test cases have been added or why no new tests 
are required?
  We only add some comments and rewrite the docs, no source code is 
changed.
- What manual testing you have done?
 None.
- Any additional information to help reviewers in testing this 
change.
 None.
 
 - [x] For large changes, please consider breaking it into sub-tasks under 
an umbrella JIRA. 
 
---


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/HoneyQuery/carbondata master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1112.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1112


commit 1a9512c549014253d469d892312aa5103d0b21d8
Author: Jin Guodong 
Date:   2017-06-08T13:58:13Z

remove some useless code, and add a simple Main for HiveEmbeddedServer2

commit 011035c91fbe43418a3cf8f4d5c35a1ecdff8e39
Author: Haoqiong Bian 
Date:   2017-06-08T14:09:54Z

Merge pull request #1 from ray6080/master

remove some useless code, and add a simple Main for HiveEmbeddedServer2

commit 45032dbdc8737410597410730d2fb35469b7d0b9
Author: Haoqiong Bian 
Date:   2017-06-08T14:13:35Z

Merge pull request #1 from dbiir/master

pull from dbiir

commit bc219288908b656f1ba1f0af75935233265dc9b2
Author: bianhq 
Date:   2017-06-10T09:43:10Z

add comments

commit 9c88798bf0f5358ac03987523ea9b54a7d6f46d1
Author: bianhq 
Date:   2017-06-12T06:00:18Z

add comments

commit 97c6485393fb86c6f6f860e27e9273f48f6c861d
Author: Guodong Jin 
Date:   2017-06-12T06:18:34Z

Merge pull request #2 from HoneyQuery/master

Add comments to presto connector

commit bb223a83ff85e042461f16e4d5a83b4b8e13cb1f
Author: bianhq 
Date:   2017-06-14T19:50:14Z

add comments.

commit 1f1f52e0089a7fc2b56d0b909c30b56e44b99f6c
Author: Guodong Jin 
Date:   2017-06-15T04:55:17Z

Merge pull request #3 from HoneyQuery/master

add comments.

commit f2a0409035b2b1fb581aca83d9d7c3155c15786b
Author: Guodong Jin 
Date:   2017-06-26T09:27:13Z

Merge pull request #4 from apache/master

sync with apache carbondata 1.1

commit fcfae17e65b083f107142d6bcaca26fe0d164d9e
Author: Haoqiong Bian 
Date:   2017-06-26T12:37:43Z

Merge pull request #2 from dbiir/master

sync with apache carbondata 0.1.1

commit d55c735d486a34f8d2e96a5b01a53ca8c3d5de85
Author: bianhq 
Date:   2017-06-28T13:07:49Z

recover some unnecessary changes.

commit f67feeea3837e3fa9315c6dfb0c51de9e99615bd
Author: Haoqiong Bian 
Date:   2017-06-28T13:11:44Z

Merge pull request #3 from apache/master

sync with apache carbondata

commit c4dd8a0b63b1222b74d2b8c4268a4eb5b240d731
Author: bianhq 
Date:   2017-06-28T20:04:28Z

add comments to presto connector and polish README.md

commit 5b95791695bc17d6c076c98c610a9c5c9e0774f5
Author: Haoqiong Bian 
Date:   2017-06-28T20:06:21Z

Merge pull request #4 from apache/master

sync with apache carbondata.






[GitHub] carbondata issue #1112: [CARBONDATA-1244] Rewrite README.md of presto integr...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1112
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #1038: [CARBONDATA-1179] Improve the Size calculation of Ob...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1038
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/716/





[jira] [Created] (CARBONDATA-1244) Rewrite README.md of presto integration and add/rewrite some comments to presto integration.

2017-06-28 Thread Haoqiong Bian (JIRA)
Haoqiong Bian created CARBONDATA-1244:
-

 Summary: Rewrite README.md of presto integration and add/rewrite 
some comments to presto integration.
 Key: CARBONDATA-1244
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1244
 Project: CarbonData
  Issue Type: Improvement
  Components: presto-integration
Reporter: Haoqiong Bian
 Fix For: 1.2.0


Rewrite README.md of presto integration and add/rewrite some comments to presto 
integration.
Make the README easier for starters to play with. Write more comments for the 
source code of  presto integration to make the code more readable.





[GitHub] carbondata issue #1038: [CARBONDATA-1179] Improve the Size calculation of Ob...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1038
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/207/





[GitHub] carbondata issue #1038: [CARBONDATA-1179] Improve the Size calculation of Ob...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1038
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2786/





[GitHub] carbondata issue #1079: [WIP]Measure Filter implementation

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1079
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/715/

Failed Tests: 25
carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-spark-common-test: 25
  org.apache.carbondata.spark.testsuite.bigdecimal.TestBigInt.test big int data type storage for boundary values
  org.apache.carbondata.spark.testsuite.bigdecimal.TestNullAndEmptyFields.test filter query on column is null
  org.apache.carbondata.spark.testsuite.bigdecimal.TestNullAndEmptyFieldsUnsafe.test filter query on column is null
  org.apache.carbondata.spark.testsuite.detailquery.ExpressionWithNullTestCase.test to check in expression with null values
  org.apache.carbondata.spark.testsuite.detailquery.ExpressionWithNullTestCase.test to check not in expression with null values
  org.apache.carbondata.spark.testsuite.filterexpr.FilterProcessorTestCase.Greater Than equal to Filter
  org.apache.carbondata.spark.testsuite.filterexpr.FilterProcessorTestCase.Greater Than equal to Filter with limit
  org.apache.carbondata.spark.testsuite.filterexpr.FilterProcessorTestCase.Greater Than equal to Filter with aggregation limit
  org.apache.carbondata.spark.testsuite.filterexpr.FilterProcessorTestCase.Greater Than equal to Filter with decimal
  org.apache.carbondata.spark.testsuite.filterexpr.NullMeasureValueTestCaseFilter.select ID from t3 where salary is null
  org.apache.carbondata.spark.testsuite.iud.DeleteCarbonTableTestCase.delete data from carbon table[where numeric condition ]
  org.apache.carbondata.spark.testsuite.iud.UpdateCarbonTableTestCase.update carbon [sub query, between and existing in outer condition.(Customer query ) ]
  org.apache.carbondata.spark.testsuite.nullvalueserialization.TestNullValueSerialization.test filter query on column is null
  org.apache.carbondata.spark.testsuite.partition.TestAllDataTypeForPartitionTable.allTypeTable_hash_bigint
  org.apache.carbondata.spark.testsuite.partition.TestAllDataTypeForPartitionTable.allTypeTable_hash_float
  org.apache.carbondata.spark.testsuite.partition.TestAllDataTypeForPartitionTable.allTypeTable_hash_double
  org.apache.carbondata.spark.testsuite.partition.TestAllDataTypeForPartitionTable.allTypeTable_list_float
  org.apache.carbondata.spark.testsuite.partition.TestAllDataTypeForPartitionTable.allTypeTable_list_double
  org.apache.carbondata.spark.testsuite.partition.TestAllDataTypeForPartitionTable.allTypeTable_range_int
  org.apache.carbondata.spark.testsuite.partition.TestAllDataTypeForPartitionTable.allTypeTable_range_bigint
  org.apache.carbondata.spark.testsuite.partition.TestAllDataTypeForPartitionTable.allTypeTable_range_float
  org.apache.carbondata.spark.testsuite.partition.TestAllDataTypeForPartitionTable.allTypeTable_range_double
  org.apache.carbondata.spark.testsuite.partition.TestDataLoadingForPartitionTable.badrecords on partition column
  org.apache.carbondata.spark.testsuite.sortcolumns.TestSortColumns.unsorted table creation, query and data loading with offheap and inmemory sort config
  org.apache.carbondata.spark.testsuite.sortcolumns.TestSortColumnsWithUnsafe.unsorted table creation, query and data loading with offheap and inmemory sort config





[GitHub] carbondata issue #1038: [CARBONDATA-1179] Improve the Size calculation of Ob...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1038
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/714/

Failed Tests: 1
carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-core: 1
  org.apache.carbondata.core.cache.dictionary.ReverseDictionaryCacheTest.testLRUCacheForKeyDeletionAfterMaxSizeIsReached





[GitHub] carbondata issue #1079: [WIP]Measure Filter implementation

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1079
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2785/





[GitHub] carbondata issue #1079: [WIP]Measure Filter implementation

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1079
  
Build Failed with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/206/





[GitHub] carbondata issue #1111: Rectify Vector Buffer Overflow Calculation

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/713/





[GitHub] carbondata issue #1038: [CARBONDATA-1179] Improve the Size calculation of Ob...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1038
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2784/





[GitHub] carbondata issue #1038: [CARBONDATA-1179] Improve the Size calculation of Ob...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1038
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/205/





[GitHub] carbondata issue #1111: Rectify Vector Buffer Overflow Calculation

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/204/





[GitHub] carbondata issue #1111: Rectify Vector Buffer Overflow Calculation

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2783/





[GitHub] carbondata issue #1110: [CARBONDATA-1238] Decouple the datatype convert in c...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1110
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/712/

Failed Tests: 4
carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-core: 2
  org.apache.carbondata.core.writer.CarbonFooterWriterTest.testWriteFactMetadata
  org.apache.carbondata.core.writer.CarbonFooterWriterTest.testReadFactMetadata
carbondata-pr-spark-1.6/org.apache.carbondata:carbondata-spark: 2
  org.apache.carbondata.integration.spark.testsuite.dataload.SparkDatasourceSuite.query using SQLContext
  org.apache.carbondata.integration.spark.testsuite.dataload.SparkDatasourceSuite.query using SQLContext without providing schema





[GitHub] carbondata pull request #1038: [CARBONDATA-1179] Improve the Size calculatio...

2017-06-28 Thread sraghunandan
Github user sraghunandan commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1038#discussion_r124640045
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/cache/dictionary/ReverseDictionaryCache.java
 ---
@@ -45,6 +49,20 @@
   private static final LogService LOGGER =
   
LogServiceFactory.getLogService(ReverseDictionaryCache.class.getName());
 
+  private static long sizeOfEmptyDictChunks =
--- End diff --

updated




[GitHub] carbondata pull request #1038: [CARBONDATA-1179] Improve the Size calculatio...

2017-06-28 Thread sraghunandan
Github user sraghunandan commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1038#discussion_r124639967
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/cache/CarbonLRUCache.java ---
@@ -199,6 +199,36 @@ public boolean put(String columnIdentifier, Cacheable 
cacheInfo, long requiredSi
   }
 
   /**
+   * This method will check if required size is available in the memory
+   * @param columnIdentifier
+   * @param cacheInfo
+   * @param requiredSize
+   * @return
+   */
+  public boolean tryPut(String columnIdentifier, long requiredSize) {
+if (LOGGER.isDebugEnabled()) {
--- End diff --

handled
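For context, the tryPut-style capacity check being reviewed can be illustrated with a small size-bounded LRU cache. This is a hedged sketch only: the class, field names, and admit-after-evict policy below are illustrative and do not reproduce CarbonData's actual CarbonLRUCache implementation.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative size-bounded LRU cache with a tryPut-style capacity check.
class SizeBoundedLruCache {
  private final long maxSizeBytes;    // total memory budget
  private long currentSizeBytes = 0;  // bytes currently cached
  // accessOrder = true makes iteration order least-recently-used first.
  private final LinkedHashMap<String, Long> entries =
      new LinkedHashMap<>(16, 0.75f, true);

  SizeBoundedLruCache(long maxSizeBytes) {
    this.maxSizeBytes = maxSizeBytes;
  }

  /** Admits the entry if it can fit after evicting LRU entries. */
  synchronized boolean tryPut(String columnIdentifier, long requiredSize) {
    if (requiredSize > maxSizeBytes) {
      return false; // could never fit, even into an empty cache
    }
    // Evict least-recently-used entries until the new entry fits.
    Iterator<Map.Entry<String, Long>> it = entries.entrySet().iterator();
    while (currentSizeBytes + requiredSize > maxSizeBytes && it.hasNext()) {
      currentSizeBytes -= it.next().getValue();
      it.remove();
    }
    entries.put(columnIdentifier, requiredSize);
    currentSizeBytes += requiredSize;
    return true;
  }

  synchronized long currentSize() {
    return currentSizeBytes;
  }
}
```

A real dictionary cache would also store the cached value and coordinate concurrent readers; the point here is only the size check before admission.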




[GitHub] carbondata pull request #1111: Rectify Vector Buffer Overflow Calculation

2017-06-28 Thread sounakr
GitHub user sounakr opened a pull request:

https://github.com/apache/carbondata/pull/

Rectify Vector Buffer Overflow Calculation


Rectify the vector buffer overflow calculation. Previously we kept track of all 
the deleted rows in the buffer, which is not needed because deleted rows are not 
physically removed from the buffer. It is better to base all calculations on the 
total number of rows filled into the buffer.
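The buffer-capacity rule this PR describes can be sketched as follows; the class and method names are hypothetical, for illustration only, and are not the actual CarbonData vectorized-reader code.

```java
// Hypothetical sketch: overflow is computed from the total number of rows
// filled into the vector buffer. Deleted rows are not physically removed,
// so no separate deleted-row count is tracked.
class ColumnVectorBuffer {
  private final int capacity;   // maximum rows the buffer can hold
  private int rowsFilled = 0;   // total rows written so far

  ColumnVectorBuffer(int capacity) {
    this.capacity = capacity;
  }

  /** True if writing 'count' more rows would overflow the buffer. */
  boolean wouldOverflow(int count) {
    // Deleted rows still occupy slots, so only the filled total matters.
    return rowsFilled + count > capacity;
  }

  void fill(int count) {
    if (wouldOverflow(count)) {
      throw new IllegalStateException("vector buffer overflow");
    }
    rowsFilled += count;
  }

  int rowsFilled() {
    return rowsFilled;
  }
}
```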

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sounakr/incubator-carbondata 
Dictionary_Based_vector_reader

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #


commit 319dde19753dbf8cd47143f4e64e4f6d42acf93b
Author: sounakr 
Date:   2017-06-28T19:45:21Z

Rectify Vector Buffer Calculation






[GitHub] carbondata issue #1111: Rectify Vector Buffer Overflow Calculation

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #1110: [CARBONDATA-1238] Decouple the datatype convert in c...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1110
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2782/





[GitHub] carbondata issue #1110: [CARBONDATA-1238] Decouple the datatype convert in c...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1110
  
Build Failed with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/203/





[GitHub] carbondata issue #1110: [CARBONDATA-1238] Decouple the datatype convert in c...

2017-06-28 Thread chenliang613
Github user chenliang613 commented on the issue:

https://github.com/apache/carbondata/pull/1110
  
retest this please




[GitHub] carbondata pull request #1038: [CARBONDATA-1179] Improve the Size calculatio...

2017-06-28 Thread gvramana
Github user gvramana commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1038#discussion_r124601946
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/cache/dictionary/ReverseDictionaryCache.java
 ---
@@ -45,6 +49,20 @@
   private static final LogService LOGGER =
   
LogServiceFactory.getLogService(ReverseDictionaryCache.class.getName());
 
+  private static long sizeOfEmptyDictChunks =
--- End diff --

these can be static final




[GitHub] carbondata issue #1094: [CARBONDATA-1181] Show partitions

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1094
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/711/





[GitHub] carbondata issue #1094: [CARBONDATA-1181] Show partitions

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1094
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2781/



---


[GitHub] carbondata issue #1094: [CARBONDATA-1181] Show partitions

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1094
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/202/



---


[GitHub] carbondata pull request #1038: [CARBONDATA-1179] Improve the Size calculatio...

2017-06-28 Thread gvramana
Github user gvramana commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1038#discussion_r124599881
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/cache/CarbonLRUCache.java ---
@@ -199,6 +199,36 @@ public boolean put(String columnIdentifier, Cacheable 
cacheInfo, long requiredSi
   }
 
   /**
+   * This method will check if required size is available in the memory
+   * @param columnIdentifier
+   * @param cacheInfo
+   * @param requiredSize
+   * @return
+   */
+  public boolean tryPut(String columnIdentifier, long requiredSize) {
+if (LOGGER.isDebugEnabled()) {
--- End diff --

It would be better to remove the entry, so that the temp block memory also stays within the LRU limit.
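The idea under discussion — a `tryPut` that only admits an entry if the required size fits under the cache limit, evicting least-recently-used entries to make room — can be sketched as below. This is a minimal stand-alone illustration under assumed semantics, not CarbonData's actual `CarbonLRUCache` implementation; the class and method names are hypothetical.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of a size-bounded LRU cache: tryPut reserves the required
// size, evicting least-recently-used entries until the new entry fits, and
// refuses entries that can never fit under the limit.
class SizedLruCache {
    private final long maxSize;
    private long currentSize = 0;
    // access-order LinkedHashMap: iteration order is least-recently-used first
    private final Map<String, Long> entries = new LinkedHashMap<>(16, 0.75f, true);

    SizedLruCache(long maxSize) { this.maxSize = maxSize; }

    // Returns true if requiredSize was reserved (evicting LRU entries as
    // needed); false if the entry can never fit under the limit.
    synchronized boolean tryPut(String key, long requiredSize) {
        if (requiredSize > maxSize) return false;
        Iterator<Map.Entry<String, Long>> it = entries.entrySet().iterator();
        while (currentSize + requiredSize > maxSize && it.hasNext()) {
            currentSize -= it.next().getValue();
            it.remove(); // evict least recently used entry
        }
        entries.put(key, requiredSize);
        currentSize += requiredSize;
        return true;
    }

    synchronized int size() { return entries.size(); }

    public static void main(String[] args) {
        SizedLruCache cache = new SizedLruCache(100);
        System.out.println(cache.tryPut("chunkA", 60)); // true
        System.out.println(cache.tryPut("chunkB", 60)); // true, evicts chunkA
        System.out.println(cache.size());               // 1
    }
}
```

Evicting on admission (rather than only checking free space) is what keeps temporary block memory within the LRU limit, which is the point the reviewer raises.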


---


[GitHub] carbondata issue #1102: [CARBONDATA-1098] Change page statistics use exact t...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1102
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/710/

Failed Tests: 120
- carbondata-core (2): CarbonMetadataUtilTest.testConvertFileFooter, CarbonFooterWriterTest.testWriteFactMetadata
- carbondata-processing (3): BlockIndexStoreTest.testloadAndGetTaskIdToSegmentsMapForDifferentSegmentLoadedConcurrently, BlockIndexStoreTest.testLoadAndGetTaskIdToSegmentsMapForSingleSegment, BlockIndexStoreTest.testloadAndGetTaskIdToSegmentsMapForSameBlockLoadedConcurrently
- carbondata-spark (5): DataCompactionMinorThresholdTest ("check if compaction is completed correctly for minor."), DataCompactionNoDictionaryTest ("delete merged folder and execute query"), DataCompactionTest ("delete merged folder and execute query", "check if compaction with Updates"), HadoopFSRelationTestCase ("hadoopfsrelation select all test")
- carbondata-spark-common-test (110), including: TestLoadDataWithMaxMinInteger, TestLoadDataWithSinglePass (3 cases), TestNoInvertedIndexLoadAndQuery (4 cases), AllDataTypesTestCaseAggregate (5 cases), InsertIntoCarbonTableTestCase (6 cases), TestQueryWithOldCarbonDataFile, TestTableNameHasDbName, DataCompactionBoundaryConditionsTest, MajorCompactionIgnoreInMinorTest (3 cases), ... (remainder truncated in the original message)

[GitHub] carbondata issue #1102: [CARBONDATA-1098] Change page statistics use exact t...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1102
  
Build Failed with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/201/



---


[GitHub] carbondata issue #1102: [CARBONDATA-1098] Change page statistics use exact t...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1102
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2780/



---


[GitHub] carbondata issue #1102: [CARBONDATA-1098] Change page statistics use exact t...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1102
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/709/

Failed Tests: 120
- carbondata-core (2): CarbonMetadataUtilTest.testConvertFileFooter, CarbonFooterWriterTest.testWriteFactMetadata
- carbondata-processing (3): BlockIndexStoreTest.testloadAndGetTaskIdToSegmentsMapForDifferentSegmentLoadedConcurrently, BlockIndexStoreTest.testLoadAndGetTaskIdToSegmentsMapForSingleSegment, BlockIndexStoreTest.testloadAndGetTaskIdToSegmentsMapForSameBlockLoadedConcurrently
- carbondata-spark (5): DataCompactionMinorThresholdTest ("check if compaction is completed correctly for minor."), DataCompactionNoDictionaryTest ("delete merged folder and execute query"), DataCompactionTest ("delete merged folder and execute query", "check if compaction with Updates"), HadoopFSRelationTestCase ("hadoopfsrelation select all test")
- carbondata-spark-common-test (110), including: TestLoadDataWithMaxMinInteger, TestLoadDataWithSinglePass (3 cases), TestNoInvertedIndexLoadAndQuery (4 cases), AllDataTypesTestCaseAggregate (5 cases), InsertIntoCarbonTableTestCase (6 cases), TestQueryWithOldCarbonDataFile, TestTableNameHasDbName, DataCompactionBoundaryConditionsTest, MajorCompactionIgnoreInMinorTest (3 cases), ... (remainder truncated in the original message)

[GitHub] carbondata issue #1102: [CARBONDATA-1098] [WIP] Change page statistics use e...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1102
  
Build Failed with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/200/



---


[GitHub] carbondata issue #1102: [CARBONDATA-1098] [WIP] Change page statistics use e...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1102
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2779/



---


[GitHub] carbondata issue #1089: [CARBONDATA-1224] Added page level reader instead of...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1089
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/708/



---


[GitHub] carbondata issue #1089: [CARBONDATA-1224] Added page level reader instead of...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1089
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2778/



---


[GitHub] carbondata issue #1089: [CARBONDATA-1224] Added page level reader instead of...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1089
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/199/



---


[GitHub] carbondata issue #1038: [CARBONDATA-1179] Improve the Size calculation of Ob...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1038
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/707/



---


[GitHub] carbondata issue #1038: [CARBONDATA-1179] Improve the Size calculation of Ob...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1038
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2777/



---


[GitHub] carbondata issue #1038: [CARBONDATA-1179] Improve the Size calculation of Ob...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1038
  
Build Success with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/198/



---


[GitHub] carbondata issue #1110: [CARBONDATA-1238] Decouple the datatype convert in c...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1110
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/705/

Failed Tests: 3
- carbondata-spark (2): SparkDatasourceSuite ("query using SQLContext", "query using SQLContext without providing schema")
- carbondata-spark-common-test (1): DataRetentionConcurrencyTestCase.DataRetention_Concurrency_load_date


---


[GitHub] carbondata issue #1089: [CARBONDATA-1224] Added page level reader instead of...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1089
  
Build Failed with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/197/



---


[GitHub] carbondata issue #1089: [CARBONDATA-1224] Added page level reader instead of...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1089
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2776/



---


[GitHub] carbondata issue #1089: [CARBONDATA-1224] Added page level reader instead of...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1089
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/706/



---


[GitHub] carbondata issue #1110: [CARBONDATA-1238] Decouple the datatype convert in c...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1110
  
Build Failed with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/196/



---


[GitHub] carbondata issue #1110: [CARBONDATA-1238] Decouple the datatype convert in c...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1110
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2775/



---


[GitHub] carbondata pull request #1094: [CARBONDATA-1181] Show partitions

2017-06-28 Thread mayunSaicmotor
Github user mayunSaicmotor commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1094#discussion_r124582318
  
--- Diff: 
examples/spark/src/main/scala/org/apache/carbondata/examples/CarbonPartitionExample.scala
 ---
@@ -0,0 +1,128 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.examples
+
+import scala.collection.mutable.LinkedHashMap
+
+import org.apache.spark.sql.AnalysisException
+
+import org.apache.carbondata.core.constants.CarbonCommonConstants
+import org.apache.carbondata.core.util.CarbonProperties
+import org.apache.carbondata.examples.util.ExampleUtils
+
+object CarbonPartitionExample {
--- End diff --

Actually, this example contains both code that creates a partition table and code that shows partitions. Do you mean delete the show-partitions code and keep the partition-table creation code?


---


[GitHub] carbondata pull request #1110: [CARBONDATA-1238] Decouple the datatype conve...

2017-06-28 Thread chenliang613
GitHub user chenliang613 opened a pull request:

https://github.com/apache/carbondata/pull/1110

[CARBONDATA-1238] Decouple the datatype convert in core module

Decouple the datatype conversion of Spark in the core module:
1. Use Spark's decimal (org.apache.spark.sql.types.Decimal.apply()) in the Spark engine, and Java's decimal in other engines.
2. Use org.apache.spark.unsafe.types.UTF8String in the Spark engine, and String in other engines.
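The decoupling described in this PR — engine-specific types behind a conversion boundary so the core module has no compile-time dependency on Spark classes — can be sketched as below. This is a hypothetical illustration of the pattern, not the interface the PR actually adds; the interface name, method names, and demo class are all assumptions.

```java
import java.math.BigDecimal;
import java.nio.charset.StandardCharsets;

// Sketch of decoupling datatype conversion: core code calls a converter
// interface; a Spark engine would plug in an implementation returning
// Spark's Decimal / UTF8String, while the default implementation below
// returns plain java.math.BigDecimal / String for other engines.
class ConverterDemo {
    public static void main(String[] args) {
        DataTypeConverter conv = new JavaDataTypeConverter();
        System.out.println(conv.convertDecimal("12.50"));                 // 12.50
        System.out.println(conv.convertString(
            "carbon".getBytes(StandardCharsets.UTF_8)));                  // carbon
    }
}

interface DataTypeConverter {
    Object convertDecimal(String value);
    Object convertString(byte[] utf8Bytes);
}

// Engine-agnostic default: no Spark classes on the classpath are required.
class JavaDataTypeConverter implements DataTypeConverter {
    @Override
    public Object convertDecimal(String value) {
        return new BigDecimal(value); // Java's decimal for non-Spark engines
    }

    @Override
    public Object convertString(byte[] utf8Bytes) {
        return new String(utf8Bytes, StandardCharsets.UTF_8);
    }
}
```

A Spark-side implementation of the same interface would return `org.apache.spark.sql.types.Decimal` and `org.apache.spark.unsafe.types.UTF8String` instead, keeping the Spark dependency out of the core module.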


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/chenliang613/carbondata decouple_spark

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/1110.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1110


commit eac2a6aaf3e55ab08db5823af53024712b9102c9
Author: chenliang613 
Date:   2017-06-28T15:45:50Z

[CARBONDATA-1238] Decouple the datatype convert from Spark code in core 
module




---


[jira] [Updated] (CARBONDATA-1238) Decouple the datatype convert from Spark code in core module

2017-06-28 Thread Liang Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liang Chen updated CARBONDATA-1238:
---
Description: 
Decouple the datatype convert from Spark code in core module:
1.Use decimal(org.apache.spark.sql.types.Decimal.apply()) in Spark engine, use 
java's decimal in other engines.
2.Use org.apache.spark.unsafe.types.UTF8String in Spark engine, use String in 
other engines.

  was:Decouple the datatype convert from Spark code in core module,for spark 
engine, use spark's decimal(org.apache.spark.sql.types.Decimal.apply()); for 
other engine, use java's decimal.


> Decouple the datatype convert from Spark code in core module
> 
>
> Key: CARBONDATA-1238
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1238
> Project: CarbonData
>  Issue Type: Improvement
>  Components: core
>Reporter: Liang Chen
>Assignee: Liang Chen
>
> Decouple the datatype convert from Spark code in core module:
> 1.Use decimal(org.apache.spark.sql.types.Decimal.apply()) in Spark engine, 
> use java's decimal in other engines.
> 2.Use org.apache.spark.unsafe.types.UTF8String in Spark engine, use String in 
> other engines.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata issue #1104: [CARBONDATA-1239] Add validation for set command par...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1104
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/704/

Failed Tests: 13
- carbondata-spark (1): BadRecordLoggerSharedDictionaryTest ("dataload with bad record test")
- carbondata-spark-common-test (12): BadRecordEmptyDataTest ("select count(*) from empty_timestamp", "select count(*) from empty_timestamp_false"), BadRecordLoggerTest ("select count(*) from" sales / serializable_values / serializable_values_false / empty_timestamp / insufficientColumn / insufficientColumn_false / emptyColumnValues / emptyColumnValues_false / empty_timestamp_false), TestGlobalSortDataLoad ("Test GLOBAL_SORT with BAD_RECORDS_ACTION = 'REDIRECT'")


---


[GitHub] carbondata issue #1104: [CARBONDATA-1239] Add validation for set command par...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1104
  
Build Failed with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/195/



---


[GitHub] carbondata issue #1104: [CARBONDATA-1239] Add validation for set command par...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/1104
  
Build Failed with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2774/





[GitHub] carbondata issue #988: [CARBONDATA-1110] put if clause out of the for clause

2017-06-28 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/988
  
Thanks for working on this. This looks like a duplicate of #990, which is 
already merged, so please close this one.
Rebased or new changes can always be pushed to the same PR branch from 
any local branch using the force-push option; there is no need to raise new PRs:
git push -uf <remote> <local-branch>:<pr-branch>
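The force-push workflow described above can be sketched end-to-end. The commands below are a hypothetical demonstration against a local bare repository standing in for the GitHub remote; the names `origin` and `pr-branch` are illustrative placeholders, not taken from this thread.

```shell
# Hypothetical walkthrough: rewrite history locally (as a rebase would),
# then force-push the same PR branch instead of opening a new PR.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"          # stand-in for the GitHub repo
git clone -q "$tmp/remote.git" "$tmp/work"
cd "$tmp/work"
git config user.email dev@example.com
git config user.name dev
echo v1 > file
git add file
git commit -qm "initial"
git push -q origin HEAD:master
git checkout -qb pr-branch                    # the branch backing the PR
echo v2 > file
git commit -qam "fix"
git push -q origin pr-branch                  # first push opens the PR
git commit -q --amend -m "fix (rebased)"      # history rewrite, as after a rebase
git push -f -q origin pr-branch               # same PR branch, force-pushed
git log -1 --format=%s origin/pr-branch       # prints "fix (rebased)"
```

Because the PR tracks the branch, GitHub picks up the force-pushed commits automatically and the review history stays in one place.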




[GitHub] carbondata pull request #1032: [CARBONDATA-1149] Fixed range info overlappin...

2017-06-28 Thread gvramana
Github user gvramana commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1032#discussion_r124549140
  
--- Diff: integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala ---
@@ -288,6 +297,69 @@ object CommonUtil {
     result
   }
 
+  def validateForOverLappingRangeValues(desType: Option[String],
+      rangeInfoArray: Array[String]): Boolean = {
+    val rangeInfoValuesValid = desType match {
+      case Some("IntegerType") | Some("int") =>
+        val intRangeInfoArray = rangeInfoArray.map(_.toInt)
+        val sortedRangeInfoArray = intRangeInfoArray.sorted
+        intRangeInfoArray.sameElements(sortedRangeInfoArray)
+      case Some("StringType") | Some("string") =>
+        val sortedRangeInfoArray = rangeInfoArray.sorted
+        rangeInfoArray.sameElements(sortedRangeInfoArray)
+      case a if (desType.get.startsWith("varchar") || desType.get.startsWith("char")) =>
+        val sortedRangeInfoArray = rangeInfoArray.sorted
+        rangeInfoArray.sameElements(sortedRangeInfoArray)
+      case Some("LongType") | Some("long") | Some("bigint") =>
+        val longRangeInfoArray = rangeInfoArray.map(_.toLong)
+        val sortedRangeInfoArray = longRangeInfoArray.sorted
+        longRangeInfoArray.sameElements(sortedRangeInfoArray)
+      case Some("FloatType") | Some("float") =>
+        val floatRangeInfoArray = rangeInfoArray.map(_.toFloat)
+        val sortedRangeInfoArray = floatRangeInfoArray.sorted
+        floatRangeInfoArray.sameElements(sortedRangeInfoArray)
+      case Some("DoubleType") | Some("double") =>
+        val doubleRangeInfoArray = rangeInfoArray.map(_.toDouble)
+        val sortedRangeInfoArray = doubleRangeInfoArray.sorted
+        doubleRangeInfoArray.sameElements(sortedRangeInfoArray)
+      case Some("ByteType") | Some("tinyint") =>
+        val byteRangeInfoArray = rangeInfoArray.map(_.toByte)
+        val sortedRangeInfoArray = byteRangeInfoArray.sorted
+        byteRangeInfoArray.sameElements(sortedRangeInfoArray)
+      case Some("ShortType") | Some("smallint") =>
+        val shortRangeInfoArray = rangeInfoArray.map(_.toShort)
+        val sortedRangeInfoArray = shortRangeInfoArray.sorted
+        shortRangeInfoArray.sameElements(sortedRangeInfoArray)
+      case Some("BooleanType") | Some("boolean") =>
+        true
+      case a if (desType.get.startsWith("DecimalType") || desType.get.startsWith("decimal")) =>
+        val decimalRangeInfoArray = rangeInfoArray.map(value => BigDecimal(value))
--- End diff --

BigDecimal precision and scale need to be considered; otherwise two 
ranges can overlap after the values are converted during data load. 
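A minimal sketch of the validation the reviewer is asking for: round each DECIMAL bound to the column's declared scale before comparing, so that bounds which differ only in digits beyond the scale are caught as overlapping. The helper name `validateDecimalRangeInfo` and its rounding mode are illustrative assumptions, not CarbonData's actual implementation.

```scala
import scala.math.BigDecimal.RoundingMode

// Hypothetical check: DECIMAL range bounds must still be strictly increasing
// AFTER rounding to the column's scale, mirroring what happens to values
// during data load.
def validateDecimalRangeInfo(rangeInfo: Array[String], scale: Int): Boolean = {
  val rounded = rangeInfo.map(v => BigDecimal(v).setScale(scale, RoundingMode.HALF_UP))
  // Equal adjacent bounds after rounding mean two ranges collapsed (overlap).
  rounded.sliding(2).forall { case Array(a, b) => a < b }
}

// "1.001" and "1.002" are distinct at scale 3 but collide at scale 2.
println(validateDecimalRangeInfo(Array("1.001", "1.002"), 3)) // true
println(validateDecimalRangeInfo(Array("1.001", "1.002"), 2)) // false
```

Comparing the raw `BigDecimal(value)` results, as the diff does, would accept the scale-2 case above even though both bounds land in the same partition at load time.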




[GitHub] carbondata pull request #1032: [CARBONDATA-1149] Fixed range info overlappin...

2017-06-28 Thread gvramana
Github user gvramana commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/1032#discussion_r124548518
  
--- Diff: integration/spark-common/src/main/scala/org/apache/carbondata/spark/util/CommonUtil.scala ---
@@ -288,6 +297,69 @@ object CommonUtil {
     result
   }
 
+  def validateForOverLappingRangeValues(desType: Option[String],
+      rangeInfoArray: Array[String]): Boolean = {
+    val rangeInfoValuesValid = desType match {
+      case Some("IntegerType") | Some("int") =>
+        val intRangeInfoArray = rangeInfoArray.map(_.toInt)
+        val sortedRangeInfoArray = intRangeInfoArray.sorted
+        intRangeInfoArray.sameElements(sortedRangeInfoArray)
+      case Some("StringType") | Some("string") =>
+        val sortedRangeInfoArray = rangeInfoArray.sorted
+        rangeInfoArray.sameElements(sortedRangeInfoArray)
+      case a if (desType.get.startsWith("varchar") || desType.get.startsWith("char")) =>
+        val sortedRangeInfoArray = rangeInfoArray.sorted
+        rangeInfoArray.sameElements(sortedRangeInfoArray)
+      case Some("LongType") | Some("long") | Some("bigint") =>
+        val longRangeInfoArray = rangeInfoArray.map(_.toLong)
+        val sortedRangeInfoArray = longRangeInfoArray.sorted
+        longRangeInfoArray.sameElements(sortedRangeInfoArray)
+      case Some("FloatType") | Some("float") =>
+        val floatRangeInfoArray = rangeInfoArray.map(_.toFloat)
+        val sortedRangeInfoArray = floatRangeInfoArray.sorted
+        floatRangeInfoArray.sameElements(sortedRangeInfoArray)
+      case Some("DoubleType") | Some("double") =>
+        val doubleRangeInfoArray = rangeInfoArray.map(_.toDouble)
+        val sortedRangeInfoArray = doubleRangeInfoArray.sorted
+        doubleRangeInfoArray.sameElements(sortedRangeInfoArray)
+      case Some("ByteType") | Some("tinyint") =>
+        val byteRangeInfoArray = rangeInfoArray.map(_.toByte)
+        val sortedRangeInfoArray = byteRangeInfoArray.sorted
+        byteRangeInfoArray.sameElements(sortedRangeInfoArray)
+      case Some("ShortType") | Some("smallint") =>
+        val shortRangeInfoArray = rangeInfoArray.map(_.toShort)
+        val sortedRangeInfoArray = shortRangeInfoArray.sorted
+        shortRangeInfoArray.sameElements(sortedRangeInfoArray)
+      case Some("BooleanType") | Some("boolean") =>
+        true
+      case a if (desType.get.startsWith("DecimalType") || desType.get.startsWith("decimal")) =>
+        val decimalRangeInfoArray = rangeInfoArray.map(value => BigDecimal(value))
+        val sortedRangeInfoArray = decimalRangeInfoArray.sorted
+        decimalRangeInfoArray.sameElements(sortedRangeInfoArray)
+      case Some("DateType") | Some("date") =>
+        val dateRangeInfoArray = rangeInfoArray.map { value =>
--- End diff --

Dictionary generation can produce duplicate values, so a duplicate-value 
check is required here.
The same applies to the timestamp case.
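A minimal sketch of a duplicate-aware check for date bounds: parse each bound with the load-time pattern, then require the parsed values to be both sorted and distinct. The helper name `validateDateRangeInfo` is an illustrative assumption, not CarbonData's actual implementation.

```scala
import java.text.SimpleDateFormat

// Hypothetical check: two distinct bound strings can parse to the same value
// once digits beyond the pattern's granularity are dropped, so checking sort
// order alone is not enough -- duplicates must be rejected too.
def validateDateRangeInfo(rangeInfo: Array[String], pattern: String): Boolean = {
  val fmt = new SimpleDateFormat(pattern)
  val parsed = rangeInfo.map(v => fmt.parse(v).getTime)
  parsed.sameElements(parsed.sorted) && parsed.distinct.length == parsed.length
}

// Distinct days pass; two timestamps on the same day collapse to one date
// under a day-granularity pattern and are rejected as duplicates.
println(validateDateRangeInfo(Array("2017-06-27", "2017-06-28"), "yyyy-MM-dd"))        // true
println(validateDateRangeInfo(Array("2017-06-28 10:00", "2017-06-28 11:00"), "yyyy-MM-dd")) // false
```

The same shape of check applies to timestamp bounds, just with a finer-grained pattern.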




[GitHub] carbondata issue #1089: [CARBONDATA-1224] Added page level reader instead of...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/1089
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/702/





[GitHub] carbondata issue #825: [CARBONDATA-961] Added condition to skip sort step fo...

2017-06-28 Thread asfgit
Github user asfgit commented on the issue:

https://github.com/apache/carbondata/pull/825
  

Refer to this link for build results (access rights to CI server needed): 
https://builds.apache.org/job/carbondata-pr-spark-1.6/703/





[jira] [Resolved] (CARBONDATA-1222) Residual files created from Update are not deleted after clean operation

2017-06-28 Thread Venkata Ramana G (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Venkata Ramana G resolved CARBONDATA-1222.
------------------------------------------
    Resolution: Fixed
 Fix Version/s: 1.1.1, 1.2.0

> Residual files created from Update are not deleted after clean operation
> 
>
> Key: CARBONDATA-1222
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1222
> Project: CarbonData
>  Issue Type: Bug
>  Components: spark-integration
>Reporter: panner selvam velmyl
>Priority: Minor
> Fix For: 1.2.0, 1.1.1
>
>   Original Estimate: 108h
>  Time Spent: 3h 50m
>  Remaining Estimate: 104h 10m
>
> Spark - sql:
> 1.Create a table
> create table t_carbn31(item_code string,item_name1 string) stored by 
> 'carbondata'
> 2.Load Data
> insert into t_carbn31 select 'a1','Phone';
> insert into t_carbn31 select 'a2','Router';
> 3.Update the table
> update t_carbn31 set(item_name1)=('Mobile') where item_code ='a1'
> update t_carbn31 set(item_name1)=('USB') where item_code ='a2'
> update t_carbn31 set(item_name1)=('General') where item_code !='a1'
> 4.Run clean files on the table
> clean files for table t_carbn31
> Expected output: clean files should remove the residual carbondata and delete 
> files
> Actual output : Residual files are not cleaned.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] carbondata issue #71: [CARBONDATA-155] Code refactor to avoid the Type Casti...

2017-06-28 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/71
  
Build Failed with Spark 1.6, Please check CI 
http://144.76.159.231:8080/job/ApacheCarbonPRBuilder/194/





[GitHub] carbondata issue #1085: [CARBONDATA-1222] Residual files created from Update...

2017-06-28 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/1085
  
Thanks for working on this issue. You can always reuse the same PR by 
rebasing the code on master and force-pushing to this PR branch. Raising 
multiple PRs for the same issue is not encouraged.



