[GitHub] carbondata issue #978: [CARBONDATA-1109] Acquire semaphore before submit a p...

2017-06-01 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/978
  
No check is required; the entry count check is handled inside.

On Fri, 2 Jun 2017 at 11:13 AM, manishgupta88 
wrote:

> *@manishgupta88* commented on this pull request.
> --
>
> In
> 
processing/src/main/java/org/apache/carbondata/processing/store/CarbonFactDataHandlerColumnar.java
> :
>
> > @@ -488,13 +488,19 @@ private NodeHolder processDataRows(List dataRows)
> >    public void finish() throws CarbonDataWriterException {
> >      // still some data is present in stores if entryCount is more
> >      // than 0
> > -    producerExecutorServiceTaskList.add(producerExecutorService
> > -        .submit(new Producer(blockletDataHolder, dataRows, ++writerTaskSequenceCounter, true)));
> > -    blockletProcessingCount.incrementAndGet();
> > -    processedDataCount += entryCount;
> > -    closeWriterExecutionService(producerExecutorService);
> > -    processWriteTaskSubmitList(producerExecutorServiceTaskList);
> > -    processingComplete = true;
> > +    try {
> > +      semaphore.acquire();
>
> @watermen Here a check is required for entryCount > 0, because we need
> to acquire a semaphore lock only if the total number of rows in the raw
> data is not exactly divisible by the page size. Only in this case will
> there be some extra rows to be processed by the finish method; otherwise
> the addDataToStore method will handle all the rows. Please refer to the
> code snippet below:
>
> public void finish() throws CarbonDataWriterException {
>   // still some data is present in stores if entryCount is more
>   // than 0
>   if (this.entryCount > 0) {
>     try {
>       semaphore.acquire();
>       producerExecutorServiceTaskList.add(producerExecutorService
>           .submit(new Producer(blockletDataHolder, dataRows, ++writerTaskSequenceCounter, true)));
>       blockletProcessingCount.incrementAndGet();
>       processedDataCount += entryCount;
>     } catch (InterruptedException e) {
>       LOGGER.error(e, e.getMessage());
>       throw new CarbonDataWriterException(e.getMessage(), e);
>     }
>   }
>   closeWriterExecutionService(producerExecutorService);
>   processWriteTaskSubmitList(producerExecutorServiceTaskList);
>   processingComplete = true;
> }
>



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata pull request #978: [CARBONDATA-1109] Acquire semaphore before sub...

2017-06-01 Thread manishgupta88
Github user manishgupta88 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/978#discussion_r119785140
  
--- Diff: 
processing/src/main/java/org/apache/carbondata/processing/store/CarbonFactDataHandlerColumnar.java
 ---
@@ -488,13 +488,19 @@ private NodeHolder processDataRows(List dataRows)
   public void finish() throws CarbonDataWriterException {
     // still some data is present in stores if entryCount is more
     // than 0
-    producerExecutorServiceTaskList.add(producerExecutorService
-        .submit(new Producer(blockletDataHolder, dataRows, ++writerTaskSequenceCounter, true)));
-    blockletProcessingCount.incrementAndGet();
-    processedDataCount += entryCount;
-    closeWriterExecutionService(producerExecutorService);
-    processWriteTaskSubmitList(producerExecutorServiceTaskList);
-    processingComplete = true;
+    try {
+      semaphore.acquire();
--- End diff --

@watermen Here a check is required for entryCount > 0, because we need to
acquire a semaphore lock only if the total number of rows in the raw data
is not exactly divisible by the page size. Only in this case will there be
some extra rows to be processed by the finish method; otherwise the
addDataToStore method will handle all the rows. Please refer to the code
snippet below:

public void finish() throws CarbonDataWriterException {
  // still some data is present in stores if entryCount is more
  // than 0
  if (this.entryCount > 0) {
    try {
      semaphore.acquire();
      producerExecutorServiceTaskList.add(producerExecutorService
          .submit(new Producer(blockletDataHolder, dataRows, ++writerTaskSequenceCounter, true)));
      blockletProcessingCount.incrementAndGet();
      processedDataCount += entryCount;
    } catch (InterruptedException e) {
      LOGGER.error(e, e.getMessage());
      throw new CarbonDataWriterException(e.getMessage(), e);
    }
  }
  closeWriterExecutionService(producerExecutorService);
  processWriteTaskSubmitList(producerExecutorServiceTaskList);
  processingComplete = true;
}
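
As a runnable illustration of this guard (a toy sketch only: `FinishGuardSketch`, `PAGE_SIZE`, the pool size, and the permit count are made-up names and values, not CarbonData's actual code), the semaphore is acquired before each submit to bound in-flight producer tasks, and the final partial page is submitted only when `entryCount > 0`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;

class FinishGuardSketch {
  static final int PAGE_SIZE = 4;                // illustrative page size
  final Semaphore semaphore = new Semaphore(2);  // bounds in-flight producer tasks
  final ExecutorService producers = Executors.newFixedThreadPool(2);
  final List<Future<Integer>> tasks = new ArrayList<>();
  int entryCount;                                // rows buffered since the last full page

  void addRow() throws InterruptedException {
    entryCount++;
    if (entryCount == PAGE_SIZE) {               // full page: submit it and reset
      submitPage(entryCount);
      entryCount = 0;
    }
  }

  // Mirrors the reviewed finish(): submit the leftover partial page only
  // when entryCount > 0, i.e. when the row count was not an exact multiple
  // of the page size.
  int finishAndCount() throws Exception {
    if (entryCount > 0) {
      submitPage(entryCount);
      entryCount = 0;
    }
    producers.shutdown();
    int total = 0;
    for (Future<Integer> f : tasks) {
      total += f.get();                          // wait for every producer
    }
    return total;
  }

  private void submitPage(final int rows) throws InterruptedException {
    semaphore.acquire();                         // back-pressure before submit
    tasks.add(producers.submit(() -> {
      try {
        return rows;                             // stand-in for real page work
      } finally {
        semaphore.release();
      }
    }));
  }
}
```

With 10 rows and a page size of 4, `finishAndCount()` submits the 2 leftover rows and returns 10; with exactly 8 rows the guard skips the extra submit, which is the point of the `entryCount > 0` check.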




[GitHub] carbondata pull request #971: [CARBONDATA-1015] Extract interface in data lo...

2017-06-01 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/971#discussion_r119783989
  
--- Diff: docs/useful-tips-on-carbondata.md ---
@@ -127,7 +127,7 @@ query performance. The create table command can be modified as below :
   TBLPROPERTIES ( 'DICTIONARY_EXCLUDE'='MSISDN,HOST,IMSI',
   'DICTIONARY_INCLUDE'='Dime_1,END_TIME,BEGIN_TIME');
 ```
-  The result of performance analysis of test-case shows reduction in query execution time from 15 to 3 seconds, thereby improving performance by nearly 5 times.
+  The encodedData of performance analysis of test-case shows reduction in query execution time from 15 to 3 seconds, thereby improving performance by nearly 5 times.
--- End diff --

I think this file was changed by mistake.




[GitHub] carbondata pull request #971: [CARBONDATA-1015] Extract interface in data lo...

2017-06-01 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/971#discussion_r119783944
  
--- Diff: docs/faq.md ---
@@ -123,7 +123,7 @@ id  cityname
 3   davishenzhen
 ```
 
-As result shows, the second column is city in carbon table, but what inside is name, such as jack. This phenomenon is same with insert data into hive table.
+As encodedData shows, the second column is city in carbon table, but what inside is name, such as jack. This phenomenon is same with insert data into hive table.
--- End diff --

I think this file was changed by mistake.




[GitHub] carbondata pull request #971: [CARBONDATA-1015] Extract interface in data lo...

2017-06-01 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/971#discussion_r119783972
  
--- Diff: docs/release-guide.md ---
@@ -109,7 +109,7 @@ staging repository and promote the artifacts to Maven Central.
 4. Choose `User Token` from the dropdown, then click `Access User Token`. Copy a snippet of the 
 Maven XML configuration block.
 5. Insert this snippet twice into your global Maven `settings.xml` file, typically `${HOME]/
-.m2/settings.xml`. The end result should look like this, where `TOKEN_NAME` and `TOKEN_PASSWORD` 
+.m2/settings.xml`. The end encodedData should look like this, where `TOKEN_NAME` and `TOKEN_PASSWORD` 
--- End diff --

I think this file was changed by mistake.




[GitHub] carbondata pull request #971: [CARBONDATA-1015] Extract interface in data lo...

2017-06-01 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/971#discussion_r119783933
  
--- Diff: docs/dml-operation-on-carbondata.md ---
@@ -211,7 +211,7 @@ By default the above configuration will be false.
 
 ### Examples
 ```
-INSERT INTO table1 SELECT item1 ,sum(item2 + 1000) as result FROM 
+INSERT INTO table1 SELECT item1 ,sum(item2 + 1000) as encodedData FROM 
--- End diff --

I think this file was changed by mistake.




[GitHub] carbondata pull request #971: [CARBONDATA-1015] Extract interface in data lo...

2017-06-01 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/971#discussion_r119783237
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/page/statistics/MeasurePageStatistics.java
 ---
@@ -0,0 +1,88 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.carbondata.core.datastore.page.statistics;
+
+import org.apache.carbondata.core.datastore.page.ColumnPage;
+import org.apache.carbondata.core.metadata.datatype.DataType;
+
+public class MeasurePageStatistics {
--- End diff --

I think it is better to give this another name; the current one is confusing. It
just holds the data, like a holder, so better to name it as a VO or holder.




[GitHub] carbondata issue #948: [CARBONDATA-1093] During single pass load user data i...

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/948
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2145/





[GitHub] carbondata pull request #971: [CARBONDATA-1015] Extract interface in data lo...

2017-06-01 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/971#discussion_r119782441
  
--- Diff: LICENSE ---
@@ -157,7 +157,7 @@
   negligent acts) or agreed to in writing, shall any Contributor be
   liable to You for damages, including any direct, indirect, special,
   incidental, or consequential damages of any character arising as a
-  result of this License or out of the use or inability to use the
+  encodedData of this License or out of the use or inability to use the
--- End diff --

I think this was changed by mistake.




[GitHub] carbondata issue #988: [CARBONDATA-1110] put if clause out of the for clause

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/988
  
Build Success with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2144/





[GitHub] carbondata pull request #971: [CARBONDATA-1015] Extract interface in data lo...

2017-06-01 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/971#discussion_r119781980
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/page/statistics/ColumnPageStatistics.java
 ---
@@ -114,6 +115,46 @@ private int getDecimalCount(double value) {
 return decimalPlaces;
   }
 
+  /**
+   * return min value as byte array
+   */
+  public byte[] minBytes() {
+return getValueAsBytes(getMin());
+  }
+
+  /**
+   * return max value as byte array
+   */
+  public byte[] maxBytes() {
+return getValueAsBytes(getMax());
+  }
+
+  /**
+   * convert value to byte array
+   */
+  private byte[] getValueAsBytes(Object value) {
+ByteBuffer b = null;
+Object max = getMax();
--- End diff --

We should use `value` not `getMax()`
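
A sketch of what the corrected helper might look like, serializing the passed-in `value` rather than the result of `getMax()` (the type handling shown here is illustrative; the real method dispatches on CarbonData's `DataType`, not on `instanceof`):

```java
import java.nio.ByteBuffer;

class ValueBytesSketch {
  // Hypothetical corrected form of the reviewed getValueAsBytes: the bytes
  // come from the argument, so minBytes() and maxBytes() no longer both
  // serialize the max value.
  static byte[] getValueAsBytes(Object value) {
    ByteBuffer b;
    if (value instanceof Long) {
      b = ByteBuffer.allocate(8).putLong((Long) value);
    } else if (value instanceof Double) {
      b = ByteBuffer.allocate(8).putDouble((Double) value);
    } else {
      throw new IllegalArgumentException("unsupported type: " + value.getClass());
    }
    b.flip();                                // prepare buffer for reading
    byte[] out = new byte[b.remaining()];
    b.get(out);
    return out;
  }
}
```

The bug in the original was that both `minBytes()` and `maxBytes()` would return the serialized max, since `getValueAsBytes` ignored its parameter.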




[GitHub] carbondata issue #948: [CARBONDATA-1093] During single pass load user data i...

2017-06-01 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/948
  
retest this please




[GitHub] carbondata pull request #971: [CARBONDATA-1015] Extract interface in data lo...

2017-06-01 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/971#discussion_r119781704
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/datastore/page/statistics/ColumnPageStatistics.java
 ---
@@ -33,30 +34,30 @@
* the unique value is the non-exist value in the row,
* and will be used as storage key for null values of measures
*/
-  private Object uniqueValue;
+  private Object nonExistValue;
--- End diff --

I don't think we are using this now; better to remove it.




[GitHub] carbondata pull request #971: [CARBONDATA-1015] Extract interface in data lo...

2017-06-01 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/971#discussion_r119781493
  
--- Diff: 
processing/src/main/java/org/apache/carbondata/processing/store/TablePage.java 
---
@@ -65,20 +68,20 @@
 this.pageSize = pageSize;
 keyColumnPage = new KeyColumnPage(pageSize,
 model.getSegmentProperties().getDimensionPartitions().length);
-    noDictDimensionPage = new VarLengthColumnPage[model.getNoDictionaryCount()];
+    noDictDimensionPage = new PrimitiveColumnPage[model.getNoDictionaryCount()];
     for (int i = 0; i < noDictDimensionPage.length; i++) {
-      noDictDimensionPage[i] = new VarLengthColumnPage(pageSize);
+      noDictDimensionPage[i] = new PrimitiveColumnPage(DataType.STRING, pageSize);
--- End diff --

A primitive column page should only contain primitive data types; it should not
contain string or decimal. So it is better to create another page type for
string and decimal.
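
The reviewer's suggestion can be sketched as a constructor guard (hypothetical classes, not the actual CarbonData code), so variable-length types fail fast instead of silently landing in a fixed-width page:

```java
class PrimitivePageSketch {
  enum DataType { INT, LONG, DOUBLE, STRING, DECIMAL }

  // Sketch: a primitive page rejects variable-length types at construction,
  // forcing STRING/DECIMAL data into a dedicated var-length page class.
  static class PrimitiveColumnPage {
    final DataType type;
    final int pageSize;

    PrimitiveColumnPage(DataType type, int pageSize) {
      if (type == DataType.STRING || type == DataType.DECIMAL) {
        throw new IllegalArgumentException(type + " requires a var-length page");
      }
      this.type = type;
      this.pageSize = pageSize;
    }
  }
}
```

With this guard the `new PrimitiveColumnPage(DataType.STRING, pageSize)` call flagged in the diff would throw at load time rather than mis-handle string data.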




[GitHub] carbondata issue #980: [CARBONDATA-1110] put if clause out of the for clause

2017-06-01 Thread mayunSaicmotor
Github user mayunSaicmotor commented on the issue:

https://github.com/apache/carbondata/pull/980
  
Closed this PR and raised a new PR:
https://github.com/apache/carbondata/pull/988




[GitHub] carbondata pull request #988: [CARBONDATA-1110] put if clause out of the for...

2017-06-01 Thread mayunSaicmotor
GitHub user mayunSaicmotor opened a pull request:

https://github.com/apache/carbondata/pull/988

[CARBONDATA-1110] put if clause out of the for clause

It should be better to move the if clause out of the for clause.
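
The refactoring named in the title, hoisting a loop-invariant condition out of the loop, can be illustrated with a hypothetical example (not the actual CarbonData code being changed):

```java
class HoistIfSketch {
  // Before: the branch is re-evaluated on every iteration even though
  // applyOffset never changes inside the loop.
  static long sumBefore(int[] data, boolean applyOffset) {
    long sum = 0;
    for (int v : data) {
      if (applyOffset) {
        sum += v + 1000;
      } else {
        sum += v;
      }
    }
    return sum;
  }

  // After: the loop-invariant condition is checked once, outside the loop,
  // and each branch gets its own tight loop.
  static long sumAfter(int[] data, boolean applyOffset) {
    long sum = 0;
    if (applyOffset) {
      for (int v : data) sum += v + 1000;
    } else {
      for (int v : data) sum += v;
    }
    return sum;
  }
}
```

Both versions compute the same result; the hoisted form simply avoids a redundant per-row branch, which matters on hot data-loading paths.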

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mayunSaicmotor/incubator-carbondata 
CARBON-1110-NEW

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/988.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #988


commit 324ecffb98d58362f5e2a3560aa0ecac3956848f
Author: mayun 
Date:   2017-06-02T04:55:36Z

put if clause out of the for clause






[GitHub] carbondata pull request #980: [CARBONDATA-1110] put if clause out of the for...

2017-06-01 Thread mayunSaicmotor
Github user mayunSaicmotor closed the pull request at:

https://github.com/apache/carbondata/pull/980




[GitHub] carbondata pull request #964: [CARBONDATA-1099] Fixed bug for carbon-spark-s...

2017-06-01 Thread xuchuanyin
Github user xuchuanyin commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/964#discussion_r119761693
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/cache/dictionary/ReverseDictionaryCache.java
 ---
@@ -43,7 +43,7 @@
* Attribute for Carbon LOGGER
*/
  private static final LogService LOGGER =
-      LogServiceFactory.getLogService(ForwardDictionaryCache.class.getName());
+      LogServiceFactory.getLogService(ReverseDictionaryCache.class.getName());
--- End diff --

Yeah, there is no need to fix this in the current issue; I just found it when I
submitted the code.

So, should I roll back this change and start a new issue, or just add another
comment in this issue about this change?




[GitHub] carbondata issue #987: [WIP] Use ColumnPage for measure encoding and compres...

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/987
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2143/





[GitHub] carbondata pull request #987: [WIP] Use ColumnPage for measure encoding and ...

2017-06-01 Thread jackylk
GitHub user jackylk opened a pull request:

https://github.com/apache/carbondata/pull/987

[WIP] Use ColumnPage for measure encoding and compression

[WIP] This PR should be merged after #971 

In this PR, the following interfaces are added:
### ColumnPageEncoder
Encoder for a ColumnPage.

### EncoderStrategy, DefaultEncoderStrategy
EncoderStrategy defines the strategy to choose an encoding for a ColumnPage.
DefaultEncoderStrategy is a default implementation which chooses an encoding
based on data type and statistics.

Other changes:
1. encoder for measure will use ColumnPage
2. compression for measure will use ColumnPage
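
A minimal sketch of how such a strategy might look (all names, types, and selection rules here are hypothetical; the real CarbonData interfaces in this PR may differ):

```java
class EncoderStrategySketch {
  enum DataType { INT, LONG, DOUBLE }
  enum Encoding { DELTA_INT, DELTA_LONG, RAW_DOUBLE }

  // Strategy interface: pick an encoding for a column page from its data
  // type and simple page statistics (min/max), as the description outlines.
  interface EncoderStrategy {
    Encoding choose(DataType type, long min, long max);
  }

  // A default implementation: narrow long columns to a delta-of-int
  // encoding when the value range fits in 32 bits.
  static class DefaultEncoderStrategy implements EncoderStrategy {
    public Encoding choose(DataType type, long min, long max) {
      switch (type) {
        case INT:
          return Encoding.DELTA_INT;
        case LONG:
          return (max - min <= Integer.MAX_VALUE)
              ? Encoding.DELTA_INT : Encoding.DELTA_LONG;
        default:
          return Encoding.RAW_DOUBLE;
      }
    }
  }
}
```

The design value of separating the strategy from the encoder is that a page's statistics (gathered at load time) drive encoding selection without the writer hard-coding per-type rules.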


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/jackylk/incubator-carbondata encode

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/carbondata/pull/987.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #987


commit d8a4a2b74373f1f9b1c15780231958d176a6cf93
Author: jackylk 
Date:   2017-05-27T18:14:48Z

use ColumnPage in writer

commit 705469a4c5983d832b5b4e5504155dfb934425d7
Author: jackylk 
Date:   2017-05-27T18:22:31Z

remove WriterCompressModel

commit 418278d6e93c3d2bbfd4096893c2711e1e8ee9b8
Author: jackylk 
Date:   2017-05-28T14:55:29Z

add PrimitiveColumnPage

commit f2bb1267de866fb8fe72ea51aa8b117cdc2ceb4e
Author: jackylk 
Date:   2017-05-30T00:36:20Z

add TableSpec

commit 80bc3e4551a45d01386bbbd05d307354f5da5c25
Author: jackylk 
Date:   2017-05-30T10:11:21Z

add TablePageKey

commit 2ed4fbe38b98a6002cd4106d7274cdd88ef2b112
Author: jackylk 
Date:   2017-05-30T15:21:19Z

fix testcase

commit 0bc0bad2fe98f8719a2dfd311bf0bd379c3683de
Author: jackylk 
Date:   2017-05-30T17:37:38Z

fix style:

commit 224abbd9e2137ed1f16dad3c4ad737a0d3e80d37
Author: jackylk 
Date:   2017-05-31T08:58:16Z

make PrimitiveColumnPage generic

commit 545273ac7321f1e3c44e3c96b49083ede709a0aa
Author: jackylk 
Date:   2017-05-31T11:23:35Z

fix style

commit d6a60b4e500ad9a02463b7599d0caad8dff522cd
Author: jackylk 
Date:   2017-05-31T15:45:14Z

fix style

commit 210c9cba566b6208f65302b8d5c687ea279b7a2c
Author: jackylk 
Date:   2017-06-01T18:17:33Z

add ColumnPageEncoder






[GitHub] carbondata issue #831: Branch 1.1

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/831
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #856: log.txt

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/856
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #884: Create project-xmlns

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/884
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #980: [CARBONDATA-1110] put if clause out of the for clause

2017-06-01 Thread chenliang613
Github user chenliang613 commented on the issue:

https://github.com/apache/carbondata/pull/980
  
@mayunSaicmotor  please squash all commits to one commit.




[GitHub] carbondata pull request #979: [CARBONDATA-1102] resolved int,short type bug ...

2017-06-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/979




[GitHub] carbondata pull request #964: [CARBONDATA-1099] Fixed bug for carbon-spark-s...

2017-06-01 Thread chenliang613
Github user chenliang613 commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/964#discussion_r119642951
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/cache/dictionary/ReverseDictionaryCache.java
 ---
@@ -43,7 +43,7 @@
* Attribute for Carbon LOGGER
*/
  private static final LogService LOGGER =
-      LogServiceFactory.getLogService(ForwardDictionaryCache.class.getName());
+      LogServiceFactory.getLogService(ReverseDictionaryCache.class.getName());
--- End diff --

Why does this code need to change in order to fix the carbon-spark-shell issue?




[GitHub] carbondata pull request #986: [CARBONDATA-1108][CARBONDATA-1112] Supported I...

2017-06-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/carbondata/pull/986




[GitHub] carbondata issue #986: [CARBONDATA-1108][CARBONDATA-1112] Supported IUD for ...

2017-06-01 Thread chenliang613
Github user chenliang613 commented on the issue:

https://github.com/apache/carbondata/pull/986
  
LGTM. Verified on my local machine.




[GitHub] carbondata issue #962: Only for test ci(vacuous PR-don't merge)

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/962
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2142/





[GitHub] carbondata issue #962: Only for test ci(vacuous PR-don't merge)

2017-06-01 Thread chenliang613
Github user chenliang613 commented on the issue:

https://github.com/apache/carbondata/pull/962
  
retest this please




[GitHub] carbondata issue #972: [CARBONDATA-1065] Added set command in carbon to upda...

2017-06-01 Thread gvramana
Github user gvramana commented on the issue:

https://github.com/apache/carbondata/pull/972
  
retest this please




[jira] [Created] (CARBONDATA-1117) Update SET & RESET command details in online help documentation

2017-06-01 Thread Manohar Vanam (JIRA)
Manohar Vanam created CARBONDATA-1117:
-

 Summary: Update SET & RESET command details in online help 
documentation
 Key: CARBONDATA-1117
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1117
 Project: CarbonData
  Issue Type: Sub-task
  Components: spark-integration
Reporter: Manohar Vanam
Assignee: Gururaj Shetty


Update SET & RESET command details in online help documentation
1. update syntax
2. Examples with use cases



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[GitHub] carbondata issue #978: [CARBONDATA-1109] Acquire semaphore before submit a p...

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/978
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2140/





[GitHub] carbondata issue #986: [CARBONDATA-1108][CARBONDATA-1112] Supported IUD for ...

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/986
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2139/





[GitHub] carbondata issue #978: [CARBONDATA-1109] Cover the case when last page is no...

2017-06-01 Thread watermen
Github user watermen commented on the issue:

https://github.com/apache/carbondata/pull/978
  
@ravipesala Thanks for your solution, PR updated.




[GitHub] carbondata issue #986: [CARBONDATA-1108][CARBONDATA-1112] Supported IUD for ...

2017-06-01 Thread chenliang613
Github user chenliang613 commented on the issue:

https://github.com/apache/carbondata/pull/986
  
retest this please




[GitHub] carbondata issue #972: [CARBONDATA-1065] Added set command in carbon to upda...

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/972
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2137/





[GitHub] carbondata issue #979: [CARBONDATA-1102] resolved int,short type bug for hiv...

2017-06-01 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/979
  
LGTM, @chenliang613 please merge it.




[jira] [Closed] (CARBONDATA-1116) Not able to connect with Carbonsession while starting carbon spark shell and beeline

2017-06-01 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala closed CARBONDATA-1116.
---
Resolution: Duplicate

Duplicated to https://issues.apache.org/jira/browse/CARBONDATA-1112

> Not able to connect with Carbonsession while starting carbon spark shell and 
> beeline
> 
>
> Key: CARBONDATA-1116
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1116
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.2.0
> Environment: spark 2.1
>Reporter: Vandana Yadav
>Priority: Blocker
>
> Not able to connect with Carbonsession while starting carbon spark shell and 
> beeline
> Steps to reproduce:
> 1)Start thrift-server
> a) cd $SPARK-HOME/bin
> b) ./spark-submit --conf spark.sql.hive.thriftServer.singleSession=true 
> --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
> /opt/spark/spark-2.1/carbonlib/carbondata_2.11-1.1.0-SNAPSHOT-shade-hadoop2.7.3.jar
>  hdfs://localhost:54310/opt/prestocarbonStore
> 2)Start Beeline
> a) cd $SPARK-HOME/bin
> b)./beeline
> 3) Connect with carbondata via jdbc
> !connect jdbc:hive2://localhost:1
> Enter username for jdbc:hive2://localhost:1: hduser
> Enter password for jdbc:hive2://localhost:1: **
> 4) Actual Result:
> Error: Could not establish connection to jdbc:hive2://localhost:1: null 
> (state=08S01,code=0)
> 0: jdbc:hive2://localhost:1 (closed)>
> 5) Expected result : it should connect successfully with carbondata
> 6)console logs:
> 17/06/01 13:03:27 INFO ThriftCLIService: Client protocol version: 
> HIVE_CLI_SERVICE_PROTOCOL_V8
> 17/06/01 13:03:27 INFO SessionState: Created local directory: 
> /tmp/addaba65-46c5-4467-a02f-2bbdfd54329a_resources
> 17/06/01 13:03:27 INFO SessionState: Created HDFS directory: 
> /tmp/hive/hduser/addaba65-46c5-4467-a02f-2bbdfd54329a
> 17/06/01 13:03:27 INFO SessionState: Created local directory: 
> /tmp/hduser/addaba65-46c5-4467-a02f-2bbdfd54329a
> 17/06/01 13:03:27 INFO SessionState: Created HDFS directory: 
> /tmp/hive/hduser/addaba65-46c5-4467-a02f-2bbdfd54329a/_tmp_space.db
> 17/06/01 13:03:27 INFO HiveSessionImpl: Operation log session directory is 
> created: /tmp/hduser/operation_logs/addaba65-46c5-4467-a02f-2bbdfd54329a
> 17/06/01 13:03:27 INFO CarbonSparkSqlParser: Parsing command: use default
> Exception in thread "HiveServer2-Handler-Pool: Thread-84" 
> java.lang.ExceptionInInitializerError
>   at 
> org.apache.spark.sql.hive.CarbonSessionState$$anon$1.<init>(CarbonSessionState.scala:133)
>   at 
> org.apache.spark.sql.hive.CarbonSessionState.analyzer$lzycompute(CarbonSessionState.scala:128)
>   at 
> org.apache.spark.sql.hive.CarbonSessionState.analyzer(CarbonSessionState.scala:127)
>   at 
> org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
>   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
>   at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
>   at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkSQLSessionManager.openSession(SparkSQLSessionManager.scala:83)
>   at 
> org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:202)
>   at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:351)
>   at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:246)
>   at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1253)
>   at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1238)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.spark.sql.hive.CarbonIUDAnalysisRule$.<init>(CarbonAnalysisRules.scala:90)
>   at 
> org.apache.spark.sql.hive.CarbonIUDAnalysisRule$.<clinit>(CarbonAnalysisRules.scala)
>   ... 20 more





[jira] [Updated] (CARBONDATA-1112) Cannot run queries on Spark 2.1

2017-06-01 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-1112:

Summary: Cannot run queries on Spark 2.1  (was: fix failing IUD test case)

> Cannot run queries on Spark 2.1
> ---
>
> Key: CARBONDATA-1112
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1112
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Rahul Kumar
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Running any query in Spark 2.1 fails with the following error:
> {code}
> Exception in thread "HiveServer2-Handler-Pool: Thread-84" 
> java.lang.ExceptionInInitializerError
> at 
> org.apache.spark.sql.hive.CarbonSessionState$$anon$1.<init>(CarbonSessionState.scala:133)
> at 
> org.apache.spark.sql.hive.CarbonSessionState.analyzer$lzycompute(CarbonSessionState.scala:128)
> at 
> org.apache.spark.sql.hive.CarbonSessionState.analyzer(CarbonSessionState.scala:127)
> at 
> org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
> at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
> at 
> org.apache.spark.sql.hive.thriftserver.SparkSQLSessionManager.openSession(SparkSQLSessionManager.scala:83)
> at 
> org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:202)
> at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:351)
> at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:246)
> at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1253)
> at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1238)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.spark.sql.hive.CarbonIUDAnalysisRule$.<init>(CarbonAnalysisRules.scala:90)
> at 
> org.apache.spark.sql.hive.CarbonIUDAnalysisRule$.<clinit>(CarbonAnalysisRules.scala)
> {code}





[jira] [Updated] (CARBONDATA-1112) fix failing IUD test case

2017-06-01 Thread Ravindra Pesala (JIRA)

 [ 
https://issues.apache.org/jira/browse/CARBONDATA-1112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravindra Pesala updated CARBONDATA-1112:

Description: 
Running any query in Spark 2.1 fails with the following error:

{code}
Exception in thread "HiveServer2-Handler-Pool: Thread-84" 
java.lang.ExceptionInInitializerError
at 
org.apache.spark.sql.hive.CarbonSessionState$$anon$1.<init>(CarbonSessionState.scala:133)
at 
org.apache.spark.sql.hive.CarbonSessionState.analyzer$lzycompute(CarbonSessionState.scala:128)
at 
org.apache.spark.sql.hive.CarbonSessionState.analyzer(CarbonSessionState.scala:127)
at 
org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
at 
org.apache.spark.sql.hive.thriftserver.SparkSQLSessionManager.openSession(SparkSQLSessionManager.scala:83)
at 
org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:202)
at 
org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:351)
at 
org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:246)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1253)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1238)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at 
org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at 
org.apache.spark.sql.hive.CarbonIUDAnalysisRule$.<init>(CarbonAnalysisRules.scala:90)
at 
org.apache.spark.sql.hive.CarbonIUDAnalysisRule$.<clinit>(CarbonAnalysisRules.scala)
{code}

> fix failing IUD test case
> -
>
> Key: CARBONDATA-1112
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1112
> Project: CarbonData
>  Issue Type: Bug
>Reporter: Rahul Kumar
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Running any query in Spark 2.1 fails with the following error:
> {code}
> Exception in thread "HiveServer2-Handler-Pool: Thread-84" 
> java.lang.ExceptionInInitializerError
> at 
> org.apache.spark.sql.hive.CarbonSessionState$$anon$1.<init>(CarbonSessionState.scala:133)
> at 
> org.apache.spark.sql.hive.CarbonSessionState.analyzer$lzycompute(CarbonSessionState.scala:128)
> at 
> org.apache.spark.sql.hive.CarbonSessionState.analyzer(CarbonSessionState.scala:127)
> at 
> org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
> at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
> at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
> at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
> at 
> org.apache.spark.sql.hive.thriftserver.SparkSQLSessionManager.openSession(SparkSQLSessionManager.scala:83)
> at 
> org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:202)
> at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:351)
> at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:246)
> at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1253)
> at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1238)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
> at 
> org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
> at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
> at 
> org.apache.spark.sql.hive.CarbonIUDAnalysisRule$.<init>(CarbonAnalysisRules.scala:90)
> at 
> org.apache.spark.sql.hive.CarbonIUDAnalysisRule$.<clinit>(CarbonAnalysisRules.scala)
> {code}





[jira] [Commented] (CARBONDATA-1116) Not able to connect with Carbonsession while starting carbon spark shell and beeline

2017-06-01 Thread chenerlu (JIRA)

[ 
https://issues.apache.org/jira/browse/CARBONDATA-1116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16032668#comment-16032668
 ] 

chenerlu commented on CARBONDATA-1116:
--

Hi, I met the same issue when I ran CarbonSessionExample on the latest master 
branch.
This issue may be caused by creating a new SparkSqlParser with null as its 
parameter.
Please help check whether it is the same problem. [~ravi.pesala]
Thanks


> Not able to connect with Carbonsession while starting carbon spark shell and 
> beeline
> 
>
> Key: CARBONDATA-1116
> URL: https://issues.apache.org/jira/browse/CARBONDATA-1116
> Project: CarbonData
>  Issue Type: Bug
>  Components: sql
>Affects Versions: 1.2.0
> Environment: spark 2.1
>Reporter: Vandana Yadav
>Priority: Blocker
>
> Not able to connect with Carbonsession while starting carbon spark shell and 
> beeline
> Steps to reproduce:
> 1)Start thrift-server
> a) cd $SPARK-HOME/bin
> b) ./spark-submit --conf spark.sql.hive.thriftServer.singleSession=true 
> --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
> /opt/spark/spark-2.1/carbonlib/carbondata_2.11-1.1.0-SNAPSHOT-shade-hadoop2.7.3.jar
>  hdfs://localhost:54310/opt/prestocarbonStore
> 2)Start Beeline
> a) cd $SPARK-HOME/bin
> b)./beeline
> 3) Connect with carbondata via jdbc
> !connect jdbc:hive2://localhost:1
> Enter username for jdbc:hive2://localhost:1: hduser
> Enter password for jdbc:hive2://localhost:1: **
> 4) Actual Result:
> Error: Could not establish connection to jdbc:hive2://localhost:1: null 
> (state=08S01,code=0)
> 0: jdbc:hive2://localhost:1 (closed)>
> 5) Expected result : it should connect successfully with carbondata
> 6)console logs:
> 17/06/01 13:03:27 INFO ThriftCLIService: Client protocol version: 
> HIVE_CLI_SERVICE_PROTOCOL_V8
> 17/06/01 13:03:27 INFO SessionState: Created local directory: 
> /tmp/addaba65-46c5-4467-a02f-2bbdfd54329a_resources
> 17/06/01 13:03:27 INFO SessionState: Created HDFS directory: 
> /tmp/hive/hduser/addaba65-46c5-4467-a02f-2bbdfd54329a
> 17/06/01 13:03:27 INFO SessionState: Created local directory: 
> /tmp/hduser/addaba65-46c5-4467-a02f-2bbdfd54329a
> 17/06/01 13:03:27 INFO SessionState: Created HDFS directory: 
> /tmp/hive/hduser/addaba65-46c5-4467-a02f-2bbdfd54329a/_tmp_space.db
> 17/06/01 13:03:27 INFO HiveSessionImpl: Operation log session directory is 
> created: /tmp/hduser/operation_logs/addaba65-46c5-4467-a02f-2bbdfd54329a
> 17/06/01 13:03:27 INFO CarbonSparkSqlParser: Parsing command: use default
> Exception in thread "HiveServer2-Handler-Pool: Thread-84" 
> java.lang.ExceptionInInitializerError
>   at 
> org.apache.spark.sql.hive.CarbonSessionState$$anon$1.<init>(CarbonSessionState.scala:133)
>   at 
> org.apache.spark.sql.hive.CarbonSessionState.analyzer$lzycompute(CarbonSessionState.scala:128)
>   at 
> org.apache.spark.sql.hive.CarbonSessionState.analyzer(CarbonSessionState.scala:127)
>   at 
> org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
>   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
>   at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
>   at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
>   at 
> org.apache.spark.sql.hive.thriftserver.SparkSQLSessionManager.openSession(SparkSQLSessionManager.scala:83)
>   at 
> org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:202)
>   at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:351)
>   at 
> org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:246)
>   at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1253)
>   at 
> org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1238)
>   at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>   at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
>   at 
> org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
>   at 
> org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.spark.sql.hive.CarbonIUDAnalysisRule$.<init>(CarbonAnalysisRules.scala:90)
>   at 
> org.apache.spark.sql.hive.CarbonIUDAnalysisRule$.<clinit>(CarbonAnalysisRules.scala)
>   ... 20 more




[GitHub] carbondata issue #978: [CARBONDATA-1109] Cover the case when last page is no...

2017-06-01 Thread ravipesala
Github user ravipesala commented on the issue:

https://github.com/apache/carbondata/pull/978
  
@watermen Thank you for the reproduction steps; I can reproduce it.
But the fix you have given is not right: this issue happens because of a 
missing `semaphore.acquire()` in the `finish` method. 
So the solution is simply to add `semaphore.acquire()` at the start of the 
`finish()` method. Please update the PR.
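The fix discussed above can be sketched in isolation. The following is a minimal, hypothetical sketch of the pattern, not CarbonData's actual `CarbonFactDataHandlerColumnar` code (all class and field names here are illustrative): the semaphore bounds how many pages are in flight at once, and the final page submitted by `finish()` goes through the same `acquire()` gate as regular pages, so the permit count is never exceeded.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FinishSemaphoreSketch {
    // At most 2 pages may be in flight between producer submit and write-out.
    static final Semaphore semaphore = new Semaphore(2);
    static final AtomicInteger inFlight = new AtomicInteger();
    static final AtomicInteger maxInFlight = new AtomicInteger();
    static final ExecutorService producers = Executors.newFixedThreadPool(4);

    static void submitPage() throws InterruptedException {
        semaphore.acquire();                     // gate BEFORE submitting, as in the fix
        producers.submit(() -> {
            int now = inFlight.incrementAndGet();
            maxInFlight.accumulateAndGet(now, Math::max);
            try { Thread.sleep(10); } catch (InterruptedException ignored) { }
            inFlight.decrementAndGet();
            semaphore.release();                 // released once the page is written
        });
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 10; i++) submitPage();   // full pages from addDataToStore()
        int entryCount = 3;                          // leftover rows for a partial last page
        if (entryCount > 0) submitPage();            // finish(): only when rows remain
        producers.shutdown();
        producers.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(maxInFlight.get() <= 2);  // prints true: the bound held
    }
}
```

Without the `acquire()` in the `entryCount > 0` branch, the final page would bypass the gate and could exceed the permit count, which is the race the PR closes.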




[jira] [Created] (CARBONDATA-1116) Not able to connect with Carbonsession while starting carbon spark shell and beeline

2017-06-01 Thread Vandana Yadav (JIRA)
Vandana Yadav created CARBONDATA-1116:
-

 Summary: Not able to connect with Carbonsession while starting 
carbon spark shell and beeline
 Key: CARBONDATA-1116
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1116
 Project: CarbonData
  Issue Type: Bug
  Components: sql
Affects Versions: 1.2.0
 Environment: spark 2.1
Reporter: Vandana Yadav
Priority: Blocker


Not able to connect with Carbonsession while starting carbon spark shell and 
beeline

Steps to reproduce:

1)Start thrift-server
a) cd $SPARK-HOME/bin

b) ./spark-submit --conf spark.sql.hive.thriftServer.singleSession=true --class 
org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
/opt/spark/spark-2.1/carbonlib/carbondata_2.11-1.1.0-SNAPSHOT-shade-hadoop2.7.3.jar
 hdfs://localhost:54310/opt/prestocarbonStore

2)Start Beeline
a) cd $SPARK-HOME/bin

b)./beeline

3) Connect with carbondata via jdbc
!connect jdbc:hive2://localhost:1

Enter username for jdbc:hive2://localhost:1: hduser
Enter password for jdbc:hive2://localhost:1: **

4) Actual Result:
Error: Could not establish connection to jdbc:hive2://localhost:1: null 
(state=08S01,code=0)
0: jdbc:hive2://localhost:1 (closed)>

5) Expected result : it should connect successfully with carbondata

6)console logs:
17/06/01 13:03:27 INFO ThriftCLIService: Client protocol version: 
HIVE_CLI_SERVICE_PROTOCOL_V8
17/06/01 13:03:27 INFO SessionState: Created local directory: 
/tmp/addaba65-46c5-4467-a02f-2bbdfd54329a_resources
17/06/01 13:03:27 INFO SessionState: Created HDFS directory: 
/tmp/hive/hduser/addaba65-46c5-4467-a02f-2bbdfd54329a
17/06/01 13:03:27 INFO SessionState: Created local directory: 
/tmp/hduser/addaba65-46c5-4467-a02f-2bbdfd54329a
17/06/01 13:03:27 INFO SessionState: Created HDFS directory: 
/tmp/hive/hduser/addaba65-46c5-4467-a02f-2bbdfd54329a/_tmp_space.db
17/06/01 13:03:27 INFO HiveSessionImpl: Operation log session directory is 
created: /tmp/hduser/operation_logs/addaba65-46c5-4467-a02f-2bbdfd54329a
17/06/01 13:03:27 INFO CarbonSparkSqlParser: Parsing command: use default
Exception in thread "HiveServer2-Handler-Pool: Thread-84" 
java.lang.ExceptionInInitializerError
at 
org.apache.spark.sql.hive.CarbonSessionState$$anon$1.(CarbonSessionState.scala:133)
at 
org.apache.spark.sql.hive.CarbonSessionState.analyzer$lzycompute(CarbonSessionState.scala:128)
at 
org.apache.spark.sql.hive.CarbonSessionState.analyzer(CarbonSessionState.scala:127)
at 
org.apache.spark.sql.execution.QueryExecution.assertAnalyzed(QueryExecution.scala:48)
at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:63)
at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:592)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:699)
at 
org.apache.spark.sql.hive.thriftserver.SparkSQLSessionManager.openSession(SparkSQLSessionManager.scala:83)
at 
org.apache.hive.service.cli.CLIService.openSessionWithImpersonation(CLIService.java:202)
at 
org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:351)
at 
org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:246)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1253)
at 
org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1238)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at 
org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
at 
org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException
at 
org.apache.spark.sql.hive.CarbonIUDAnalysisRule$.(CarbonAnalysisRules.scala:90)
at 
org.apache.spark.sql.hive.CarbonIUDAnalysisRule$.(CarbonAnalysisRules.scala)
... 20 more







[GitHub] carbondata issue #986: [CARBONDATA-1108] Supported IUD for vector reader in ...

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/986
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2135/





[GitHub] carbondata pull request #986: [CARBONDATA-1108] Supported IUD for vector rea...

2017-06-01 Thread ravipesala
Github user ravipesala commented on a diff in the pull request:

https://github.com/apache/carbondata/pull/986#discussion_r119545242
  
--- Diff: 
core/src/main/java/org/apache/carbondata/core/scan/collector/impl/DictionaryBasedVectorResultCollector.java
 ---
@@ -144,6 +144,8 @@ protected void 
prepareDimensionAndMeasureColumnVectors() {
 return;
   }
   fillColumnVectorDetails(columnarBatch, rowCounter, requiredRows);
+  scannedResult
+  .markFilteredRows(columnarBatch, rowCounter, requiredRows, 
columnarBatch.getRowCounter());
--- End diff --

ok




[GitHub] carbondata issue #978: [CARBONDATA-1109] Cover the case when last page is no...

2017-06-01 Thread watermen
Github user watermen commented on the issue:

https://github.com/apache/carbondata/pull/978
  
@ravipesala You can reproduce this case by adding some logs and running the 
sample.csv loading test case (TestDataLoadWithFileName).
It's hard to reproduce, so I added a sleep in the Producer to simulate the 
time spent processing data.

![image](https://cloud.githubusercontent.com/assets/1400819/26669092/19e9535e-46df-11e7-9d02-c12b05bb4a90.png)

![image](https://cloud.githubusercontent.com/assets/1400819/26669073/0c56c64a-46df-11e7-8071-6ba2e7fa5785.png)
The log shows the lost page, as below:
```
00:15:42 [Thread-65]###addDataToStore
00:15:42 [Thread-65]###addDataToStore
00:15:43 [Thread-73]###Put ---> isWriteAll:false index:1
00:15:44 [Thread-72]###Put ---> isWriteAll:false index:0
00:15:44 [Thread-71]###Get ---> isWriteAll:false index:0
00:15:44 [Thread-71]###Get ---> isWriteAll:false index:1
00:15:44 [Thread-65]###addDataToStore
00:15:44 [Thread-65]###addDataToStore
00:15:44 [Thread-65]###finish
00:15:46 [Thread-72]###Put ---> isWriteAll:false index:1
00:15:46 [Thread-72]###Put ---> isWriteAll:true  index:0
00:15:46 [Thread-71]###Get ---> isWriteAll:true  index:0    // Last page is not consumed at the end.
00:15:46 [Thread-71]###Get ---> isWriteAll:false index:1
00:15:47 [Thread-73]###Put ---> isWriteAll:false index:0
00:15:47 [Thread-71]###Get ---> isWriteAll:false index:0
```
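The reproduction trick described above — sleeping inside a producer to widen the timing window — can be sketched as follows. This is an illustrative toy, not the actual CarbonData classes (the queue and page names are made up): the injected delay lets a later-submitted page reach the shared holder before an earlier one, mimicking the out-of-order Put/Get interleaving in the log.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class ProducerDelaySketch {
    public static void main(String[] args) throws Exception {
        BlockingQueue<String> blockletDataHolder = new LinkedBlockingQueue<>();
        ExecutorService producers = Executors.newFixedThreadPool(2);

        // First producer is artificially slowed down, simulating slow page encoding.
        producers.submit(() -> {
            try {
                Thread.sleep(200);                       // injected delay widens the race window
                blockletDataHolder.put("page-0");
            } catch (InterruptedException ignored) { }
        });
        // Second producer (the "last" page) runs immediately.
        producers.submit(() -> {
            try {
                blockletDataHolder.put("last-page (isWriteAll)");
            } catch (InterruptedException ignored) { }
        });

        // Consumer side: because of the delay, the "last" page arrives first.
        System.out.println(blockletDataHolder.take());
        System.out.println(blockletDataHolder.take());
        producers.shutdown();
    }
}
```

With the delay in place the inversion shows up on nearly every run, which is what makes the otherwise rare race observable in the test case.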








[jira] [Created] (CARBONDATA-1115) load csv data fail

2017-06-01 Thread hyd (JIRA)
hyd created CARBONDATA-1115:
---

 Summary: load csv data fail
 Key: CARBONDATA-1115
 URL: https://issues.apache.org/jira/browse/CARBONDATA-1115
 Project: CarbonData
  Issue Type: Bug
  Components: examples
Affects Versions: 1.2.0
 Environment: centos 7, spark2.1.0, hadoop 2.7
Reporter: hyd
 Fix For: 1.2.0


Is this a bug, or is there a problem with my environment? Can anyone help me?


[root@localhost spark-2.1.0-bin-hadoop2.7]# ls /home/carbondata/sample.csv 
/home/carbondata/sample.csv
[root@localhost spark-2.1.0-bin-hadoop2.7]# ./bin/spark-shell --master 
spark://192.168.32.114:7077 --total-executor-cores 2 --executor-memory 2G
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use 
setLogLevel(newLevel).
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/opt/spark-2.1.0-bin-hadoop2.7/carbonlib/carbondata_2.11-1.1.0-shade-hadoop2.2.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/opt/spark-2.1.0-bin-hadoop2.7/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
17/06/01 14:44:54 WARN NativeCodeLoader: Unable to load native-hadoop library 
for your platform... using builtin-java classes where applicable
17/06/01 14:44:54 WARN SparkConf: 
SPARK_CLASSPATH was detected (set to './carbonlib/*').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with --driver-class-path to augment the driver classpath
 - spark.executor.extraClassPath to augment the executor classpath

17/06/01 14:44:54 WARN SparkConf: Setting 'spark.executor.extraClassPath' to 
'./carbonlib/*' as a work-around.
17/06/01 14:44:54 WARN SparkConf: Setting 'spark.driver.extraClassPath' to 
'./carbonlib/*' as a work-around.
17/06/01 14:44:54 WARN Utils: Your hostname, localhost.localdomain resolves to 
a loopback address: 127.0.0.1; using 192.168.32.114 instead (on interface em1)
17/06/01 14:44:54 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another 
address
17/06/01 14:44:59 WARN ObjectStore: Failed to get database global_temp, 
returning NoSuchObjectException
Spark context Web UI available at http://192.168.32.114:4040
Spark context available as 'sc' (master = spark://192.168.32.114:7077, app id = 
app-20170601144454-0001).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_121)
Type in expressions to have them evaluated.
Type :help for more information.

scala> import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.SparkSession

scala> import org.apache.spark.sql.CarbonSession._
import org.apache.spark.sql.CarbonSession._

scala> val carbon = 
SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("hdfs://192.168.32.114/test")
17/06/01 14:45:35 WARN SparkContext: Using an existing SparkContext; some 
configuration may not take effect.
17/06/01 14:45:38 WARN ObjectStore: Failed to get database global_temp, 
returning NoSuchObjectException
carbon: org.apache.spark.sql.SparkSession = 
org.apache.spark.sql.CarbonSession@2165b170

scala> carbon.sql("CREATE TABLE IF NOT EXISTS test_table(id string, name 
string, city string, age Int) STORED BY 'carbondata'")
17/06/01 14:45:45 AUDIT CreateTable: 
[localhost.localdomain][root][Thread-1]Creating Table with Database name 
[default] and Table name [test_table]
res0: org.apache.spark.sql.DataFrame = []

scala> carbon.sql("LOAD DATA LOCAL INPATH '/home/carbondata/sample.csv' INTO 
TABLE test_table")
17/06/01 14:45:54 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 
192.168.32.114, executor 0): java.lang.ClassCastException: cannot assign 
instance of scala.collection.immutable.List$SerializationProxy to field 
org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type 
scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD
	at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2133)
	at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1305)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2237)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2155)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2013)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1535)
	at 

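The ClassCastException above (a List$SerializationProxy being assigned to the RDD dependencies field) is a classic symptom of the driver and the executors loading mismatched copies of the classes, which fits the deprecated SPARK_CLASSPATH work-around the shell warned about earlier. A hedged sketch of one common remedy — shipping the CarbonData jar to the executors explicitly instead of relying on SPARK_CLASSPATH; the jar path and master URL are taken from the session log above and may differ on another install:

```shell
# Sketch only: distribute the CarbonData assembly jar to executors with
# --jars so driver and executors deserialize against the same classes.
SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7
CARBON_JAR="$SPARK_HOME/carbonlib/carbondata_2.11-1.1.0-shade-hadoop2.2.0.jar"

"$SPARK_HOME/bin/spark-shell" \
  --master spark://192.168.32.114:7077 \
  --total-executor-cores 2 --executor-memory 2G \
  --jars "$CARBON_JAR" \
  --conf spark.executor.extraClassPath="$CARBON_JAR"
```

After restarting the shell this way, the LOAD DATA statement can be retried; if the exception persists, the jar under each worker's carbonlib directory should be checked for version drift against the driver's copy.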
[GitHub] carbondata issue #986: [CARBONDATA-1108] Supported IUD for vector reader in ...

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/986
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2134/



---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] carbondata issue #249: [CARBONDATA-329] constant final class changed to inte...

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/249
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #942: [CARBONDATA-1084]added documentation for V3 Data Form...

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/942
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #943: [CARBONDATA-1086]Added documentation for BATCH SORT S...

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/943
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #964: [CARBONDATA-1099] Fixed bug for carbon-spark-shell in...

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/964
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #975: [Documentation] Single pass condition for high cardin...

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/975
  
Can one of the admins verify this patch?




[GitHub] carbondata issue #972: [WIP] Added set command in carbon to update propertie...

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/972
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2133/





[GitHub] carbondata issue #981: [CARBONDATA-1111]Improve No dictionary column Include...

2017-06-01 Thread CarbonDataQA
Github user CarbonDataQA commented on the issue:

https://github.com/apache/carbondata/pull/981
  
Build Failed  with Spark 2.1.0, Please check CI 
http://136.243.101.176:8080/job/ApacheCarbonPRBuilder/2132/


