[jira] [Commented] (PHOENIX-4176) Reduce chances of tests flapping with ColumnFamilyNotFoundException for HBase 1.x

2017-09-06 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16156520#comment-16156520
 ] 

Samarth Jain commented on PHOENIX-4176:
---

Yes, we already do this when online schema update is disabled:

{code}
if (!allowOnlineTableSchemaUpdate()) {
    admin.disableTable(tableName);
    admin.modifyTable(tableName, newDesc);
    admin.enableTable(tableName);
}
{code}

Yes, it is potentially a real production issue if we are using HBase 1.x. If 
there is sufficient time between such DDL operations and the queries that rely 
on them, the risk of running into this is low. Of course, an offline schema 
update isn't ideal since we need to disable the table, which means taking 
downtime and is a no-go. We will likely have to come up with a general 
solution for this. 
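Until there is a general solution, one client-side mitigation would be to retry briefly while the regions pick up the new schema. The sketch below is purely illustrative: the helper, exception type, and timings are all made up for this example and are not Phoenix or HBase APIs.

```java
import java.util.concurrent.Callable;

// Hypothetical sketch: retry an action a few times with linear backoff to
// paper over the window where not all regions have picked up a newly added
// column family. Nothing here is Phoenix code.
public class SchemaChangeRetry {
    public static <T> T withRetries(Callable<T> action, int maxAttempts, long backoffMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.call();
            } catch (Exception e) {
                last = e;                               // e.g. a wrapped ColumnFamilyNotFoundException
                Thread.sleep(backoffMillis * attempt);  // linear backoff between attempts
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulate a query that fails twice while regions catch up, then succeeds.
        final int[] calls = {0};
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("CF not yet on all regions");
            return "ok";
        }, 5, 1L);
        System.out.println(result + " after " + calls[0] + " attempts"); // ok after 3 attempts
    }
}
```

This only reduces the flap window for tests; it does not remove the underlying race.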

Below is the failure stacktrace for reference:

{code}
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.phoenix.exception.PhoenixIOException: 
org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family 
CF does not exist in region 
T000225,\x06\x00\x00,1504753367780.d3e683408d7801ced0ea2940400cbc2e. in table 
'T000225', {TABLE_ATTRIBUTES => {coprocessor$1 => 
'|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', coprocessor$2 
=> 
'|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|', 
coprocessor$3 => 
'|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
coprocessor$4 => 
'|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
coprocessor$5 => 
'|org.apache.phoenix.hbase.index.Indexer|805306366|index.builder=org.apache.phoenix.index.PhoenixIndexBuilder,org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec'},
 {NAME => '0', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 'NONE', 
REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 
'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => 
'65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
at 
org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:8031)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2675)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2660)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2654)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.newRegionScanner(RSRpcServices.java:2551)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2809)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2347)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

at 
org.apache.phoenix.end2end.FlappingAlterTableIT.testAddColumnForNewColumnFamily(FlappingAlterTableIT.java:60)
{code}

> Reduce chances of tests flapping with ColumnFamilyNotFoundException for HBase 
> 1.x
> -
>
> Key: PHOENIX-4176
> URL: https://issues.apache.org/jira/browse/PHOENIX-4176
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4176.patch
>
>
> In HBase 1.x, when adding a new column family, the check to detect whether 
> the HTableDescriptor is updated isn't enough. Tests that add a new column 
> family run the risk of flapping when the number of regions on the table is 
> high (since the column family has to be added to all the regions). Until we 
> figure out a permanent solution for this, we can reduce the chances of such 
> tests flapping by reducing the number of regions/pre-splits and possibly 
> configuring hbase.online.schema.update.enable to false in our tests. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (PHOENIX-3953) Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction

2017-09-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16156495#comment-16156495
 ] 

James Taylor edited comment on PHOENIX-3953 at 9/7/17 5:41 AM:
---

Patch that moves the check to the postCompact hook. [~lhofhansl] - is it 
correct that when CompactionRequest.isAllFiles() is true, a major compaction 
is happening (i.e. delete markers will be removed)?

Please review, [~lhofhansl] and/or [~vincentpoon], [~samarthjain]. The 
existing test verifying that the index is disabled after compaction is 
PartialIndexRebuilderIT.testCompactionDuringRebuild().
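As a plain-Java illustration of the state change performed in the postCompact hook (no HBase or Phoenix types; all class, field, and method names below are invented for this sketch, and only INDEX_DISABLED_TIMESTAMP corresponds to a real Phoenix concept):

```java
// Sketch: when a compaction covers all files (i.e. a major compaction that
// may drop the delete markers the partial rebuild relies on), clear the
// disable timestamp and mark the index disabled so a manual rebuild is forced.
public class IndexCompactionGuard {
    enum IndexState { ACTIVE, DISABLE, REBUILD }

    static class IndexMeta {
        IndexState state = IndexState.REBUILD;  // partial rebuild in progress
        long indexDisableTimestamp = 12345L;    // nonzero => rebuild pending
    }

    // Mirrors the isAllFiles() check discussed in the comment above.
    static void postCompact(IndexMeta meta, boolean isAllFiles) {
        if (isAllFiles && meta.indexDisableTimestamp != 0) {
            meta.indexDisableTimestamp = 0;     // clear INDEX_DISABLED_TIMESTAMP
            meta.state = IndexState.DISABLE;    // manual rebuild now required
        }
    }

    public static void main(String[] args) {
        IndexMeta meta = new IndexMeta();
        postCompact(meta, false);               // minor compaction: no change
        System.out.println(meta.state);         // REBUILD
        postCompact(meta, true);                // major compaction
        System.out.println(meta.state);         // DISABLE
    }
}
```

The real patch does this inside an HBase coprocessor; this sketch only shows the intended state transition.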


was (Author: jamestaylor):
Patch that moves the check to the postCompact hook. [~lhofhansl] - is it 
correct that when CompactionRequest.isAllFiles() is true, a major compaction 
is happening (i.e. delete markers will be removed)?

Please review, [~lhofhansl] and/or [~vincentpoon]. The existing test 
verifying that the index is disabled after compaction is 
PartialIndexRebuilderIT.testCompactionDuringRebuild().

> Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction
> --
>
> Key: PHOENIX-3953
> URL: https://issues.apache.org/jira/browse/PHOENIX-3953
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3953_addendum1.patch, 
> PHOENIX-3953_addendum2.patch, PHOENIX-3953.patch, PHOENIX-3953_v2.patch
>
>
> To guard against a compaction occurring (which would potentially clear delete 
> markers and puts that the partial index rebuild process counts on to properly 
> catch up an index with the data table), we should clear the 
> INDEX_DISABLED_TIMESTAMP and mark the index as disabled. This could be done 
> in the post compaction coprocessor hook. At this point, a manual rebuild of 
> the index would be required.





[jira] [Commented] (PHOENIX-3314) ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local indexes

2017-09-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16156507#comment-16156507
 ] 

James Taylor commented on PHOENIX-3314:
---

The writes are all local, so there's no possibility of a deadlock (though of 
course there could be a different issue).

> ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local 
> indexes
> -
>
> Key: PHOENIX-3314
> URL: https://issues.apache.org/jira/browse/PHOENIX-3314
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.12.0
>
>
> The ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is currently not run 
> for local indexes and when it is, it fails with the following errors:
> {code}
> Tests run: 12, Failures: 0, Errors: 2, Skipped: 4, Time elapsed: 744.655 sec 
> <<< FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexIT
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 314.079 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 310.882 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> {code}





[jira] [Commented] (PHOENIX-3314) ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local indexes

2017-09-06 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16156503#comment-16156503
 ] 

Samarth Jain commented on PHOENIX-3314:
---

See PHOENIX-4171 for the case where we had to disable building immutable 
indexes on the server side because UPSERT SELECT was potentially deadlocking. 
It may be the same case here too.

> ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local 
> indexes
> -
>
> Key: PHOENIX-3314
> URL: https://issues.apache.org/jira/browse/PHOENIX-3314
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.12.0
>
>
> The ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is currently not run 
> for local indexes and when it is, it fails with the following errors:
> {code}
> Tests run: 12, Failures: 0, Errors: 2, Skipped: 4, Time elapsed: 744.655 sec 
> <<< FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexIT
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 314.079 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 310.882 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> {code}





[jira] [Commented] (PHOENIX-4176) Reduce chances of tests flapping with ColumnFamilyNotFoundException for HBase 1.x

2017-09-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16156501#comment-16156501
 ] 

James Taylor commented on PHOENIX-4176:
---

If we turn off online schema update, then we need to disable and enable the 
table. Are we doing that in the code already? Also, since we have this on in 
production, is this potentially a real production issue that we need to deal 
with?

> Reduce chances of tests flapping with ColumnFamilyNotFoundException for HBase 
> 1.x
> -
>
> Key: PHOENIX-4176
> URL: https://issues.apache.org/jira/browse/PHOENIX-4176
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4176.patch
>
>
> In HBase 1.x, when adding a new column family, the check to detect whether 
> the HTableDescriptor is updated isn't enough. Tests that add a new column 
> family run the risk of flapping when the number of regions on the table is 
> high (since the column family has to be added to all the regions). Until we 
> figure out a permanent solution for this, we can reduce the chances of such 
> tests flapping by reducing the number of regions/pre-splits and possibly 
> configuring hbase.online.schema.update.enable to false in our tests. 





[jira] [Assigned] (PHOENIX-3314) ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local indexes

2017-09-06 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla reassigned PHOENIX-3314:


Assignee: Rajeshbabu Chintaguntla

> ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is failing for local 
> indexes
> -
>
> Key: PHOENIX-3314
> URL: https://issues.apache.org/jira/browse/PHOENIX-3314
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.12.0
>
>
> The ImmutableIndexIT.testCreateIndexDuringUpsertSelect() is currently not run 
> for local indexes and when it is, it fails with the following errors:
> {code}
> Tests run: 12, Failures: 0, Errors: 2, Skipped: 4, Time elapsed: 744.655 sec 
> <<< FAILURE! - in org.apache.phoenix.end2end.index.ImmutableIndexIT
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=false](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 314.079 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> testCreateIndexDuringUpsertSelect[ImmutableIndexIT_localIndex=true,transactional=true](org.apache.phoenix.end2end.index.ImmutableIndexIT)
>   Time elapsed: 310.882 sec  <<< ERROR!
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.end2end.index.ImmutableIndexIT.testCreateIndexDuringUpsertSelect(ImmutableIndexIT.java:177)
> {code}





[jira] [Commented] (PHOENIX-3953) Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction

2017-09-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16156495#comment-16156495
 ] 

James Taylor commented on PHOENIX-3953:
---

Patch that moves the check to the postCompact hook. [~lhofhansl] - is it 
correct that when CompactionRequest.isAllFiles() is true, a major compaction 
is happening (i.e. delete markers will be removed)?

Please review, [~lhofhansl] and/or [~vincentpoon]. The existing test 
verifying that the index is disabled after compaction is 
PartialIndexRebuilderIT.testCompactionDuringRebuild().

> Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction
> --
>
> Key: PHOENIX-3953
> URL: https://issues.apache.org/jira/browse/PHOENIX-3953
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3953_addendum1.patch, 
> PHOENIX-3953_addendum2.patch, PHOENIX-3953.patch, PHOENIX-3953_v2.patch
>
>
> To guard against a compaction occurring (which would potentially clear delete 
> markers and puts that the partial index rebuild process counts on to properly 
> catch up an index with the data table), we should clear the 
> INDEX_DISABLED_TIMESTAMP and mark the index as disabled. This could be done 
> in the post compaction coprocessor hook. At this point, a manual rebuild of 
> the index would be required.





[jira] [Updated] (PHOENIX-4176) Reduce chances of tests flapping with ColumnFamilyNotFoundException for HBase 1.x

2017-09-06 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4176:
--
Attachment: PHOENIX-4176.patch

> Reduce chances of tests flapping with ColumnFamilyNotFoundException for HBase 
> 1.x
> -
>
> Key: PHOENIX-4176
> URL: https://issues.apache.org/jira/browse/PHOENIX-4176
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4176.patch
>
>
> In HBase 1.x, when adding a new column family, the check to detect whether 
> the HTableDescriptor is updated isn't enough. Tests that add a new column 
> family run the risk of flapping when the number of regions on the table is 
> high (since the column family has to be added to all the regions). Until we 
> figure out a permanent solution for this, we can reduce the chances of such 
> tests flapping by reducing the number of regions/pre-splits and possibly 
> configuring hbase.online.schema.update.enable to false in our tests. 





[jira] [Updated] (PHOENIX-4176) Reduce chances of tests flapping with ColumnFamilyNotFoundException for HBase 1.x

2017-09-06 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4176:
--
Description: In HBase 1.x, when adding a new column family, the check to 
detect whether the HTableDescriptor is updated isn't enough. Tests that add a 
new column family run the risk of flapping when the number of regions on the 
table is high (since the column family has to be added to all the regions). 
Until we figure out a permanent solution for this, we can reduce the chances 
of such tests flapping by reducing the number of regions/pre-splits and 
possibly configuring hbase.online.schema.update.enable to false in our tests.  
(was: In HBase 1.x, when adding a new column family, the check to detect 
whether the HTableDescriptor is updated isn't enough. Tests that add a new 
column family run the risk of flapping when the number of regions on the 
table is high (since the column family has to be added to all the regions). 
We can reduce the chances of such tests flapping by reducing the number of 
regions/pre-splits and possibly configuring hbase.online.schema.update.enable 
to false in our tests.)

> Reduce chances of tests flapping with ColumnFamilyNotFoundException for HBase 
> 1.x
> -
>
> Key: PHOENIX-4176
> URL: https://issues.apache.org/jira/browse/PHOENIX-4176
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4176.patch
>
>
> In HBase 1.x, when adding a new column family, the check to detect whether 
> the HTableDescriptor is updated isn't enough. Tests that add a new column 
> family run the risk of flapping when the number of regions on the table is 
> high (since the column family has to be added to all the regions). Until we 
> figure out a permanent solution for this, we can reduce the chances of such 
> tests flapping by reducing the number of regions/pre-splits and possibly 
> configuring hbase.online.schema.update.enable to false in our tests. 





[jira] [Created] (PHOENIX-4176) Reduce chances of tests flapping with ColumnFamilyNotFoundException for HBase 1.x

2017-09-06 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-4176:
-

 Summary: Reduce chances of tests flapping with 
ColumnFamilyNotFoundException for HBase 1.x
 Key: PHOENIX-4176
 URL: https://issues.apache.org/jira/browse/PHOENIX-4176
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Samarth Jain


In HBase 1.x, when adding a new column family, the check to detect whether the 
HTableDescriptor is updated isn't enough. Tests that add a new column family 
run the risk of flapping when the number of regions on the table is high 
(since the column family has to be added to all the regions). We can reduce 
the chances of such tests flapping by reducing the number of 
regions/pre-splits and possibly configuring 
hbase.online.schema.update.enable to false in our tests. 





[jira] [Updated] (PHOENIX-3953) Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction

2017-09-06 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-3953:
--
Attachment: PHOENIX-3953_addendum2.patch

> Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction
> --
>
> Key: PHOENIX-3953
> URL: https://issues.apache.org/jira/browse/PHOENIX-3953
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3953_addendum1.patch, 
> PHOENIX-3953_addendum2.patch, PHOENIX-3953.patch, PHOENIX-3953_v2.patch
>
>
> To guard against a compaction occurring (which would potentially clear delete 
> markers and puts that the partial index rebuild process counts on to properly 
> catch up an index with the data table), we should clear the 
> INDEX_DISABLED_TIMESTAMP and mark the index as disabled. This could be done 
> in the post compaction coprocessor hook. At this point, a manual rebuild of 
> the index would be required.





[jira] [Commented] (PHOENIX-4165) Do not wait when no new memory chunk can be allocated

2017-09-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16156475#comment-16156475
 ] 

James Taylor commented on PHOENIX-4165:
---

Yes, just like in HBase.

> Do not wait when no new memory chunk can be allocated
> 
>
> Key: PHOENIX-4165
> URL: https://issues.apache.org/jira/browse/PHOENIX-4165
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.12.0
>
> Attachments: 4165.txt, 4165-v2.txt, 4165-v3.txt
>
>
> Currently the code waits for up to 10s by default for memory to become 
> "available".
> I think it's better to fail immediately and let the client retry rather 
> than waiting on an HBase handler thread.
> In a first iteration we can simply set the max wait time to 0 (or perhaps 
> even -1) so that we do not attempt to wait but fail immediately. All calling 
> code should already deal with InsufficientMemoryExceptions, since they can 
> already happen right now.
> In a second step I'd suggest actually removing the waiting code and config 
> option completely.
> [~jamestaylor]
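The fail-fast behavior described above can be sketched in plain Java. This is illustrative only: it is not Phoenix's memory manager, and the class, fields, and the nested exception here are invented for the sketch (Phoenix has its own InsufficientMemoryException).

```java
// Sketch: instead of blocking an HBase handler thread for up to 10s waiting
// for memory to be freed, throw immediately and let the client retry.
public class FailFastMemoryManager {
    static class InsufficientMemoryException extends RuntimeException {
        InsufficientMemoryException(String msg) { super(msg); }
    }

    private long available;

    FailFastMemoryManager(long capacityBytes) { this.available = capacityBytes; }

    // Equivalent to a max wait time of 0: the old wait loop collapses into
    // an immediate failure on the first check.
    synchronized long allocate(long bytes) {
        if (bytes > available) {
            throw new InsufficientMemoryException(
                "Requested " + bytes + " but only " + available + " available");
        }
        available -= bytes;
        return bytes;
    }

    public static void main(String[] args) {
        FailFastMemoryManager mm = new FailFastMemoryManager(100);
        System.out.println(mm.allocate(60));   // 60
        try {
            mm.allocate(60);                   // only 40 left: fails immediately
        } catch (InsufficientMemoryException e) {
            System.out.println("failed fast");
        }
    }
}
```

The point of the design is that the handler thread is never parked; backpressure is pushed to the client, which already handles this exception type.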





[jira] [Commented] (PHOENIX-4150) Adding a policy filter to whitelist the properties that are allowed to be passed to Phoenix

2017-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16156473#comment-16156473
 ] 

Hadoop QA commented on PHOENIX-4150:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12885739/PHOENIX-4150-v2.patch
  against master branch at commit b46cbd375e3d2ee9a11644825c13937572c027cd.
  ATTACHMENT ID: 12885739

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+ *  if (offendingProperties.size()>0) throw new 
IllegalArgumentException("properties not allowed. offending properties" + 
offendingProperties);
+ * Dependent modules may register their own implementations of the following 
using {@link java.util.ServiceLoader}:
+private static final PropertyPolicy DEFAULT_PROPERTY_POLICY = new 
PropertyPolicy.PropertyPolicyImpl();

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1395//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1395//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1395//console

This message is automatically generated.

> Adding a policy filter to whitelist the properties that are allowed to be 
> passed to Phoenix
> -
>
> Key: PHOENIX-4150
> URL: https://issues.apache.org/jira/browse/PHOENIX-4150
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Ethan Wang
> Attachments: PHOENIX-4150-v1.patch, PHOENIX-4150-v2.patch
>
>
> Adding a policy filter to whitelist the properties that are allowed to be 
> passed to Phoenix.
> Feature proposal:
> When a user gets a Phoenix connection via
> Connection conn = DriverManager.getConnection(connectionString, properties);
> a properties whitelist policy will check each property that is passed in 
> (likely at PhoenixDriver.java), so that any disallowed property results in 
> an exception being thrown.
> Similar to HBaseFactoryProvider, we propose an interface for the whitelist 
> policy and a default implementation that allows all properties. Users can 
> override the implementation of this interface to start using the whitelist 
> feature.
> [~jamestaylor]   [~alexaraujo]
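A minimal sketch of the proposed shape, in plain Java: an allow-all default plus a whitelist override. The interface, method, and class names below are invented for illustration and may differ from the actual patch.

```java
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

// Sketch of a property-policy check performed when a connection is opened.
public class PropertyPolicyDemo {
    interface PropertyPolicy {
        void evaluate(Properties props);  // throws if a property is not allowed
    }

    // Default: allow everything (preserves current behavior).
    static final PropertyPolicy ALLOW_ALL = props -> {};

    // Whitelist override: reject any property not in the allowed set.
    static PropertyPolicy whitelist(Set<String> allowed) {
        return props -> {
            Set<String> offending = new HashSet<>(props.stringPropertyNames());
            offending.removeAll(allowed);
            if (!offending.isEmpty()) {
                throw new IllegalArgumentException(
                    "properties not allowed, offending properties: " + offending);
            }
        };
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("phoenix.query.timeoutMs", "60000");
        ALLOW_ALL.evaluate(props);  // always passes

        PropertyPolicy policy = whitelist(Set.of("phoenix.query.timeoutMs"));
        policy.evaluate(props);     // passes: property is whitelisted

        props.setProperty("some.internal.prop", "x");
        try {
            policy.evaluate(props);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

In the real proposal, implementations would be discovered via java.util.ServiceLoader, as the description suggests, rather than constructed directly.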





[jira] [Commented] (PHOENIX-4175) Convert tests using CURRENT_SCN to not use it when possible

2017-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16156464#comment-16156464
 ] 

Hudson commented on PHOENIX-4175:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1778 (See 
[https://builds.apache.org/job/Phoenix-master/1778/])
PHOENIX-4175 Convert tests using CURRENT_SCN to not use it when possible 
(jamestaylor: rev b46cbd375e3d2ee9a11644825c13937572c027cd)
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/CreateSchemaIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/CustomEntityDataIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/UpsertSelectIT.java


> Convert tests using CURRENT_SCN to not use it when possible
> ---
>
> Key: PHOENIX-4175
> URL: https://issues.apache.org/jira/browse/PHOENIX-4175
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-4175_1.patch
>
>






[jira] [Commented] (PHOENIX-4173) Ensure that the rebuild fails if an index transitions back to disabled while rebuilding

2017-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16156463#comment-16156463
 ] 

Hudson commented on PHOENIX-4173:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1778 (See 
[https://builds.apache.org/job/Phoenix-master/1778/])
PHOENIX-4173 Ensure that the rebuild fails if an index that transitions 
(jamestaylor: rev 6c5bc3bba7732357bf3fc4ab39e7fda10e97539e)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/PartialIndexRebuilderIT.java


> Ensure that the rebuild fails if an index transitions back to disabled 
> while rebuilding
> 
>
> Key: PHOENIX-4173
> URL: https://issues.apache.org/jira/browse/PHOENIX-4173
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4173.patch, PHOENIX-4173_v2.patch
>
>






[jira] [Commented] (PHOENIX-4165) Do not wait no new memory chunk can be allocated

2017-09-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16156467#comment-16156467
 ] 

Lars Hofhansl commented on PHOENIX-4165:


That doesn't explain how to trigger a pre-commit test. Is it like in HBase, 
where I just click "Submit Patch" in Jira?

> Do not wait when no new memory chunk can be allocated
> 
>
> Key: PHOENIX-4165
> URL: https://issues.apache.org/jira/browse/PHOENIX-4165
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.12.0
>
> Attachments: 4165.txt, 4165-v2.txt, 4165-v3.txt
>
>
> Currently the code waits for up to 10s by default for memory to become 
> "available".
> I think it's better to fail immediately and let the client retry rather 
> than waiting on an HBase handler thread.
> In a first iteration we can simply set the max wait time to 0 (or perhaps 
> even -1) so that we do not attempt to wait but fail immediately. All calling 
> code should already deal with InsufficientMemoryExceptions, since they can 
> already happen right now.
> In a second step I'd suggest actually removing the waiting code and config 
> option completely.
> [~jamestaylor]





[jira] [Commented] (PHOENIX-4170) Remove rebuildIndexOnFailure param from MutableIndexFailureIT

2017-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16156461#comment-16156461
 ] 

Hudson commented on PHOENIX-4170:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1778 (See 
[https://builds.apache.org/job/Phoenix-master/1778/])
Revert "PHOENIX-4170 Remove rebuildIndexOnFailure param from (samarth: rev 
64658fe5a64e7089f5208ece25769bf644f96846)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseClientManagedTimeIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/NotQueryIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseHBaseManagedTimeIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/rpc/PhoenixServerRpcIT.java
PHOENIX-4170 Remove rebuildIndexOnFailure param from (samarth: rev 
134424ebd44f730344ff5da93a6ec3f734d77d4b)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java


> Remove rebuildIndexOnFailure param from MutableIndexFailureIT
> -
>
> Key: PHOENIX-4170
> URL: https://issues.apache.org/jira/browse/PHOENIX-4170
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4170.patch
>
>






[jira] [Commented] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16156462#comment-16156462
 ] 

Hudson commented on PHOENIX-4171:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1778 (See 
[https://builds.apache.org/job/Phoenix-master/1778/])
PHOENIX-4171 Creating immutable index is timing out intermittently (samarth: 
rev 28aebd6af3b635c98c8f1782295ea6c85167d659)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/rpc/PhoenixServerRpcIT.java


> Creating immutable index is timing out intermittently
> -
>
> Key: PHOENIX-4171
> URL: https://issues.apache.org/jira/browse/PHOENIX-4171
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4171.patch, PHOENIX-4171_wip3_master.patch
>
>
> In PHOENIX-4151, I converted all the tests extending BaseQueryIT to not use 
> current_scn anymore when creating tables and indices. This was done with the 
> assumption that somehow current_scn was causing index creation to time out. 
> However, even after that change, I am seeing that the tests are still 
> flapping. And they are failing because creating immutable indexes is timing 
> out. 
> Sample run: https://builds.apache.org/job/PreCommit-PHOENIX-Build/1379/
> Stacktrace:
> {code}
> 2017-09-06 02:44:13,297 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000205 ON T000204 (a_integer DESC) INCLUDE (A_STRING, 
> B_STRING, A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:734)
>   at 
> org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:117)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3359)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1282)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:1222)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1588)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:374)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1707)
>   at org.apache.phoenix.end2end.BaseQueryIT.<init>(BaseQueryIT.java:139)
>   at org.apache.phoenix.end2end.NotQueryIT.<init>(NotQueryIT.java:56)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> {code}
> {code}
> 2017-09-06 02:57:28,819 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000350 ON T000349 (a_integer, a_string) INCLUDE (B_STRING,  
>A_DATE) KEEP_DELETED_CELLS=false
> 

[jira] [Assigned] (PHOENIX-4165) Do not wait no new memory chunk can be allocated

2017-09-06 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned PHOENIX-4165:
--

 Assignee: Lars Hofhansl
Fix Version/s: 4.12.0
Affects Version/s: 4.11.0

> Do not wait no new memory chunk can be allocated
> 
>
> Key: PHOENIX-4165
> URL: https://issues.apache.org/jira/browse/PHOENIX-4165
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
> Fix For: 4.12.0
>
> Attachments: 4165.txt, 4165-v2.txt, 4165-v3.txt
>
>
> Currently the code waits for up to 10s by default for memory to become 
> "available".
> I think it's better to fail immediately and then let the client retry rather 
> than waiting on an HBase handler thread.
> In a first iteration we can simply set the max wait time to 0 (or perhaps 
> even -1) so that we do not attempt to wait but fail immediately. All calling 
> code should already deal with InsufficientMemoryExceptions, since they can 
> already happen right now.
> In a second step I'd suggest actually removing the waiting code and config 
> option completely.
> [~jamestaylor]
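The fail-fast behavior proposed above can be sketched roughly as follows. This is an illustrative stand-in, not Phoenix's actual GlobalMemoryManager; the class name, method signatures, and the nested exception are assumptions made for the example:

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative fail-fast memory manager: allocate() either succeeds
// immediately or throws, never parking the calling (handler) thread.
public class FailFastMemoryManager {
    // Hypothetical stand-in for Phoenix's InsufficientMemoryException.
    public static class InsufficientMemoryException extends RuntimeException {
        public InsufficientMemoryException(String msg) { super(msg); }
    }

    private final long maxBytes;
    private final AtomicLong usedBytes = new AtomicLong();

    public FailFastMemoryManager(long maxBytes) { this.maxBytes = maxBytes; }

    /** Reserve {@code bytes} or fail immediately; the client is expected to retry. */
    public void allocate(long bytes) {
        while (true) {
            long used = usedBytes.get();
            if (used + bytes > maxBytes) {
                // No max-wait timeout, no Object.wait(): fail right away.
                throw new InsufficientMemoryException(
                        "requested " + bytes + ", free " + (maxBytes - used));
            }
            if (usedBytes.compareAndSet(used, used + bytes)) return;
        }
    }

    public void free(long bytes) { usedBytes.addAndGet(-bytes); }

    public long getUsed() { return usedBytes.get(); }

    public static void main(String[] args) {
        FailFastMemoryManager mm = new FailFastMemoryManager(100);
        mm.allocate(60);
        try {
            mm.allocate(50);                 // over budget: throws immediately
        } catch (InsufficientMemoryException e) {
            mm.free(60);                     // client frees and could retry
        }
        if (mm.getUsed() != 0) throw new AssertionError("leak");
        System.out.println("fail-fast allocation works");
    }
}
```

Retrying then becomes the client's responsibility, which keeps HBase handler threads free instead of blocking them for up to the configured max wait time.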





[jira] [Updated] (PHOENIX-4165) Do not wait no new memory chunk can be allocated

2017-09-06 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-4165:
---
Attachment: 4165-v3.txt

-v3 includes a test:

100 threads each allocate chunks of memory until every one of them fails an 
allocation. Make sure that all memory is used. Then all threads free their 
allocated chunks, and make sure everything was released - I can't think of 
anything better to test.

This should be good to go.
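The test described above can be sketched as a standalone harness. The AtomicLong-based pool below is an illustrative stand-in for GlobalMemoryManager, not the real class, and the pool/chunk sizes are arbitrary:

```java
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;

public class ConcurrentAllocationDemo {
    static final long MAX = 100_000;   // total pool size (illustrative)
    static final long CHUNK = 100;     // allocation unit; MAX % CHUNK == 0
    static final AtomicLong used = new AtomicLong();

    // Allocate one chunk or fail immediately (no waiting).
    static boolean tryAllocate() {
        while (true) {
            long u = used.get();
            if (u + CHUNK > MAX) return false;             // fail fast
            if (used.compareAndSet(u, u + CHUNK)) return true;
        }
    }

    public static void main(String[] args) throws Exception {
        int threads = 100;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CompletionService<Long> cs = new ExecutorCompletionService<>(pool);
        for (int i = 0; i < threads; i++) {
            cs.submit(() -> {
                long mine = 0;
                while (tryAllocate()) mine += CHUNK;       // allocate until failure
                return mine;
            });
        }
        long total = 0;
        for (int i = 0; i < threads; i++) total += cs.take().get();
        // Every thread has failed an allocation, so the pool must be full.
        if (used.get() != MAX || total != MAX)
            throw new AssertionError("pool not fully used");
        used.addAndGet(-total);                            // all threads free their chunks
        if (used.get() != 0) throw new AssertionError("memory not fully released");
        pool.shutdown();
        System.out.println("all memory used then released");
    }
}
```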

> Do not wait no new memory chunk can be allocated
> 
>
> Key: PHOENIX-4165
> URL: https://issues.apache.org/jira/browse/PHOENIX-4165
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Attachments: 4165.txt, 4165-v2.txt, 4165-v3.txt
>
>
> Currently the code waits for up to 10s by default for memory to become 
> "available".
> I think it's better to fail immediately and then let the client retry rather 
> than waiting on an HBase handler thread.
> In a first iteration we can simply set the max wait time to 0 (or perhaps 
> even -1) so that we do not attempt to wait but fail immediately. All calling 
> code should already deal with InsufficientMemoryExceptions, since they can 
> already happen right now.
> In a second step I'd suggest actually removing the waiting code and config 
> option completely.
> [~jamestaylor]





[jira] [Commented] (PHOENIX-4165) Do not wait no new memory chunk can be allocated

2017-09-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156442#comment-16156442
 ] 

James Taylor commented on PHOENIX-4165:
---

Please kick off a pre-commit test run, as sometimes a change has impact outside 
of the expected test classes. Directions are here: 
http://phoenix.apache.org/contributing.html#Local_Git_workflow

> Do not wait no new memory chunk can be allocated
> 
>
> Key: PHOENIX-4165
> URL: https://issues.apache.org/jira/browse/PHOENIX-4165
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Attachments: 4165.txt, 4165-v2.txt
>
>
> Currently the code waits for up to 10s by default for memory to become 
> "available".
> I think it's better to fail immediately and then let the client retry rather 
> than waiting on an HBase handler thread.
> In a first iteration we can simply set the max wait time to 0 (or perhaps 
> even -1) so that we do not attempt to wait but fail immediately. All calling 
> code should already deal with InsufficientMemoryExceptions, since they can 
> already happen right now.
> In a second step I'd suggest actually removing the waiting code and config 
> option completely.
> [~jamestaylor]





[jira] [Commented] (PHOENIX-4165) Do not wait no new memory chunk can be allocated

2017-09-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156427#comment-16156427
 ] 

Lars Hofhansl commented on PHOENIX-4165:


I ran all the modified tests (i.e. all the ones that referred to 
GlobalMemoryManager or any of the config getters/setters I have removed). All 
pass.

Will add a multithreaded test, and then this should be good to go.

> Do not wait no new memory chunk can be allocated
> 
>
> Key: PHOENIX-4165
> URL: https://issues.apache.org/jira/browse/PHOENIX-4165
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Attachments: 4165.txt, 4165-v2.txt
>
>
> Currently the code waits for up to 10s by default for memory to become 
> "available".
> I think it's better to fail immediately and then let the client retry rather 
> than waiting on an HBase handler thread.
> In a first iteration we can simply set the max wait time to 0 (or perhaps 
> even -1) so that we do not attempt to wait but fail immediately. All calling 
> code should already deal with InsufficientMemoryExceptions, since they can 
> already happen right now.
> In a second step I'd suggest actually removing the waiting code and config 
> option completely.
> [~jamestaylor]





[jira] [Updated] (PHOENIX-4150) Adding a policy filter to whitelist the properties that allow to be passed to Phoenix

2017-09-06 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4150:

Attachment: PHOENIX-4150-v2.patch

v2 patch:
1. Added PropertyNotAllowedException to encapsulate info about the offending properties.
2. Addressed review comments; updated the code to follow Java 1.7.

> Adding a policy filter to whitelist the properties that allow to be passed to 
> Phoenix
> -
>
> Key: PHOENIX-4150
> URL: https://issues.apache.org/jira/browse/PHOENIX-4150
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Ethan Wang
> Attachments: PHOENIX-4150-v1.patch, PHOENIX-4150-v2.patch
>
>
> Adding a policy filter to whitelist the properties that are allowed to be 
> passed to Phoenix.
> Feature proposal:
> When a user gets a Phoenix connection via
> Connection conn = DriverManager.getConnection(connectionString, properties);
> a properties whitelist policy will check each property that is passed in 
> (this would likely happen in PhoenixDriver.java), so that a disallowed 
> property results in an exception being thrown.
> Similar to HBaseFactoryProvider, the proposal is to have an interface for the 
> whitelist policy and a default impl that by default allows all properties. 
> Users can override the impl of this interface to start using the whitelist 
> feature.
> [~jamestaylor]   [~alexaraujo]
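The interface-plus-default-impl shape proposed above can be sketched as follows. The class and method names here are illustrative guesses, not necessarily what the patch uses; only PropertyNotAllowedException is named in the patch notes:

```java
import java.util.Collections;
import java.util.Properties;
import java.util.Set;

// Hypothetical shape of the proposal: a policy interface plus a default
// implementation that allows everything, so behavior is unchanged unless
// a user plugs in a restrictive whitelist.
public class PropertyPolicyDemo {
    public static class PropertyNotAllowedException extends RuntimeException {
        public PropertyNotAllowedException(String name) {
            super("Property not allowed: " + name);
        }
    }

    public interface PropertyPolicy {
        /** Throws PropertyNotAllowedException if any property is not allowed. */
        void evaluate(Properties props);
    }

    /** Default policy: allow all properties (today's behavior). */
    public static class AllowAll implements PropertyPolicy {
        @Override public void evaluate(Properties props) { /* no-op */ }
    }

    /** Restrictive policy built from an explicit whitelist of property names. */
    public static class Whitelist implements PropertyPolicy {
        private final Set<String> allowed;
        public Whitelist(Set<String> allowed) { this.allowed = allowed; }
        @Override public void evaluate(Properties props) {
            for (String name : props.stringPropertyNames()) {
                if (!allowed.contains(name)) throw new PropertyNotAllowedException(name);
            }
        }
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("phoenix.query.timeoutMs", "60000");
        new AllowAll().evaluate(props);   // default: anything goes
        PropertyPolicy strict =
                new Whitelist(Collections.singleton("phoenix.query.timeoutMs"));
        strict.evaluate(props);           // whitelisted property passes
        props.setProperty("some.other.prop", "x");
        try {
            strict.evaluate(props);       // would run before the driver connects
            throw new AssertionError("expected rejection");
        } catch (PropertyNotAllowedException expected) { }
        System.out.println("whitelist policy enforced");
    }
}
```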





[jira] [Commented] (PHOENIX-4175) Convert tests using CURRENT_SCN to not use it when possible

2017-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156400#comment-16156400
 ] 

Hadoop QA commented on PHOENIX-4175:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12885716/PHOENIX-4175_1.patch
  against master branch at commit 28aebd6af3b635c98c8f1782295ea6c85167d659.
  ATTACHMENT ID: 12885716

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+private static void initTableValues(Connection conn, String tenantId, 
String tableName) throws Exception {
+"CONSTRAINT pk PRIMARY KEY (organization_id, key_prefix, 
custom_entity_data_id)) SPLIT ON ('" + tenantId + "00A','" + tenantId + 
"00B','" + tenantId + "00C')";
+Connection conn = DriverManager.getConnection(getUrl(), 
PropertiesUtil.deepCopy(TEST_PROPERTIES));
+String query = "SELECT 
CREATED_BY,CREATED_DATE,CURRENCY_ISO_CODE,DELETED,DIVISION,LAST_UPDATE,LAST_UPDATE_BY,NAME,OWNER,SYSTEM_MODSTAMP,VAL0,VAL1,VAL2,VAL3,VAL4,VAL5,VAL6,VAL7,VAL8,VAL9
 FROM " + tableName + " WHERE organization_id=?";
+Connection conn = DriverManager.getConnection(getUrl(), 
PropertiesUtil.deepCopy(TEST_PROPERTIES));
+String query = "SELECT KEY_PREFIX||CUSTOM_ENTITY_DATA_ID FROM " + 
tableName + " where '00A'||val0 LIKE '00A2%'";
+Connection conn = DriverManager.getConnection(getUrl(), 
PropertiesUtil.deepCopy(TEST_PROPERTIES));
+"CONSTRAINT pk PRIMARY KEY (organization_id, key_prefix, 
custom_entity_data_id)) " + (saltTable ? "salt_buckets = 2"  : "");

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1394//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1394//console

This message is automatically generated.

> Convert tests using CURRENT_SCN to not use it when possible
> ---
>
> Key: PHOENIX-4175
> URL: https://issues.apache.org/jira/browse/PHOENIX-4175
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-4175_1.patch
>
>






[jira] [Commented] (PHOENIX-4173) Ensure that the rebuild fails if an index that transitions back to disabled while rebuilding

2017-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156389#comment-16156389
 ] 

Hadoop QA commented on PHOENIX-4173:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12885712/PHOENIX-4173_v2.patch
  against master branch at commit ad52201e07670d342ef33c5e8bd2ee595fe559cc.
  ATTACHMENT ID: 12885712

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+conn.createStatement().execute("CREATE TABLE " + fullTableName 
+ "(k VARCHAR PRIMARY KEY, v1 VARCHAR, v2 VARCHAR, v3 VARCHAR) 
COLUMN_ENCODED_BYTES = 0, STORE_NULLS=true");
+conn.createStatement().execute("CREATE INDEX " + indexName + " ON 
" + fullTableName + " (v1, v2) INCLUDE (v3)");
+conn.createStatement().execute("UPSERT INTO " + fullTableName + " 
VALUES('a','a','0','x')");
+try (HTableInterface metaTable = 
conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES))
 {
+// By using an INDEX_DISABLE_TIMESTAMP of 0, we prevent the 
partial index rebuilder from triggering
+conn.createStatement().execute("UPSERT INTO " + fullTableName 
+ " VALUES('b','bb', '11','yy')");
+conn.createStatement().execute("UPSERT INTO " + fullTableName 
+ " VALUES('a','ccc','222','zzz')");
+conn.createStatement().execute("UPSERT INTO " + fullTableName 
+ " VALUES('a','','','')");
+IndexUtil.updateIndexState(fullIndexName, disableTime, 
metaTable, PIndexState.DISABLE);
+conn.createStatement().execute("UPSERT INTO " + fullTableName 
+ " VALUES('a','e','4','z')");

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ScanQueryIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.FlappingAlterTableIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexFailureIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ConcurrentMutationsIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.PartialIndexRebuilderIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1393//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1393//console

This message is automatically generated.

> Ensure that the rebuild fails if an index that transitions back to disabled 
> while rebuilding
> 
>
> Key: PHOENIX-4173
> URL: https://issues.apache.org/jira/browse/PHOENIX-4173
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4173.patch, PHOENIX-4173_v2.patch
>
>






[jira] [Commented] (PHOENIX-4161) TableSnapshotReadsMapReduceIT shouldn't need to run its own mini cluster

2017-09-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156342#comment-16156342
 ] 

James Taylor commented on PHOENIX-4161:
---

FYI, not sure if it's related, but [~jmahonin] discovered this over here[1]:
bq. In fixing the above issue, it raised another one, in that JUnit 4.12 
doesn't support parallel test execution using TemporaryFolders. It's fixed in 
JUnit 4.13 (not yet released)

[1] 
https://issues.apache.org/jira/browse/PHOENIX-4159?focusedCommentId=16155653=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16155653

> TableSnapshotReadsMapReduceIT shouldn't need to run its own mini cluster
> 
>
> Key: PHOENIX-4161
> URL: https://issues.apache.org/jira/browse/PHOENIX-4161
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Akshita Malhotra
>
> In PHOENIX-4141, I made a few attempts to get TableSnapshotReadsMapReduceIT 
> to pass, but finally had to resort to running the test in its own mini 
> cluster. I don't see any reason why we should have to, though. 
> [~akshita.malhotra] - can you please take a look?
> Below are the errors I saw in logs:
> {code}
> java.lang.Exception: java.lang.IllegalArgumentException: Filesystems for 
> restore directory and HBase root directory should be the same
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
> Caused by: java.lang.IllegalArgumentException: Filesystems for restore 
> directory and HBase root directory should be the same
>   at 
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.copySnapshotForScanner(RestoreSnapshotHelper.java:716)
>   at 
> org.apache.phoenix.iterate.TableSnapshotResultIterator.init(TableSnapshotResultIterator.java:77)
>   at 
> org.apache.phoenix.iterate.TableSnapshotResultIterator.<init>(TableSnapshotResultIterator.java:73)
>   at 
> org.apache.phoenix.mapreduce.PhoenixRecordReader.initialize(PhoenixRecordReader.java:126)
>   at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> {code}
> Caused by: java.lang.IllegalArgumentException: Restore directory cannot be a 
> sub directory of HBase root directory. RootDir: 
> hdfs://localhost:45485/user/jenkins/test-data/3fe1b641-9d14-4053-b3e6-a811035e34b0,
>  restoreDir: 
> hdfs://localhost:45485/user/jenkins/test-data/3fe1b641-9d14-4053-b3e6-a811035e34b0/FOO/3eb31efb-b541-4b75-b98f-4558ddf5994e
>   at 
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.copySnapshotForScanner(RestoreSnapshotHelper.java:720)
>   at 
> org.apache.phoenix.iterate.TableSnapshotResultIterator.init(TableSnapshotResultIterator.java:77)
>   at 
> org.apache.phoenix.iterate.TableSnapshotResultIterator.<init>(TableSnapshotResultIterator.java:73)
>   at 
> org.apache.phoenix.mapreduce.PhoenixRecordReader.initialize(PhoenixRecordReader.java:126)
>   at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
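The two failures above encode two constraints on the snapshot restore directory: it must live on the same filesystem as the HBase root directory, and it must not be nested under it. A rough standalone sketch of those checks, using plain java.net.URI rather than HBase's actual RestoreSnapshotHelper/Path code:

```java
import java.net.URI;

// Standalone sketch of the two restore-directory constraints implied by the
// stack traces above (illustrative; not HBase's actual validation code).
public class RestoreDirCheck {
    public static void validate(URI rootDir, URI restoreDir) {
        // Constraint 1: same filesystem (scheme + authority must match).
        boolean sameFs =
                String.valueOf(rootDir.getScheme()).equals(String.valueOf(restoreDir.getScheme()))
             && String.valueOf(rootDir.getAuthority()).equals(String.valueOf(restoreDir.getAuthority()));
        if (!sameFs) {
            throw new IllegalArgumentException(
                    "Filesystems for restore directory and HBase root directory should be the same");
        }
        // Constraint 2: restore dir must not be nested under the root dir.
        String rootPath = rootDir.getPath().endsWith("/")
                ? rootDir.getPath() : rootDir.getPath() + "/";
        if (restoreDir.getPath().startsWith(rootPath)) {
            throw new IllegalArgumentException(
                    "Restore directory cannot be a sub directory of HBase root directory");
        }
    }

    public static void main(String[] args) {
        URI root = URI.create("hdfs://localhost:45485/user/jenkins/test-data/root");
        validate(root, URI.create("hdfs://localhost:45485/tmp/restore"));   // OK
        try {
            validate(root, URI.create("hdfs://localhost:45485/user/jenkins/test-data/root/FOO"));
            throw new AssertionError("expected sub-directory rejection");
        } catch (IllegalArgumentException expected) { }
        try {
            validate(root, URI.create("file:///tmp/restore"));
            throw new AssertionError("expected filesystem mismatch rejection");
        } catch (IllegalArgumentException expected) { }
        System.out.println("restore dir constraints enforced");
    }
}
```

This suggests the test's restore directory simply needs to be placed on the mini cluster's HDFS, outside the HBase root, rather than in a local temp dir.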





[jira] [Commented] (PHOENIX-4170) Remove rebuildIndexOnFailure param from MutableIndexFailureIT

2017-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156338#comment-16156338
 ] 

Hudson commented on PHOENIX-4170:
-

ABORTED: Integrated in Jenkins build Phoenix-master #1777 (See 
[https://builds.apache.org/job/Phoenix-master/1777/])
PHOENIX-4170 Remove rebuildIndexOnFailure param from (samarth: rev 
dd5642ff55cbc829765114d0be051cb48081e4a6)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsDisabledIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseHBaseManagedTimeIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseClientManagedTimeIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/NotQueryIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/rpc/PhoenixServerRpcIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ParallelStatsEnabledIT.java


> Remove rebuildIndexOnFailure param from MutableIndexFailureIT
> -
>
> Key: PHOENIX-4170
> URL: https://issues.apache.org/jira/browse/PHOENIX-4170
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4170.patch
>
>






[jira] [Commented] (PHOENIX-4174) Drop tables asynchronously to reduce load on mini cluster

2017-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156322#comment-16156322
 ] 

Hadoop QA commented on PHOENIX-4174:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12885702/PHOENIX-4174.patch
  against master branch at commit ad52201e07670d342ef33c5e8bd2ee595fe559cc.
  ATTACHMENT ID: 12885702

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.FunkyNamesIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.rpc.PhoenixServerRpcIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1390//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1390//console

This message is automatically generated.

> Drop tables asynchronously to reduce load on mini cluster
> -
>
> Key: PHOENIX-4174
> URL: https://issues.apache.org/jira/browse/PHOENIX-4174
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4174.patch
>
>






[jira] [Commented] (PHOENIX-4174) Drop tables asynchronously to reduce load on mini cluster

2017-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156306#comment-16156306
 ] 

Hadoop QA commented on PHOENIX-4174:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12885702/PHOENIX-4174.patch
  against master branch at commit ad52201e07670d342ef33c5e8bd2ee595fe559cc.
  ATTACHMENT ID: 12885702

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.rpc.PhoenixServerRpcIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1391//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1391//console

This message is automatically generated.

> Drop tables asynchronously to reduce load on mini cluster
> -
>
> Key: PHOENIX-4174
> URL: https://issues.apache.org/jira/browse/PHOENIX-4174
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4174.patch
>
>






[jira] [Assigned] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain reassigned PHOENIX-4171:
-

   Resolution: Fixed
 Assignee: Samarth Jain
Fix Version/s: 4.12.0

> Creating immutable index is timing out intermittently
> -
>
> Key: PHOENIX-4171
> URL: https://issues.apache.org/jira/browse/PHOENIX-4171
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4171.patch, PHOENIX-4171_wip3_master.patch
>
>
> In PHOENIX-4151, I converted all the tests extending BaseQueryIT to not use 
> current_scn anymore when creating tables and indices. This was done with the 
> assumption that somehow current_scn is causing index creation to time out. 
> However, even after that change, I am seeing that the tests are still 
> flapping. And they are failing because creating immutable indexes is timing 
> out. 
> Sample run: https://builds.apache.org/job/PreCommit-PHOENIX-Build/1379/
> Stacktrace:
> {code}
> 2017-09-06 02:44:13,297 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000205 ON T000204 (a_integer DESC) INCLUDE (A_STRING, 
> B_STRING, A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:734)
>   at 
> org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:117)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3359)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1282)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:1222)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1588)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:374)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1707)
>   at org.apache.phoenix.end2end.BaseQueryIT.<init>(BaseQueryIT.java:139)
>   at org.apache.phoenix.end2end.NotQueryIT.<init>(NotQueryIT.java:56)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> {code}
> {code}
> 2017-09-06 02:57:28,819 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000350 ON T000349 (a_integer, a_string) INCLUDE (B_STRING,  
>A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> 

[jira] [Assigned] (PHOENIX-4175) Convert tests using CURRENT_SCN to not use it when possible

2017-09-06 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4175:
-

Assignee: James Taylor

> Convert tests using CURRENT_SCN to not use it when possible
> ---
>
> Key: PHOENIX-4175
> URL: https://issues.apache.org/jira/browse/PHOENIX-4175
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-4175_1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4175) Convert tests using CURRENT_SCN to not use it when possible

2017-09-06 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4175:
--
Attachment: PHOENIX-4175_1.patch

Will commit this change in batches. This covers CreateSchemaIT, 
CustomEntityDataIT, and UpsertSelectIT.

> Convert tests using CURRENT_SCN to not use it when possible
> ---
>
> Key: PHOENIX-4175
> URL: https://issues.apache.org/jira/browse/PHOENIX-4175
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
> Attachments: PHOENIX-4175_1.patch
>
>






[jira] [Commented] (PHOENIX-4175) Convert tests using CURRENT_SCN to not use it when possible

2017-09-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156272#comment-16156272
 ] 

James Taylor commented on PHOENIX-4175:
---

Necessary for PHOENIX-4096

> Convert tests using CURRENT_SCN to not use it when possible
> ---
>
> Key: PHOENIX-4175
> URL: https://issues.apache.org/jira/browse/PHOENIX-4175
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>






[jira] [Created] (PHOENIX-4175) Convert tests using CURRENT_SCN to not use it when possible

2017-09-06 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4175:
-

 Summary: Convert tests using CURRENT_SCN to not use it when 
possible
 Key: PHOENIX-4175
 URL: https://issues.apache.org/jira/browse/PHOENIX-4175
 Project: Phoenix
  Issue Type: Test
Reporter: James Taylor








[jira] [Commented] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156264#comment-16156264
 ] 

James Taylor commented on PHOENIX-4171:
---

+1 on the patch.

> Creating immutable index is timing out intermittently
> -
>
> Key: PHOENIX-4171
> URL: https://issues.apache.org/jira/browse/PHOENIX-4171
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4171.patch, PHOENIX-4171_wip3_master.patch
>
>
> In PHOENIX-4151, I converted all the tests extending BaseQueryIT to not use 
> current_scn anymore when creating tables and indices. This was done on the 
> assumption that somehow current_scn was causing index creation to time out. 
> However, even after that change, I am seeing that the tests are still 
> flapping, failing because creating immutable indexes is timing out.
> Sample run: https://builds.apache.org/job/PreCommit-PHOENIX-Build/1379/
> Stacktrace:
> {code}
> 2017-09-06 02:44:13,297 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000205 ON T000204 (a_integer DESC) INCLUDE (A_STRING, 
> B_STRING, A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:734)
>   at 
> org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:117)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3359)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1282)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:1222)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1588)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:374)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1707)
>   at org.apache.phoenix.end2end.BaseQueryIT.<init>(BaseQueryIT.java:139)
>   at org.apache.phoenix.end2end.NotQueryIT.<init>(NotQueryIT.java:56)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> {code}
> {code}
> 2017-09-06 02:57:28,819 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000350 ON T000349 (a_integer, a_string) INCLUDE (B_STRING,  
>A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> 

[jira] [Commented] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156263#comment-16156263
 ] 

James Taylor commented on PHOENIX-4171:
---

I suspect we're running out of handler threads during the UPSERT SELECT as 
we've only got 5 configured. I think we'll need to wait until PHOENIX-3995 is 
implemented to prevent deadlocks, and then we should be able to re-enable it.
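The handler-exhaustion deadlock described above can be sketched outside Phoenix. The example below is illustrative only (not Phoenix code): it models a small RPC handler pool where an outer task occupies a handler while blocking on an inner task submitted to the same pool, which deadlocks when no free handler remains.

```java
import java.util.concurrent.*;

// Illustrative model of an RPC handler pool (not Phoenix code): an outer
// task holds a handler while blocking on an inner task that itself needs
// a handler, which deadlocks when every handler is already occupied.
public class HandlerDeadlockSketch {
    // Returns true if the nested task finished before its timeout.
    static boolean nestedWorkCompletes(int handlers) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(handlers);
        try {
            Future<Boolean> outer = pool.submit(() -> {
                // Like a server-side UPSERT SELECT: while servicing one
                // request, submit more work to the same pool and block on it.
                Future<Boolean> inner = pool.submit(() -> true);
                return inner.get(200, TimeUnit.MILLISECONDS);
            });
            return outer.get(2, TimeUnit.SECONDS);
        } catch (ExecutionException | TimeoutException e) {
            return false; // the inner task never got a handler thread
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(nestedWorkCompletes(1)); // deadlocks, then times out
        System.out.println(nestedWorkCompletes(2)); // a free handler runs the inner task
    }
}
```

With a single handler the inner task never runs and the outer call times out, mirroring an UPSERT SELECT whose server-side writes need handlers the query itself is holding.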

> Creating immutable index is timing out intermittently
> -
>
> Key: PHOENIX-4171
> URL: https://issues.apache.org/jira/browse/PHOENIX-4171
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4171.patch, PHOENIX-4171_wip3_master.patch
>
>

[jira] [Updated] (PHOENIX-4173) Ensure that the rebuild fails if an index transitions back to disabled while rebuilding

2017-09-06 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4173:
--
Attachment: PHOENIX-4173_v2.patch

> Ensure that the rebuild fails if an index transitions back to disabled 
> while rebuilding
> 
>
> Key: PHOENIX-4173
> URL: https://issues.apache.org/jira/browse/PHOENIX-4173
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4173.patch, PHOENIX-4173_v2.patch
>
>






[jira] [Updated] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4171:
--
Attachment: PHOENIX-4171.patch

> Creating immutable index is timing out intermittently
> -
>
> Key: PHOENIX-4171
> URL: https://issues.apache.org/jira/browse/PHOENIX-4171
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4171.patch, PHOENIX-4171_wip3_master.patch
>
>

[jira] [Reopened] (PHOENIX-3953) Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction

2017-09-06 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reopened PHOENIX-3953:
---

Need to tweak this slightly. We should instead disable the index permanently in 
the postCompact hook, since that is the point at which we can no longer reliably 
rebuild the index. If the rebuilder runs while the compaction is still in 
progress, that is still fine.
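The intended state transition can be summarized with a small sketch. This is not the real Phoenix/HBase coprocessor API (hook signatures differ across HBase versions); the types and names below are hypothetical and only illustrate the logic: once a compaction finishes, the delete markers and puts a partial rebuild relies on may be gone, so a still-catching-up index is disabled permanently and its INDEX_DISABLED_TIMESTAMP cleared, leaving a manual full rebuild as the only option.

```java
// Hypothetical sketch only; not the real Phoenix/HBase coprocessor API.
public class PostCompactSketch {
    enum IndexState { ACTIVE, PARTIALLY_REBUILDING, DISABLED }

    static class IndexMeta {
        IndexState state;
        long indexDisabledTimestamp; // 0 means "not set"
        IndexMeta(IndexState state, long ts) {
            this.state = state;
            this.indexDisabledTimestamp = ts;
        }
    }

    // Analogous to a postCompact hook: runs once the compaction has finished.
    static void onCompactionComplete(IndexMeta idx) {
        if (idx.state == IndexState.PARTIALLY_REBUILDING) {
            idx.state = IndexState.DISABLED;  // can no longer reliably catch up
            idx.indexDisabledTimestamp = 0L;  // clear so the rebuilder skips it
        }
        // An ACTIVE index is untouched; a rebuilder that ran while the
        // compaction was still in progress is still fine.
    }

    public static void main(String[] args) {
        IndexMeta idx = new IndexMeta(IndexState.PARTIALLY_REBUILDING, 12345L);
        onCompactionComplete(idx);
        System.out.println(idx.state); // DISABLED
    }
}
```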

> Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction
> --
>
> Key: PHOENIX-3953
> URL: https://issues.apache.org/jira/browse/PHOENIX-3953
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3953_addendum1.patch, PHOENIX-3953.patch, 
> PHOENIX-3953_v2.patch
>
>
> To guard against a compaction occurring (which would potentially clear delete 
> markers and puts that the partial index rebuild process counts on to properly 
> catch up an index with the data table), we should clear the 
> INDEX_DISABLED_TIMESTAMP and mark the index as disabled. This could be done 
> in the post compaction coprocessor hook. At this point, a manual rebuild of 
> the index would be required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156246#comment-16156246
 ] 

Samarth Jain commented on PHOENIX-4171:
---

The PhoenixServerRpcIT failure is legitimate because this patch disables 
server-side upsert select. Either way, the patch and the subsequent test runs 
prove that there is something going on with server-side upsert select. We will 
need to disable it.

FYI, [~ankit.singhal]. 
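Client-side, disabling server-side upsert select would presumably amount to a connection property. The property key below is hypothetical, shown only to illustrate the shape of such a switch, not an actual documented Phoenix configuration name.

```java
import java.util.Properties;

// The property key is hypothetical; it only illustrates the shape of a
// client-side switch for server-side upsert select.
public class ClientConfigSketch {
    static final String ENABLE_SERVER_UPSERT_SELECT =
        "phoenix.client.enable.server.upsert.select"; // hypothetical key

    // Connection properties that would opt out of server-side upsert select.
    static Properties clientProps() {
        Properties props = new Properties();
        props.setProperty(ENABLE_SERVER_UPSERT_SELECT, "false");
        return props;
    }
}
```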

> Creating immutable index is timing out intermittently
> -
>
> Key: PHOENIX-4171
> URL: https://issues.apache.org/jira/browse/PHOENIX-4171
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4171_wip3_master.patch
>
>

[jira] [Commented] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156236#comment-16156236
 ] 

Hadoop QA commented on PHOENIX-4171:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12885691/PHOENIX-4171_wip3_master.patch
  against master branch at commit ad52201e07670d342ef33c5e8bd2ee595fe559cc.
  ATTACHMENT ID: 12885691

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.rpc.PhoenixServerRpcIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1387//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1387//console

This message is automatically generated.

> Creating immutable index is timing out intermittently
> -
>
> Key: PHOENIX-4171
> URL: https://issues.apache.org/jira/browse/PHOENIX-4171
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4171_wip3_master.patch
>
>
[jira] [Updated] (PHOENIX-4165) Do not wait when no new memory chunk can be allocated

2017-09-06 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-4165:
---
Attachment: 4165-v2.txt

WIP. Need to add a new test still.

> Do not wait when no new memory chunk can be allocated
> 
>
> Key: PHOENIX-4165
> URL: https://issues.apache.org/jira/browse/PHOENIX-4165
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Attachments: 4165.txt, 4165-v2.txt
>
>
> Currently the code waits for up to 10s by default for memory to become 
> "available".
> I think it's better to fail immediately and let the client retry rather 
> than waiting on an HBase handler thread.
> In a first iteration we can simply set the max wait time to 0 (or perhaps 
> even -1) so that we do not attempt to wait but fail immediately. All calling 
> code should already deal with InsufficientMemoryExceptions, since they can 
> already happen right now.
> In a second step I'd suggest actually removing the waiting code and config 
> option completely.
> [~jamestaylor]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4165) Do not wait when no new memory chunk can be allocated

2017-09-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156200#comment-16156200
 ] 

Lars Hofhansl commented on PHOENIX-4165:


Actually MemoryManager and MemoryChunk have good javadoc already.
We also have MemoryManagerTest. I know I've been saying we need more tests, but 
this change does not alter the failure scenarios, only that we no longer wait 
for memory.

After talking a bit with James... Here's a more radical patch that removes all 
the wait/notify logic and all related tests.
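A minimal sketch of the proposed fail-fast allocation, with illustrative names rather than the actual Phoenix MemoryManager API: allocation either succeeds immediately or throws, so no HBase handler thread ever sits in wait()/notify() for memory.

```java
// Illustrative fail-fast allocator (names are not the actual Phoenix API).
public class FailFastMemoryManager {
    static class InsufficientMemoryException extends RuntimeException {
        InsufficientMemoryException(String msg) { super(msg); }
    }

    private long available;

    FailFastMemoryManager(long maxBytes) { this.available = maxBytes; }

    // Either grant the chunk now or fail immediately; the client retries.
    synchronized long allocate(long bytes) {
        if (bytes > available) {
            throw new InsufficientMemoryException(
                bytes + " bytes requested, only " + available + " available");
        }
        available -= bytes;
        return bytes;
    }

    synchronized void free(long bytes) { available += bytes; }

    synchronized long available() { return available; }

    public static void main(String[] args) {
        FailFastMemoryManager mm = new FailFastMemoryManager(100);
        mm.allocate(60);
        System.out.println(mm.available()); // 40
    }
}
```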

> Do not wait when no new memory chunk can be allocated
> 
>
> Key: PHOENIX-4165
> URL: https://issues.apache.org/jira/browse/PHOENIX-4165
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Attachments: 4165.txt
>
>





[jira] [Updated] (PHOENIX-4174) Drop tables asynchronously to reduce load on mini cluster

2017-09-06 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4174:
--
Attachment: PHOENIX-4174.patch

The reason for attempting this approach is that I am seeing intermittent test 
failures because HMaster is not able to complete initialization within 200 
seconds. So instead of shutting down the mini cluster, I am dropping the tables 
to reduce memory pressure on it. For ParallelStatsDisabledIT and 
ParallelStatsEnabledIT this can happen asynchronously; for other test 
categories, we still need to wait for the tables to be dropped, as before.
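The asynchronous-drop strategy might look roughly like this (illustrative names, not the actual Phoenix test framework API): tests in the ParallelStats*IT categories drop their tables on a background thread, while other categories keep the old blocking behavior.

```java
import java.util.List;
import java.util.concurrent.*;

// Illustrative sketch (not the actual Phoenix test framework API).
public class TableCleanup {
    private final ExecutorService dropExecutor = Executors.newSingleThreadExecutor();
    final List<String> dropped = new CopyOnWriteArrayList<>();

    // Stand-in for the real DROP TABLE call against the mini cluster.
    void dropTable(String name) { dropped.add(name); }

    // async=true: fire and forget; async=false: block until all drops finish.
    Future<?> dropTables(List<String> tables, boolean async) throws Exception {
        Future<?> f = dropExecutor.submit(() -> tables.forEach(this::dropTable));
        if (!async) {
            f.get(); // categories needing isolation wait, as before
        }
        return f;
    }

    void shutdown() { dropExecutor.shutdown(); }
}
```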

> Drop tables asynchronously to reduce load on mini cluster
> -
>
> Key: PHOENIX-4174
> URL: https://issues.apache.org/jira/browse/PHOENIX-4174
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4174.patch
>
>






[jira] [Commented] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156157#comment-16156157
 ] 

Hadoop QA commented on PHOENIX-4171:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12885671/PHOENIX-4171_wip2_master.patch
  against master branch at commit ad52201e07670d342ef33c5e8bd2ee595fe559cc.
  ATTACHMENT ID: 12885671

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.rpc.PhoenixServerRpcIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1386//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1386//console

This message is automatically generated.

> Creating immutable index is timing out intermittently
> -
>
> Key: PHOENIX-4171
> URL: https://issues.apache.org/jira/browse/PHOENIX-4171
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4171_wip3_master.patch
>
>
> In PHOENIX-4151, I converted all the tests extending BaseQueryIT to not use 
> current_scn anymore when creating tables and indices. This was done with the 
> assumption that somehow current_scn is causing index creation to timeout. 
> However, even after that change, I am seeing that the tests are still 
> flapping. And they are failing because creating immutable indexes is timing 
> out. 
> Sample run: https://builds.apache.org/job/PreCommit-PHOENIX-Build/1379/
> Stacktrace:
> {code}
> 2017-09-06 02:44:13,297 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000205 ON T000204 (a_integer DESC) INCLUDE (A_STRING, 
> B_STRING, A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:734)
>   at 
> org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:117)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3359)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1282)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:1222)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1588)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:374)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)

[jira] [Created] (PHOENIX-4174) Drop tables asynchronously to reduce load on mini cluster

2017-09-06 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-4174:
-

 Summary: Drop tables asynchronously to reduce load on mini cluster
 Key: PHOENIX-4174
 URL: https://issues.apache.org/jira/browse/PHOENIX-4174
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Samarth Jain






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4171:
--
Attachment: PHOENIX-4171_wip3_master.patch

> Creating immutable index is timing out intermittently
> -
>
> Key: PHOENIX-4171
> URL: https://issues.apache.org/jira/browse/PHOENIX-4171
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4171_wip3_master.patch
>
>
> In PHOENIX-4151, I converted all the tests extending BaseQueryIT to not use 
> current_scn anymore when creating tables and indices. This was done with the 
> assumption that somehow current_scn is causing index creation to timeout. 
> However, even after that change, I am seeing that the tests are still 
> flapping. And they are failing because creating immutable indexes is timing 
> out. 
> Sample run: https://builds.apache.org/job/PreCommit-PHOENIX-Build/1379/
> Stacktrace:
> {code}
> 2017-09-06 02:44:13,297 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000205 ON T000204 (a_integer DESC) INCLUDE (A_STRING, 
> B_STRING, A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:734)
>   at 
> org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:117)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3359)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1282)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:1222)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1588)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:374)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1707)
>   at org.apache.phoenix.end2end.BaseQueryIT.(BaseQueryIT.java:139)
>   at org.apache.phoenix.end2end.NotQueryIT.(NotQueryIT.java:56)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> {code}
> {code}
> 2017-09-06 02:57:28,819 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000350 ON T000349 (a_integer, a_string) INCLUDE (B_STRING,  
>A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> 

[jira] [Commented] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156140#comment-16156140
 ] 

Samarth Jain commented on PHOENIX-4171:
---

OK, this looks a bit promising. Let me try again.

> Creating immutable index is timing out intermittently
> -
>
> Key: PHOENIX-4171
> URL: https://issues.apache.org/jira/browse/PHOENIX-4171
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>
> In PHOENIX-4151, I converted all the tests extending BaseQueryIT to not use 
> current_scn anymore when creating tables and indices. This was done with the 
> assumption that somehow current_scn is causing index creation to timeout. 
> However, even after that change, I am seeing that the tests are still 
> flapping. And they are failing because creating immutable indexes is timing 
> out. 
> Sample run: https://builds.apache.org/job/PreCommit-PHOENIX-Build/1379/
> Stacktrace:
> {code}
> 2017-09-06 02:44:13,297 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000205 ON T000204 (a_integer DESC) INCLUDE (A_STRING, 
> B_STRING, A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:734)
>   at 
> org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:117)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3359)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1282)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:1222)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1588)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:374)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1707)
>   at org.apache.phoenix.end2end.BaseQueryIT.(BaseQueryIT.java:139)
>   at org.apache.phoenix.end2end.NotQueryIT.(NotQueryIT.java:56)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> {code}
> {code}
> 2017-09-06 02:57:28,819 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000350 ON T000349 (a_integer, a_string) INCLUDE (B_STRING,  
>A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> 

[jira] [Updated] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4171:
--
Attachment: (was: PHOENIX-4171_wip2_master.patch)

> Creating immutable index is timing out intermittently
> -
>
> Key: PHOENIX-4171
> URL: https://issues.apache.org/jira/browse/PHOENIX-4171
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>
> In PHOENIX-4151, I converted all the tests extending BaseQueryIT to not use 
> current_scn anymore when creating tables and indices. This was done with the 
> assumption that somehow current_scn is causing index creation to timeout. 
> However, even after that change, I am seeing that the tests are still 
> flapping. And they are failing because creating immutable indexes is timing 
> out. 
> Sample run: https://builds.apache.org/job/PreCommit-PHOENIX-Build/1379/
> Stacktrace:
> {code}
> 2017-09-06 02:44:13,297 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000205 ON T000204 (a_integer DESC) INCLUDE (A_STRING, 
> B_STRING, A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:734)
>   at 
> org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:117)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3359)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1282)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:1222)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1588)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:374)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1707)
>   at org.apache.phoenix.end2end.BaseQueryIT.(BaseQueryIT.java:139)
>   at org.apache.phoenix.end2end.NotQueryIT.(NotQueryIT.java:56)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> {code}
> {code}
> 2017-09-06 02:57:28,819 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000350 ON T000349 (a_integer, a_string) INCLUDE (B_STRING,  
>A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> 

[jira] [Commented] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156124#comment-16156124
 ] 

Hadoop QA commented on PHOENIX-4171:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12885671/PHOENIX-4171_wip2_master.patch
  against master branch at commit ad52201e07670d342ef33c5e8bd2ee595fe559cc.
  ATTACHMENT ID: 12885671

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.rpc.PhoenixServerRpcIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1385//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1385//console

This message is automatically generated.

> Creating immutable index is timing out intermittently
> -
>
> Key: PHOENIX-4171
> URL: https://issues.apache.org/jira/browse/PHOENIX-4171
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4171_wip2_master.patch
>
>
> In PHOENIX-4151, I converted all the tests extending BaseQueryIT to not use 
> current_scn anymore when creating tables and indices. This was done with the 
> assumption that somehow current_scn is causing index creation to timeout. 
> However, even after that change, I am seeing that the tests are still 
> flapping. And they are failing because creating immutable indexes is timing 
> out. 
> Sample run: https://builds.apache.org/job/PreCommit-PHOENIX-Build/1379/
> Stacktrace:
> {code}
> 2017-09-06 02:44:13,297 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000205 ON T000204 (a_integer DESC) INCLUDE (A_STRING, 
> B_STRING, A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:734)
>   at 
> org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:117)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3359)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1282)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:1222)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1588)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:374)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)

[jira] [Commented] (PHOENIX-4173) Ensure that the rebuild fails if an index that transitions back to disabled while rebuilding

2017-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156119#comment-16156119
 ] 

Hadoop QA commented on PHOENIX-4173:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12885657/PHOENIX-4173.patch
  against master branch at commit ad52201e07670d342ef33c5e8bd2ee595fe559cc.
  ATTACHMENT ID: 12885657

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+conn.createStatement().execute("CREATE TABLE " + fullTableName 
+ "(k VARCHAR PRIMARY KEY, v1 VARCHAR, v2 VARCHAR, v3 VARCHAR) 
COLUMN_ENCODED_BYTES = 0, STORE_NULLS=true");
+conn.createStatement().execute("CREATE INDEX " + indexName + " ON 
" + fullTableName + " (v1, v2) INCLUDE (v3)");
+conn.createStatement().execute("UPSERT INTO " + fullTableName + " 
VALUES('a','a','0','x')");
+try (HTableInterface metaTable = 
conn.unwrap(PhoenixConnection.class).getQueryServices().getTable(PhoenixDatabaseMetaData.SYSTEM_CATALOG_NAME_BYTES))
 {
+// By using an INDEX_DISABLE_TIMESTAMP of 0, we prevent the 
partial index rebuilder from triggering
+conn.createStatement().execute("UPSERT INTO " + fullTableName 
+ " VALUES('b','bb', '11','yy')");
+conn.createStatement().execute("UPSERT INTO " + fullTableName 
+ " VALUES('a','ccc','222','zzz')");
+conn.createStatement().execute("UPSERT INTO " + fullTableName 
+ " VALUES('a','','','')");
+IndexUtil.updateIndexState(fullIndexName, disableTime, 
metaTable, PIndexState.DISABLE);
+conn.createStatement().execute("UPSERT INTO " + fullTableName 
+ " VALUES('a','e','4','z')");

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.execute.PartialCommitIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.PartialIndexRebuilderIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.GroupByIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1382//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1382//console

This message is automatically generated.

> Ensure that the rebuild fails if an index that transitions back to disabled 
> while rebuilding
> 
>
> Key: PHOENIX-4173
> URL: https://issues.apache.org/jira/browse/PHOENIX-4173
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4173.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4150) Adding a policy filter to whitelist the properties that allow to be passed to Phoenix

2017-09-06 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156111#comment-16156111
 ] 

Ethan Wang commented on PHOENIX-4150:
-

Per our discussion, [~tdsilva]:
1. The forEach part is an example in the comments for instruction. I can 
refactor it if needed.
2. PropertyPolicy follows three other service override factory providers. 
None of them has tests written. I wonder what's the best way of testing them. 
[~jamestaylor]

> Adding a policy filter to whitelist the properties that allow to be passed to 
> Phoenix
> -
>
> Key: PHOENIX-4150
> URL: https://issues.apache.org/jira/browse/PHOENIX-4150
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Ethan Wang
> Attachments: PHOENIX-4150-v1.patch
>
>
> Adding a policy filter to whitelist the properties that are allowed to be 
> passed to Phoenix.
> Feature proposal:
> When a user gets a Phoenix connection via
> Connection conn = DriverManager.getConnection(connectionString, properties);
> a properties whitelist policy will check each property passed in (likely at 
> PhoenixDriver.java), so that any disallowed property results in an exception 
> being thrown.
> Similar to HBaseFactoryProvider, we propose an interface for the whitelist 
> policy and a default impl that allows all properties. Users can override the 
> impl for this interface to start using the whitelist feature.
> [~jamestaylor]   [~alexaraujo]
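The interface-plus-default-impl shape described in the proposal could look like the sketch below. All names here (`PropertyPolicy`, `DefaultPropertyPolicy`, `WhitelistPropertyPolicy`) are illustrative assumptions, not the identifiers in the attached patch:

```java
import java.util.Properties;
import java.util.Set;

// Policy checked at connection time; throws for any disallowed property.
interface PropertyPolicy {
    void evaluate(Properties props);
}

// Default implementation: allow all properties (whitelist feature disabled).
class DefaultPropertyPolicy implements PropertyPolicy {
    @Override
    public void evaluate(Properties props) { /* allow everything */ }
}

// Override supplied by the user to enforce a whitelist.
class WhitelistPropertyPolicy implements PropertyPolicy {
    private final Set<String> allowed;

    WhitelistPropertyPolicy(Set<String> allowed) {
        this.allowed = allowed;
    }

    @Override
    public void evaluate(Properties props) {
        for (String name : props.stringPropertyNames()) {
            if (!allowed.contains(name)) {
                throw new IllegalArgumentException("Property not allowed: " + name);
            }
        }
    }
}
```

The driver would call `policy.evaluate(properties)` before handing the properties on, mirroring how HBaseFactoryProvider lets users swap in their own implementation.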



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (PHOENIX-4164) APPROX_COUNT_DISTINCT becomes imprecise at 20m unique values.

2017-09-06 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156083#comment-16156083
 ] 

Ethan Wang edited comment on PHOENIX-4164 at 9/6/17 9:51 PM:
-

P.S., For folks that want to play around with precision configuration:

In this hll implementation there are two parameters that are configurable:
NormalSetPrecision (p)
SparseSetPrecision (sp)

In short: the actual leading-zero counting space is (64 - p), so a smaller p 
leaves more room for counting.

For detail:
Because HyperLogLog performs poorly when cardinality is low, [Stefan et 
al.|http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/40671.pdf]
 came up with the idea of using a normal hash set when cardinality is low, so 
that we get the best of both worlds (except we didn't end up using a normal 
hash set; we use a sparse set instead). So when cardinality goes up, the 
counting switches to HyperLogLog, because "sparse has the advantage on 
accuracy per unit of memory at low cardinality but quickly falls behind."

Based on this design, this is how stream-lib uses the 64-bit hash. Code: see 
[this|https://github.com/addthis/stream-lib/blob/master/src/main/java/com/clearspring/analytics/stream/cardinality/HyperLogLogPlus.java]
{code}
 * ***   <- hashed 
length of bits
 * | p bits = idx || look for leading zeros here |
 * |  sp bits = idx' |
{code}
So in the normal hll mode, p is the bucket (index) size and 64 - p is the 
number of bits used for counting leading zeros.

Note that with the default p=16 there are up to 48 bits left for hll, i.e. 
2^48 of hashing space. The load from a 30-million-row test set is relatively 
trivial.
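The arithmetic behind these numbers can be checked directly. This sketch uses the standard HyperLogLog relative-error bound of 1.04 / sqrt(2^p) (a well-known property of the algorithm, not something specific to the stream-lib code):

```java
// With precision p, the 64-bit hash splits into p index bits (selecting one
// of 2^p registers) and 64 - p bits scanned for leading zeros.
class HllPrecision {
    public static void main(String[] args) {
        int p = 16;                       // default normal-set precision
        long registers = 1L << p;         // 2^16 = 65536 buckets
        int countingBits = 64 - p;        // 48 bits left for leading zeros
        double stdError = 1.04 / Math.sqrt(registers);
        System.out.printf("registers=%d countingBits=%d relErr=%.4f%%%n",
                registers, countingBits, stdError * 100);
        // With p=16, the expected relative error is about 0.41%, far smaller
        // than the deviation observed in this issue, so the default precision
        // alone does not explain the 17.2M-vs-26.9M result.
    }
}
```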


was (Author: aertoria):
P.S., For folks that want to play around with precision configuration:

In this hll implementation there are two parameters that is configurable:
NormalSetPrecision (p)
SparseSetPrecision (sp)

In short: the actually leading zero counting space = (64-p). so less p, more 
room for counting.

For detail:
Because HyperLogLog performs poorly when cardinality is low, 
[Stefan et 
al.|http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/40671.pdf]
 proposed using a normal hash set when cardinality is low, so that we get the 
best of both worlds (except that instead of a normal hash set, a sparse set is 
used). Then, as cardinality goes up, the counting switches to HyperLogLog, 
because "sparse has the advantage on accuracy per unit of memory at low 
cardinality but quickly falls behind."

Based on this design, here is how streamlib uses the 64-bit hash (see 
[this|https://github.com/addthis/stream-lib/blob/master/src/main/java/com/clearspring/analytics/stream/cardinality/HyperLogLogPlus.java]):
{code}
 * |<--------------------------- 64 bits --------------------------->|  <- hashed length of bits
 * | p bits = idx || look for leading zeros here ....................|
 * |    sp bits = idx'    |
{code}
So, in normal HLL mode, p determines the number of buckets, and the remaining 
64-p bits are used for counting leading zeros.

Note that with the default p=16 there are up to 48 bits left for HLL, i.e. a 
2^48 hashing space, so a 30-million-row test set is a relatively trivial load.

> APPROX_COUNT_DISTINCT becomes imprecise at 20m unique values.
> -
>
> Key: PHOENIX-4164
> URL: https://issues.apache.org/jira/browse/PHOENIX-4164
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Ethan Wang
>
> {code}
> 0: jdbc:phoenix:localhost> select count(*) from test;
> +---+
> | COUNT(1)  |
> +---+
> | 26931816  |
> +---+
> 1 row selected (14.604 seconds)
> 0: jdbc:phoenix:localhost> select approx_count_distinct(v1) from test;
> ++
> | APPROX_COUNT_DISTINCT(V1)  |
> ++
> | 17221394   |
> ++
> 1 row selected (21.619 seconds)
> {code}
> The table is generated from random numbers, and the cardinality of v1 is 
> close to the number of rows.
> (I cannot run a COUNT(DISTINCT(v1)), as it uses up all memory on my machine 
> and eventually kills the regionserver - that's another story and another jira)
> [~aertoria]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4164) APPROX_COUNT_DISTINCT becomes imprecise at 20m unique values.

2017-09-06 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156083#comment-16156083
 ] 

Ethan Wang commented on PHOENIX-4164:
-

P.S., For folks who want to play around with the precision configuration:

In this HLL implementation there are two configurable parameters:
NormalSetPrecision (p)
SparseSetPrecision (sp)

In short: the actual leading-zero counting space is (64-p) bits, so the 
smaller p is, the more room there is for counting.

For detail:
Because HyperLogLog performs poorly when cardinality is low, 
[Stefan et 
al.|http://static.googleusercontent.com/media/research.google.com/en/us/pubs/archive/40671.pdf]
 proposed using a normal hash set when cardinality is low, so that we get the 
best of both worlds (except that instead of a normal hash set, a sparse set is 
used). Then, as cardinality goes up, the counting switches to HyperLogLog, 
because "sparse has the advantage on accuracy per unit of memory at low 
cardinality but quickly falls behind."

Based on this design, here is how streamlib uses the 64-bit hash (see 
[this|https://github.com/addthis/stream-lib/blob/master/src/main/java/com/clearspring/analytics/stream/cardinality/HyperLogLogPlus.java]):
{code}
 * |<--------------------------- 64 bits --------------------------->|  <- hashed length of bits
 * | p bits = idx || look for leading zeros here ....................|
 * |    sp bits = idx'    |
{code}
So, in normal HLL mode, p determines the number of buckets, and the remaining 
64-p bits are used for counting leading zeros.

Note that with the default p=16 there are up to 48 bits left for HLL, i.e. a 
2^48 hashing space, so a 30-million-row test set is a relatively trivial load.
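To make the bit split concrete, here is a minimal Java sketch (not the streamlib implementation; class and method names are illustrative): the top p bits of the 64-bit hash select a register, and the leading zeros of the remaining 64-p bits give the rank that the register keeps a running max of.

{code}
// Sketch of normal-mode HLL bucketing with streamlib's default p=16.
public class HllSplit {
    static final int P = 16;          // NormalSetPrecision
    static final int M = 1 << P;      // number of registers (buckets)

    // Register index: the top p bits of the hash.
    static int idx(long hash) {
        return (int) (hash >>> (64 - P));
    }

    // Rank: leading zeros of the remaining 64-p bits, plus one.
    // (An all-zero tail yields 65; real implementations cap this.)
    static int rank(long hash) {
        long w = hash << P;           // drop the index bits
        return Long.numberOfLeadingZeros(w) + 1;
    }

    public static void main(String[] args) {
        byte[] registers = new byte[M];
        long hash = 0x1234FEDCBA987654L;   // a fixed example hash
        int i = idx(hash);
        registers[i] = (byte) Math.max(registers[i], rank(hash));
        System.out.println("bucket=" + i + " rank=" + rank(hash));
        // prints bucket=4660 rank=1
    }
}
{code}

With p=16 there are 2^16 registers and 48 bits of leading-zero space per hash, which is why a 30-million-value load is far from stressing the 2^48 space.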

> APPROX_COUNT_DISTINCT becomes imprecise at 20m unique values.
> -
>
> Key: PHOENIX-4164
> URL: https://issues.apache.org/jira/browse/PHOENIX-4164
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Ethan Wang
>
> {code}
> 0: jdbc:phoenix:localhost> select count(*) from test;
> +---+
> | COUNT(1)  |
> +---+
> | 26931816  |
> +---+
> 1 row selected (14.604 seconds)
> 0: jdbc:phoenix:localhost> select approx_count_distinct(v1) from test;
> ++
> | APPROX_COUNT_DISTINCT(V1)  |
> ++
> | 17221394   |
> ++
> 1 row selected (21.619 seconds)
> {code}
> The table is generated from random numbers, and the cardinality of v1 is 
> close to the number of rows.
> (I cannot run a COUNT(DISTINCT(v1)), as it uses up all memory on my machine 
> and eventually kills the regionserver - that's another story and another jira)
> [~aertoria]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4150) Adding a policy filter to whitelist the properties that allow to be passed to Phoenix

2017-09-06 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156048#comment-16156048
 ] 

Thomas D'Silva commented on PHOENIX-4150:
-

[~aertoria]

Can you remove usage of Java 8 forEach, and add a test as well?

> Adding a policy filter to whitelist the properties that allow to be passed to 
> Phoenix
> -
>
> Key: PHOENIX-4150
> URL: https://issues.apache.org/jira/browse/PHOENIX-4150
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Ethan Wang
> Attachments: PHOENIX-4150-v1.patch
>
>
> Adding a policy filter to whitelist the properties that allow to be passed to 
> Phoenix.
> Feature proposal:
> When a user gets a Phoenix connection via
> Connection conn = DriverManager.getConnection(connectionString, properties);
> a properties whitelist policy will check each property passed in (likely at 
> PhoenixDriver.java), so that a disallowed property results in an exception 
> being thrown.
> Similar to HBaseFactoryProvider, the proposal is to have an interface for the 
> whitelist policy and a default impl that allows all properties. Users can 
> override the impl for this interface to enable the whitelist feature.
> [~jamestaylor]   [~alexaraujo]
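The proposed interface-plus-default-impl shape could look roughly like this (an illustrative sketch only; the names below are hypothetical, not the actual Phoenix API):

{code}
import java.util.Properties;
import java.util.Set;

// Policy checked once per connection, e.g. from PhoenixDriver.
interface PropertyPolicy {
    // Throws IllegalArgumentException if any property is not allowed.
    void evaluate(Properties props);
}

// Default impl: allow all properties (preserves current behavior).
class AllowAllPropertyPolicy implements PropertyPolicy {
    public void evaluate(Properties props) { /* no-op */ }
}

// Whitelist impl: reject anything outside the allowed set.
class WhitelistPropertyPolicy implements PropertyPolicy {
    private final Set<String> allowed;

    WhitelistPropertyPolicy(Set<String> allowed) {
        this.allowed = allowed;
    }

    public void evaluate(Properties props) {
        for (String name : props.stringPropertyNames()) {
            if (!allowed.contains(name)) {
                throw new IllegalArgumentException(
                    "Property not allowed: " + name);
            }
        }
    }
}
{code}

Users would then plug in a whitelist policy via a provider override, mirroring how HBaseFactoryProvider swaps implementations.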



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4171:
--
Attachment: (was: PHOENIX-4171_wip_master.patch)

> Creating immutable index is timing out intermittently
> -
>
> Key: PHOENIX-4171
> URL: https://issues.apache.org/jira/browse/PHOENIX-4171
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4171_wip2_master.patch
>
>
> In PHOENIX-4151, I converted all the tests extending BaseQueryIT to not use 
> current_scn anymore when creating tables and indices. This was done with the 
> assumption that somehow current_scn is causing index creation to timeout. 
> However, even after that change, I am seeing that the tests are still 
> flapping. And they are failing because creating immutable indexes is timing 
> out. 
> Sample run: https://builds.apache.org/job/PreCommit-PHOENIX-Build/1379/
> Stacktrace:
> {code}
> 2017-09-06 02:44:13,297 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000205 ON T000204 (a_integer DESC) INCLUDE (A_STRING, 
> B_STRING, A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:734)
>   at 
> org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:117)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3359)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1282)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:1222)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1588)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:374)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1707)
>   at org.apache.phoenix.end2end.BaseQueryIT.(BaseQueryIT.java:139)
>   at org.apache.phoenix.end2end.NotQueryIT.(NotQueryIT.java:56)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> {code}
> {code}
> 2017-09-06 02:57:28,819 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000350 ON T000349 (a_integer, a_string) INCLUDE (B_STRING,  
>A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> 

[jira] [Updated] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4171:
--
Attachment: PHOENIX-4171_wip2_master.patch

> Creating immutable index is timing out intermittently
> -
>
> Key: PHOENIX-4171
> URL: https://issues.apache.org/jira/browse/PHOENIX-4171
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4171_wip2_master.patch
>
>
> In PHOENIX-4151, I converted all the tests extending BaseQueryIT to not use 
> current_scn anymore when creating tables and indices. This was done with the 
> assumption that somehow current_scn is causing index creation to timeout. 
> However, even after that change, I am seeing that the tests are still 
> flapping. And they are failing because creating immutable indexes is timing 
> out. 
> Sample run: https://builds.apache.org/job/PreCommit-PHOENIX-Build/1379/
> Stacktrace:
> {code}
> 2017-09-06 02:44:13,297 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000205 ON T000204 (a_integer DESC) INCLUDE (A_STRING, 
> B_STRING, A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:734)
>   at 
> org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:117)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3359)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1282)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:1222)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1588)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:374)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1707)
>   at org.apache.phoenix.end2end.BaseQueryIT.(BaseQueryIT.java:139)
>   at org.apache.phoenix.end2end.NotQueryIT.(NotQueryIT.java:56)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> {code}
> {code}
> 2017-09-06 02:57:28,819 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000350 ON T000349 (a_integer, a_string) INCLUDE (B_STRING,  
>A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> 

[jira] [Commented] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16156010#comment-16156010
 ] 

Hadoop QA commented on PHOENIX-4171:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12885668/PHOENIX-4171_wip_master.patch
  against master branch at commit ad52201e07670d342ef33c5e8bd2ee595fe559cc.
  ATTACHMENT ID: 12885668

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail .

Compilation errors resume:
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-clean-plugin:2.5:clean (default-clean) on 
project phoenix-core: Failed to clean project: Failed to delete 
/home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/phoenix-core/target
 -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :phoenix-core


Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1383//console

This message is automatically generated.

> Creating immutable index is timing out intermittently
> -
>
> Key: PHOENIX-4171
> URL: https://issues.apache.org/jira/browse/PHOENIX-4171
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4171_wip_master.patch
>
>
> In PHOENIX-4151, I converted all the tests extending BaseQueryIT to not use 
> current_scn anymore when creating tables and indices. This was done with the 
> assumption that somehow current_scn is causing index creation to timeout. 
> However, even after that change, I am seeing that the tests are still 
> flapping. And they are failing because creating immutable indexes is timing 
> out. 
> Sample run: https://builds.apache.org/job/PreCommit-PHOENIX-Build/1379/
> Stacktrace:
> {code}
> 2017-09-06 02:44:13,297 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000205 ON T000204 (a_integer DESC) INCLUDE (A_STRING, 
> B_STRING, A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:734)
>   at 
> org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:117)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3359)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1282)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:1222)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1588)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
>   at 
> 

[jira] [Updated] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4171:
--
Attachment: PHOENIX-4171_wip_master.patch

I am not positive if running upsert select on the server side is the culprit 
here. Trying it out to see if it helps timeouts.

> Creating immutable index is timing out intermittently
> -
>
> Key: PHOENIX-4171
> URL: https://issues.apache.org/jira/browse/PHOENIX-4171
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4171_wip_master.patch
>
>
> In PHOENIX-4151, I converted all the tests extending BaseQueryIT to not use 
> current_scn anymore when creating tables and indices. This was done with the 
> assumption that somehow current_scn is causing index creation to timeout. 
> However, even after that change, I am seeing that the tests are still 
> flapping. And they are failing because creating immutable indexes is timing 
> out. 
> Sample run: https://builds.apache.org/job/PreCommit-PHOENIX-Build/1379/
> Stacktrace:
> {code}
> 2017-09-06 02:44:13,297 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000205 ON T000204 (a_integer DESC) INCLUDE (A_STRING, 
> B_STRING, A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
>   at 
> org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
>   at 
> org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
>   at 
> org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
>   at 
> org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:734)
>   at 
> org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
>   at 
> org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:117)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3359)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1282)
>   at 
> org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:1222)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1588)
>   at 
> org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:374)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1707)
>   at org.apache.phoenix.end2end.BaseQueryIT.(BaseQueryIT.java:139)
>   at org.apache.phoenix.end2end.NotQueryIT.(NotQueryIT.java:56)
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> {code}
> {code}
> 2017-09-06 02:57:28,819 ERROR [main] 
> org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
> CREATE INDEX T000350 ON T000349 (a_integer, a_string) INCLUDE (B_STRING,  
>A_DATE) KEEP_DELETED_CELLS=false
> java.sql.SQLTimeoutException: Operation timed out.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
>  

[jira] [Commented] (PHOENIX-4168) Pluggable Remote User Extraction for Phoenix Query Server

2017-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155986#comment-16155986
 ] 

Hadoop QA commented on PHOENIX-4168:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12885630/PHOENIX-4168.v1.patch
  against master branch at commit ad52201e07670d342ef33c5e8bd2ee595fe559cc.
  ATTACHMENT ID: 12885630

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ClientTimeArithmeticQueryIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.MutableQueryIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1380//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1380//console

This message is automatically generated.

> Pluggable Remote User Extraction for Phoenix Query Server
> -
>
> Key: PHOENIX-4168
> URL: https://issues.apache.org/jira/browse/PHOENIX-4168
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Minor
> Attachments: PHOENIX-4168.v1.patch
>
>
> PQS supports impersonation by pulling a user's identity from an HTTP 
> parameter. Make this pluggable to allow other forms of extraction (for 
> example, pulling the identity out of an X509Certificate).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4164) APPROX_COUNT_DISTINCT becomes imprecise at 20m unique values.

2017-09-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155983#comment-16155983
 ] 

Lars Hofhansl commented on PHOENIX-4164:


Hmm... Lemme retry my test. I'll update tonight (do not have access to my 
personal machine right now).

> APPROX_COUNT_DISTINCT becomes imprecise at 20m unique values.
> -
>
> Key: PHOENIX-4164
> URL: https://issues.apache.org/jira/browse/PHOENIX-4164
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Ethan Wang
>
> {code}
> 0: jdbc:phoenix:localhost> select count(*) from test;
> +---+
> | COUNT(1)  |
> +---+
> | 26931816  |
> +---+
> 1 row selected (14.604 seconds)
> 0: jdbc:phoenix:localhost> select approx_count_distinct(v1) from test;
> ++
> | APPROX_COUNT_DISTINCT(V1)  |
> ++
> | 17221394   |
> ++
> 1 row selected (21.619 seconds)
> {code}
> The table is generated from random numbers, and the cardinality of v1 is 
> close to the number of rows.
> (I cannot run a COUNT(DISTINCT(v1)), as it uses up all memory on my machine 
> and eventually kills the regionserver - that's another story and another jira)
> [~aertoria]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4165) Do not wait when no new memory chunk can be allocated

2017-09-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155984#comment-16155984
 ] 

Lars Hofhansl commented on PHOENIX-4165:


Agreed on both counts.

> Do not wait when no new memory chunk can be allocated
> 
>
> Key: PHOENIX-4165
> URL: https://issues.apache.org/jira/browse/PHOENIX-4165
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
> Attachments: 4165.txt
>
>
> Currently the code waits for up to 10s by default for memory to become 
> "available".
> I think it's better to fail immediately and let the client retry rather 
> than waiting on an HBase handler thread.
> In a first iteration we can simply set the max wait time to 0 (or perhaps 
> even -1) so that we do not attempt to wait but fail immediately. All calling 
> code should already deal with InsufficientMemoryExceptions, since they can 
> already happen right now.
> In a second step I'd suggest actually removing the waiting code and config 
> option completely.
> [~jamestaylor]
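A fail-fast allocator of the kind described above can be sketched as follows (illustrative only; this is not Phoenix's GlobalMemoryManager, and the class names are made up for the example):

{code}
// With the max wait time set to 0, allocation either succeeds
// immediately or throws, so no HBase handler thread ever blocks
// waiting for memory to be freed; the client is expected to retry.
class InsufficientMemoryException extends RuntimeException {
    InsufficientMemoryException(String msg) { super(msg); }
}

class FailFastMemoryManager {
    private final long maxBytes;
    private long usedBytes;

    FailFastMemoryManager(long maxBytes) {
        this.maxBytes = maxBytes;
    }

    // Succeed now or throw; never wait.
    synchronized long allocate(long bytes) {
        if (usedBytes + bytes > maxBytes) {
            throw new InsufficientMemoryException(
                "Requested " + bytes + " bytes, only "
                + (maxBytes - usedBytes) + " available");
        }
        usedBytes += bytes;
        return bytes;
    }

    synchronized void free(long bytes) {
        usedBytes -= bytes;
    }
}
{code}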



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4169) Explicitly cap timeout for index disable RPC on compaction

2017-09-06 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-4169:
---
Priority: Critical  (was: Major)

> Explicitly cap timeout for index disable RPC on compaction
> --
>
> Key: PHOENIX-4169
> URL: https://issues.apache.org/jira/browse/PHOENIX-4169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Critical
>
> In PHOENIX-3953 we're marking the mutable global index as disabled with an 
> index_disable_timestamp of 0 from the compaction hook. This is potentially a 
> server-to-server RPC, and HConnectionManager#setServerSideHConnectionRetries 
> makes it such that the HBase client config on the server side has 10 times 
> the number of retries, lasting hours.
> To avoid a hung coprocessor hook, we should explicitly cap the number of 
> retries here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4169) Explicitly cap timeout for index disable RPC on compaction

2017-09-06 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155981#comment-16155981
 ] 

Lars Hofhansl commented on PHOENIX-4169:


Marking "critical" for 4.12.

> Explicitly cap timeout for index disable RPC on compaction
> --
>
> Key: PHOENIX-4169
> URL: https://issues.apache.org/jira/browse/PHOENIX-4169
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Critical
>
> In PHOENIX-3953 we're marking the mutable global index as disabled with an 
> index_disable_timestamp of 0 from the compaction hook. This is potentially a 
> server-to-server RPC, and HConnectionManager#setServerSideHConnectionRetries 
> makes it such that the HBase client config on the server side has 10 times 
> the number of retries, lasting hours.
> To avoid a hung coprocessor hook, we should explicitly cap the number of 
> retries here.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4171:
--
Description: 
In PHOENIX-4151, I converted all the tests extending BaseQueryIT to not use 
current_scn anymore when creating tables and indices. This was done with the 
assumption that somehow current_scn is causing index creation to time out. 
However, even after that change, I am seeing that the tests are still flapping, 
and they are failing because creating immutable indexes is timing out. 
Sample run: https://builds.apache.org/job/PreCommit-PHOENIX-Build/1379/

Stacktrace:

{code}
2017-09-06 02:44:13,297 ERROR [main] 
org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
CREATE INDEX T000205 ON T000204 (a_integer DESC) INCLUDE (A_STRING, 
B_STRING, A_DATE) KEEP_DELETED_CELLS=false
java.sql.SQLTimeoutException: Operation timed out.
at 
org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
at 
org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
at 
org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
at 
org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
at 
org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
at 
org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
at 
org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:734)
at 
org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
at 
org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:117)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3359)
at 
org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1282)
at 
org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:1222)
at 
org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1588)
at 
org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:374)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1707)
at org.apache.phoenix.end2end.BaseQueryIT.<init>(BaseQueryIT.java:139)
at org.apache.phoenix.end2end.NotQueryIT.<init>(NotQueryIT.java:56)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
{code}

{code}
2017-09-06 02:57:28,819 ERROR [main] 
org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
CREATE INDEX T000350 ON T000349 (a_integer, a_string) INCLUDE (B_STRING,
 A_DATE) KEEP_DELETED_CELLS=false
java.sql.SQLTimeoutException: Operation timed out.
at 
org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
at 
org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
at 
org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
at 
org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
at 
org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)

[jira] [Commented] (PHOENIX-4170) Remove rebuildIndexOnFailure param from MutableIndexFailureIT

2017-09-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155963#comment-16155963
 ] 

Hadoop QA commented on PHOENIX-4170:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12885643/PHOENIX-4170.patch
  against master branch at commit ad52201e07670d342ef33c5e8bd2ee595fe559cc.
  ATTACHMENT ID: 12885643

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+public MutableIndexFailureIT(boolean transactional, boolean 
localIndex, boolean isNamespaceMapped, Boolean disableIndexOnWriteFailure, 
boolean failRebuildTask, Boolean throwIndexWriteFailure) {
+@Parameters(name = 
"MutableIndexFailureIT_transactional={0},localIndex={1},isNamespaceMapped={2},disableIndexOnWriteFailure={3},failRebuildTask={4},throwIndexWriteFailure={5}")
 // name is used by failsafe as file name in reports

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.GroupByIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1381//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1381//console

This message is automatically generated.

> Remove rebuildIndexOnFailure param from MutableIndexFailureIT
> -
>
> Key: PHOENIX-4170
> URL: https://issues.apache.org/jira/browse/PHOENIX-4170
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4170.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4173) Ensure that the rebuild fails if an index transitions back to disabled while rebuilding

2017-09-06 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4173:
--
Attachment: PHOENIX-4173.patch

Here's the patch I mentioned yesterday, [~vincentpoon].

> Ensure that the rebuild fails if an index transitions back to disabled 
> while rebuilding
> 
>
> Key: PHOENIX-4173
> URL: https://issues.apache.org/jira/browse/PHOENIX-4173
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4173.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4173) Ensure that the rebuild fails if an index transitions back to disabled while rebuilding

2017-09-06 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4173:
-

 Summary: Ensure that the rebuild fails if an index 
transitions back to disabled while rebuilding
 Key: PHOENIX-4173
 URL: https://issues.apache.org/jira/browse/PHOENIX-4173
 Project: Phoenix
  Issue Type: Test
Reporter: James Taylor
Assignee: James Taylor
 Fix For: 4.12.0






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4172) Retry index rebuild if time stamp of index changes while rebuilding

2017-09-06 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4172:
-

 Summary: Retry index rebuild if time stamp of index changes while 
rebuilding
 Key: PHOENIX-4172
 URL: https://issues.apache.org/jira/browse/PHOENIX-4172
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: James Taylor
 Fix For: 4.12.0


We currently base our decision to retry the index rebuild on the 
INDEX_DISABLE_TIMESTAMP changing. This works fine when we disable the index on 
a write failure, because the transition from DISABLE -> INACTIVE will fail and 
we'll try the rebuild again later. However, this is not the case when the index 
is left ACTIVE. As an additional safeguard, we should fail the rebuild and 
retry again in the next polling round.
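The safeguard described above could look roughly like the following plain-Java sketch. All names here are hypothetical stand-ins for illustration (this is not Phoenix's actual rebuilder code, and the real timestamp lives in SYSTEM.CATALOG): snapshot INDEX_DISABLE_TIMESTAMP before rebuilding, and treat any change observed afterwards as a failed rebuild so the next polling round retries it.

```java
import java.util.concurrent.atomic.AtomicLong;

public class RebuildGuardSketch {
    // Stand-in for the INDEX_DISABLE_TIMESTAMP value read from SYSTEM.CATALOG.
    static final AtomicLong indexDisableTimestamp = new AtomicLong(0L);

    /** Returns true only if the timestamp did not move while we rebuilt. */
    static boolean rebuildIndex(Runnable doRebuild) {
        long before = indexDisableTimestamp.get();
        doRebuild.run();
        // A concurrent write failure re-disables the index and bumps the
        // timestamp; in that case the rebuild must not be marked successful.
        return indexDisableTimestamp.get() == before;
    }

    public static void main(String[] args) {
        boolean ok = rebuildIndex(() -> { /* no concurrent disable */ });
        boolean failed = !rebuildIndex(
            () -> indexDisableTimestamp.set(System.currentTimeMillis()));
        System.out.println(ok + " " + failed);
    }
}
```

The point of the compare-after-rebuild check is that it is safe even when the index was left ACTIVE: the rebuild simply reports failure and the next poll picks it up again.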



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4168) Pluggable Remote User Extraction for Phoenix Query Server

2017-09-06 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155917#comment-16155917
 ] 

Josh Elser commented on PHOENIX-4168:
-

Thanks for the ping, James!

[~alexaraujo], letting these RemoteUserExtractors be pluggable was definitely a 
design goal (and custom implementations would be awesome!). If you have the 
motivation to build some implementations for x509 (or other things), I'd love 
to help shepherd those into Avatica as well for others to re-use.

That said, this seems like a nice, straightforward change to PQS that hooks 
into Phoenix's existing InstanceResolver class with a test class! +1 pending QA

One minor nit:

{code}
+Assert.assertTrue(extractor instanceof QueryServer.PhoenixRemoteUserExtractor);
{code}

It would be nice to include an error message on this call that informs us what 
kind of object {{extractor}} actually was.
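A minimal sketch of that nit, with hypothetical helper names (in the actual patch this would just be JUnit's two-argument Assert.assertTrue(String, boolean) overload): include the actual runtime type in the failure message so a broken lookup is self-explanatory.

```java
public class AssertionMessageSketch {
    // Illustrative helper, not a real JUnit/Phoenix API: fail with the
    // observed runtime type in the message.
    static void assertInstanceOf(Class<?> expected, Object actual) {
        if (!expected.isInstance(actual)) {
            throw new AssertionError("Expected an instance of " + expected.getName()
                + " but extractor was "
                + (actual == null ? "null" : actual.getClass().getName()));
        }
    }

    public static void main(String[] args) {
        assertInstanceOf(CharSequence.class, "remote-user"); // passes silently
        try {
            assertInstanceOf(Integer.class, "remote-user");  // fails
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```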

Looking forward to seeing what else you have in mind to build on top of this :)

> Pluggable Remote User Extraction for Phoenix Query Server
> -
>
> Key: PHOENIX-4168
> URL: https://issues.apache.org/jira/browse/PHOENIX-4168
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Minor
> Attachments: PHOENIX-4168.v1.patch
>
>
> PQS supports impersonation by pulling a user's identity from an HTTP 
> parameter. Make this pluggable to allow other forms of extraction (for 
> example, pulling the identity out of an X509Certificate).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4168) Pluggable Remote User Extraction for Phoenix Query Server

2017-09-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155908#comment-16155908
 ] 

James Taylor commented on PHOENIX-4168:
---

[~elserj] is the best person to review PQS patches IMHO - please make sure to 
ping him. [~rahulshrivastava] has some recent experience too.

> Pluggable Remote User Extraction for Phoenix Query Server
> -
>
> Key: PHOENIX-4168
> URL: https://issues.apache.org/jira/browse/PHOENIX-4168
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Minor
> Attachments: PHOENIX-4168.v1.patch
>
>
> PQS supports impersonation by pulling a user's identity from an HTTP 
> parameter. Make this pluggable to allow other forms of extraction (for 
> example, pulling the identity out of an X509Certificate).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4170) Remove rebuildIndexOnFailure param from MutableIndexFailureIT

2017-09-06 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155907#comment-16155907
 ] 

Samarth Jain commented on PHOENIX-4170:
---

I guess the question would be whether we see value in testing index failure 
scenarios when the rebuilder is off. My hunch is no, because we are disallowing 
DMLs with SCNs, which is what users would likely want to do to somewhat mimic 
index rebuilding. We still have the test parameterized for the case when the 
index rebuilder fails.

> Remove rebuildIndexOnFailure param from MutableIndexFailureIT
> -
>
> Key: PHOENIX-4170
> URL: https://issues.apache.org/jira/browse/PHOENIX-4170
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4170.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4170) Remove rebuildIndexOnFailure param from MutableIndexFailureIT

2017-09-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155901#comment-16155901
 ] 

James Taylor commented on PHOENIX-4170:
---

Would MutableIndexFailureIT still test the scenario of the rebuilder being off? 
If so, we may still need this if statement:
{code}
@@ -340,15 +336,9 @@ public class MutableIndexFailureIT extends BaseTest {
 if (!failRebuildTask) {
 // re-enable index table
 FailingRegionObserver.FAIL_WRITE = false;
-if (rebuildIndexOnWriteFailure) {
-runRebuildTask(conn);
-// wait for index to be rebuilt automatically
-checkStateAfterRebuild(conn, fullIndexName, 
PIndexState.ACTIVE);
-} else {
-// simulate replaying failed mutation
-replayMutations();
-}
-
+runRebuildTask(conn);
+// wait for index to be rebuilt automatically
+checkStateAfterRebuild(conn, fullIndexName, 
PIndexState.ACTIVE);
{code}

> Remove rebuildIndexOnFailure param from MutableIndexFailureIT
> -
>
> Key: PHOENIX-4170
> URL: https://issues.apache.org/jira/browse/PHOENIX-4170
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4170.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4159) phoenix-spark tests are failing

2017-09-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155897#comment-16155897
 ] 

Hudson commented on PHOENIX-4159:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1776 (See 
[https://builds.apache.org/job/Phoenix-master/1776/])
PHOENIX-4159 phoenix-spark tests are failing (jmahonin: rev 
ad52201e07670d342ef33c5e8bd2ee595fe559cc)
* (edit) 
phoenix-spark/src/it/scala/org/apache/phoenix/spark/AbstractPhoenixSparkIT.scala
* (edit) phoenix-spark/pom.xml


> phoenix-spark tests are failing
> ---
>
> Key: PHOENIX-4159
> URL: https://issues.apache.org/jira/browse/PHOENIX-4159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4159.patch
>
>
> In few of the runs where we were able to get successful test runs for 
> phoenix-core, we ran into failures for the phoenix-spark module. 
> Sample run - https://builds.apache.org/job/Phoenix-master/1762/console
> [~jmahonin] - would you mind taking a look? Copy-pasting here a possibly 
> relevant stacktrace in case the link is no longer working:
> {code}
> Formatting using clusterid: testClusterID
> 1[ScalaTest-4] ERROR org.apache.hadoop.hdfs.MiniDFSCluster  - IOE 
> creating namenodes. Permissions dump:
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/data':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/data
>   permissions: 
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target': 
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark': 
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace/Phoenix-master': 
>   absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace': 
>   absolute:/home/jenkins/jenkins-slave/workspace
>   permissions: drwx
> path '/home/jenkins/jenkins-slave': 
>   absolute:/home/jenkins/jenkins-slave
>   permissions: drwx
> path '/home/jenkins': 
>   absolute:/home/jenkins
>   permissions: drwx
> path '/home': 
>   absolute:/home
>   permissions: dr-x
> path '/': 
>   absolute:/
>   permissions: dr-x
> java.io.IOException: Cannot create directory 
> /home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/name1/current
>   at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:342)

[jira] [Created] (PHOENIX-4171) Creating immutable index is timing out intermittently

2017-09-06 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-4171:
-

 Summary: Creating immutable index is timing out intermittently
 Key: PHOENIX-4171
 URL: https://issues.apache.org/jira/browse/PHOENIX-4171
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain


In PHOENIX-4151, I converted all the tests extending BaseQueryIT to not use 
current_scn anymore when creating tables and indices. This was done with the 
assumption that somehow current_scn is causing index creation to time out. 
However, even after that change, I am seeing that the tests are still flapping, 
and they are failing because creating immutable indexes is timing out. 
Sample run: https://builds.apache.org/job/PreCommit-PHOENIX-Build/1379/

Stacktrace:

{code}
2017-09-06 02:44:13,297 ERROR [main] 
org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
CREATE INDEX T000205 ON T000204 (a_integer DESC) INCLUDE (A_STRING, 
B_STRING, A_DATE) KEEP_DELETED_CELLS=false
java.sql.SQLTimeoutException: Operation timed out.
at 
org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
at 
org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
at 
org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
at 
org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
at 
org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
at 
org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
at 
org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:734)
at 
org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
at 
org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:117)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3359)
at 
org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1282)
at 
org.apache.phoenix.schema.MetaDataClient.buildIndexAtTimeStamp(MetaDataClient.java:1222)
at 
org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1588)
at 
org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:393)
at 
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:376)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:374)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:363)
at 
org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1707)
at org.apache.phoenix.end2end.BaseQueryIT.<init>(BaseQueryIT.java:139)
at org.apache.phoenix.end2end.NotQueryIT.<init>(NotQueryIT.java:56)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
{code}

{code}
2017-09-06 02:57:28,819 ERROR [main] 
org.apache.phoenix.end2end.BaseQueryIT(141): Exception while creating index: 
CREATE INDEX T000350 ON T000349 (a_integer, a_string) INCLUDE (B_STRING,
 A_DATE) KEEP_DELETED_CELLS=false
java.sql.SQLTimeoutException: Operation timed out.
at 
org.apache.phoenix.exception.SQLExceptionCode$15.newException(SQLExceptionCode.java:399)
at 
org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:932)
at 
org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:846)
at 
org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
at 
org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
at 
org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)

[jira] [Updated] (PHOENIX-4170) Remove rebuildIndexOnFailure param from MutableIndexFailureIT

2017-09-06 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4170:
--
Attachment: PHOENIX-4170.patch

[~jamestaylor], please review.

> Remove rebuildIndexOnFailure param from MutableIndexFailureIT
> -
>
> Key: PHOENIX-4170
> URL: https://issues.apache.org/jira/browse/PHOENIX-4170
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4170.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4170) Remove rebuildIndexOnFailure param from MutableIndexFailureIT

2017-09-06 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-4170:
-

 Summary: Remove rebuildIndexOnFailure param from 
MutableIndexFailureIT
 Key: PHOENIX-4170
 URL: https://issues.apache.org/jira/browse/PHOENIX-4170
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Samarth Jain






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4169) Explicitly cap timeout for index disable RPC on compaction

2017-09-06 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-4169:
-

 Summary: Explicitly cap timeout for index disable RPC on compaction
 Key: PHOENIX-4169
 URL: https://issues.apache.org/jira/browse/PHOENIX-4169
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.12.0
Reporter: Vincent Poon
Assignee: Vincent Poon


In PHOENIX-3953 we're marking the mutable global index as disabled with an 
index_disable_timestamp of 0 from the compaction hook. This is potentially a 
server-server RPC, and HConnectionManager#setServerSideHConnectionRetries makes 
it such that the HBase client config on the server side has 10 times the number 
of retries, lasting hours.
To avoid a hung coprocessor hook, we should explicitly cap the number of 
retries here.
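The fix the description calls for can be sketched as follows. This is a plain-Java stand-in for an HBase Configuration, not the actual patch: before issuing the index-disable RPC from the compaction hook, copy the server-side config and cap the retry count so the hook cannot hang for hours. The key name matches HBase's "hbase.client.retries.number"; the cap of 2 and the helper names are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.Map;

public class CapRetriesSketch {
    static final String RETRIES_KEY = "hbase.client.retries.number";

    // Return a copy of the config with the retry count capped. Only lowers
    // the value; never raises it above what the server already uses.
    static Map<String, String> withCappedRetries(Map<String, String> base, int cap) {
        Map<String, String> copy = new HashMap<>(base);
        int current = Integer.parseInt(copy.getOrDefault(RETRIES_KEY, "35"));
        copy.put(RETRIES_KEY, String.valueOf(Math.min(current, cap)));
        return copy;
    }

    public static void main(String[] args) {
        Map<String, String> serverSide = new HashMap<>();
        // 10x inflation from setServerSideHConnectionRetries
        serverSide.put(RETRIES_KEY, "350");
        System.out.println(withCappedRetries(serverSide, 2).get(RETRIES_KEY));
    }
}
```

The connection used for the disable RPC would then be created from the capped copy rather than the region server's inflated client config.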



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4168) Pluggable Remote User Extraction for Phoenix Query Server

2017-09-06 Thread Alex Araujo (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Araujo updated PHOENIX-4168:
-
Attachment: PHOENIX-4168.v1.patch

> Pluggable Remote User Extraction for Phoenix Query Server
> -
>
> Key: PHOENIX-4168
> URL: https://issues.apache.org/jira/browse/PHOENIX-4168
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Minor
> Attachments: PHOENIX-4168.v1.patch
>
>
> PQS supports impersonation by pulling a user's identity from an HTTP 
> parameter. Make this pluggable to allow other forms of extraction (for 
> example, pulling the identity out of an X509Certificate).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4168) Pluggable Remote User Extraction for Phoenix Query Server

2017-09-06 Thread Alex Araujo (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Araujo updated PHOENIX-4168:
-
Attachment: (was: PHOENIX-4168.v1.patch)

> Pluggable Remote User Extraction for Phoenix Query Server
> -
>
> Key: PHOENIX-4168
> URL: https://issues.apache.org/jira/browse/PHOENIX-4168
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Minor
>
> PQS supports impersonation by pulling a user's identity from an HTTP 
> parameter. Make this pluggable to allow other forms of extraction (for 
> example, pulling the identity out of an X509Certificate).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4168) Pluggable Remote User Extraction for Phoenix Query Server

2017-09-06 Thread Alex Araujo (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alex Araujo updated PHOENIX-4168:
-
Attachment: PHOENIX-4168.v1.patch

Patch for master. FYI [~apurtell] [~jamestaylor]

> Pluggable Remote User Extraction for Phoenix Query Server
> -
>
> Key: PHOENIX-4168
> URL: https://issues.apache.org/jira/browse/PHOENIX-4168
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Minor
> Attachments: PHOENIX-4168.v1.patch
>
>
> PQS supports impersonation by pulling a user's identity from an HTTP 
> parameter. Make this pluggable to allow other forms of extraction (for 
> example, pulling the identity out of an X509Certificate).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3953) Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction

2017-09-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155773#comment-16155773
 ] 

James Taylor commented on PHOENIX-3953:
---

If you could file a new JIRA and take it, that'd be much appreciated, 
[~vincentpoon]. I'm RMing for 4.12 and need to tie up the remaining loose ends 
so we can get a release out.

> Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction
> --
>
> Key: PHOENIX-3953
> URL: https://issues.apache.org/jira/browse/PHOENIX-3953
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3953_addendum1.patch, PHOENIX-3953.patch, 
> PHOENIX-3953_v2.patch
>
>
> To guard against a compaction occurring (which would potentially clear delete 
> markers and puts that the partial index rebuild process counts on to properly 
> catch up an index with the data table), we should clear the 
> INDEX_DISABLED_TIMESTAMP and mark the index as disabled. This could be done 
> in the post compaction coprocessor hook. At this point, a manual rebuild of 
> the index would be required.
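The post-compaction safeguard quoted above can be sketched with a hypothetical stand-in (this is not Phoenix's actual coprocessor code; the field and state names only mirror the description): after a compaction of the data table, a partially rebuilt index can no longer be caught up from deltas, so it is marked disabled with its disable timestamp cleared, forcing a manual rebuild.

```java
public class PostCompactSketch {
    enum IndexState { ACTIVE, INACTIVE, DISABLE }

    static final class IndexMeta {
        IndexState state = IndexState.INACTIVE; // mid-rebuild
        long indexDisableTimestamp = 42L;       // rebuild watermark
    }

    // Stand-in for the post-compaction coprocessor hook: delete markers and
    // puts that the partial rebuild relied on may be gone after compaction.
    static void postCompact(IndexMeta index) {
        index.state = IndexState.DISABLE;
        index.indexDisableTimestamp = 0L;       // 0 => manual rebuild required
    }

    public static void main(String[] args) {
        IndexMeta idx = new IndexMeta();
        postCompact(idx);
        System.out.println(idx.state + " " + idx.indexDisableTimestamp);
    }
}
```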



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3953) Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction

2017-09-06 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155761#comment-16155761
 ] 

Vincent Poon commented on PHOENIX-3953:
---

[~jamestaylor] in PHOENIX-3948, I found that HBase 0.98 has a bug (or 
"feature") where HConnectionManager#setServerSideHConnectionRetries makes it 
such that the HBase client config on the server side has 10 times the number of 
retries, so it will retry for hours. Worth double checking whether we need to 
lower it here too, inside the compaction critical path.

> Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction
> --
>
> Key: PHOENIX-3953
> URL: https://issues.apache.org/jira/browse/PHOENIX-3953
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3953_addendum1.patch, PHOENIX-3953.patch, 
> PHOENIX-3953_v2.patch
>
>
> To guard against a compaction occurring (which would potentially clear delete 
> markers and puts that the partial index rebuild process counts on to properly 
> catch up an index with the data table), we should clear the 
> INDEX_DISABLED_TIMESTAMP and mark the index as disabled. This could be done 
> in the post compaction coprocessor hook. At this point, a manual rebuild of 
> the index would be required.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4159) phoenix-spark tests are failing

2017-09-06 Thread Josh Mahonin (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155739#comment-16155739
 ] 

Josh Mahonin commented on PHOENIX-4159:
---

Pushed to master, 4.x-HBase-1.2, 4.x-HBase-1.1, 4.x-HBase-0.98

> phoenix-spark tests are failing
> ---
>
> Key: PHOENIX-4159
> URL: https://issues.apache.org/jira/browse/PHOENIX-4159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4159.patch
>
>
> In a few of the runs where we were able to get successful test runs for 
> phoenix-core, we ran into failures in the phoenix-spark module. 
> Sample run - https://builds.apache.org/job/Phoenix-master/1762/console
> [~jmahonin] - would you mind taking a look? Copy-pasting here a possibly 
> relevant stacktrace in case the link is no longer working:
> {code}
> Formatting using clusterid: testClusterID
> 1[ScalaTest-4] ERROR org.apache.hadoop.hdfs.MiniDFSCluster  - IOE 
> creating namenodes. Permissions dump:
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/data':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/data
>   permissions: 
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target': 
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark': 
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace/Phoenix-master': 
>   absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace': 
>   absolute:/home/jenkins/jenkins-slave/workspace
>   permissions: drwx
> path '/home/jenkins/jenkins-slave': 
>   absolute:/home/jenkins/jenkins-slave
>   permissions: drwx
> path '/home/jenkins': 
>   absolute:/home/jenkins
>   permissions: drwx
> path '/home': 
>   absolute:/home
>   permissions: dr-x
> path '/': 
>   absolute:/
>   permissions: dr-x
> java.io.IOException: Cannot create directory 
> /home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/name1/current
>   at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:342)
>   at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:176)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:973)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:811)
>   at 

[jira] [Commented] (PHOENIX-3953) Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction

2017-09-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155730#comment-16155730
 ] 

James Taylor commented on PHOENIX-3953:
---

Yes, the INDEX_DISABLE_TIMESTAMP is set to 0, which is the special value meaning 
"manual intervention is required at this point". It won't transition out of 
this state without being rebuilt (see also PHOENIX-4162, which fixed a corner 
case of this).
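As a rough illustration of the semantics described above (this helper is hypothetical, chosen for illustration only; it is not Phoenix's actual code, where the logic lives in the metadata and rebuild paths):

{code}
// Hypothetical illustration: a disabled index whose
// INDEX_DISABLE_TIMESTAMP is the special value 0 needs a manual
// rebuild, while a disabled index with a positive timestamp is a
// candidate for the partial rebuilder to catch up.
final class IndexRebuildAction {
    static String forIndex(boolean disabled, long indexDisableTimestamp) {
        if (!disabled) {
            return "NONE";                     // index is active
        }
        if (indexDisableTimestamp == 0L) {
            return "MANUAL_REBUILD_REQUIRED";  // special value: won't self-heal
        }
        return "PARTIAL_REBUILD";              // rebuilder catches up from ts
    }
}
{code}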

> Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction
> --
>
> Key: PHOENIX-3953
> URL: https://issues.apache.org/jira/browse/PHOENIX-3953
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3953_addendum1.patch, PHOENIX-3953.patch, 
> PHOENIX-3953_v2.patch
>
>
> To guard against a compaction occurring (which would potentially clear delete 
> markers and puts that the partial index rebuild process counts on to properly 
> catch up an index with the data table), we should clear the 
> INDEX_DISABLED_TIMESTAMP and mark the index as disabled. This could be done 
> in the post compaction coprocessor hook. At this point, a manual rebuild of 
> the index would be required.





[jira] [Created] (PHOENIX-4168) Pluggable Remote User Extraction for Phoenix Query Server

2017-09-06 Thread Alex Araujo (JIRA)
Alex Araujo created PHOENIX-4168:


 Summary: Pluggable Remote User Extraction for Phoenix Query Server
 Key: PHOENIX-4168
 URL: https://issues.apache.org/jira/browse/PHOENIX-4168
 Project: Phoenix
  Issue Type: Improvement
Reporter: Alex Araujo
Assignee: Alex Araujo
Priority: Minor


PQS supports impersonation by pulling a user's identity from an HTTP parameter. 
Make this pluggable to allow other forms of extraction (for example, pulling 
the identity out of an X509Certificate).
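A minimal sketch of what the plug point might look like. The interface and class names here are hypothetical, chosen for illustration, and the request is abstracted as parameter/header maps rather than the actual servlet API:

{code}
import java.util.Map;

// Hypothetical plug point for extracting the remote user from a request.
interface RemoteUserExtractor {
    String extractUser(Map<String, String> params, Map<String, String> headers);
}

// Today's behaviour: pull the identity from an HTTP parameter (e.g. doAs).
final class HttpParamUserExtractor implements RemoteUserExtractor {
    private final String paramName;
    HttpParamUserExtractor(String paramName) { this.paramName = paramName; }
    @Override
    public String extractUser(Map<String, String> params, Map<String, String> headers) {
        return params.get(paramName);
    }
}
{code}

A certificate-based implementation would instead parse the identity out of the presented X509Certificate, with the concrete class selected via configuration.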





[jira] [Comment Edited] (PHOENIX-3953) Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction

2017-09-06 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155722#comment-16155722
 ] 

Vincent Poon edited comment on PHOENIX-3953 at 9/6/17 5:17 PM:
---

[~jamestaylor] Is there a way for us to signal that this happened, perhaps by 
setting the indexDisableTimestamp to a special negative value?  At a minimum, I 
think we should add a log line to indicate this.  Otherwise from an operator 
perspective, we would see a disabled index and have to triage why the rebuilder 
didn't fix it.

Perhaps a new index state would make it even clearer.


was (Author: vincentpoon):
[~jamestaylor] Is there a way for us to signal that this happened, perhaps by 
setting the indexDisableTimestamp to a special negative value?  At a minimum, I 
think we should add a log line to indicate this.  Otherwise from an operator 
perspective, we would see a disabled index and have to triage why the rebuilder 
didn't fix it.

> Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction
> --
>
> Key: PHOENIX-3953
> URL: https://issues.apache.org/jira/browse/PHOENIX-3953
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3953_addendum1.patch, PHOENIX-3953.patch, 
> PHOENIX-3953_v2.patch
>
>
> To guard against a compaction occurring (which would potentially clear delete 
> markers and puts that the partial index rebuild process counts on to properly 
> catch up an index with the data table), we should clear the 
> INDEX_DISABLED_TIMESTAMP and mark the index as disabled. This could be done 
> in the post compaction coprocessor hook. At this point, a manual rebuild of 
> the index would be required.





[jira] [Commented] (PHOENIX-3953) Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction

2017-09-06 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155722#comment-16155722
 ] 

Vincent Poon commented on PHOENIX-3953:
---

[~jamestaylor] Is there a way for us to signal that this happened, perhaps by 
setting the indexDisableTimestamp to a special negative value?  At a minimum, I 
think we should add a log line to indicate this.  Otherwise from an operator 
perspective, we would see a disabled index and have to triage why the rebuilder 
didn't fix it.

> Clear INDEX_DISABLED_TIMESTAMP and disable index on compaction
> --
>
> Key: PHOENIX-3953
> URL: https://issues.apache.org/jira/browse/PHOENIX-3953
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3953_addendum1.patch, PHOENIX-3953.patch, 
> PHOENIX-3953_v2.patch
>
>
> To guard against a compaction occurring (which would potentially clear delete 
> markers and puts that the partial index rebuild process counts on to properly 
> catch up an index with the data table), we should clear the 
> INDEX_DISABLED_TIMESTAMP and mark the index as disabled. This could be done 
> in the post compaction coprocessor hook. At this point, a manual rebuild of 
> the index would be required.





[jira] [Commented] (PHOENIX-4159) phoenix-spark tests are failing

2017-09-06 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155677#comment-16155677
 ] 

Samarth Jain commented on PHOENIX-4159:
---

Looks great! Thanks, [~jmahonin]

> phoenix-spark tests are failing
> ---
>
> Key: PHOENIX-4159
> URL: https://issues.apache.org/jira/browse/PHOENIX-4159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4159.patch
>
>
> In a few of the runs where we were able to get successful test runs for 
> phoenix-core, we ran into failures in the phoenix-spark module. 
> Sample run - https://builds.apache.org/job/Phoenix-master/1762/console
> [~jmahonin] - would you mind taking a look? Copy-pasting here a possibly 
> relevant stacktrace in case the link is no longer working:
> {code}
> Formatting using clusterid: testClusterID
> 1[ScalaTest-4] ERROR org.apache.hadoop.hdfs.MiniDFSCluster  - IOE 
> creating namenodes. Permissions dump:
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/data':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/data
>   permissions: 
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target': 
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark': 
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace/Phoenix-master': 
>   absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace': 
>   absolute:/home/jenkins/jenkins-slave/workspace
>   permissions: drwx
> path '/home/jenkins/jenkins-slave': 
>   absolute:/home/jenkins/jenkins-slave
>   permissions: drwx
> path '/home/jenkins': 
>   absolute:/home/jenkins
>   permissions: drwx
> path '/home': 
>   absolute:/home
>   permissions: dr-x
> path '/': 
>   absolute:/
>   permissions: dr-x
> java.io.IOException: Cannot create directory 
> /home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/name1/current
>   at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:342)
>   at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:176)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:973)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:811)
>   at 

[jira] [Commented] (PHOENIX-4159) phoenix-spark tests are failing

2017-09-06 Thread Josh Mahonin (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155653#comment-16155653
 ] 

Josh Mahonin commented on PHOENIX-4159:
---

Patch attached. The specific issue had to do with not properly deleting the 
'TemporaryFolder' in the underlying BaseHBaseManagedTimeIT when it was used 
across two tests, nor were we properly invoking the underlying 
'cleanUpAfterTest' method. Scala isn't wonderful about extending abstract Java 
classes, and ScalaTest doesn't seem to support the @ClassRule annotation 
properly, so we're mocking the behaviour ourselves. It's not perfect, but it 
should work well enough and restores the originally intended behaviour.

Fixing the above issue surfaced another one: JUnit 4.12 doesn't support 
parallel test execution with TemporaryFolders. That's fixed in JUnit 4.13 (not 
yet released), so I've disabled parallel testing for now. The phoenix-spark 
integration tests take 4m48s on my machine.
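Serializing the test run could be done along these lines in the module's pom. This is a sketch only: the fragment below shows the general Maven Surefire knobs for single-forked execution, whereas phoenix-spark actually drives its tests through ScalaTest, so the real patch likely touches that plugin's configuration instead:

{code}
<!-- Sketch: run tests in a single reused fork so TemporaryFolder
     instances are never shared across parallel test JVMs.
     Illustrative only; the actual module uses a ScalaTest plugin. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkCount>1</forkCount>
    <reuseForks>true</reuseForks>
  </configuration>
</plugin>
{code}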

For review [~samarthjain]

> phoenix-spark tests are failing
> ---
>
> Key: PHOENIX-4159
> URL: https://issues.apache.org/jira/browse/PHOENIX-4159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4159.patch
>
>
> In a few of the runs where we were able to get successful test runs for 
> phoenix-core, we ran into failures in the phoenix-spark module. 
> Sample run - https://builds.apache.org/job/Phoenix-master/1762/console
> [~jmahonin] - would you mind taking a look? Copy-pasting here a possibly 
> relevant stacktrace in case the link is no longer working:
> {code}
> Formatting using clusterid: testClusterID
> 1[ScalaTest-4] ERROR org.apache.hadoop.hdfs.MiniDFSCluster  - IOE 
> creating namenodes. Permissions dump:
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/data':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/data
>   permissions: 
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target': 
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark': 
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace/Phoenix-master': 
>   absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace': 
>   absolute:/home/jenkins/jenkins-slave/workspace
>   permissions: drwx
> path '/home/jenkins/jenkins-slave': 
>   absolute:/home/jenkins/jenkins-slave
>   permissions: drwx
> path '/home/jenkins': 
>   absolute:/home/jenkins
>   permissions: drwx
> path '/home': 
>   absolute:/home
>   permissions: dr-x
> path '/': 
>   absolute:/
>   permissions: dr-x
> java.io.IOException: Cannot create directory 
> /home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/name1/current
>   at 
> 

[jira] [Commented] (PHOENIX-4159) phoenix-spark tests are failing

2017-09-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155651#comment-16155651
 ] 

James Taylor commented on PHOENIX-4159:
---

+1. Thanks so much for the quick turnaround on this, [~jmahonin].

> phoenix-spark tests are failing
> ---
>
> Key: PHOENIX-4159
> URL: https://issues.apache.org/jira/browse/PHOENIX-4159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4159.patch
>
>
> In a few of the runs where we were able to get successful test runs for 
> phoenix-core, we ran into failures in the phoenix-spark module. 
> Sample run - https://builds.apache.org/job/Phoenix-master/1762/console
> [~jmahonin] - would you mind taking a look? Copy-pasting here a possibly 
> relevant stacktrace in case the link is no longer working:
> {code}
> Formatting using clusterid: testClusterID
> 1[ScalaTest-4] ERROR org.apache.hadoop.hdfs.MiniDFSCluster  - IOE 
> creating namenodes. Permissions dump:
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/data':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/data
>   permissions: 
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target': 
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark': 
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace/Phoenix-master': 
>   absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace': 
>   absolute:/home/jenkins/jenkins-slave/workspace
>   permissions: drwx
> path '/home/jenkins/jenkins-slave': 
>   absolute:/home/jenkins/jenkins-slave
>   permissions: drwx
> path '/home/jenkins': 
>   absolute:/home/jenkins
>   permissions: drwx
> path '/home': 
>   absolute:/home
>   permissions: dr-x
> path '/': 
>   absolute:/
>   permissions: dr-x
> java.io.IOException: Cannot create directory 
> /home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/name1/current
>   at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:342)
>   at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:176)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:973)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:811)
>   at 

[jira] [Updated] (PHOENIX-4159) phoenix-spark tests are failing

2017-09-06 Thread Josh Mahonin (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Mahonin updated PHOENIX-4159:
--
Attachment: PHOENIX-4159.patch

> phoenix-spark tests are failing
> ---
>
> Key: PHOENIX-4159
> URL: https://issues.apache.org/jira/browse/PHOENIX-4159
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
> Attachments: PHOENIX-4159.patch
>
>
> In a few of the runs where we were able to get successful test runs for 
> phoenix-core, we ran into failures in the phoenix-spark module. 
> Sample run - https://builds.apache.org/job/Phoenix-master/1762/console
> [~jmahonin] - would you mind taking a look? Copy-pasting here a possibly 
> relevant stacktrace in case the link is no longer working:
> {code}
> Formatting using clusterid: testClusterID
> 1[ScalaTest-4] ERROR org.apache.hadoop.hdfs.MiniDFSCluster  - IOE 
> creating namenodes. Permissions dump:
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/data':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/data
>   permissions: 
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data':
>  
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data
>   permissions: drwx
> path 
> '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target': 
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark': 
>   
> absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace/Phoenix-master': 
>   absolute:/home/jenkins/jenkins-slave/workspace/Phoenix-master
>   permissions: drwx
> path '/home/jenkins/jenkins-slave/workspace': 
>   absolute:/home/jenkins/jenkins-slave/workspace
>   permissions: drwx
> path '/home/jenkins/jenkins-slave': 
>   absolute:/home/jenkins/jenkins-slave
>   permissions: drwx
> path '/home/jenkins': 
>   absolute:/home/jenkins
>   permissions: drwx
> path '/home': 
>   absolute:/home
>   permissions: dr-x
> path '/': 
>   absolute:/
>   permissions: dr-x
> java.io.IOException: Cannot create directory 
> /home/jenkins/jenkins-slave/workspace/Phoenix-master/phoenix-spark/target/test-data/fa615cb3-a0d9-4c9e-90eb-acd0c7d46d9b/dfscluster_1ce5f5c4-f355-4111-a763-4ab777941386/dfs/name1/current
>   at 
> org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:337)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:548)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:569)
>   at 
> org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:161)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:342)
>   at 
> org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:176)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:973)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:811)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:742)
>   

[jira] [Commented] (PHOENIX-4156) Fix flapping MutableIndexFailureIT

2017-09-06 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155632#comment-16155632
 ] 

James Taylor commented on PHOENIX-4156:
---

Yes, let's remove it. I'll remove that feature in a separate JIRA as it was 
never released.

> Fix flapping MutableIndexFailureIT
> --
>
> Key: PHOENIX-4156
> URL: https://issues.apache.org/jira/browse/PHOENIX-4156
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4156_v1.patch, PHOENIX-4156_v2.patch, 
> PHOENIX-4156_v3.patch, PHOENIX-4156_v4.patch, PHOENIX-4156_v5.patch
>
>






[jira] [Commented] (PHOENIX-4156) Fix flapping MutableIndexFailureIT

2017-09-06 Thread Samarth Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155622#comment-16155622
 ] 

Samarth Jain commented on PHOENIX-4156:
---

[~jamestaylor] - looks like the test is still flapping a bit but only for one 
case:

{code}
testIndexWriteFailure[MutableIndexFailureIT_transactional=false,localIndex=false,isNamespaceMapped=false,disableIndexOnWriteFailure=false,rebuildIndexOnWriteFailure=false,failRebuildTask=false,throwIndexWriteFailure=null]
{code}

It looks like in the case when rebuildIndexOnWriteFailure=false, we are 
replaying mutations by issuing DMLs with SCN:

{code}
private void replayMutations() throws SQLException {
    Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES);
    for (int i = 0; i < exceptions.size(); i++) {
        CommitException e = exceptions.get(i);
        long ts = e.getServerTimestamp();
        props.setProperty(PhoenixRuntime.REPLAY_AT_ATTRIB, Long.toString(ts));
        try (Connection conn = DriverManager.getConnection(getUrl(), props)) {
            if (i == 0) {
                updateTable(conn, false);
            } else if (i == 1) {
                updateTableAgain(conn, false);
            } else {
                fail();
            }
        }
    }
}
{code}

Considering we are working on disallowing DMLs with SCN, this makes the test 
case invalid. I guess it also makes the rebuildIndexOnWriteFailure=false mode 
less useful, because clients will no longer be able to replay the mutations to 
get the index back in sync. Should we remove this case from the test 
altogether?


> Fix flapping MutableIndexFailureIT
> --
>
> Key: PHOENIX-4156
> URL: https://issues.apache.org/jira/browse/PHOENIX-4156
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4156_v1.patch, PHOENIX-4156_v2.patch, 
> PHOENIX-4156_v3.patch, PHOENIX-4156_v4.patch, PHOENIX-4156_v5.patch
>
>






[jira] [Commented] (PHOENIX-4164) APPROX_COUNT_DISTINCT becomes imprecise at 20m unique values.

2017-09-06 Thread Ethan Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1616#comment-1616
 ] 

Ethan Wang commented on PHOENIX-4164:
-

Can you provide a sample of your v1? I made my local test table with 30 
million rows; approx_count_distinct gives me around 0.005 inaccuracy.

{code}
0: jdbc:phoenix:localhost:2181:/hbase> select count(id) from test;
+------------+
| COUNT(ID)  |
+------------+
| 30000000   |
+------------+
0: jdbc:phoenix:localhost:2181:/hbase> select approx_count_distinct(id) from test;
+----------------------------+
| APPROX_COUNT_DISTINCT(ID)  |
+----------------------------+
| 30048464                   |
+----------------------------+
{code}
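For reference, the numbers above can be sanity-checked against HyperLogLog's 
expected accuracy (APPROX_COUNT_DISTINCT is HLL-based; the register count m 
below is an assumed precision for illustration, not Phoenix's actual 
configuration):

```python
# Sanity-check sketch: compare the observed APPROX_COUNT_DISTINCT error above
# with HyperLogLog's theoretical standard error 1.04 / sqrt(m).
# m = 2**16 registers is an assumption, not Phoenix's actual setting.
import math

actual = 30_000_000
estimate = 30_048_464

observed_rel_error = abs(estimate - actual) / actual
theoretical_std_error = 1.04 / math.sqrt(2 ** 16)

print(f"observed relative error: {observed_rel_error:.4f}")    # ~0.0016
print(f"theoretical std error:   {theoretical_std_error:.4f}")  # ~0.0041
```

The observed error is well within one standard error of a sketch at that 
assumed precision, so the estimate above is behaving as expected.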

> APPROX_COUNT_DISTINCT becomes imprecise at 20m unique values.
> -
>
> Key: PHOENIX-4164
> URL: https://issues.apache.org/jira/browse/PHOENIX-4164
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Ethan Wang
>
> {code}
> 0: jdbc:phoenix:localhost> select count(*) from test;
> +-----------+
> | COUNT(1)  |
> +-----------+
> | 26931816  |
> +-----------+
> 1 row selected (14.604 seconds)
> 0: jdbc:phoenix:localhost> select approx_count_distinct(v1) from test;
> +----------------------------+
> | APPROX_COUNT_DISTINCT(V1)  |
> +----------------------------+
> | 17221394                   |
> +----------------------------+
> 1 row selected (21.619 seconds)
> {code}
> The table is generated from random numbers, and the cardinality of v1 is 
> close to the number of rows.
> (I cannot run a COUNT(DISTINCT(v1)), as it uses up all memory on my machine 
> and eventually kills the regionserver - that's another story and another jira)
> [~aertoria]





[jira] [Commented] (PHOENIX-4003) Document how to use snapshots for MR

2017-09-06 Thread Peter Conrad (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1615#comment-1615
 ] 

Peter Conrad commented on PHOENIX-4003:
---

Let me know if I can help here.

> Document how to use snapshots for MR
> 
>
> Key: PHOENIX-4003
> URL: https://issues.apache.org/jira/browse/PHOENIX-4003
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Akshita Malhotra
>
> Now that PHOENIX-3744 is resolved and released, we should update our website 
> to let users know how to take advantage of this cool new feature (i.e. the 
> new snapshot argument to IndexTool). This could be added to a couple of 
> places: http://phoenix.apache.org/phoenix_mr.html and maybe here: 
> http://phoenix.apache.org/pig_integration.html (is there a way to use 
> snapshots through our Pig integration? If not, we should file a JIRA and do 
> this).
> Directions to update the website are here: 
> http://phoenix.apache.org/building_website.html





[jira] [Updated] (PHOENIX-4167) Phoenix SELECT query returns duplicate data in the same varchar/char column if a trim() is applied on the column AND a distinct arbitrary column is generated in the que

2017-09-06 Thread Pulkit Bhardwaj (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pulkit Bhardwaj updated PHOENIX-4167:
-
Description: 
1. Created a simple table in phoenix

{code:sql}
create table test_select(nam VARCHAR(20), address VARCHAR(20), id BIGINT 
constraint my_pk primary key (id));
{code}

2. Insert a sample row

{code:sql}
upsert into test_select (nam, address,id) values('user','place',1);
{code}

3. Confirm that the row is present


{code:sql}
0: jdbc:phoenix:> select * from test_select;
+-------+----------+-----+
|  NAM  | ADDRESS  | ID  |
+-------+----------+-----+
| user  | place    | 1   |
+-------+----------+-----+
{code}


4. Now run the following query


{code:sql}
0: jdbc:phoenix:> select distinct 'arbitrary' as "test_column", trim(nam), 
trim(nam) from test_select;

This would generate the following output

+--------------+------------+------------+
| test_column  | TRIM(NAM)  | TRIM(NAM)  |
+--------------+------------+------------+
| arbitrary    | useruser   | useruser   |
+--------------+------------+------------+
{code}


As we can see, the output of trim(nam), which should have been 'user', is 
actually printed as 'useruser'.

The string is repeated once for each time the column appears in the select 
list. The following


{code:sql}
0: jdbc:phoenix:> select distinct 'arbitrary' as "test_column", trim(nam), 
trim(nam), trim(nam) from test_select;
{code}


Would generate


{code:sql}
+--------------+---------------+---------------+---------------+
| test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   TRIM(NAM)   |
+--------------+---------------+---------------+---------------+
| arbitrary    | useruseruser  | useruseruser  | useruseruser  |
+--------------+---------------+---------------+---------------+
{code}
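A minimal sketch of one way such a symptom can arise (purely hypothetical, 
not Phoenix's actual evaluation code): if every projected copy of TRIM(NAM) 
appended into a single shared buffer that was never reset between columns, 
every output cell would end up holding the value repeated once per projection:

```python
# Hypothetical model of the symptom: each projected TRIM(NAM) appends into
# one shared byte buffer instead of overwriting it, and every output cell
# keeps a reference to that same buffer.
class SharedBufferTrim:
    def __init__(self):
        self.buf = bytearray()              # shared across projected columns

    def evaluate(self, value: str) -> bytearray:
        self.buf += value.strip().encode()  # append instead of reset: the bug
        return self.buf                     # every cell sees the SAME buffer

expr = SharedBufferTrim()
cells = [expr.evaluate('user') for _ in range(3)]  # three TRIM(NAM) columns
print([c.decode() for c in cells])
# -> ['useruseruser', 'useruseruser', 'useruseruser'], as in the report
```

This is only a model of the observed behavior; the actual root cause would 
have to be confirmed in Phoenix's projection/expression code.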


A couple of things to notice

1. If I remove the {{distinct 'arbitrary' as "test_column"}}, the issue is not 
seen


{code:sql}
0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
+------------+------------+------------+
| TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
+------------+------------+------------+
| user       | user       | user       |
+------------+------------+------------+
{code}


2. If I remove the trim() again the issue is not seen


{code:sql}
0: jdbc:phoenix:> select distinct 'arbitrary' as "test_column" ,nam, nam from 
test_select;
+--------------+-------+-------+
| test_column  |  NAM  |  NAM  |
+--------------+-------+-------+
| arbitrary    | user  | user  |
+--------------+-------+-------+
{code}



  was:
1. Created a simple table in phoenix

{code:sql}
create table test_select(nam VARCHAR(20), address VARCHAR(20), id BIGINT 
constraint my_pk primary key (id));
{code}

2. Insert a sample row

{code:sql}
upsert into test_select (nam, address,id) values('user','place',1);
{code}

3. Confirm that the row is present


{code:sql}
0: jdbc:phoenix:> select * from test_select;
+-+--+-+
|   NAM   | ADDRESS  | ID  |
+-+--+-+
| user  | place   | 1   |
+-+--+-+
{code}


4. Now run the following query


{code:sql}
0: jdbc:phoenix:> select distinct 'arbitrary' as "test_column", trim(nam), 
trim(nam) from test_select;

This would generate the following output

+--+++
| test_column  |   TRIM(NAM)|   TRIM(NAM)|
+--+++
| arbitrary  | useruser  | useruser  |
+--+++
{code}


As we can see the output for the trim(name) which should have been 'user' is 
actually printed as 'useruser'

The concatenation to the string is actually the number of times the column is 
printed.
The following


{code:sql}
0: jdbc:phoenix:> select distinct 'arbitrary' as "test_column", trim(nam), 
trim(nam), trim(nam) from test_select;
{code}


Would generate


{code:sql}
+--+---+---+---+
| test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
TRIM(NAM)   |
+--+---+---+---+
| arbitrary  | useruseruser  | useruseruser  | useruseruser  |
+--+---+---+---+
{code}


A couple of things to notice

1. If I remove the —— distinct 'harshit' as "test_column" ——  The issue is not 
seen


{code:sql}
0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
++++
| TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
++++
| user | user | user |
++++
{code}


2. If I remove the trim() again, the issue is not seen

[jira] [Commented] (PHOENIX-3460) Phoenix Spark plugin cannot find table with a Namespace prefix

2017-09-06 Thread Stas Sukhanov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16155492#comment-16155492
 ] 

Stas Sukhanov commented on PHOENIX-3460:


Hi, I have the same problem and conducted some investigation. There is a 
problem in {{org.apache.phoenix.util.PhoenixRuntime}} in the 
[getTable|https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java#L442]
 and 
[generateColumnInfo|https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixRuntime.java#L469]
 methods when one uses a namespace rather than a schema (e.g. 
"namespace:table").

Code in phoenix-spark calls {{generateColumnInfo}}, which removes the quotes 
by calling {{SchemaUtil.normalizeFullTableName(tableName)}} and passes the 
call on to {{getTable}}. When {{getTable}} fails to find the table in the 
cache, it falls back to the catch block. Without the quotes, that block treats 
the namespace as a schema and fails by throwing an exception with the original 
table name. Unfortunately there is no good workaround. One option is to call 
{{MetaDataClient.updateCache}} manually beforehand to fill the cache; then 
{{getTable}} works on the driver, but you will most likely still get the 
exception on the workers.

In our project we included phoenix-core in a shaded jar and replaced 
{{PhoenixRuntime}} with our own implementation that doesn't convert the 
namespace to a schema.
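The failure mode can be sketched roughly like this (a toy model under stated 
assumptions; normalize() and the fallback split below are simplifications, 
not the real SchemaUtil/PhoenixRuntime code):

```python
# Toy model of the bug described above (not the real Phoenix code):
# normalization strips the quotes that mark "ACME:ENDPOINT_STATUS" as a
# single namespace-mapped name, and the fallback then splits it as if the
# namespace were a schema.
def normalize(table_name: str) -> str:
    # rough stand-in for SchemaUtil.normalizeFullTableName: drop the quotes
    return table_name.replace('"', '')

def fallback_split(table_name: str):
    # the catch-block fallback: with the quotes gone, everything before the
    # separator is treated as a schema name
    for sep in ('.', ':'):
        if sep in table_name:
            schema, table = table_name.split(sep, 1)
            return schema, table
    return None, table_name

quoted = '"ACME:ENDPOINT_STATUS"'   # namespace-mapped and quoted: unambiguous
print(fallback_split(normalize(quoted)))
# -> ('ACME', 'ENDPOINT_STATUS'): the namespace is now treated as a schema
```

Once the lookup runs with schema=ACME instead of namespace ACME, the table is 
not found and the TableNotFoundException above follows.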



> Phoenix Spark plugin cannot find table with a Namespace prefix
> --
>
> Key: PHOENIX-3460
> URL: https://issues.apache.org/jira/browse/PHOENIX-3460
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: HDP 2.5
>Reporter: Xindian Long
>  Labels: namespaces, phoenix, spark
> Fix For: 4.7.0
>
>
> I am testing some code using the Phoenix Spark plugin to read a Phoenix 
> table with a namespace prefix in the table name (the table is created as a 
> Phoenix table, not an HBase table), but it returns a TableNotFoundException.
> The table is obviously there because I can query it using plain phoenix sql 
> through Squirrel. In addition, using spark sql to query it has no problem at 
> all.
> I am running on the HDP 2.5 platform, with phoenix 4.7.0.2.5.0.0-1245
> The problem does not exist at all when I was running the same code on HDP 2.4 
> cluster, with phoenix 4.4.
> Neither does the problem occur when I query a table without a namespace 
> prefix in the DB table name, on HDP 2.5
> The log is in the attached file: tableNoFound.txt
> My testing code is also attached.
> The weird thing is in the attached code, if I run testSpark alone it gives 
> the above exception, but if I run the testJdbc first, and followed by 
> testSpark, both of them work.
>  After changing to create table by using
> create table ACME.ENDPOINT_STATUS
> The phoenix-spark plug in seems working. I also find some weird behavior,
> If I do both the following
> create table ACME.ENDPOINT_STATUS ...
> create table "ACME:ENDPOINT_STATUS" ...
> Both tables show up in Phoenix: the first one shows as schema ACME with 
> table name ENDPOINT_STATUS, and the latter shows as schema none with table 
> name ACME:ENDPOINT_STATUS.
> However, in HBase I only see the one table ACME:ENDPOINT_STATUS. In 
> addition, upserts into the table ACME.ENDPOINT_STATUS show up in the other 
> table, and the other way around.
>  





[jira] [Updated] (PHOENIX-4167) Phoenix SELECT query returns duplicate data in the same varchar/char column if a trim() is applied on the column AND a distinct arbitrary column is generated in the que

2017-09-06 Thread Pulkit Bhardwaj (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pulkit Bhardwaj updated PHOENIX-4167:
-
Description: 
1. Created a simple table in phoenix

{code:sql}
create table test_select(nam VARCHAR(20), address VARCHAR(20), id BIGINT 
constraint my_pk primary key (id));
{code}

2. Insert a sample row

{code:sql}
upsert into test_select (nam, address,id) values('user','place',1);
{code}

3. Confirm that the row is present


{code:sql}
0: jdbc:phoenix:> select * from test_select;
+-+--+-+
|   NAM   | ADDRESS  | ID  |
+-+--+-+
| user  | place   | 1   |
+-+--+-+
{code}


4. Now run the following query


{code:sql}
0: jdbc:phoenix:> select distinct 'arbitrary' as "test_column", trim(nam), 
trim(nam) from test_select;

This would generate the following output

+--+++
| test_column  |   TRIM(NAM)|   TRIM(NAM)|
+--+++
| arbitrary  | useruser  | useruser  |
+--+++
{code}


As we can see the output for the trim(name) which should have been 'user' is 
actually printed as 'useruser'

The concatenation to the string is actually the number of times the column is 
printed.
The following


{code:sql}
0: jdbc:phoenix:> select distinct 'arbitrary' as "test_column", trim(nam), 
trim(nam), trim(nam) from test_select;
{code}


Would generate


{code:sql}
+--+---+---+---+
| test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
TRIM(NAM)   |
+--+---+---+---+
| arbitrary  | useruseruser  | useruseruser  | useruseruser  |
+--+---+---+---+
{code}


A couple of things to notice

1. If I remove the —— distinct 'harshit' as "test_column" ——  The issue is not 
seen


{code:sql}
0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
++++
| TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
++++
| user | user | user |
++++
{code}


2. If I remove the trim() again the issue is not seen


{code:sql}
0: jdbc:phoenix:> select  trim(nam), trim(nam) from test_select;
+++
| TRIM(NAM)  | TRIM(NAM)  |
+++
| user | user |
+++
{code}



  was:
1. Created a simple table in phoenix

{code:sql}
create table test_select(nam VARCHAR(20), address VARCHAR(20), id BIGINT 
constraint my_pk primary key (id));

{code}

2. Insert a sample row

{code:sql}
upsert into test_select (nam, address,id) values('user','place',1);
{code}

3. Confirm that the row is present


{code:sql}
0: jdbc:phoenix:> select * from test_select;
+-+--+-+
|   NAM   | ADDRESS  | ID  |
+-+--+-+
| user  | place   | 1   |
+-+--+-+
{code}


4. Now run the following query

0: jdbc:phoenix:> select distinct 'arbitrary' as "test_column", trim(nam), 
trim(nam) from test_select;

This would generate the following output

+--+++
| test_column  |   TRIM(NAM)|   TRIM(NAM)|
+--+++
| arbitrary  | useruser  | useruser  |
+--+++

As we can see the output for the trim(name) which should have been 'user' is 
actually printed as 'useruser'

The concatenation to the string is actually the number of times the column is 
printed.
The following

0: jdbc:phoenix:> select distinct 'arbitrary' as "test_column", trim(nam), 
trim(nam), trim(nam) from test_select;

Would generate

+--+---+---+---+
| test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
TRIM(NAM)   |
+--+---+---+---+
| arbitrary  | useruseruser  | useruseruser  | useruseruser  |
+--+---+---+---+

A couple of things to notice

1. If I remove the —— distinct 'harshit' as "test_column" ——  The issue is not 
seen

0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
++++
| TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
++++
| user | user | user |
++++

2. If I remove the trim() again the issue is not seen

0: jdbc:phoenix:> select  trim(nam), trim(nam) from test_select;
+++
| TRIM(NAM)  | TRIM(NAM)  |

[jira] [Updated] (PHOENIX-4167) Phoenix SELECT query returns duplicate data in the same varchar/char column if a trim() is applied on the column AND a distinct arbitrary column is generated in the que

2017-09-06 Thread Pulkit Bhardwaj (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pulkit Bhardwaj updated PHOENIX-4167:
-
Description: 
1. Created a simple table in phoenix

{code:sql}
create table test_select(nam VARCHAR(20), address VARCHAR(20), id BIGINT 
constraint my_pk primary key (id));

{code}

2. Insert a sample row

{code:sql}
upsert into test_select (nam, address,id) values('user','place',1);
{code}

3. Confirm that the row is present


{code:sql}
0: jdbc:phoenix:> select * from test_select;
+-+--+-+
|   NAM   | ADDRESS  | ID  |
+-+--+-+
| user  | place   | 1   |
+-+--+-+
{code}


4. Now run the following query

0: jdbc:phoenix:> select distinct 'arbitrary' as "test_column", trim(nam), 
trim(nam) from test_select;

This would generate the following output

+--+++
| test_column  |   TRIM(NAM)|   TRIM(NAM)|
+--+++
| arbitrary  | useruser  | useruser  |
+--+++

As we can see the output for the trim(name) which should have been 'user' is 
actually printed as 'useruser'

The concatenation to the string is actually the number of times the column is 
printed.
The following

0: jdbc:phoenix:> select distinct 'arbitrary' as "test_column", trim(nam), 
trim(nam), trim(nam) from test_select;

Would generate

+--+---+---+---+
| test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
TRIM(NAM)   |
+--+---+---+---+
| arbitrary  | useruseruser  | useruseruser  | useruseruser  |
+--+---+---+---+

A couple of things to notice

1. If I remove the —— distinct 'harshit' as "test_column" ——  The issue is not 
seen

0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
++++
| TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
++++
| user | user | user |
++++

2. If I remove the trim() again the issue is not seen

0: jdbc:phoenix:> select  trim(nam), trim(nam) from test_select;
+++
| TRIM(NAM)  | TRIM(NAM)  |
+++
| user | user |
+++


  was:
1. Created a simple table in phoenix

{code:sql}
create table test_select(nam VARCHAR(20), address VARCHAR(20), id BIGINT 
constraint my_pk primary key (id));

{code}

2. Insert a sample row

{code:sql}
upsert into test_select (nam, address,id) values('user','place',1);
{code}

3. Confirm that the row is present

0: jdbc:phoenix:> select * from test_select;
+-+--+-+
|   NAM   | ADDRESS  | ID  |
+-+--+-+
| user  | place   | 1   |
+-+--+-+

4. Now run the following query

0: jdbc:phoenix:> select distinct 'arbitrary' as "test_column", trim(nam), 
trim(nam) from test_select;

This would generate the following output

+--+++
| test_column  |   TRIM(NAM)|   TRIM(NAM)|
+--+++
| arbitrary  | useruser  | useruser  |
+--+++

As we can see the output for the trim(name) which should have been 'user' is 
actually printed as 'useruser'

The concatenation to the string is actually the number of times the column is 
printed.
The following

0: jdbc:phoenix:> select distinct 'arbitrary' as "test_column", trim(nam), 
trim(nam), trim(nam) from test_select;

Would generate

+--+---+---+---+
| test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
TRIM(NAM)   |
+--+---+---+---+
| arbitrary  | useruseruser  | useruseruser  | useruseruser  |
+--+---+---+---+

A couple of things to notice

1. If I remove the —— distinct 'harshit' as "test_column" ——  The issue is not 
seen

0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
++++
| TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
++++
| user | user | user |
++++

2. If I remove the trim() again the issue is not seen

0: jdbc:phoenix:> select  trim(nam), trim(nam) from test_select;
+++
| TRIM(NAM)  | TRIM(NAM)  |
+++
| user | user |
+++



> Phoenix SELECT query returns duplicate data in 
