[jira] [Updated] (PHOENIX-4140) Disable HiveTezIT and HiveMapReduceIT for 4.x-HBase-0.98

2017-08-29 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4140:
--
Summary: Disable HiveTezIT and HiveMapReduceIT for 4.x-HBase-0.98  (was: 
Disable HiveTezIT and HiveMapReduceIT since they don't work most of the times)

> Disable HiveTezIT and HiveMapReduceIT for 4.x-HBase-0.98
> 
>
> Key: PHOENIX-4140
> URL: https://issues.apache.org/jira/browse/PHOENIX-4140
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4140.patch
>
>
> I have never seen the HiveTezIT and HiveMapReduceIT complete successfully. 
> Locally, on my laptop too, I was unable to get these tests to run 
> successfully. 
> See a sample run where they failed - 
> https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1647/console
> On my laptop, these tests failed with an OOM. I had to override the permgen 
> memory to 256m to get the tests to even start. 
> FYI, [~sergey.soldatov]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (PHOENIX-4141) Fix flapping TableSnapshotReadsMapReduceIT

2017-08-29 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain resolved PHOENIX-4141.
---
   Resolution: Fixed
Fix Version/s: 4.12.0

> Fix flapping TableSnapshotReadsMapReduceIT
> --
>
> Key: PHOENIX-4141
> URL: https://issues.apache.org/jira/browse/PHOENIX-4141
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4141.patch
>
>






[jira] [Updated] (PHOENIX-4141) Fix flapping TableSnapshotReadsMapReduceIT

2017-08-29 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4141:
--
Attachment: PHOENIX-4141.patch

> Fix flapping TableSnapshotReadsMapReduceIT
> --
>
> Key: PHOENIX-4141
> URL: https://issues.apache.org/jira/browse/PHOENIX-4141
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4141.patch
>
>






[jira] [Created] (PHOENIX-4141) Fix flapping TableSnapshotReadsMapReduceIT

2017-08-29 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-4141:
-

 Summary: Fix flapping TableSnapshotReadsMapReduceIT
 Key: PHOENIX-4141
 URL: https://issues.apache.org/jira/browse/PHOENIX-4141
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Samarth Jain








[jira] [Updated] (PHOENIX-4133) [hive] ColumnInfo list should be reordered and filtered refer the hive tables

2017-08-29 Thread ZhuQQ (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ZhuQQ updated PHOENIX-4133:
---
Description: 
In some cases, we create Hive tables whose columns are in a different order and 
may not contain all of the columns in the Phoenix table; we then found that 
`INSERT INTO test SELECT ...` does not work well.

For example:
{code:sql}
-- In Phoenix:
CREATE TABLE IF NOT EXISTS test (
 key1 VARCHAR NOT NULL,
 key2 INTEGER NOT NULL,
 key3 VARCHAR,
 pv BIGINT,
 uv BIGINT,
 CONSTRAINT PK PRIMARY KEY (key1, key2, key3)
);
{code}
{code:sql}
-- In Hive:
CREATE EXTERNAL TABLE test.test_part (
 key1 string,
 key2 int,
 pv bigint
)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES (
  "phoenix.table.name" = "test",
  "phoenix.zookeeper.quorum" = "localhost",
  "phoenix.zookeeper.znode.parent" = "/hbase",
  "phoenix.zookeeper.client.port" = "2181",
  "phoenix.rowkeys" = "key1,key2",
  "phoenix.column.mapping" = "key1:key1,key2:key2,pv:pv"
);
CREATE EXTERNAL TABLE test.test_uv (
 key1 string,
 key2 int,
 key3 string,
 uv bigint
)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES (
  "phoenix.table.name" = "test",
  "phoenix.zookeeper.quorum" = "localhost",
  "phoenix.zookeeper.znode.parent" = "/hbase",
  "phoenix.zookeeper.client.port" = "2181",
  "phoenix.rowkeys" = "key1,key2,key3",
  "phoenix.column.mapping" = "key1:key1,key2:key2,key3:key3,uv:uv"
);
{code}

Then insert into {{test.test_part}}:
{code:sql}
INSERT INTO test.test_part SELECT 'some key', 20170828,80;
{code}
This throws an error:
{code:java}
ERROR 203 (22005): Type mismatch. BIGINT cannot be coerced to VARCHAR
{code}
And insert into {{test.test_uv}}:
{code:sql}
INSERT INTO test.test_uv SELECT 'some key',20170828,'linux',11;
{code}
The job executed successfully, but pv was overridden to 11 and uv is still NULL.

PS: I haven't tested other versions, but from checking the latest source code, 
newer versions may have the same problem.
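To make the reported mismatch concrete: the two failures above are consistent with the Phoenix-side column list being consumed positionally instead of being matched to the Hive table's columns. As an illustration only (the class and method names below are hypothetical stand-ins, not the actual Phoenix-Hive connector API), the reorder-and-filter step the summary asks for could look like:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;

public class ColumnReorderSketch {

    // Simplified, hypothetical stand-in for org.apache.phoenix.util.ColumnInfo.
    public static final class ColumnInfo {
        public final String name;
        public final String sqlType;
        public ColumnInfo(String name, String sqlType) {
            this.name = name;
            this.sqlType = sqlType;
        }
    }

    // Keep only the Phoenix columns that the Hive table declares, in the
    // Hive table's column order; fail fast on an unmapped Hive column.
    public static List<ColumnInfo> reorderAndFilter(List<ColumnInfo> phoenixColumns,
                                                    List<String> hiveColumns) {
        Map<String, ColumnInfo> byName = new LinkedHashMap<>();
        for (ColumnInfo c : phoenixColumns) {
            byName.put(c.name.toLowerCase(Locale.ROOT), c);
        }
        List<ColumnInfo> ordered = new ArrayList<>();
        for (String hiveColumn : hiveColumns) {
            ColumnInfo c = byName.get(hiveColumn.toLowerCase(Locale.ROOT));
            if (c == null) {
                throw new IllegalArgumentException(
                        "Hive column " + hiveColumn + " has no matching Phoenix column");
            }
            ordered.add(c);
        }
        return ordered;
    }
}
```

With such a step, the {{test.test_part}} upsert would bind 'some key', 20170828, 80 to key1, key2, pv rather than to the first three Phoenix columns.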


  was:
In some case, we create hive tables with different order, and may not contains 
all columns in the phoenix tables, then we found `INSERT INTO test SELECT ...` 
not works well.

For example:
{code:sql}
-- In Phoenix:
CREATE TABLE IF NOT EXISTS test (
 key1 VARCHAR NOT NULL,
 key2 INTEGER NOT NULL,
 key3 VARCHAR,
 pv BIGINT,
 uv BIGINT,
 CONSTRAINT PK PRIMARY KEY (key1, key2, key3)
);
{code}
{code:sql}
-- In Hive:
CREATE EXTERNAL TABLE test.test_part (
 key1 string,
 key2 int,
 pv bigint
)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES (
  "phoenix.table.name" = "test",
  "phoenix.zookeeper.quorum" = "localhost",
  "phoenix.zookeeper.znode.parent" = "/hbase",
  "phoenix.zookeeper.client.port" = "2181",
  "phoenix.rowkeys" = "key1,key2",
  "phoenix.column.mapping" = "key1:key1,key2:key2,pv:pv"
);
CREATE EXTERNAL TABLE test.test_uv (
 key1 string,
 key2 int,
 key3 string,
 app_version string,
 channel string,
 uv bigint
)
STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
TBLPROPERTIES (
  "phoenix.table.name" = "test",
  "phoenix.zookeeper.quorum" = "localhost",
  "phoenix.zookeeper.znode.parent" = "/hbase",
  "phoenix.zookeeper.client.port" = "2181",
  "phoenix.rowkeys" = "key1,key2,key3",
  "phoenix.column.mapping" = "key1:key1,key2:key2,key3:key3,uv:uv"
);
{code}

Then insert to {{test.test_part}}:
{code:sql}
INSERT INTO test.test_part SELECT 'some key', 20170828,80;
{code}
throws error: 
{code:java}
ERROR 203 (22005): Type mismatch. BIGINT cannot be coerced to VARCHAR
{code}
And insert to {{test.test_uv}}:
{code:sql}
INSERT INTO test.test_uv SELECT 'some key',20170828,'linux',11;
{code}
Job executed successfully, but pv is overrided to 11 and uv is still NULL.

PS: haven't test other versions, but by checking the latest source code, new 
versions may also have same problems



> [hive] ColumnInfo list should be reordered and filtered refer the hive tables
> -
>
> Key: PHOENIX-4133
> URL: https://issues.apache.org/jira/browse/PHOENIX-4133
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.9.0
>Reporter: ZhuQQ
>
> In some case, we create hive tables with different order, and may not 
> contains all columns in the phoenix tables, then we found `INSERT INTO test 
> SELECT ...` not works well.
> For example:
> {code:sql}
> -- In Phoenix:
> CREATE TABLE IF NOT EXISTS test (
>  key1 VARCHAR NOT NULL,
>  key2 INTEGER NOT NULL,
>  key3 VARCHAR,
>  pv BIGINT,
>  uv BIGINT,
>  CONSTRAINT PK PRIMARY KEY (key1, key2, key3)
> );
> {code}
> {code:sql}
> -- In Hive:
> CREATE EXTERNAL TABLE test.test_part (
>  key1 string,
>  key2 int,
>  pv bigint
> )
> STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler'
> 

[jira] [Commented] (PHOENIX-4080) The error message for version mismatch is not accurate.

2017-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146489#comment-16146489
 ] 

Hadoop QA commented on PHOENIX-4080:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12884366/PHOENIX-4080-v3.patch
  against master branch at commit 437402d4850bebcd769858b756c1e08abf544b00.
  ATTACHMENT ID: 12884366

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
62 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+StringBuilder buf = new StringBuilder("Newer Phoenix clients can't communicate with older Phoenix servers. The following servers require an updated " + QueryConstants.DEFAULT_COPROCESS_JAR_NAME + " to be put in the classpath of HBase: ");
+.setMessage("Ensure that " + QueryConstants.DEFAULT_COPROCESS_JAR_NAME + " is put on the classpath of HBase in every region server: " + t.getMessage())

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1320//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1320//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1320//console

This message is automatically generated.

> The error message for version mismatch is not accurate.
> ---
>
> Key: PHOENIX-4080
> URL: https://issues.apache.org/jira/browse/PHOENIX-4080
> Project: Phoenix
>  Issue Type: Wish
>Affects Versions: 4.11.0
>Reporter: Ethan Wang
>Assignee: Ethan Wang
> Attachments: PHOENIX-4080.patch, PHOENIX-4080-v2.patch, 
> PHOENIX-4080-v3.patch
>
>
> When accessing a cluster running 4.10 with a 4.11 client, the error message 
> reads: The following servers require an updated phoenix.jar to be put in the 
> classpath of HBase: region=SYSTEM.CATALOG
> It should say phoenix-[version]-server.jar rather than phoenix.jar





[jira] [Commented] (PHOENIX-4140) Disable HiveTezIT and HiveMapReduceIT since they don't work most of the times

2017-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146484#comment-16146484
 ] 

Hudson commented on PHOENIX-4140:
-

SUCCESS: Integrated in Jenkins build Phoenix-master #1756 (See 
[https://builds.apache.org/job/Phoenix-master/1756/])
PHOENIX-4140 Disable HiveTezIT and HiveMapReduceIT since they don't work 
(samarth: rev 4ca7a0791841ba504ac3daac9fc6e8fec0c148e6)
* (edit) phoenix-hive/src/it/java/org/apache/phoenix/hive/HiveMapReduceIT.java
* (edit) phoenix-hive/src/it/java/org/apache/phoenix/hive/HiveTezIT.java


> Disable HiveTezIT and HiveMapReduceIT since they don't work most of the times
> -
>
> Key: PHOENIX-4140
> URL: https://issues.apache.org/jira/browse/PHOENIX-4140
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4140.patch
>
>
> I have never seen the HiveTezIT and HiveMapReduceIT complete successfully. 
> Locally, on my laptop too, I was unable to get these tests to run 
> successfully. 
> See a sample run where they failed - 
> https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1647/console
> On my laptop, these tests failed with an OOM. I had to override the permgen 
> memory to 256m to get the tests to even start. 
> FYI, [~sergey.soldatov]





[jira] [Updated] (PHOENIX-4096) Disallow DML operations on connections with CURRENT_SCN set

2017-08-29 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4096:
--
Attachment: PHOENIX-4096_wip.patch

Converted 3 of 23 unit tests (along with cleaning up a couple of others).

> Disallow DML operations on connections with CURRENT_SCN set
> ---
>
> Key: PHOENIX-4096
> URL: https://issues.apache.org/jira/browse/PHOENIX-4096
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-4096_wip.patch
>
>
> We should make a connection read-only if CURRENT_SCN is set. It's a bad idea 
> to go back in time and update data, and it won't work with secondary 
> indexing, potentially leaving your index and table out of sync.
> For testing purposes, where we need to control the timestamp, we should 
> instead rely on the EnvironmentEdgeManager to control the current time.
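The EnvironmentEdgeManager approach mentioned above boils down to an injectable clock that tests can control without setting CURRENT_SCN. A minimal self-contained sketch of that pattern (class names here are modeled on, but not identical to, the real HBase/Phoenix classes):

```java
public class ClockSketch {

    // The time source abstraction, analogous to an EnvironmentEdge.
    public interface EnvironmentEdge {
        long currentTimeMillis();
    }

    // Production behavior: delegate to the system clock.
    public static final class DefaultEdge implements EnvironmentEdge {
        public long currentTimeMillis() { return System.currentTimeMillis(); }
    }

    // Test behavior: a manually advanced clock, so tests control timestamps
    // without mutating data "back in time" through a CURRENT_SCN connection.
    public static final class ManualEdge implements EnvironmentEdge {
        private long time;
        public ManualEdge(long start) { this.time = start; }
        public long currentTimeMillis() { return time; }
        public void incrementValue(long delta) { time += delta; }
    }

    private static volatile EnvironmentEdge edge = new DefaultEdge();

    // Tests inject a ManualEdge; all timestamp reads go through here.
    public static void injectEdge(EnvironmentEdge e) { edge = e; }
    public static long currentTimeMillis() { return edge.currentTimeMillis(); }
}
```

A test would inject a ManualEdge, write a row, advance the clock, and write again, instead of opening a second connection with CURRENT_SCN set.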





[jira] [Commented] (PHOENIX-4140) Disable HiveTezIT and HiveMapReduceIT since they don't work most of the times

2017-08-29 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146435#comment-16146435
 ] 

Sergey Soldatov commented on PHOENIX-4140:
--

[~samarthjain] Are you running it with the 0.98 branch only? Those tests create 
two miniclusters, and somehow the 0.98 HBase minicluster doesn't want to coexist 
with another DFS cluster. I believe I've already created a JIRA to disable them 
for the 0.98 branch.

> Disable HiveTezIT and HiveMapReduceIT since they don't work most of the times
> -
>
> Key: PHOENIX-4140
> URL: https://issues.apache.org/jira/browse/PHOENIX-4140
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4140.patch
>
>
> I have never seen the HiveTezIT and HiveMapReduceIT complete successfully. 
> Locally, on my laptop too, I was unable to get these tests to run 
> successfully. 
> See a sample run where they failed - 
> https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1647/console
> On my laptop, these tests failed with an OOM. I had to override the permgen 
> memory to 256m to get the tests to even start. 
> FYI, [~sergey.soldatov]





[jira] [Resolved] (PHOENIX-4140) Disable HiveTezIT and HiveMapReduceIT since they don't work most of the times

2017-08-29 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain resolved PHOENIX-4140.
---
   Resolution: Fixed
Fix Version/s: 4.12.0

> Disable HiveTezIT and HiveMapReduceIT since they don't work most of the times
> -
>
> Key: PHOENIX-4140
> URL: https://issues.apache.org/jira/browse/PHOENIX-4140
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4140.patch
>
>
> I have never seen the HiveTezIT and HiveMapReduceIT complete successfully. 
> Locally, on my laptop too, I was unable to get these tests to run 
> successfully. 
> See a sample run where they failed - 
> https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1647/console
> On my laptop, these tests failed with an OOM. I had to override the permgen 
> memory to 256m to get the tests to even start. 
> FYI, [~sergey.soldatov]





[jira] [Updated] (PHOENIX-4140) Disable HiveTezIT and HiveMapReduceIT since they don't work most of the times

2017-08-29 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4140:
--
Attachment: PHOENIX-4140.patch

> Disable HiveTezIT and HiveMapReduceIT since they don't work most of the times
> -
>
> Key: PHOENIX-4140
> URL: https://issues.apache.org/jira/browse/PHOENIX-4140
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4140.patch
>
>
> I have never seen the HiveTezIT and HiveMapReduceIT complete successfully. 
> Locally, on my laptop too, I was unable to get these tests to run 
> successfully. 
> See a sample run where they failed - 
> https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1647/console
> On my laptop, these tests failed with an OOM. I had to override the permgen 
> memory to 256m to get the tests to even start. 
> FYI, [~sergey.soldatov]





[jira] [Created] (PHOENIX-4140) Disable HiveTezIT and HiveMapReduceIT since they don't work most of the times

2017-08-29 Thread Samarth Jain (JIRA)
Samarth Jain created PHOENIX-4140:
-

 Summary: Disable HiveTezIT and HiveMapReduceIT since they don't 
work most of the times
 Key: PHOENIX-4140
 URL: https://issues.apache.org/jira/browse/PHOENIX-4140
 Project: Phoenix
  Issue Type: Bug
Reporter: Samarth Jain
Assignee: Samarth Jain


I have never seen HiveTezIT and HiveMapReduceIT complete successfully, and I was 
also unable to get these tests to run locally on my laptop.

See a sample run where they failed:
https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1647/console

On my laptop, these tests failed with an OOM; I had to raise the PermGen memory 
to 256m to get the tests to even start.

FYI, [~sergey.soldatov]





[jira] [Updated] (PHOENIX-4080) The error message for version mismatch is not accurate.

2017-08-29 Thread Ethan Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Wang updated PHOENIX-4080:

Attachment: PHOENIX-4080-v3.patch

Thanks [~gjacoby]

> The error message for version mismatch is not accurate.
> ---
>
> Key: PHOENIX-4080
> URL: https://issues.apache.org/jira/browse/PHOENIX-4080
> Project: Phoenix
>  Issue Type: Wish
>Affects Versions: 4.11.0
>Reporter: Ethan Wang
>Assignee: Ethan Wang
> Attachments: PHOENIX-4080.patch, PHOENIX-4080-v2.patch, 
> PHOENIX-4080-v3.patch
>
>
> When accessing a cluster running 4.10 with a 4.11 client, the error message 
> reads: The following servers require an updated phoenix.jar to be put in the 
> classpath of HBase: region=SYSTEM.CATALOG
> It should say phoenix-[version]-server.jar rather than phoenix.jar





[jira] [Commented] (PHOENIX-4132) TestIndexWriter hangs on 1.8 JRE

2017-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146269#comment-16146269
 ] 

Hadoop QA commented on PHOENIX-4132:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12884332/PHOENIX-4132_v2.patch
  against master branch at commit fc659488361c91b569f15a26dcbab5cbb24c276b.
  ATTACHMENT ID: 12884332

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
62 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+HTableInterfaceReference ht1 = new HTableInterfaceReference(new ImmutableBytesPtr(tableName));
+HTableInterfaceReference ht2 = new HTableInterfaceReference(new ImmutableBytesPtr(tableName2));

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.MutableQueryIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1319//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1319//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1319//console

This message is automatically generated.

> TestIndexWriter hangs on 1.8 JRE
> 
>
> Key: PHOENIX-4132
> URL: https://issues.apache.org/jira/browse/PHOENIX-4132
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4132_4.x-HBase-0.98.patch, PHOENIX-4132_v2.patch
>
>
> Below is the jstack of the threads:
> {code}
> "main" #1 prio=5 os_prio=31 tid=0x7fdd3f805000 nid=0x1c03 waiting on 
> condition [0x79bda000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0007a00bb8b0> (a 
> com.google.common.util.concurrent.AbstractFuture$Sync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:280)
>   at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
>   at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
>   at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
>   at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter.write(ParallelWriterIndexCommitter.java:197)
>   at 
> org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:189)
>   at 
> org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:175)
>   at 
> org.apache.phoenix.hbase.index.write.TestIndexWriter.testFailureOnRunningUpdateAbortsPending(TestIndexWriter.java:212)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> 

[jira] [Commented] (PHOENIX-4132) TestIndexWriter hangs on 1.8 JRE

2017-08-29 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146265#comment-16146265
 ] 

Hudson commented on PHOENIX-4132:
-

FAILURE: Integrated in Jenkins build Phoenix-master #1755 (See 
[https://builds.apache.org/job/Phoenix-master/1755/])
PHOENIX-4132 TestIndexWriter hangs on 1.8 JRE (samarth: rev 
437402d4850bebcd769858b756c1e08abf544b00)
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestParalleIndexWriter.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestIndexWriter.java
* (edit) 
phoenix-core/src/test/java/org/apache/phoenix/hbase/index/write/TestParalleWriterIndexCommitter.java


> TestIndexWriter hangs on 1.8 JRE
> 
>
> Key: PHOENIX-4132
> URL: https://issues.apache.org/jira/browse/PHOENIX-4132
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4132_4.x-HBase-0.98.patch, PHOENIX-4132_v2.patch
>
>
> Below is the jstack of the threads:
> {code}
> "main" #1 prio=5 os_prio=31 tid=0x7fdd3f805000 nid=0x1c03 waiting on 
> condition [0x79bda000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0007a00bb8b0> (a 
> com.google.common.util.concurrent.AbstractFuture$Sync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:280)
>   at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
>   at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
>   at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
>   at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter.write(ParallelWriterIndexCommitter.java:197)
>   at 
> org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:189)
>   at 
> org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:175)
>   at 
> org.apache.phoenix.hbase.index.write.TestIndexWriter.testFailureOnRunningUpdateAbortsPending(TestIndexWriter.java:212)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386)
>   at 
> 

[jira] [Commented] (PHOENIX-3815) Only disable indexes on which write failures occurred

2017-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16146213#comment-16146213
 ] 

James Taylor commented on PHOENIX-3815:
---

+1. Thanks, [~vincentpoon]. Looks good.

> Only disable indexes on which write failures occurred
> -
>
> Key: PHOENIX-3815
> URL: https://issues.apache.org/jira/browse/PHOENIX-3815
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Vincent Poon
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3815.0.98.v2.patch, 
> PHOENIX-3815.master.v2.patch, PHOENIX-3815.v1.patch
>
>
> We currently disable all indexes if any of them fail to be written to. We 
> really should only disable the ones for which the write failed.
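As a sketch of the intended behavior only (illustrative names, not Phoenix's actual index-failure handling code), deciding which indexes to disable from per-index write outcomes could look like:

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class SelectiveIndexDisableSketch {

    // Given all indexes on a table and the per-index outcome of a write batch,
    // return only the indexes whose writes explicitly failed. Indexes with no
    // recorded outcome (e.g. the batch never reached them) are left enabled.
    public static Set<String> indexesToDisable(Set<String> allIndexes,
                                               Map<String, Boolean> writeSucceeded) {
        Set<String> toDisable = new TreeSet<>();
        for (String index : allIndexes) {
            if (Boolean.FALSE.equals(writeSucceeded.get(index))) {
                toDisable.add(index);
            }
        }
        return toDisable;
    }
}
```

The current behavior described in the issue corresponds to disabling allIndexes whenever any entry is false; the fix narrows that to the failed subset.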





[jira] [Updated] (PHOENIX-3815) Only disable indexes on which write failures occurred

2017-08-29 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-3815:
--
Attachment: PHOENIX-3815.master.v2.patch

> Only disable indexes on which write failures occurred
> -
>
> Key: PHOENIX-3815
> URL: https://issues.apache.org/jira/browse/PHOENIX-3815
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Vincent Poon
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3815.0.98.v2.patch, 
> PHOENIX-3815.master.v2.patch, PHOENIX-3815.v1.patch
>
>
> We currently disable all indexes if any of them fail to be written to. We 
> really should only disable the ones for which the write failed.





[jira] [Updated] (PHOENIX-3815) Only disable indexes on which write failures occurred

2017-08-29 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-3815:
--
Attachment: (was: PHOENIX-3815.master.v2.patch)

> Only disable indexes on which write failures occurred
> -
>
> Key: PHOENIX-3815
> URL: https://issues.apache.org/jira/browse/PHOENIX-3815
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Vincent Poon
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3815.0.98.v2.patch, PHOENIX-3815.v1.patch
>
>
> We currently disable all indexes if any of them fail to be written to. We 
> really should only disable the ones for which the write failed.





[jira] [Updated] (PHOENIX-3815) Only disable indexes on which write failures occurred

2017-08-29 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3815?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-3815:
--
Attachment: PHOENIX-3815.0.98.v2.patch
PHOENIX-3815.master.v2.patch

> Only disable indexes on which write failures occurred
> -
>
> Key: PHOENIX-3815
> URL: https://issues.apache.org/jira/browse/PHOENIX-3815
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Vincent Poon
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3815.0.98.v2.patch, 
> PHOENIX-3815.master.v2.patch, PHOENIX-3815.v1.patch
>
>
> We currently disable all indexes if any of them fail to be written to. We 
> really should only disable the ones for which the write failed.





[jira] [Commented] (PHOENIX-3496) Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping

2017-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146183#comment-16146183
 ] 

James Taylor commented on PHOENIX-3496:
---

Thanks for the patch, [~rajeshbabu]. I think you only want to check for 
DEFAULT_LOCAL_INDEX_COLUMN_FAMILY_BYTES here if there are no other column 
families (as otherwise it wouldn't be created):
{code}
+private void checkForLocalIndexColumnFamilies(Region region,
+List<IndexMaintainer> indexMaintainers) throws 
NoSuchColumnFamilyException {
+HTableDescriptor tableDesc = region.getTableDesc();
+if 
(tableDesc.getFamily(QueryConstants.DEFAULT_LOCAL_INDEX_COLUMN_FAMILY_BYTES) == 
null) {
+throw new NoSuchColumnFamilyException("Column family "
++ QueryConstants.DEFAULT_LOCAL_INDEX_COLUMN_FAMILY
++ " does not exist in region " + region + " in table " + 
tableDesc);
+}
{code}
To confirm, please add a test that adds a local index to a table which 
explicitly defines a column family like this:
{code}
CREATE TABLE t (k VARCHAR PRIMARY KEY, a.v BIGINT);
CREATE LOCAL INDEX i ON t(a.v);
{code}

If you throw ColumnFamilyNotFoundException instead of 
NoSuchColumnFamilyException then Phoenix will wrap/unwrap it automatically and 
you can catch ColumnFamilyNotFoundException instead of doing this:
{code}
+} catch(PhoenixIOException pio) {
+if (pio.getCause() instanceof 
NoSuchColumnFamilyException
+&& 
scanPair.getFirst().getAttribute(LOCAL_INDEX_BUILD) != null) {
{code}

> Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping
> ---
>
> Key: PHOENIX-3496
> URL: https://issues.apache.org/jira/browse/PHOENIX-3496
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3496.patch
>
>
> The test has been passing consistently on the "4.*-HBase-0.98" branches. 
> However, it has been flapping pretty regularly on the master branch and the 
> "4.*-HBase-1.1" branches.
> I ran the test locally a few number of times and it did flap. I did notice 
> that in cases where it failed, the logs also had a RegionOpeningException. 
> For example:
> {code}
> org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region 
> TEST_TABLET60,\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1479426366023.04f765e5d906bbd193b38a9f8c20e478.
>  is opening on localhost,55599,1479426313446
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2908)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1053)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2385)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> And the test failure:
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family L#0 does not exist in region 
> TEST_TABLET60,\x09\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1479426366023.47319064397e6e25e8e7cc992ebce3e6.
>  in table 'TEST_TABLET60', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|index.builder=org.apache.phoenix.index.PhoenixIndexBuilder,org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec'},
>  {NAME => '0', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 
> 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => 
> '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
>   at 
> 

[jira] [Commented] (PHOENIX-4110) ParallelRunListener should monitor number of tables and not number of tests

2017-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146140#comment-16146140
 ] 

James Taylor commented on PHOENIX-4110:
---

[~samarthjain] - I don't see an increment of TABLE_COUNTER. Also, 30 seems like 
an extremely low value. Do we really need it to be that low?

> ParallelRunListener should monitor number of tables and not number of tests
> ---
>
> Key: PHOENIX-4110
> URL: https://issues.apache.org/jira/browse/PHOENIX-4110
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4110.patch, 
> PHOENIX-4110_V2_4.x-HBase-0.98.patch, PHOENIX-4110_v3_4.x-HBase-0.98.patch, 
> PHOENIX-4110_v3.patch
>
>
> ParallelRunListener today monitors the number of tests that have been run to 
> determine when the mini cluster should be shut down. This helps prevent our 
> test JVM forks from running out of memory (OOM). A better heuristic would be 
> to instead check the number of tables created by tests. That way, when a 
> particular test class has created lots of tables, we can shut down the mini 
> cluster sooner.
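The table-count heuristic described above (recycle the mini cluster after N tables rather than after N tests) can be sketched as follows. This is a minimal illustration; the class and method names are invented for this sketch and are not Phoenix's actual ParallelRunListener API.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: count tables created by tests, and signal that the mini cluster
// should be recycled once the count crosses a configured threshold.
public class TableCountingListener {
    private final int maxTablesPerCluster;
    private final AtomicInteger tableCounter = new AtomicInteger();

    public TableCountingListener(int maxTablesPerCluster) {
        this.maxTablesPerCluster = maxTablesPerCluster;
    }

    // Would be invoked from wherever tests create tables (hypothetical hook).
    public void tableCreated() {
        tableCounter.incrementAndGet();
    }

    // Checked after each test class: recycle the cluster once enough tables
    // have accumulated, instead of counting finished tests.
    public boolean shouldRecycleCluster() {
        if (tableCounter.get() >= maxTablesPerCluster) {
            tableCounter.set(0); // start a fresh count for the new cluster
            return true;
        }
        return false;
    }
}
```

A test class that creates many tables would trip the threshold (and trigger a cluster restart) sooner than one that creates few, which is the point of the proposed change.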



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4110) ParallelRunListener should monitor number of tables and not number of tests

2017-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146127#comment-16146127
 ] 

Hadoop QA commented on PHOENIX-4110:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12884306/PHOENIX-4110_v3_4.x-HBase-0.98.patch
  against 4.x-HBase-0.98 branch at commit 
fc659488361c91b569f15a26dcbab5cbb24c276b.
  ATTACHMENT ID: 12884306

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
59 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-hive/target/failsafe-reports/TEST-org.apache.phoenix.hive.HiveMapReduceIT
./phoenix-hive/target/failsafe-reports/TEST-org.apache.phoenix.hive.HiveTezIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1318//testReport/
Javadoc warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1318//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1318//console

This message is automatically generated.

> ParallelRunListener should monitor number of tables and not number of tests
> ---
>
> Key: PHOENIX-4110
> URL: https://issues.apache.org/jira/browse/PHOENIX-4110
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4110.patch, 
> PHOENIX-4110_V2_4.x-HBase-0.98.patch, PHOENIX-4110_v3_4.x-HBase-0.98.patch, 
> PHOENIX-4110_v3.patch
>
>
> ParallelRunListener today monitors the number of tests that have been run to 
> determine when the mini cluster should be shut down. This helps prevent our 
> test JVM forks from running out of memory (OOM). A better heuristic would be 
> to instead check the number of tables created by tests. That way, when a 
> particular test class has created lots of tables, we can shut down the mini 
> cluster sooner.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4132) TestIndexWriter hangs on 1.8 JRE

2017-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146118#comment-16146118
 ] 

James Taylor commented on PHOENIX-4132:
---

+1

> TestIndexWriter hangs on 1.8 JRE
> 
>
> Key: PHOENIX-4132
> URL: https://issues.apache.org/jira/browse/PHOENIX-4132
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4132_4.x-HBase-0.98.patch, PHOENIX-4132_v2.patch
>
>
> Below is the jstack of the threads:
> {code}
> "main" #1 prio=5 os_prio=31 tid=0x7fdd3f805000 nid=0x1c03 waiting on 
> condition [0x79bda000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0007a00bb8b0> (a 
> com.google.common.util.concurrent.AbstractFuture$Sync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:280)
>   at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
>   at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
>   at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
>   at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter.write(ParallelWriterIndexCommitter.java:197)
>   at 
> org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:189)
>   at 
> org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:175)
>   at 
> org.apache.phoenix.hbase.index.write.TestIndexWriter.testFailureOnRunningUpdateAbortsPending(TestIndexWriter.java:212)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:323)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:143)
> {code}
> {code}
> "pool-8-thread-1" #25 prio=5 os_prio=31 tid=0x7fdd3ef1a000 nid=0x130b 
> waiting on condition [0x7bf44000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0007a00add50> (a 
> java.util.concurrent.CountDownLatch$Sync)
>   at 

[jira] [Updated] (PHOENIX-4137) Document IndexScrutinyTool

2017-08-29 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4137:
--
Attachment: (was: secondary_indexing.md)

> Document IndexScrutinyTool
> --
>
> Key: PHOENIX-4137
> URL: https://issues.apache.org/jira/browse/PHOENIX-4137
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Vincent Poon
> Fix For: 4.12.0
>
> Attachments: secondary_indexing.md
>
>
> Now that PHOENIX-2460 has been committed, we need to update our website 
> documentation to describe how to use it. For an overview of updating the 
> website, see http://phoenix.apache.org/building_website.html. For 
> IndexScrutinyTool, it's probably enough to add a section in 
> https://phoenix.apache.org/secondary_indexing.html (which lives in 
> ./site/source/src/site/markdown/secondary_indexing.md) describing the purpose 
> and possible arguments to the MR job. Something similar to the table for our 
> bulk loader here: 
> https://phoenix.apache.org/bulk_dataload.html#Loading_via_MapReduce.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4137) Document IndexScrutinyTool

2017-08-29 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4137:
--
Attachment: secondary_indexing.md

> Document IndexScrutinyTool
> --
>
> Key: PHOENIX-4137
> URL: https://issues.apache.org/jira/browse/PHOENIX-4137
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Vincent Poon
> Fix For: 4.12.0
>
> Attachments: secondary_indexing.md
>
>
> Now that PHOENIX-2460 has been committed, we need to update our website 
> documentation to describe how to use it. For an overview of updating the 
> website, see http://phoenix.apache.org/building_website.html. For 
> IndexScrutinyTool, it's probably enough to add a section in 
> https://phoenix.apache.org/secondary_indexing.html (which lives in 
> ./site/source/src/site/markdown/secondary_indexing.md) describing the purpose 
> and possible arguments to the MR job. Something similar to the table for our 
> bulk loader here: 
> https://phoenix.apache.org/bulk_dataload.html#Loading_via_MapReduce.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-2460) Implement scrutiny command to validate whether or not an index is in sync with the data table

2017-08-29 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-2460:
--
Attachment: secondary_indexing.md

> Implement scrutiny command to validate whether or not an index is in sync 
> with the data table
> -
>
> Key: PHOENIX-2460
> URL: https://issues.apache.org/jira/browse/PHOENIX-2460
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Vincent Poon
> Fix For: 4.12.0
>
> Attachments: PHOENIX-2460.patch, secondary_indexing.md
>
>
> We should have a process that runs to verify that an index is valid against a 
> data table and potentially fixes it if discrepancies are found. This could 
> either be a MR job or a low priority background task.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4137) Document IndexScrutinyTool

2017-08-29 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16146082#comment-16146082
 ] 

Vincent Poon commented on PHOENIX-4137:
---

[~jamestaylor] Attached the documentation update

> Document IndexScrutinyTool
> --
>
> Key: PHOENIX-4137
> URL: https://issues.apache.org/jira/browse/PHOENIX-4137
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Vincent Poon
> Fix For: 4.12.0
>
> Attachments: secondary_indexing.md
>
>
> Now that PHOENIX-2460 has been committed, we need to update our website 
> documentation to describe how to use it. For an overview of updating the 
> website, see http://phoenix.apache.org/building_website.html. For 
> IndexScrutinyTool, it's probably enough to add a section in 
> https://phoenix.apache.org/secondary_indexing.html (which lives in 
> ./site/source/src/site/markdown/secondary_indexing.md) describing the purpose 
> and possible arguments to the MR job. Something similar to the table for our 
> bulk loader here: 
> https://phoenix.apache.org/bulk_dataload.html#Loading_via_MapReduce.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4137) Document IndexScrutinyTool

2017-08-29 Thread Vincent Poon (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4137:
--
Attachment: secondary_indexing.md

> Document IndexScrutinyTool
> --
>
> Key: PHOENIX-4137
> URL: https://issues.apache.org/jira/browse/PHOENIX-4137
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Vincent Poon
> Fix For: 4.12.0
>
> Attachments: secondary_indexing.md
>
>
> Now that PHOENIX-2460 has been committed, we need to update our website 
> documentation to describe how to use it. For an overview of updating the 
> website, see http://phoenix.apache.org/building_website.html. For 
> IndexScrutinyTool, it's probably enough to add a section in 
> https://phoenix.apache.org/secondary_indexing.html (which lives in 
> ./site/source/src/site/markdown/secondary_indexing.md) describing the purpose 
> and possible arguments to the MR job. Something similar to the table for our 
> bulk loader here: 
> https://phoenix.apache.org/bulk_dataload.html#Loading_via_MapReduce.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4132) TestIndexWriter hangs on 1.8 JRE

2017-08-29 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4132:
--
Summary: TestIndexWriter hangs on 1.8 JRE  (was: TestIndexWriter causes 
builds to hang sometimes)

> TestIndexWriter hangs on 1.8 JRE
> 
>
> Key: PHOENIX-4132
> URL: https://issues.apache.org/jira/browse/PHOENIX-4132
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4132_4.x-HBase-0.98.patch, PHOENIX-4132_v2.patch
>
>
> Below is the jstack of the threads:
> {code}
> "main" #1 prio=5 os_prio=31 tid=0x7fdd3f805000 nid=0x1c03 waiting on 
> condition [0x79bda000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0007a00bb8b0> (a 
> com.google.common.util.concurrent.AbstractFuture$Sync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:280)
>   at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
>   at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
>   at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
>   at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter.write(ParallelWriterIndexCommitter.java:197)
>   at 
> org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:189)
>   at 
> org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:175)
>   at 
> org.apache.phoenix.hbase.index.write.TestIndexWriter.testFailureOnRunningUpdateAbortsPending(TestIndexWriter.java:212)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:323)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:143)
> {code}
> {code}
> "pool-8-thread-1" #25 prio=5 os_prio=31 tid=0x7fdd3ef1a000 nid=0x130b 
> waiting on condition [0x7bf44000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0007a00add50> (a 
> java.util.concurrent.CountDownLatch$Sync)
>   at 

[jira] [Updated] (PHOENIX-4132) TestIndexWriter causes builds to hang sometimes

2017-08-29 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4132:
--
Attachment: PHOENIX-4132_v2.patch

It turned out this issue only happens on the 1.8 JRE, and my previous patch 
didn't completely fix it. This one does. With this patch, we shouldn't have 
hanging unit tests on 1.8. Integration tests are a different beast :)

> TestIndexWriter causes builds to hang sometimes
> ---
>
> Key: PHOENIX-4132
> URL: https://issues.apache.org/jira/browse/PHOENIX-4132
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4132_4.x-HBase-0.98.patch, PHOENIX-4132_v2.patch
>
>
> Below is the jstack of the threads:
> {code}
> "main" #1 prio=5 os_prio=31 tid=0x7fdd3f805000 nid=0x1c03 waiting on 
> condition [0x79bda000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0007a00bb8b0> (a 
> com.google.common.util.concurrent.AbstractFuture$Sync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at 
> com.google.common.util.concurrent.AbstractFuture$Sync.get(AbstractFuture.java:280)
>   at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:116)
>   at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submit(BaseTaskRunner.java:66)
>   at 
> org.apache.phoenix.hbase.index.parallel.BaseTaskRunner.submitUninterruptible(BaseTaskRunner.java:99)
>   at 
> org.apache.phoenix.hbase.index.write.ParallelWriterIndexCommitter.write(ParallelWriterIndexCommitter.java:197)
>   at 
> org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:189)
>   at 
> org.apache.phoenix.hbase.index.write.IndexWriter.write(IndexWriter.java:175)
>   at 
> org.apache.phoenix.hbase.index.write.TestIndexWriter.testFailureOnRunningUpdateAbortsPending(TestIndexWriter.java:212)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:272)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:236)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:386)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:323)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:143)
> {code}
> {code}
> "pool-8-thread-1" #25 prio=5 os_prio=31 tid=0x7fdd3ef1a000 nid=0x130b 
> waiting on condition [0x7bf44000]
>java.lang.Thread.State: WAITING 

[jira] [Commented] (PHOENIX-4130) Avoid server retries for mutable indexes

2017-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16145978#comment-16145978
 ] 

James Taylor commented on PHOENIX-4130:
---

Not tying up a handler thread while waiting for an RS->RS call is an 
improvement, though. Perhaps we should brainstorm on the "no write on index 
failure" idea in a separate JIRA? If the RS hosting the SYSTEM.CATALOG cannot 
be reached, then no Phoenix queries will work. I'm not sure it makes sense to 
work around this in just this one particular scenario.

> Avoid server retries for mutable indexes
> 
>
> Key: PHOENIX-4130
> URL: https://issues.apache.org/jira/browse/PHOENIX-4130
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>
> Had some discussions with [~jamestaylor], [~samarthjain], and [~vincentpoon], 
> during which I suggested that we can possibly eliminate retry loops happening 
> at the server that cause the handler threads to be stuck potentially for 
> quite a while (at least multiple seconds to ride over common scenarios like 
> splits).
> Instead, we can do the retries at the Phoenix client.
> So:
> # The index updates are not retried on the server (retries = 0).
> # A failed index update would set the failed index timestamp but leave the 
> index enabled.
> # The handler thread is now done; it throws an appropriate exception back to 
> the client.
> # The Phoenix client can now retry. When those retries fail, the index is 
> disabled (if the policy dictates that) and the exception is thrown back to 
> its caller.
> So no more waiting is needed on the server, and handler threads are freed 
> immediately.
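The four steps proposed above amount to a client-side retry loop around a single-attempt server write. The sketch below illustrates that control flow only; the class, method, and the `Callable<Boolean>` stand-in for an index write are all hypothetical, not Phoenix's actual writer API.

```java
import java.util.concurrent.Callable;

// Sketch of the proposed flow: the server attempts the index write once
// (retries = 0) and fails fast; the Phoenix client owns the retry loop,
// and only after client retries are exhausted would policy disable the index.
public class ClientSideIndexRetry {
    public static boolean writeWithClientRetries(Callable<Boolean> indexWrite,
                                                 int maxClientRetries) {
        for (int attempt = 0; attempt <= maxClientRetries; attempt++) {
            try {
                if (indexWrite.call()) {
                    return true; // write succeeded, index stays active
                }
            } catch (Exception e) {
                // The server threw immediately instead of looping in a
                // handler thread; the client decides whether to try again.
            }
        }
        // Retries exhausted: the caller's policy may now disable the index
        // and surface the failure.
        return false;
    }
}
```

The key property is that the server-side handler thread returns as soon as the single attempt fails, so the multi-second waits (e.g. riding over a split) move from region server handlers to the client.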



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3815) Only disable indexes on which write failures occurred

2017-08-29 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16145962#comment-16145962
 ] 

James Taylor commented on PHOENIX-3815:
---

Thanks for the info, [~vincentpoon]. I think we can remove 
ParallelWriterIndexCommitter. For the "leave index active" case, I was planning 
on the client not continuing to attempt further index updates until the partial 
rebuilder catches up the index. Also, PHOENIX-4130 will prevent any server to 
server retries.


> Only disable indexes on which write failures occurred
> -
>
> Key: PHOENIX-3815
> URL: https://issues.apache.org/jira/browse/PHOENIX-3815
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Vincent Poon
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3815.v1.patch
>
>
> We currently disable all indexes if any of them fail to be written to. We 
> really only should disable the one in which the write failed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (PHOENIX-4138) Create a hard limit on number of indexes per table

2017-08-29 Thread Rahul Shrivastava (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16145898#comment-16145898
 ] 

Rahul Shrivastava edited comment on PHOENIX-4138 at 8/29/17 6:57 PM:
-

Andrew,
We can use a config to control this feature and have a default for it. 
In the FAT client, if the client does not have the config set up, the default 
could be a specific number, such as 20 indexes per table. Otherwise, clients 
can provide their own value for this config.

On the thin client, which talks to PQS, the limit is enforced at the PQS level, 
and again there is a default. 

So there is never a case where a client can bypass this check. 

Let me know if you still feel it is better to enforce this on the server side. 
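The default-plus-override behavior described above could be sketched as follows. The property key, the default of 20, and the exception type are illustrative assumptions for this discussion, not Phoenix's actual configuration:

```java
import java.util.Properties;

// Hypothetical sketch: enforce a configurable per-table index limit on the
// client (or PQS) before issuing CREATE INDEX. Names here are assumptions.
public class IndexLimitCheck {
    static final String MAX_INDEXES_ATTRIB = "phoenix.index.maxPerTable"; // hypothetical key
    static final int DEFAULT_MAX_INDEXES = 20;

    // Use the configured value when present, otherwise fall back to the default.
    static int maxIndexesPerTable(Properties props) {
        String v = props.getProperty(MAX_INDEXES_ATTRIB);
        return v == null ? DEFAULT_MAX_INDEXES : Integer.parseInt(v);
    }

    // Reject index creation once the table already has the maximum number.
    static void checkCanCreateIndex(int existingIndexCount, Properties props) {
        if (existingIndexCount >= maxIndexesPerTable(props)) {
            throw new IllegalStateException("TOO_MANY_INDEXES: table already has "
                + existingIndexCount + " indexes");
        }
    }
}
```

Because both the FAT client and PQS would run the same check with the same default, a client missing the config still gets the limit enforced.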


was (Author: rahulshrivastava):
 If we do this on the server side, we need a valid SQL exception code for such a 
scenario. TOO_MANY_INDICES? 

> Create a hard limit on number of indexes per table
> --
>
> Key: PHOENIX-4138
> URL: https://issues.apache.org/jira/browse/PHOENIX-4138
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rahul Shrivastava
>Assignee: Rahul Shrivastava
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> There should be a config parameter to impose a hard limit on the number of 
> indexes per table. There is a SQL Exception 
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java#L260
>  , but it gets triggered on the server side 
> (https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L1589)
>  . 
> We need a client-side limit that can be configured via a Phoenix config 
> parameter. For example, if a user creates more than, say, 30 indexes per 
> table, no more index creation would be allowed for that specific table. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4138) Create a hard limit on number of indexes per table

2017-08-29 Thread Rahul Shrivastava (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16145898#comment-16145898
 ] 

Rahul Shrivastava commented on PHOENIX-4138:


 If we do this on the server side, we need a valid SQL exception code for such a 
scenario. TOO_MANY_INDICES? 

> Create a hard limit on number of indexes per table
> --
>
> Key: PHOENIX-4138
> URL: https://issues.apache.org/jira/browse/PHOENIX-4138
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rahul Shrivastava
>Assignee: Rahul Shrivastava
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> There should be a config parameter to impose a hard limit on the number of 
> indexes per table. There is a SQL Exception 
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java#L260
>  , but it gets triggered on the server side 
> (https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L1589)
>  . 
> We need a client-side limit that can be configured via a Phoenix config 
> parameter. For example, if a user creates more than, say, 30 indexes per 
> table, no more index creation would be allowed for that specific table. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4080) The error message for version mismatch is not accurate.

2017-08-29 Thread Geoffrey Jacoby (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16145840#comment-16145840
 ] 

Geoffrey Jacoby commented on PHOENIX-4080:
--

[~aertoria] - one last small nit. Could you please fix the commit message on 
your patch so that:
1. The spelling of "amending" is correct, and
2. The extraneous lines and duplicate "PHOENIX-4080"s are gone.

Thanks!

> The error message for version mismatch is not accurate.
> ---
>
> Key: PHOENIX-4080
> URL: https://issues.apache.org/jira/browse/PHOENIX-4080
> Project: Phoenix
>  Issue Type: Wish
>Affects Versions: 4.11.0
>Reporter: Ethan Wang
>Assignee: Ethan Wang
> Attachments: PHOENIX-4080.patch, PHOENIX-4080-v2.patch
>
>
> When accessing a running 4.10 cluster with a 4.11 client, the error reads: 
> The following servers require an updated phoenix.jar to be put in the 
> classpath of HBase: region=SYSTEM.CATALOG
> It should be phoenix-[version]-server.jar rather than phoenix.jar



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3815) Only disable indexes on which write failures occurred

2017-08-29 Thread Vincent Poon (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16145787#comment-16145787
 ] 

Vincent Poon commented on PHOENIX-3815:
---

[~jamestaylor] now that we've lowered the index write timeout (PHOENIX-3948), I 
suppose TrackingWriter isn't as bad.  However, there are still some corner 
cases where it will be noticeably slower than ParallelWriter.  Imagine all 
threads in the writer pool (default of 10) are in use and there is heavy write 
traffic - the writes will then be executed serially as threads become 
available, and you'd have to wait for each to fail in the worst case where all 
indexes are failing.

But I think these might be relatively rare cases.  Also if the failure policy 
is to disable the index, then it doesn't matter too much.  Either way, we 
should at a minimum extract out the common code in the classes if we're not 
going to remove one - they pretty much do the same thing but use a different 
TaskRunner.

If the failure policy is to leave the index enabled, then you might care about 
failing fast, as you could face repeated failures if something is wrong with 
the index RS.  We could also have a rate counter for # of failures in a given 
time window.  If # of failures exceeds that, we fail fast like ParallelWriter.  
Otherwise, use TrackingWriter logic in the normal happy case.  But again, only 
matters if you plan to leave the index enabled.
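The rate-counter idea above (fail fast once failures in a time window pass a threshold, otherwise keep the TrackingWriter behavior) could look roughly like this. The class, window, and threshold are illustrative, not Phoenix code:

```java
import java.util.ArrayDeque;

// Sketch: count index write failures in a sliding time window; once failures
// in the window reach a threshold, switch to fail-fast (ParallelWriter-style)
// behavior instead of tracking every failure.
public class FailureRateCounter {
    private final ArrayDeque<Long> failureTimes = new ArrayDeque<>();
    private final long windowMillis;
    private final int threshold;

    FailureRateCounter(long windowMillis, int threshold) {
        this.windowMillis = windowMillis;
        this.threshold = threshold;
    }

    synchronized void recordFailure(long nowMillis) {
        failureTimes.addLast(nowMillis);
    }

    // True when recent failures reach the threshold; callers would then fail fast.
    synchronized boolean shouldFailFast(long nowMillis) {
        // Evict failures that have aged out of the window.
        while (!failureTimes.isEmpty()
                && nowMillis - failureTimes.peekFirst() > windowMillis) {
            failureTimes.removeFirst();
        }
        return failureTimes.size() >= threshold;
    }
}
```

In the normal happy case the counter stays below the threshold and the TrackingWriter logic applies; under repeated failures (e.g. a bad index RS) it trips and the writer fails fast.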

> Only disable indexes on which write failures occurred
> -
>
> Key: PHOENIX-3815
> URL: https://issues.apache.org/jira/browse/PHOENIX-3815
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Vincent Poon
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3815.v1.patch
>
>
> We currently disable all indexes if any of them fail to be written to. We 
> really only should disable the one in which the write failed.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Issue Comment Deleted] (PHOENIX-4021) Remove CachingHTableFactory

2017-08-29 Thread hanzhi (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

hanzhi updated PHOENIX-4021:

Comment: was deleted

(was: Will this issue effect global index in the production env?)

> Remove CachingHTableFactory
> ---
>
> Key: PHOENIX-4021
> URL: https://issues.apache.org/jira/browse/PHOENIX-4021
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>  Labels: globalMutableSecondaryIndex
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4021.patch
>
>
> CachingHTableFactory is used as a performance optimization when writing to 
> global indexes so that HTable instances are cached and later automatically 
> cleaned up, rather than instantiated each time we write to an index.
> This should be removed for two reasons:
> 1. It opens us up to race conditions, because HTables aren't threadsafe, but 
> CachingHTableFactory doesn't guard against two threads both grabbing the same 
> HTable and using it simultaneously. Since all ops going through a region 
> share the same IndexWriter and ParallelWriterIndexCommitter, and hence the 
> same CachingHTableFactory, that means separate operations can both be holding 
> the same HTable. 
> 2. According to discussion on PHOENIX-3159, and offline discussions I've had 
> with [~apurtell], HBase 1.x and above make creating throwaway HTable 
> instances cheap so the caching is no longer needed.
> For 4.x-HBase-1.x and master, we should remove CachingHTableFactory, and for 
> 4.x-HBase-0.98, we should either get rid of it (if it's not too much of a 
> perf hit) or at least make it threadsafe.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4110) ParallelRunListener should monitor number of tables and not number of tests

2017-08-29 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4110:
--
Attachment: PHOENIX-4110_v3_4.x-HBase-0.98.patch

> ParallelRunListener should monitor number of tables and not number of tests
> ---
>
> Key: PHOENIX-4110
> URL: https://issues.apache.org/jira/browse/PHOENIX-4110
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4110.patch, 
> PHOENIX-4110_V2_4.x-HBase-0.98.patch, PHOENIX-4110_v3_4.x-HBase-0.98.patch, 
> PHOENIX-4110_v3.patch
>
>
> ParallelRunListener today monitors the number of tests that have been run to 
> determine when the mini cluster should be shut down. This helps prevent our test 
> JVM forks from running out of memory. A better heuristic would be to instead check the 
> number of tables that were created by tests. This way when a particular test 
> class has created lots of tables, we can shut down the mini cluster sooner.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4125) Shutdown and start metrics system when mini cluster is shutdown and started

2017-08-29 Thread Samarth Jain (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Samarth Jain updated PHOENIX-4125:
--
Attachment: PHOENIX-4125_4.x-HBase-0.98.patch

> Shutdown and start metrics system when mini cluster is shutdown and started
> ---
>
> Key: PHOENIX-4125
> URL: https://issues.apache.org/jira/browse/PHOENIX-4125
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
> Attachments: PHOENIX-4125_4.x-HBase-0.98.patch
>
>
> While doing analysis for PHOENIX-4110, I noticed that the metrics system is 
> keeping a lot of garbage around even though we frequently restart the mini 
> cluster. This is because the metrics system is a singleton and is only 
> shut down when the JVM is shut down. We should figure out a way to either 
> disable it, or start it only for tests (primarily Tracing-related) that use 
> it, or tie its lifecycle to the mini-cluster's lifecycle.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-08-29 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16145598#comment-16145598
 ] 

Csaba Skrabak commented on PHOENIX-4139:


org.apache.phoenix.jdbc.PhoenixResultSet#getString(int) calls getValue on a 
projector object that looks the same as the other column's projector. A 
ColumnProjector identifies itself by table name and column name, so the column 
index information is lost.
org.apache.phoenix.compile.ExpressionProjector#getValue returns the weird 
string containing all matching columns, zero-separated.
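A toy illustration (not the actual Phoenix classes) of how a purely name-based column lookup reproduces the concatenation symptom: when two projected expressions share the same alias and the position is lost, a lookup by name can match both positions and glue their values together.

```java
// Hypothetical demo: values are stored per position, but the lookup only
// knows the column name. Every position carrying the duplicate alias
// contributes its value, mirroring the "pulkitpulkit" output.
public class NameLookupDemo {
    static String getByName(String[] names, String[] values, String wanted) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < names.length; i++) {
            if (names[i].equals(wanted)) {
                sb.append(values[i]); // duplicate aliases all match
            }
        }
        return sb.toString();
    }
}
```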

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Attachments: PHOENIX-4139.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--+---+---+---+
> | test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
> TRIM(NAM)   |
> +--+---+---+---+
> | harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | 
> pulkitpulkitpulkit  |
> +--+---+---+---+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-08-29 Thread Csaba Skrabak (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16145578#comment-16145578
 ] 

Csaba Skrabak edited comment on PHOENIX-4139 at 8/29/17 4:20 PM:
-

[^PHOENIX-4139.patch] The patch contains only the test, which reproduces the issue.


was (Author: cskrabak):
The patch contains only the test, which reproduces the issue.

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Attachments: PHOENIX-4139.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--+---+---+---+
> | test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
> TRIM(NAM)   |
> +--+---+---+---+
> | harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | 
> pulkitpulkitpulkit  |
> +--+---+---+---+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-08-29 Thread Csaba Skrabak (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Csaba Skrabak updated PHOENIX-4139:
---
Attachment: PHOENIX-4139.patch

The patch contains only the test, which reproduces the issue.

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Attachments: PHOENIX-4139.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--+---+---+---+
> | test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
> TRIM(NAM)   |
> +--+---+---+---+
> | harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | 
> pulkitpulkitpulkit  |
> +--+---+---+---+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the —— distinct 'harshit' as "test_column" 
> ——  The issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++++
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-08-29 Thread Csaba Skrabak (JIRA)
Csaba Skrabak created PHOENIX-4139:
--

 Summary: select distinct with identical aggregations return weird 
values 
 Key: PHOENIX-4139
 URL: https://issues.apache.org/jira/browse/PHOENIX-4139
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.12.0
 Environment: minicluster
Reporter: Csaba Skrabak
Assignee: Csaba Skrabak
Priority: Minor


From sme-hbase hipchat room:
Pulkit Bhardwaj·10:31

i'm seeing a weird issue with phoenix, appreciate some thoughts

Created a simple table in phoenix
{noformat}
0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
VARCHAR(20), id BIGINT
. . . . . . . . > constraint my_pk primary key (id));

0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
values('pulkit','badaun',1);

0: jdbc:phoenix:> select * from test_select;
+-+--+-+
|   NAM   | ADDRESS  | ID  |
+-+--+-+
| pulkit  | badaun   | 1   |
+-+--+-+


0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
test_select;
+--+-+
| test_column  |   NAM   |
+--+-+
| harshit  | pulkit  |
+--+-+


0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
trim(nam) from test_select;
+--+++
| test_column  |   TRIM(NAM)|   TRIM(NAM)|
+--+++
| harshit  | pulkitpulkit  | pulkitpulkit  |
+--+++
{noformat}

When I apply a trim on the nam column and use it multiple times, the output has 
the cell data duplicated!
{noformat}
0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
trim(nam), trim(nam) from test_select;
+--+---+---+---+
| test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
TRIM(NAM)   |
+--+---+---+---+
| harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | pulkitpulkitpulkit 
 |
+--+---+---+---+
{noformat}

Wondering if someone has seen this before??

One thing to note is, if I remove the —— distinct 'harshit' as "test_column" —— 
 The issue is not seen
{noformat}
0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
++++
| TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
++++
| pulkit | pulkit | pulkit |
++++
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3655) Metrics for PQS

2017-08-29 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16145535#comment-16145535
 ] 

Josh Elser commented on PHOENIX-3655:
-

That sounds like a good first-pass idea to me!

> Metrics for PQS
> ---
>
> Key: PHOENIX-3655
> URL: https://issues.apache.org/jira/browse/PHOENIX-3655
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.8.0
> Environment: Linux 3.13.0-107-generic kernel, v4.9.0-HBase-0.98
>Reporter: Rahul Shrivastava
>Assignee: Rahul Shrivastava
> Fix For: 4.12.0
>
> Attachments: MetricsforPhoenixQueryServerPQS.pdf
>
>   Original Estimate: 240h
>  Remaining Estimate: 240h
>
> Phoenix Query Server runs as a separate process from its thin client. 
> Metrics collection is currently done by PhoenixRuntime.java, i.e. at the 
> Phoenix driver level. We need the following:
> 1. For every JDBC statement/prepared statement run by PQS, we need the 
> capability to collect metrics at the PQS level and push the data to an 
> external sink, e.g. a file, JMX, or other external custom sources. 
> 2. Besides this, global metrics could be periodically collected and pushed to 
> the sink. 
> 3. PQS can be configured to turn on metrics collection and the type of 
> collection (runtime or global) via hbase-site.xml.
> 4. The sink could be configured via an interface in hbase-site.xml. 
> All metrics definitions: https://phoenix.apache.org/metrics.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3655) Metrics for PQS

2017-08-29 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16145261#comment-16145261
 ] 

Andrew Purtell commented on PHOENIX-3655:
-

At a minimum I think the PQS should surface the same metrics provided by the 
fat client to the same metrics reporting systems supported by the fat client. 
Perhaps that can help constrain the scope of the initial implementation. 

> Metrics for PQS
> ---
>
> Key: PHOENIX-3655
> URL: https://issues.apache.org/jira/browse/PHOENIX-3655
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.8.0
> Environment: Linux 3.13.0-107-generic kernel, v4.9.0-HBase-0.98
>Reporter: Rahul Shrivastava
>Assignee: Rahul Shrivastava
> Fix For: 4.12.0
>
> Attachments: MetricsforPhoenixQueryServerPQS.pdf
>
>   Original Estimate: 240h
>  Remaining Estimate: 240h
>
> Phoenix Query Server runs as a separate process from its thin client. 
> Metrics collection is currently done by PhoenixRuntime.java, i.e. at the 
> Phoenix driver level. We need the following:
> 1. For every JDBC statement/prepared statement run by PQS, we need the 
> capability to collect metrics at the PQS level and push the data to an 
> external sink, e.g. a file, JMX, or other external custom sources. 
> 2. Besides this, global metrics could be periodically collected and pushed to 
> the sink. 
> 3. PQS can be configured to turn on metrics collection and the type of 
> collection (runtime or global) via hbase-site.xml.
> 4. The sink could be configured via an interface in hbase-site.xml. 
> All metrics definitions: https://phoenix.apache.org/metrics.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3496) Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping

2017-08-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16145164#comment-16145164
 ] 

Hadoop QA commented on PHOENIX-3496:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12884200/PHOENIX-3496.patch
  against master branch at commit fc659488361c91b569f15a26dcbab5cbb24c276b.
  ATTACHMENT ID: 12884200

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
62 warning messages.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+TableName physicalTableName = 
SchemaUtil.getPhysicalTableName(tableName.getBytes(), isNamespaceMapped);
+HBaseAdmin admin = driver.getConnectionQueryServices(getUrl(), 
TestUtil.TEST_PROPERTIES).getAdmin();
+conn1.createStatement().execute("CREATE LOCAL INDEX " + indexName + " 
ON " + tableName + "(v1)");
+ResultSet rs = conn1.createStatement().executeQuery("SELECT COUNT(*) 
FROM " + indexTableName);
+if(DELAY_OPEN && 
!c.getEnvironment().getRegion().getTableDesc().getTableName().isSystemTable()) {
+&& 
scanPair.getFirst().getAttribute(LOCAL_INDEX_BUILD) != null) {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.salted.SaltedTableVarLengthRowKeyIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.DerivedTableIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SequenceIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.CustomEntityDataIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.TransactionalViewIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexReplicationIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.CreateTableIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.OrderByIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.FirstValuesFunctionIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.StatementHintsIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.RowValueConstructorIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.hadoop.hbase.regionserver.wal.WALRecoveryRegionPostOpenIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.PowerFunctionEnd2EndIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.InListIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.IsNullIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.txn.RollbackIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.StatsCollectorIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.DynamicUpsertIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.QueryExecWithoutSCNIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SubqueryUsingSortMergeJoinIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.RenewLeaseIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.ChildViewsUseParentViewIndexIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.TopNIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.EvaluationOfORIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.CsvBulkLoadToolIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.monitoring.PhoenixMetricsIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ConvertTimezoneFunctionIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.AutoPartitionViewsIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.StringIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.AppendOnlySchemaIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.DistinctCountIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ReadIsolationLevelIT

[jira] [Commented] (PHOENIX-4130) Avoid server retries for mutable indexes

2017-08-29 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144906#comment-16144906
 ] 

Andrew Purtell commented on PHOENIX-4130:
-

This may make things better for the average case, but if we have to wait 
under lock for the failed index timestamp update, aren't we still liable to 
block all handlers on waits if, under an error condition, we pile up failures? 

Instead of setting a timestamp upon failure, can we track timestamps with an 
inverse semantic, when the index write succeeds? This may be a lot harder 
because the absence of a timestamp update somehow becomes the indication that 
the index is not up to date, but the likelihood of piling up during failure is 
nil because, upon a failure somewhere, no writes need to succeed. 
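A minimal sketch of that inverse semantic, with all names hypothetical: record a watermark only on successful index writes, and infer possible staleness from its absence relative to the latest data write.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative only (not Phoenix code): a "last verified" watermark that is
// advanced on the success path, so failures never contend on this update.
// The index is possibly stale whenever the last data write is newer than
// the last successful index write.
public class IndexFreshnessTracker {
    private final AtomicLong lastSuccessfulIndexWriteTs = new AtomicLong(0);

    // Called only when an index write succeeds; monotonically advances the watermark.
    void recordSuccess(long writeTs) {
        lastSuccessfulIndexWriteTs.accumulateAndGet(writeTs, Math::max);
    }

    // Absence of a recent success is the (implicit) staleness signal.
    boolean possiblyStale(long lastDataWriteTs) {
        return lastDataWriteTs > lastSuccessfulIndexWriteTs.get();
    }
}
```

As the comment notes, the hard part is that staleness is now inferred from what was *not* written, but the failure path itself does no bookkeeping, so nothing piles up under an error condition.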

> Avoid server retries for mutable indexes
> 
>
> Key: PHOENIX-4130
> URL: https://issues.apache.org/jira/browse/PHOENIX-4130
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Lars Hofhansl
>
> Had some discussions with [~jamestaylor], [~samarthjain], and [~vincentpoon], 
> during which I suggested that we can possibly eliminate retry loops happening 
> at the server that cause the handler threads to be stuck potentially for 
> quite a while (at least multiple seconds to ride over common scenarios like 
> splits).
> Instead, we can do the retries at the Phoenix client.
> So:
> # The index updates are not retried on the server. (retries = 0)
> # A failed index update would set the failed index timestamp but leave the 
> index enabled.
> # Now the handler thread is done, it throws an appropriate exception back to 
> the client.
> # The Phoenix client can now retry. When those retries fail the index is 
> disabled (if the policy dictates that) and throw the exception back to its 
> caller.
> So no more waiting is needed on the server, handler threads are freed 
> immediately.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4138) Create a hard limit on number of indexes per table

2017-08-29 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144900#comment-16144900
 ] 

Andrew Purtell commented on PHOENIX-4138:
-

Don't we want this enforced server-side in the metadata endpoint? Otherwise the 
limit can be missed if the client doesn't support the feature or the 
client-side configuration is changed. 

> Create a hard limit on number of indexes per table
> --
>
> Key: PHOENIX-4138
> URL: https://issues.apache.org/jira/browse/PHOENIX-4138
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rahul Shrivastava
>Assignee: Rahul Shrivastava
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> There should be a config parameter to impose a hard limit on the number of 
> indexes per table. There is a SQLException 
> https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/exception/SQLExceptionCode.java#L260
>  , but it gets triggered on the server side 
> (https://github.com/apache/phoenix/blob/master/phoenix-core/src/main/java/org/apache/phoenix/coprocessor/MetaDataEndpointImpl.java#L1589)
>  . 
> We need a client-side limit that can be configured via a Phoenix config 
> parameter. For example, if a user creates more than, say, 30 indexes per 
> table, further index creation would not be allowed for that specific table. 
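
The requested check could be sketched like this. The default of 30 and all
names here are illustrative assumptions; in Phoenix the limit would come from
a config parameter and the error from an appropriate SQLExceptionCode.

```java
import java.sql.SQLException;

// Hypothetical sketch of a client-side hard limit on indexes per table.
// The limit value and method names are illustrative assumptions, not the
// actual Phoenix config key or API.
public final class IndexLimitCheck {

    public static final int DEFAULT_MAX_INDEXES_PER_TABLE = 30;

    /** Called before issuing CREATE INDEX; rejects when the table is at the cap. */
    public static void checkIndexLimit(int existingIndexCount, int maxIndexes)
            throws SQLException {
        if (existingIndexCount >= maxIndexes) {
            throw new SQLException("Cannot create more than " + maxIndexes
                    + " indexes on a single table");
        }
    }

    public static void main(String[] args) throws SQLException {
        checkIndexLimit(5, DEFAULT_MAX_INDEXES_PER_TABLE); // under the cap: fine
        try {
            checkIndexLimit(30, DEFAULT_MAX_INDEXES_PER_TABLE);
        } catch (SQLException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

As the comment above notes, a client-only check can be bypassed by older or
reconfigured clients, so the same guard would likely also need to live
server-side in the metadata endpoint.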



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4131) UngroupedAggregateRegionObserver.preClose() and doPostScannerOpen() can deadlock

2017-08-29 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144862#comment-16144862
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-4131:
--

[~samarthjain], are you working on this, or do you want me to take a look?

> UngroupedAggregateRegionObserver.preClose() and doPostScannerOpen() can 
> deadlock
> 
>
> Key: PHOENIX-4131
> URL: https://issues.apache.org/jira/browse/PHOENIX-4131
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>
> On my local test run I saw that the tests were not completing because the 
> mini cluster couldn't shut down. So I took a jstack and discovered the 
> following deadlock:
> {code}
> "RS:0;samarthjai-wsm4:59006" #16265 prio=5 os_prio=31 tid=0x7fafa6327000 
> nid=0x37b3f runnable [0x7000115f5000]
>java.lang.Thread.State: RUNNABLE
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.preClose(UngroupedAggregateRegionObserver.java:1201)
>   - locked <0x00072bc406b8> (a java.lang.Object)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$4.call(RegionCoprocessorHost.java:494)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1673)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1749)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preClose(RegionCoprocessorHost.java:490)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegion(HRegionServer.java:2843)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegionIgnoreErrors(HRegionServer.java:2805)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.closeUserRegions(HRegionServer.java:2423)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:1052)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.runRegionServer(MiniHBaseCluster.java:157)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.access$000(MiniHBaseCluster.java:110)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer$1.run(MiniHBaseCluster.java:141)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:360)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1637)
>   at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:334)
>   at 
> org.apache.hadoop.hbase.MiniHBaseCluster$MiniHBaseClusterRegionServer.run(MiniHBaseCluster.java:139)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> {code}
> "RpcServer.FifoWFPBQ.default.handler=3,queue=0,port=59006" #16246 daemon 
> prio=5 os_prio=31 tid=0x7fafae856000 nid=0x1abdb waiting for monitor 
> entry [0x7000102bc000]
>java.lang.Thread.State: BLOCKED (on object monitor)
>   at 
> org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver.doPostScannerOpen(UngroupedAggregateRegionObserver.java:734)
>   - waiting to lock <0x00072bc406b8> (a java.lang.Object)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.overrideDelegate(BaseScannerRegionObserver.java:236)
>   at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:281)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2629)
>   - locked <0x00072b625a90> (a 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2833)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:34950)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2339)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)
> {code}
> preClose() has the object monitor and is waiting for scanReferencesCount to 
> go down to 0. doPostScannerOpen() is trying to acquire the same lock so that 
> it can reduce the scanReferencesCount to 0.
> I think this bug was introduced in PHOENIX-3111 to solve other deadlocks. 
> FYI, [~rajeshbabu], [~sergey.soldatov], [~enis], [~lhofhansl].
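
The lock-ordering problem described above can be reduced to a small,
self-contained shape. This is an assumed simplification, not the actual
coprocessor code: one thread plays preClose(), holding a monitor while it
waits for a reference count to drop, while the other plays the scanner path,
which must take that same monitor to decrement the count. A timeout is used
here so the demonstration terminates instead of hanging.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Minimal illustration of the deadlock shape (assumed simplification of
// UngroupedAggregateRegionObserver): "preClose" waits for the count while
// HOLDING the lock that the "scanner" path needs to decrement it.
public final class DeadlockShape {

    public static int runScenario() throws InterruptedException {
        final Object regionLock = new Object();
        final int[] scanReferencesCount = {1};
        final int[] observedByCloser = {-1};
        final CountDownLatch closerHasLock = new CountDownLatch(1);

        // Plays preClose(): busy-waits for the count to reach 0 while
        // holding regionLock the whole time -- the bug.
        Thread closer = new Thread(() -> {
            synchronized (regionLock) {
                closerHasLock.countDown();
                long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(300);
                while (scanReferencesCount[0] > 0 && System.nanoTime() < deadline) {
                    // spin: nothing can change the count while we hold the lock
                }
                observedByCloser[0] = scanReferencesCount[0];
            }
        });

        // Plays doPostScannerOpen(): needs the same lock to decrement.
        Thread scanner = new Thread(() -> {
            try {
                closerHasLock.await();
            } catch (InterruptedException ignored) {
            }
            synchronized (regionLock) { // blocks until the closer gives up
                scanReferencesCount[0]--;
            }
        });

        closer.start();
        scanner.start();
        closer.join();
        scanner.join();
        return observedByCloser[0]; // still 1: the count never dropped in time
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("closer saw count = " + runScenario());
    }
}
```

Because the decrement can only happen inside the same monitor, the closer is
guaranteed to observe a count of 1 when its timeout expires; in the real code
there is no timeout, so both threads would remain stuck indefinitely.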



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3496) Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping

2017-08-29 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16144861#comment-16144861
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-3496:
--

[~jamestaylor] [~samarthjain] Here is the patch to retry index building when the 
server throws NoSuchColumnFamilyException after creating a local index. Please 
review.

> Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping
> ---
>
> Key: PHOENIX-3496
> URL: https://issues.apache.org/jira/browse/PHOENIX-3496
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3496.patch
>
>
> The test has been passing consistently on the "4.*-HBase-0.98" branches. 
> However, it has been flapping pretty regularly on the master branch and the 
> "4.*-HBase-1.1" branches.
> I ran the test locally a number of times and it did flap. I did notice 
> that in cases where it failed, the logs also had a RegionOpeningException. 
> For example:
> {code}
> org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region 
> TEST_TABLET60,\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1479426366023.04f765e5d906bbd193b38a9f8c20e478.
>  is opening on localhost,55599,1479426313446
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2908)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1053)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2385)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> And the test failure:
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family L#0 does not exist in region 
> TEST_TABLET60,\x09\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1479426366023.47319064397e6e25e8e7cc992ebce3e6.
>  in table 'TEST_TABLET60', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|index.builder=org.apache.phoenix.index.PhoenixIndexBuilder,org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec'},
>  {NAME => '0', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 
> 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => 
> '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:7649)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2543)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2527)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2406)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:188)
>   at 
> org.apache.phoenix.end2end.index.LocalIndexIT.testLocalIndexRoundTrip(LocalIndexIT.java:166)
> {code}
> [~rajeshbabu], can you please take a look.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-3496) Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping

2017-08-29 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-3496:
-
Attachment: PHOENIX-3496.patch

> Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping
> ---
>
> Key: PHOENIX-3496
> URL: https://issues.apache.org/jira/browse/PHOENIX-3496
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3496.patch
>
>
> The test has been passing consistently on the "4.*-HBase-0.98" branches. 
> However, it has been flapping pretty regularly on the master branch and the 
> "4.*-HBase-1.1" branches.
> I ran the test locally a number of times and it did flap. I did notice 
> that in cases where it failed, the logs also had a RegionOpeningException. 
> For example:
> {code}
> org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region 
> TEST_TABLET60,\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1479426366023.04f765e5d906bbd193b38a9f8c20e478.
>  is opening on localhost,55599,1479426313446
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2908)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1053)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2385)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> And the test failure:
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family L#0 does not exist in region 
> TEST_TABLET60,\x09\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1479426366023.47319064397e6e25e8e7cc992ebce3e6.
>  in table 'TEST_TABLET60', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|index.builder=org.apache.phoenix.index.PhoenixIndexBuilder,org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec'},
>  {NAME => '0', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 
> 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => 
> '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:7649)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2543)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2527)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2406)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:188)
>   at 
> org.apache.phoenix.end2end.index.LocalIndexIT.testLocalIndexRoundTrip(LocalIndexIT.java:166)
> {code}
> [~rajeshbabu], can you please take a look.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-3496) Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping

2017-08-29 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-3496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-3496:
-
Fix Version/s: 4.12.0

> Figure out why LocalIndexIT#testLocalIndexRoundTrip is flapping
> ---
>
> Key: PHOENIX-3496
> URL: https://issues.apache.org/jira/browse/PHOENIX-3496
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Rajeshbabu Chintaguntla
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3496.patch
>
>
> The test has been passing consistently on the "4.*-HBase-0.98" branches. 
> However, it has been flapping pretty regularly on the master branch and the 
> "4.*-HBase-1.1" branches.
> I ran the test locally a number of times and it did flap. I did notice 
> that in cases where it failed, the logs also had a RegionOpeningException. 
> For example:
> {code}
> org.apache.hadoop.hbase.exceptions.RegionOpeningException: Region 
> TEST_TABLET60,\x07\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1479426366023.04f765e5d906bbd193b38a9f8c20e478.
>  is opening on localhost,55599,1479426313446
>   at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2908)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.getRegion(RSRpcServices.java:1053)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2385)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> And the test failure:
> {code}
> java.util.concurrent.ExecutionException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column 
> family L#0 does not exist in region 
> TEST_TABLET60,\x09\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1479426366023.47319064397e6e25e8e7cc992ebce3e6.
>  in table 'TEST_TABLET60', {TABLE_ATTRIBUTES => {coprocessor$1 => 
> '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', 
> coprocessor$2 => 
> '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|',
>  coprocessor$3 => 
> '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', 
> coprocessor$4 => 
> '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', 
> coprocessor$5 => 
> '|org.apache.phoenix.hbase.index.Indexer|805306366|index.builder=org.apache.phoenix.index.PhoenixIndexBuilder,org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec'},
>  {NAME => '0', DATA_BLOCK_ENCODING => 'FAST_DIFF', BLOOMFILTER => 'ROW', 
> REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '1', TTL => 
> 'FOREVER', MIN_VERSIONS => '0', KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => 
> '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:7649)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2543)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2527)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2406)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
>   at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
>   at java.lang.Thread.run(Thread.java:745)
>   at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   at java.util.concurrent.FutureTask.get(FutureTask.java:188)
>   at 
> org.apache.phoenix.end2end.index.LocalIndexIT.testLocalIndexRoundTrip(LocalIndexIT.java:166)
> {code}
> [~rajeshbabu], can you please take a look.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)