[jira] [Commented] (PHOENIX-3534) Support multi region SYSTEM.CATALOG table

2018-07-11 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16541179#comment-16541179
 ] 

Hadoop QA commented on PHOENIX-3534:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12931252/PHOENIX-3534-v3.patch
  against master branch at commit aee568beb02cdf983bb10889902c338ea016e6c9.
  ATTACHMENT ID: 12931252

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 29 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+conn.createStatement().execute("CREATE LOCAL INDEX " + generateUniqueName() + " ON " + tableName + "(KV1)");
+assertEquals(4, getIndexOfPkColumn(phxConn, IndexUtil.getIndexColumnName(null, "k2"), fullView2IndexName));
+assertEquals(5, getIndexOfPkColumn(phxConn, IndexUtil.getIndexColumnName(null, "k3"), fullView2IndexName));
+assertEquals(4, getIndexOfPkColumn(phxConn, IndexUtil.getIndexColumnName(null, "k2"), fullView3IndexName));
+assertEquals(5, getIndexOfPkColumn(phxConn, IndexUtil.getIndexColumnName(null, "k3"), fullView3IndexName));
+String view1DDL = "CREATE VIEW " + view1 + " ( VIEW_COL1 DECIMAL(10,2), VIEW_COL2 CHAR(256)) AS SELECT * FROM " + baseTable;
+String divergedViewDDL = "CREATE VIEW " + divergedView + " ( VIEW_COL1 DECIMAL(10,2), VIEW_COL2 CHAR(256)) AS SELECT * FROM " + baseTable;
+String indexDDL = "CREATE INDEX " + divergedViewIndex + " ON " + divergedView + " (V1) include (V3)";
+assertTableDefinition(tenant1Conn, view1, PTableType.VIEW, baseTable, 0, 7, 5, "PK1", "V1", "V2", "V3", "KV", "PK2", "VIEW_COL1", "VIEW_COL2");
+assertTableDefinition(tenant2Conn, divergedView, PTableType.VIEW, baseTable, 1, 6, DIVERGED_VIEW_BASE_COLUMN_COUNT, "PK1", "V1", "V3", "PK2", "VIEW_COL1", "VIEW_COL2");

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexIT

 {color:red}-1 core zombie tests{color}.  There are 4 zombie test(s):   
at 
org.apache.phoenix.end2end.DeleteIT.testPointDeleteRowFromTableWithImmutableIndex(DeleteIT.java:403)
at 
org.apache.phoenix.end2end.DeleteIT.testPointDeleteRowFromTableWithImmutableIndex(DeleteIT.java:376)
at 
org.apache.phoenix.end2end.DefaultColumnValueIT.testDefaultColumnValue(DefaultColumnValueIT.java:66)

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1926//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1926//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1926//console

This message is automatically generated.

> Support multi region SYSTEM.CATALOG table
> -
>
> Key: PHOENIX-3534
> URL: https://issues.apache.org/jira/browse/PHOENIX-3534
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 5.0.0, 4.15.0
>
> Attachments: PHOENIX-3534-v2.patch, PHOENIX-3534-v3.patch, 
> PHOENIX-3534.patch
>
>
> Currently Phoenix requires that the SYSTEM.CATALOG table be single-region 
> because of the server-side row locks held for operations that impact a 
> table and all of its views. For example, adding/removing a column from a 
> base table pushes this change to all views.
> As an alternative to making the SYSTEM.CATALOG transactional (PHOENIX-2431), 
> when a new table is created we can do a lazy cleanup of any rows that may be 
> left over from a failed DDL call (kudos to [~lhofhansl] for coming up with 
> this idea). To implement this efficiently, we'd also need to do PHOENIX-2051 
> so that we can efficiently find derived views.
> The implementation would rely on an optimistic concurrency model based on 
> checking our sequence numbers for each table/view before/after updating. Each 
> table/view row would be individually locked for its change (metadata for a 
> view or table cannot span regions due to our split policy), with the sequence 
> number being incremented under lock and then returned to the client.
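A minimal sketch of the optimistic-concurrency idea described above, for illustration only (the class and member names below are hypothetical and do not correspond to Phoenix's actual metadata code):

{code:java}
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: apply a DDL change to one table/view row only if the caller
// saw the latest sequence number; otherwise report a conflict so the client retries.
class CatalogWriterSketch {
    private final ConcurrentHashMap<String, Long> sequenceNumbers = new ConcurrentHashMap<>();
    private final ConcurrentHashMap<String, Object> rowLocks = new ConcurrentHashMap<>();

    /** Returns the new sequence number, or -1 if a concurrent change was detected. */
    long applyDdl(String tableOrViewKey, long expectedSeq, Runnable mutation) {
        Object lock = rowLocks.computeIfAbsent(tableOrViewKey, k -> new Object());
        synchronized (lock) {                                   // lock only this table/view row
            long currentSeq = sequenceNumbers.getOrDefault(tableOrViewKey, 0L);
            if (currentSeq != expectedSeq) {
                return -1L;                                     // someone else changed the row; caller retries
            }
            mutation.run();                                     // write the new metadata row(s)
            long newSeq = currentSeq + 1;
            sequenceNumbers.put(tableOrViewKey, newSeq);        // increment under the lock
            return newSeq;                                      // handed back to the client
        }
    }
}
{code}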



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-3534) Support multi region SYSTEM.CATALOG table

2018-07-11 Thread Hadoop QA (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3534?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hadoop QA updated PHOENIX-3534:
---
Attachment: PHOENIX-3534-v2.patch

> Support multi region SYSTEM.CATALOG table
> -
>
> Key: PHOENIX-3534
> URL: https://issues.apache.org/jira/browse/PHOENIX-3534
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 5.0.0, 4.15.0
>
> Attachments: PHOENIX-3534-v2.patch, PHOENIX-3534.patch
>
>
> Currently Phoenix requires that the SYSTEM.CATALOG table be single-region 
> because of the server-side row locks held for operations that impact a 
> table and all of its views. For example, adding/removing a column from a 
> base table pushes this change to all views.
> As an alternative to making the SYSTEM.CATALOG transactional (PHOENIX-2431), 
> when a new table is created we can do a lazy cleanup of any rows that may be 
> left over from a failed DDL call (kudos to [~lhofhansl] for coming up with 
> this idea). To implement this efficiently, we'd also need to do PHOENIX-2051 
> so that we can efficiently find derived views.
> The implementation would rely on an optimistic concurrency model based on 
> checking our sequence numbers for each table/view before/after updating. Each 
> table/view row would be individually locked for its change (metadata for a 
> view or table cannot span regions due to our split policy), with the sequence 
> number being incremented under lock and then returned to the client.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4528) PhoenixAccessController checks permissions only at table level when creating views

2018-01-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16325437#comment-16325437
 ] 

Hadoop QA commented on PHOENIX-4528:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12906012/PHOENIX-4528.001.patch
  against master branch at commit 27d6582827b9306e66d3bfd430c6186ac165fb08.
  ATTACHMENT ID: 12906012

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+verifyAllowed(grantPermissions("C", regularUser1, 
surroundWithDoubleQuotes(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE), true), 
superUser1);
+verifyAllowed(grantPermissions("C", regularUser1, 
surroundWithDoubleQuotes(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE), true), 
superUser1);
+verifyAllowed(grantPermissions("RX", regularUser1, 
surroundWithDoubleQuotes(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE), true), 
superUser1);
+// Use AccessControlClient API's if the 
accessController is an instance of 
org.apache.hadoop.hbase.security.access.AccessController
+
userPermissions.addAll(AccessControlClient.getUserPermissions(connection, 
tableName.getNameAsString()));
+connection, 
AuthUtil.toGroupEntry(tableName.getNamespaceAsString(;
+
getUserPermsFromUserDefinedAccessController(userPermissions, connection, 
(AccessControlService.Interface) service);
+private void getUserPermsFromUserDefinedAccessController(final 
List userPermissions, Connection connection, 
AccessControlService.Interface service) {
+AccessControlProtos.GetUserPermissionsRequest.Builder 
builderTablePerms = AccessControlProtos.GetUserPermissionsRequest
+AccessControlProtos.GetUserPermissionsRequest 
requestTablePerms = builderTablePerms.build();

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1709//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1709//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1709//console

This message is automatically generated.

> PhoenixAccessController checks permissions only at table level when creating 
> views
> --
>
> Key: PHOENIX-4528
> URL: https://issues.apache.org/jira/browse/PHOENIX-4528
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>Assignee: Karan Mehta
> Attachments: PHOENIX-4528.001.patch, PHOENIX-4528.repro-test.diff
>
>
> The {{PhoenixAccessController#preCreateTable()}} method is invoked every time 
> a user wants to create a view on a base table. The {{requireAccess()}} method 
> takes the tableName as a parameter and checks for user permissions only at 
> that table level. The correct approach is to also check permissions at the 
> namespace level, since it is a larger scope than the per-table level.
> For example, if the table name is {{TEST_SCHEMA.TEST_TABLE}}, it will be created 
> as the {{TEST_SCHEMA:TEST_TABLE}} HBase table if namespace mapping is enabled. 
> View creation on this table would fail if permissions are granted on just 
> {{TEST_SCHEMA}} and not on {{TEST_TABLE}}. It works correctly if the same 
> permissions are also granted at the table level.
> FYI. [~ankit.singhal] [~twdsi...@gmail.com]
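A minimal sketch of the intended check, assuming hypothetical grant bookkeeping rather than the real HBase AccessChecker/AccessControlClient APIs:

{code:java}
import java.util.Set;

// Hypothetical sketch (not PhoenixAccessController code): allow view creation if the
// required action is granted either on the enclosing namespace or on the table itself.
class ViewCreatePermissionSketch {
    private final Set<String> grants; // e.g. "user1@ns:TEST_SCHEMA:CREATE" or "user1@table:TEST_SCHEMA:TEST_TABLE:CREATE"

    ViewCreatePermissionSketch(Set<String> grants) {
        this.grants = grants;
    }

    boolean canCreateView(String user, String namespace, String table, String action) {
        return grants.contains(user + "@ns:" + namespace + ":" + action)                    // namespace-level grant
            || grants.contains(user + "@table:" + namespace + ":" + table + ":" + action);  // table-level grant
    }
}
{code}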



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4523) phoenix.schema.isNamespaceMappingEnabled problem

2018-01-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16323638#comment-16323638
 ] 

Hadoop QA commented on PHOENIX-4523:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12905805/PHOENIX-4523.001.4.x-HBase-0.98.patch
  against 4.x-HBase-0.98 branch at commit 
27d6582827b9306e66d3bfd430c6186ac165fb08.
  ATTACHMENT ID: 12905805

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+void createSysMutexTableIfNotExists(HBaseAdmin admin, ReadOnlyProps props) throws IOException, SQLException {
+// Check for both SYSTEM.MUTEX and SYSTEM:MUTEX and donot proceed if either of them exists
+if(admin.tableExists(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME) || admin.tableExists(TableName.valueOf(
+PhoenixDatabaseMetaData.SYSTEM_SCHEMA_NAME,PhoenixDatabaseMetaData.SYSTEM_MUTEX_TABLE_NAME))) {
+if (!Iterables.isEmpty(Iterables.filter(Throwables.getCausalChain(e), AccessDeniedException.class)) ||
+!Iterables.isEmpty(Iterables.filter(Throwables.getCausalChain(e), TableExistsException.class))) {
+logger.info(String.format("Destination Table %s already exists. No migration needed.", destTableName));
+doNothing().when(cqs).createSysMutexTableIfNotExists(any(HBaseAdmin.class), any(ReadOnlyProps.class));
+when(cqs.getSystemTableNamesInDefaultNamespace(any(HBaseAdmin.class))).thenReturn(Collections. emptyList());

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1708//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1708//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1708//console

This message is automatically generated.

> phoenix.schema.isNamespaceMappingEnabled problem
> 
>
> Key: PHOENIX-4523
> URL: https://issues.apache.org/jira/browse/PHOENIX-4523
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
>Reporter: Flavio Pompermaier
>Assignee: Karan Mehta
> Fix For: 5.0.0, 4.14.0, 4.13.2-cdh5.11.2
>
> Attachments: PHOENIX-4523.001.4.x-HBase-0.98.patch, 
> PHOENIX-4523.001.patch
>
>
> I'm using the Phoenix 4.13 parcel for CDH 5.11.2, and enabling schemas made my 
> code unusable.
> I think that this is not a bug of the CDH release, but of all 4.13.x releases.
> I have many parallel Phoenix connections and I always get the following 
> exception:
> {code:java}
> Caused by: java.sql.SQLException: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.TableExistsException):
>  SYSTEM:MUTEX
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2492)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2384)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2384)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:270)
> {code}
> This is caused by the fact that the SYSTEM tables are recreated every time, 
> and this cannot be done by several connections simultaneously.
> While debugging the issue I found that in 
> ConnectionQueryServicesImpl.createSysMutexTable() the call to 
> getSystemTableNames() always returns an empty array, so the SYSTEM:MUTEX 
> table is always recreated.
> This is because getSystemTableNames() doesn't consider the case where the system 
> tables have namespace mapping enabled. Right now that method tries to get all tables 
> starting with *SYSTEM.\**, while it should get the list of *SYSTEM:\** 
> tables.
> I hope this could get fixed very soon,
> Flavio

[jira] [Commented] (PHOENIX-4523) phoenix.schema.isNamespaceMappingEnabled problem

2018-01-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321967#comment-16321967
 ] 

Hadoop QA commented on PHOENIX-4523:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12905619/PHOENIX-4523.001.patch
  against master branch at commit 01642d5f948fb01f61e65d1bd58ff2661a8918db.
  ATTACHMENT ID: 12905619

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+void createSysMutexTableIfNotExists(HBaseAdmin admin, ReadOnlyProps props) throws IOException, SQLException {
+if(admin.tableExists(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME) || admin.tableExists(TableName.valueOf(
+PhoenixDatabaseMetaData.SYSTEM_SCHEMA_NAME,PhoenixDatabaseMetaData.SYSTEM_MUTEX_TABLE_NAME))) {
+if(!Iterables.isEmpty(Iterables.filter(Throwables.getCausalChain(e), AccessDeniedException.class)) ||
+!Iterables.isEmpty(Iterables.filter(Throwables.getCausalChain(e), org.apache.hadoop.hbase.TableNotFoundException.class))) {
+logger.info(String.format("Destination Table %s already exists. No migration needed.", destTableName));
+doNothing().when(cqs).createSysMutexTableIfNotExists(any(HBaseAdmin.class), any(ReadOnlyProps.class));
+when(cqs.getSystemTableNamesInDefaultNamespace(any(HBaseAdmin.class))).thenReturn(Collections. emptyList());

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1706//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1706//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1706//console

This message is automatically generated.

> phoenix.schema.isNamespaceMappingEnabled problem
> 
>
> Key: PHOENIX-4523
> URL: https://issues.apache.org/jira/browse/PHOENIX-4523
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.1
>Reporter: Flavio Pompermaier
>Assignee: Karan Mehta
> Attachments: PHOENIX-4523.001.patch
>
>
> I'm using the Phoenix 4.13 parcel for CDH 5.11.2, and enabling schemas made my 
> code unusable.
> I think that this is not a bug of the CDH release, but of all 4.13.x releases.
> I have many parallel Phoenix connections and I always get the following 
> exception:
> {code:java}
> Caused by: java.sql.SQLException: 
> org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hbase.TableExistsException):
>  SYSTEM:MUTEX
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2492)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2384)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2384)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>   at java.sql.DriverManager.getConnection(DriverManager.java:664)
>   at java.sql.DriverManager.getConnection(DriverManager.java:270)
> {code}
> This is caused by the fact that the SYSTEM tables are recreated every time, 
> and this cannot be done by several connections simultaneously.
> While debugging the issue I found that in 
> ConnectionQueryServicesImpl.createSysMutexTable() the call to 
> getSystemTableNames() always returns an empty array, so the SYSTEM:MUTEX 
> table is always recreated.
> This is because getSystemTableNames() doesn't consider the case where the system 
> tables have namespace mapping enabled. Right now that method tries to get all tables 
> starting with *SYSTEM.\**, while it should get the list of *SYSTEM:\** 
> tables.
> I hope this could get fixed very soon,
> Flavio
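A minimal sketch of the namespace-aware lookup described above, for illustration only (the helper below is hypothetical and does not mirror the actual ConnectionQueryServicesImpl code):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: list system tables whether or not namespace mapping is enabled.
class SystemTableListerSketch {
    /**
     * @param allTableNames           all table names known to HBase, e.g. "SYSTEM.CATALOG" or "SYSTEM:MUTEX"
     * @param namespaceMappingEnabled value of phoenix.schema.isNamespaceMappingEnabled
     */
    static List<String> systemTables(List<String> allTableNames, boolean namespaceMappingEnabled) {
        String prefix = namespaceMappingEnabled ? "SYSTEM:" : "SYSTEM.";
        List<String> result = new ArrayList<>();
        for (String name : allTableNames) {
            if (name.startsWith(prefix)) {   // match SYSTEM:* when namespace-mapped, SYSTEM.* otherwise
                result.add(name);
            }
        }
        return result;
    }
}
{code}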



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4526) PhoenixStorageHandler doesn't work with upper case in phoenix.rowkeys

2018-01-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16321914#comment-16321914
 ] 

Hadoop QA commented on PHOENIX-4526:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12905625/PHOENIX-4526.patch
  against master branch at commit 01642d5f948fb01f61e65d1bd58ff2661a8918db.
  ATTACHMENT ID: 12905625

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1707//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1707//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1707//console

This message is automatically generated.

> PhoenixStorageHandler doesn't work with upper case in phoenix.rowkeys
> -
>
> Key: PHOENIX-4526
> URL: https://issues.apache.org/jira/browse/PHOENIX-4526
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Choi JaeHwan
>  Labels: HivePhoenix
> Fix For: 4.13.2
>
> Attachments: PHOENIX-4526.patch
>
>
> If you write the Phoenix row keys in uppercase, you will get the following error.
> Hive lowercases the field column names, but the values in the 
> phoenix.rowkeys property are not lowercased to match.
> {code}
> CREATE TABLE `PROFILE_PHOENIX_CLONE4` (
>   USER_ID STRING COMMENT 'from deserializer'
>   ,MARRIED STRING COMMENT 'from deserializer'
>   ,USER_NAME STRING COMMENT 'from deserializer'
>   ,BIRTH STRING COMMENT 'from deserializer'
>   ,WEIGHT FLOAT COMMENT 'from deserializer'
>   ,HEIGHT DOUBLE COMMENT 'from deserializer'
>   ,CHILD STRING COMMENT 'from deserializer'
>   ,IS_MALE BOOLEAN COMMENT 'from deserializer'
>   ,PHONE STRING COMMENT 'from deserializer'
>   ,EMAIL STRING COMMENT 'from deserializer'
>   ,CREATE_TIME TIMESTAMP COMMENT 'from deserializer'
> ) COMMENT '한글 HBase 테이블'
> STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler' 
> TBLPROPERTIES (
>   "phoenix.table.name"="jackdb_PROFILE_PHOENIX_CLONE4"
>   ,"phoenix.zookeeper.quorum"="qa3.nexr.com,qa4.nexr.com,qa5.nexr.com"
>   ,"phoenix.rowkeys"="USER_ID,MARRIED"
>   ,"phoenix.zookeeper.client.port"="2181"
>   ,"phoenix.zookeeper.znode.parent"="/hbase"
>   
> ,"phoenix.column.mapping"="USER_ID:USER_ID,MARRIED:MARRIED,USER_NAME:USER_NAME,BIRTH:BIRTH,WEIGHT:WEIGHT,HEIGHT:HEIGHT,CHILD:CHILD,IS_MALE:IS_MALE,PHONE:PHONE,EMAIL:EMAIL,CREATE_TIME:CREATE_TIME"
>   ,"ndap.table.storageType"="PHOENIX"
>   ,"phoenix.table.options"="SALT_BUCKETS=10,DATA_BLOCK_ENCODING='DIFF'"
> )
> {code}
> {code}
> 2018-01-04T10:37:50,186 INFO  [HiveServer2-Background-Pool: Thread-10310]: 
> ql.Driver (Driver.java:execute(1735)) - Executing 
> command(queryId=hive_20180104103750_424baf0b-141a-450c-ae78-8f9be8a743a8): 
> CREATE TABLE `jackdb`.`PROFILE_PHOENIX_CLONE4` (
>   USER_ID STRING COMMENT 'from deserializer'
>   ,MARRIED STRING COMMENT 'from deserializer'
>   ,USER_NAME STRING COMMENT 'from deserializer'
>   ,BIRTH STRING COMMENT 'from deserializer'
>   ,WEIGHT FLOAT COMMENT 'from deserializer'
>   ,HEIGHT DOUBLE COMMENT 'from deserializer'
>   ,CHILD STRING COMMENT 'from deserializer'
>   ,IS_MALE BOOLEAN COMMENT 'from deserializer'
>   ,PHONE STRING COMMENT 'from deserializer'
>   ,EMAIL STRING COMMENT 'from deserializer'
>   ,CREATE_TIME TIMESTAMP COMMENT 'from deserializer'
> ) COMMENT '한글 HBase 테이블'
> STORED BY 'org.apache.phoenix.hive.PhoenixStorageHandler' 
> TBLPROPERTIES (
>   "phoenix.table.name"="jackdb_PROFILE_PHOENIX_CLONE4"
>   ,"phoenix.zookeeper.quorum"="qa3.nexr.com,qa4.nexr.com,qa5.nexr.com"
>   ,"phoenix.rowkeys"="USER_ID,MARRIED"
>   ,"phoenix.zookeeper.client.port"="2181"
>   ,"phoenix.zookeeper.znode.parent"="/hbase"
>   
> ,"phoenix.column.mapping"="USER_ID:USER_ID,MARRIED:MARRIED,USER_NAME:USER_NAME,BIRTH:BIRTH,WEIGHT:

[jira] [Commented] (PHOENIX-4514) An incorrect key object is used in SequenceManager#validateSequences

2018-01-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16320020#comment-16320020
 ] 

Hadoop QA commented on PHOENIX-4514:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12905409/PHOENIX.4514.v0.patch
  against master branch at commit 01642d5f948fb01f61e65d1bd58ff2661a8918db.
  ATTACHMENT ID: 12905409

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1705//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1705//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1705//console

This message is automatically generated.

> An incorrect key object is used in SequenceManager#validateSequences
> ---
>
> Key: PHOENIX-4514
> URL: https://issues.apache.org/jira/browse/PHOENIX-4514
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Minor
> Attachments: PHOENIX.4514.v0.patch
>
>
> nextSequences.get( i ) -> nextSequences.get( i ).getSequenceKey()
> {code:title=SequenceManager.java}
> for (int i = 0; i < nextSequences.size(); i++) {
> sequencePosition[i] = sequenceMap.get(nextSequences.get(i)).getIndex();
> }
> {code}
> It won't cause a bug, since the implementation of SequenceAllocation#hashCode is 
> identical to SequenceKey#hashCode. However, it is still a potential bug, so I 
> believe a fix is necessary.
> {code:title=SequenceAllocation.java}
> @Override
> public int hashCode() {
> return sequenceKey.hashCode();
> }
> {code}
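For clarity, the loop with the suggested fix applied would look roughly like this (a sketch based on the snippet above, not the committed patch):

{code:java}
// Suggested fix from the description: look up the map entry by the SequenceKey,
// not by the SequenceAllocation object itself.
for (int i = 0; i < nextSequences.size(); i++) {
    sequencePosition[i] = sequenceMap.get(nextSequences.get(i).getSequenceKey()).getIndex();
}
{code}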



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4508) Wrong query plan generation

2018-01-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16315174#comment-16315174
 ] 

Hadoop QA commented on PHOENIX-4508:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12904968/PHOENIX-4508.patch
  against master branch at commit 2136b002c37db478ffea11233f9ebb80276d2594.
  ATTACHMENT ID: 12904968

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1704//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1704//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1704//console

This message is automatically generated.

> Wrong query plan generation
> ---
>
> Key: PHOENIX-4508
> URL: https://issues.apache.org/jira/browse/PHOENIX-4508
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2
>Reporter: Flavio Pompermaier
>Assignee: Maryann Xue
>  Labels: planner, query
> Attachments: PHOENIX-4508.patch
>
>
> In my Phoenix tables I found that one query ends successfully while another 
> one, logically equivalent, does not (unless I apply some tuning to the 
> timeouts).
> The two queries extract the same data but, while the first query terminates, the 
> second does not.
> PS: without the USE_SORT_MERGE_JOIN hint, neither query worked.
> 
> h2. First query
> {code:sql}
> SELECT /*+ USE_SORT_MERGE_JOIN */ COUNT(*) 
> FROM PEOPLE ds JOIN MYTABLE l ON ds.PERSON_ID = l.LOCALID
> WHERE l.EID IS NULL AND l.DSID = 'PEOPLE' AND l.HAS_CANDIDATES = FALSE;
> {code}
> +--------------------------------------------------------------------------------------------------+-----------------+----------------+----------------+
> | PLAN                                                                                               | EST_BYTES_READ  | EST_ROWS_READ  | EST_INFO_TS    |
> +--------------------------------------------------------------------------------------------------+-----------------+----------------+----------------+
> | SORT-MERGE-JOIN (INNER) TABLES                                                                     | 14155777900     | 12077867       | 1513754378759  |
> | CLIENT 42-CHUNK 6168903 ROWS 1132461 BYTES PARALLEL 3-WAY FULL SCAN OVER PEOPLE                    | 14155777900     | 12077867       | 1513754378759  |
> | SERVER FILTER BY FIRST KEY ONLY                                                                    | 14155777900     | 12077867       | 1513754378759  |
> | CLIENT MERGE SORT                                                                                  | 14155777900     | 12077867       | 1513754378759  |
> | AND (SKIP MERGE)                                                                                   | 14155777900     | 12077867       | 1513754378759  |
> | CLIENT 15-CHUNK 5908964 ROWS 2831155679 BYTES PARALLEL 15-WAY RANGE SCAN OVER MYTABLE [0] - [2]    | 14155777900     | 12077867       | 1513754378759  |
> | SERVER FILTER BY (EID IS NULL AND DSID = 'PEOPLE' AND HAS_CANDIDATES = false)                      | 14155777900     | 12077867       | 1513754378759  |
> | SERVER SORTED BY [L.LOCALID]                                                                       | 14155777900     | 12077867       | 1513754378759  |
> | CLIENT MERGE SORT                                                                                  | 14155777900     | 12077867       | 1513754378759  |
> | CLIENT AGGREGATE INTO SINGLE ROW                                                                   | 14155777900     | 12077867       | 1513754378759  |
> +-

[jira] [Commented] (PHOENIX-4522) Fail to remove the schema from client-side cache

2018-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16314992#comment-16314992
 ] 

Hadoop QA commented on PHOENIX-4522:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12904956/PHOENIX-4522.v0.patch
  against master branch at commit 2136b002c37db478ffea11233f9ebb80276d2594.
  ATTACHMENT ID: 12904956

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1703//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1703//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1703//console

This message is automatically generated.

> Fail to remove the schema from client-side cache
> 
>
> Key: PHOENIX-4522
> URL: https://issues.apache.org/jira/browse/PHOENIX-4522
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Priority: Critical
> Fix For: 4.13.0, 5.0.0
>
> Attachments: PHOENIX-4522.v0.patch
>
>
> We always pass the wrong key when removing the schema:
> {code:title=PMetaDataImpl.java}
> @Override
> public void removeSchema(PSchema schema, long schemaTimeStamp) {
> this.metaData.schemas.remove(SchemaUtil.getSchemaKey(schema.getSchemaName()));
> }
> {code}
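A generic illustration of the pitfall described above (plain Java, not Phoenix code): if map entries are inserted under one key form but removed under another, the remove silently does nothing and the cached entry leaks.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Illustrative only: mismatched key forms make Map.remove() a silent no-op.
class KeyMismatchDemo {
    public static void main(String[] args) {
        Map<String, String> schemas = new HashMap<>();
        schemas.put("S1.timestamp=10", "schema S1");   // hypothetical composite key used on insert

        schemas.remove("S1");                          // wrong key form: nothing is removed
        System.out.println(schemas.size());            // prints 1 -> the schema is still cached

        schemas.remove("S1.timestamp=10");             // matching key form actually evicts it
        System.out.println(schemas.size());            // prints 0
    }
}
{code}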



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4516) Phoenix Kafka plugin returns the following message Exception in thread "main" org.apache.kafka.common.config.ConfigException: Missing required configuration "partitio

2018-01-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16311530#comment-16311530
 ] 

Hadoop QA commented on PHOENIX-4516:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12904620/PHOENIX-4516-00.patch
  against master branch at commit 93306e9e28a0f13cbac87055c30fb9a781ae3345.
  ATTACHMENT ID: 12904620

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1702//console

This message is automatically generated.

> Phoenix Kafka plugin returns the following message Exception in thread "main" 
> org.apache.kafka.common.config.ConfigException: Missing required 
> configuration "partition.assignment.strategy" which has no default value.
> 
>
> Key: PHOENIX-4516
> URL: https://issues.apache.org/jira/browse/PHOENIX-4516
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
> Environment: the Phoenix Kafka plugin webpage has incomplete example 
> for filling out consumer properties files.
>Reporter: Artem Ervits
>Assignee: Artem Ervits
>Priority: Trivial
> Fix For: 4.13.1
>
> Attachments: PHOENIX-4516-00.patch
>
>
> getting the following message
> {noformat}
> Exception in thread "main" org.apache.kafka.common.config.ConfigException: 
> Missing required configuration "partition.assignment.strategy" which has no 
> default value.
>   at org.apache.kafka.common.config.ConfigDef.parse(ConfigDef.java:124)
>   at 
> org.apache.kafka.common.config.AbstractConfig.(AbstractConfig.java:48)
>   at 
> org.apache.kafka.clients.consumer.ConsumerConfig.(ConsumerConfig.java:194)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:430)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:413)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.(KafkaConsumer.java:400)
>   at 
> org.apache.phoenix.kafka.consumer.PhoenixConsumer.intializeKafka(PhoenixConsumer.java:128)
>   at 
> org.apache.phoenix.kafka.consumer.PhoenixConsumer.intializeKafka(PhoenixConsumer.java:99)
>   at 
> org.apache.phoenix.kafka.consumer.PhoenixConsumer.(PhoenixConsumer.java:68)
>   at 
> org.apache.phoenix.kafka.consumer.PhoenixConsumerTool.run(PhoenixConsumerTool.java:98)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>   at 
> org.apache.phoenix.kafka.consumer.PhoenixConsumerTool.main(PhoenixConsumerTool.java:104)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:239)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:153)
> {noformat}
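The exception above is raised by the Kafka consumer's configuration validation before the Phoenix consumer ever starts polling, so the fix on the user side is to add the missing key to the consumer properties file. A hedged illustration is below; the assignor class shown is only an example of a valid value for the Kafka client in the stack trace, and the rest of the required properties are omitted:

{noformat}
# assumed addition to the plugin's consumer properties file (illustrative only)
partition.assignment.strategy=org.apache.kafka.clients.consumer.RangeAssignor
{noformat}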



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4513) Fix the recursive call in ExecutableExplainStatement#getOperation

2018-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16310477#comment-16310477
 ] 

Hadoop QA commented on PHOENIX-4513:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12904471/PHOENIX-4513.v0.patch
  against master branch at commit 93306e9e28a0f13cbac87055c30fb9a781ae3345.
  ATTACHMENT ID: 12904471

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1701//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1701//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1701//console

This message is automatically generated.

> Fix the recursive call in ExecutableExplainStatement#getOperation
> -
>
> Key: PHOENIX-4513
> URL: https://issues.apache.org/jira/browse/PHOENIX-4513
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
> Fix For: 4.13.0, 5.0.0
>
> Attachments: PHOENIX-4513.v0.patch
>
>
> {code}
> @Override
> public Operation getOperation() {
>   return this.getOperation();
> }
> {code}
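The override above calls itself and will recurse until a StackOverflowError. The fix is to delegate to the wrapped statement instead; a sketch is below (the field name "statement" is an assumption, not necessarily what the committed patch uses):

{code:java}
// Sketch of the non-recursive version: delegate to the wrapped CompilableStatement.
// The field name "statement" is hypothetical.
@Override
public Operation getOperation() {
    return this.statement.getOperation();
}
{code}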



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4496) Fix RowValueConstructorIT and IndexMetadataIT

2018-01-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16309591#comment-16309591
 ] 

Hadoop QA commented on PHOENIX-4496:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12904390/PHOENIX-4496.patch
  against master branch at commit f7142879f33cae236e0530a8ed4eeaad1542d66a.
  ATTACHMENT ID: 12904390

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 javac{color}.  The patch appears to cause mvn compile goal to 
fail .

Compilation errors resume:
[ERROR] COMPILATION ERROR : 
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java:[104,49]
  is not 
abstract and does not override abstract method 
filterKeyValue(org.apache.hadoop.hbase.Cell) in 
org.apache.hadoop.hbase.filter.Filter
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.0:compile (default-compile) on 
project phoenix-core: Compilation failure
[ERROR] 
/home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/scanner/ScannerBuilder.java:[104,49]
  is not 
abstract and does not override abstract method 
filterKeyValue(org.apache.hadoop.hbase.Cell) in 
org.apache.hadoop.hbase.filter.Filter
[ERROR] 
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :phoenix-core


Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1700//console

This message is automatically generated.

> Fix RowValueConstructorIT and IndexMetadataIT
> -
>
> Key: PHOENIX-4496
> URL: https://issues.apache.org/jira/browse/PHOENIX-4496
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4496.patch
>
>
> {noformat}
> [ERROR] Tests run: 46, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 117.444 s <<< FAILURE! - in org.apache.phoenix.end2end.RowValueConstructorIT
> [ERROR] 
> testRVCLastPkIsTable1stPkIndex(org.apache.phoenix.end2end.RowValueConstructorIT)
>   Time elapsed: 4.516 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.RowValueConstructorIT.testRVCLastPkIsTable1stPkIndex(RowValueConstructorIT.java:1584)
> {noformat}
> {noformat}
> [ERROR] Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 79.381 s <<< FAILURE! - in org.apache.phoenix.end2end.index.IndexMetadataIT
> [ERROR] 
> testMutableTableOnlyHasPrimaryKeyIndex(org.apache.phoenix.end2end.index.IndexMetadataIT)
>   Time elapsed: 4.504 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.index.IndexMetadataIT.helpTestTableOnlyHasPrimaryKeyIndex(IndexMetadataIT.java:662)
> at 
> org.apache.phoenix.end2end.index.IndexMetadataIT.testMutableTableOnlyHasPrimaryKeyIndex(IndexMetadataIT.java:623)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4510) performance.py fails to run due to message invalid or corrupt jarfile

2018-01-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308912#comment-16308912
 ] 

Hadoop QA commented on PHOENIX-4510:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12904287/PHOENIX-4510-00.patch
  against master branch at commit f7142879f33cae236e0530a8ed4eeaad1542d66a.
  ATTACHMENT ID: 12904287

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1697//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1697//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1697//console

This message is automatically generated.

> performance.py fails to run due to message invalid or corrupt jarfile
> -
>
> Key: PHOENIX-4510
> URL: https://issues.apache.org/jira/browse/PHOENIX-4510
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
> Environment: mvn: 3.5.2
> jdk: 1.8.0_144
> OSX: 10.12.6
> git: f7142879f33cae236e0530a8ed4eeaad1542d66a
>Reporter: Artem Ervits
>Assignee: Artem Ervits
> Fix For: 4.13.0, 5.0.0
>
> Attachments: PHOENIX-4510-00.patch
>
>
> {noformat}
> /usr/hdp/current/phoenix-client/bin/performance.py 
> sme-datastorage0.field.hortonworks.com,sme-datastorage1.field.hortonworks.com,sme-datastorage2.field.hortonworks.com:2181:/hbase-unsecure
>  10
> Phoenix Performance Evaluation Script 1.0
> -
> Creating performance table...
> 18/01/02 21:18:27 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> no rows upserted
> Time: 4.929 sec(s)
> Query # 1 - Count - SELECT COUNT(1) FROM PERFORMANCE_10;
> Query # 2 - Group By First PK - SELECT HOST FROM PERFORMANCE_10 GROUP BY HOST;
> Query # 3 - Group By Second PK - SELECT DOMAIN FROM PERFORMANCE_10 GROUP BY 
> DOMAIN;
> Query # 4 - Truncate + Group By - SELECT TRUNC(DATE,'DAY') DAY FROM 
> PERFORMANCE_10 GROUP BY TRUNC(DATE,'DAY');
> Query # 5 - Filter + Count - SELECT COUNT(1) FROM PERFORMANCE_10 WHERE 
> CORE<10;
> Generating and upserting data...
> Error: Invalid or corrupt jarfile /tmp/data_3U0HpD.csv
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4509) performance.sh should be called performance.py

2018-01-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308887#comment-16308887
 ] 

Hadoop QA commented on PHOENIX-4509:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12904285/PHOENIX-4509-00.patch
  against master branch at commit f7142879f33cae236e0530a8ed4eeaad1542d66a.
  ATTACHMENT ID: 12904285

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.AbsFunctionEnd2EndIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1696//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1696//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1696//console

This message is automatically generated.

> performance.sh should be called performance.py
> --
>
> Key: PHOENIX-4509
> URL: https://issues.apache.org/jira/browse/PHOENIX-4509
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
> Environment: mvn: 3.5.2
> jdk: 1.8.0_144
> OSX: 10.12.6
> git: f7142879f33cae236e0530a8ed4eeaad1542d66a
>Reporter: Artem Ervits
>Assignee: Artem Ervits
>Priority: Trivial
>  Labels: newbie
> Fix For: 4.13.0, 5.0.0
>
> Attachments: PHOENIX-4509-00.patch
>
>
> The performance.py script references an unknown performance.sh script in its 
> usage message:
> {noformat}
>  ./performance.py
> Performance script arguments not specified. Usage: performance.sh  
> 
> Example: performance.sh localhost 10
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4512) Account for change in Cell.DataType->Cell.Type (HBASE-19626)

2018-01-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308881#comment-16308881
 ] 

Hadoop QA commented on PHOENIX-4512:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12904296/PHOENIX-4512.001.patch
  against master branch at commit f7142879f33cae236e0530a8ed4eeaad1542d66a.
  ATTACHMENT ID: 12904296

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1699//console

This message is automatically generated.

> Account for change in Cell.DataType->Cell.Type (HBASE-19626)
> 
>
> Key: PHOENIX-4512
> URL: https://issues.apache.org/jira/browse/PHOENIX-4512
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4512.001.patch
>
>
> Some more compilation issues on the tail of Cell changes
> {noformat}
> [INFO] -
> [ERROR] COMPILATION ERROR :
> [INFO] -
> [ERROR] 
> /Users/jelser/projects/phoenix.git/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixKeyValueUtil.java:[28,36]
>  cannot find symbol
>   symbol:   class DataType
>   location: interface org.apache.hadoop.hbase.Cell
> [ERROR] 
> /Users/jelser/projects/phoenix.git/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixKeyValueUtil.java:[79,42]
>  cannot find symbol
>   symbol:   class DataType
>   location: class org.apache.phoenix.util.PhoenixKeyValueUtil
> [ERROR] 
> /Users/jelser/projects/phoenix.git/phoenix-core/src/main/java/org/apache/phoenix/util/IndexUtil.java:[664,24]
>  cannot find symbol
>   symbol: class DataType
> [ERROR] 
> /Users/jelser/projects/phoenix.git/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixKeyValueUtil.java:[60,60]
>  cannot find symbol
>   symbol:   variable DataType
>   location: class org.apache.phoenix.util.PhoenixKeyValueUtil
> [ERROR] 
> /Users/jelser/projects/phoenix.git/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixKeyValueUtil.java:[67,43]
>  cannot find symbol
>   symbol:   variable DataType
>   location: class org.apache.phoenix.util.PhoenixKeyValueUtil
> [ERROR] 
> /Users/jelser/projects/phoenix.git/phoenix-core/src/main/java/org/apache/phoenix/util/PhoenixKeyValueUtil.java:[74,26]
>  cannot find symbol
>   symbol:   variable DataType
>   location: class org.apache.phoenix.util.PhoenixKeyValueUtil
> [INFO] 6 errors
> {noformat}
> FYI [~sergey.soldatov]
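The errors come from the HBASE-19626 rename of the nested enum on Cell. A rough illustration of how call sites change (illustrative only, not the actual Phoenix patch):

{code:java}
import org.apache.hadoop.hbase.Cell;

// Illustrative only: HBASE-19626 renamed Cell.DataType to Cell.Type, so code that
// inspects a cell's type must switch to the new nested-enum name.
class CellTypeExample {
    static boolean isPut(Cell cell) {
        // before HBASE-19626 (no longer compiles): cell.getType() == Cell.DataType.Put
        return cell.getType() == Cell.Type.Put;
    }
}
{code}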



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4511) grammatical mistake with bulk_dataload.html

2018-01-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4511?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16308796#comment-16308796
 ] 

Hadoop QA commented on PHOENIX-4511:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12904289/PHOENIX-4511-00.patch
  against master branch at commit f7142879f33cae236e0530a8ed4eeaad1542d66a.
  ATTACHMENT ID: 12904289

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1698//console

This message is automatically generated.

> grammatical mistake with bulk_dataload.html
> ---
>
> Key: PHOENIX-4511
> URL: https://issues.apache.org/jira/browse/PHOENIX-4511
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
> Environment: svn: 1819904
>Reporter: Artem Ervits
>Assignee: Artem Ervits
>Priority: Trivial
>  Labels: newbie
> Fix For: 4.13.0, 5.0.0
>
> Attachments: PHOENIX-4511-00.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4493) Fix DropColumnIT

2017-12-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304255#comment-16304255
 ] 

Hadoop QA commented on PHOENIX-4493:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903764/PHOENIX-4493_v1.patch
  against master branch at commit 34693843abe4490b54fbd30512bf7d98d0f59c0d.
  ATTACHMENT ID: 12903764

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1695//console

This message is automatically generated.

> Fix DropColumnIT
> 
>
> Key: PHOENIX-4493
> URL: https://issues.apache.org/jira/browse/PHOENIX-4493
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4493.patch, PHOENIX-4493_v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4501) Fix IndexUsageIT

2017-12-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304253#comment-16304253
 ] 

Hadoop QA commented on PHOENIX-4501:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903762/PHOENIX-4501.patch
  against master branch at commit 34693843abe4490b54fbd30512bf7d98d0f59c0d.
  ATTACHMENT ID: 12903762

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1694//console

This message is automatically generated.

> Fix IndexUsageIT
> 
>
> Key: PHOENIX-4501
> URL: https://issues.apache.org/jira/browse/PHOENIX-4501
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4501.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4487) Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13

2017-12-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304095#comment-16304095
 ] 

Hadoop QA commented on PHOENIX-4487:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903737/PHOENIX-4487_v2.patch
  against master branch at commit 34693843abe4490b54fbd30512bf7d98d0f59c0d.
  ATTACHMENT ID: 12903737

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+if (currentServerSideTableTimeStamp <= MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_10_0 &&
+if (acquiredMutexLock = acquireUpgradeMutex(currentServerSideTableTimeStamp, mutexRowKey)) {
+TableName mutexName = SchemaUtil.getPhysicalTableName(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME, props);
+if (PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME.equals(mutexName) || !tableNames.contains(mutexName)) {

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1693//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1693//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1693//console

This message is automatically generated.

> Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13
> -
>
> Key: PHOENIX-4487
> URL: https://issues.apache.org/jira/browse/PHOENIX-4487
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.13.1, 4.13.2-cdh5.11.2
>Reporter: Flavio Pompermaier
>Assignee: James Taylor
> Attachments: PHOENIX-4487.patch, PHOENIX-4487_v2.patch
>
>
> Upgrading from the official Cloudera parcel that ships Phoenix 4.7 to the 
> latest unofficial parcel (4.13 on CDH 5.11.2), I hit the same error as 
> https://issues.apache.org/jira/browse/PHOENIX-4293 (except that in my case the 
> starting version was 4.7...).
> Creating the SYSTEM.MUTEX table manually fixed the problem.
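> 
> A minimal workaround sketch for that last step, using the plain HBase admin 
> API. The column family name "0" and the 15-minute TTL are assumptions here; 
> check ConnectionQueryServicesImpl in the target Phoenix version for the exact 
> table descriptor before creating anything.
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.HColumnDescriptor;
> import org.apache.hadoop.hbase.HTableDescriptor;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Admin;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
> 
> public class CreateSystemMutex {
>     public static void main(String[] args) throws Exception {
>         Configuration conf = HBaseConfiguration.create();
>         try (Connection conn = ConnectionFactory.createConnection(conf);
>              Admin admin = conn.getAdmin()) {
>             TableName mutex = TableName.valueOf("SYSTEM.MUTEX");
>             if (!admin.tableExists(mutex)) {
>                 HTableDescriptor desc = new HTableDescriptor(mutex);
>                 // assumed family name and TTL; verify against your Phoenix release
>                 desc.addFamily(new HColumnDescriptor("0").setTimeToLive(15 * 60));
>                 admin.createTable(desc);
>             }
>         }
>     }
> }
> {code}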



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4499) Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and pull optimize() out of getExplainPlan() - HBase 4.x-HBase-1.2

2017-12-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303963#comment-16303963
 ] 

Hadoop QA commented on PHOENIX-4499:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12903726/PHOENIX4437-4.x-HBase-1.2.patch
  against 4.x-HBase-1.2 branch at commit 
34693843abe4490b54fbd30512bf7d98d0f59c0d.
  ATTACHMENT ID: 12903726

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+StatementPlan compilePlan = compilableStmt.compilePlan(stmt, 
Sequence.ValueOp.VALIDATE_SEQUENCE);
+// For a QueryPlan, we need to get its optimized plan; for a 
MutationPlan, its enclosed QueryPlan
+compilePlan = 
stmt.getConnection().getQueryServices().getOptimizer().optimize(stmt, dataPlan);

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.TransactionalViewIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1692//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1692//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1692//console

This message is automatically generated.

> Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and 
> pull optimize() out of getExplainPlan() - HBase 4.x-HBase-1.2
> --
>
> Key: PHOENIX-4499
> URL: https://issues.apache.org/jira/browse/PHOENIX-4499
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0
>Reporter: Pedro Boado
>Assignee: Maryann Xue
> Fix For: 4.14.0
>
> Attachments: PHOENIX4437-4.x-HBase-1.2.patch
>
>
> Cloned for applying patch to 4.x-HBase-1.2



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4493) Fix DropColumnIT

2017-12-25 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303330#comment-16303330
 ] 

Hadoop QA commented on PHOENIX-4493:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903662/PHOENIX-4493.patch
  against master branch at commit 34693843abe4490b54fbd30512bf7d98d0f59c0d.
  ATTACHMENT ID: 12903662

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1690//console

This message is automatically generated.

> Fix DropColumnIT
> 
>
> Key: PHOENIX-4493
> URL: https://issues.apache.org/jira/browse/PHOENIX-4493
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4493.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4489) HBase Connection leak in Phoenix MR Jobs

2017-12-24 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302889#comment-16302889
 ] 

Hadoop QA commented on PHOENIX-4489:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12903551/PHOENIX-4489.001.patch
  against master branch at commit 34693843abe4490b54fbd30512bf7d98d0f59c0d.
  ATTACHMENT ID: 12903551

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+.getAttributesMap() + " [scanCache, 
cacheBlock, scanBatch] : [" +
+aScan.getCaching() + ", " + 
aScan.getCacheBlocks() + ", " + aScan
+psplits.add(new 
PhoenixInputSplit(Collections.singletonList(aScan), regionSize, 
regionLocation));
+.get(0).getStartRow()) + " ~ " + 
Bytes.toStringBinary(scans.get(scans
+.get(0).getAttributesMap() + " [scanCache, 
cacheBlock, scanBatch] : " +
+"[" + scans.get(0).getCaching() + ", " + 
scans.get(0).getCacheBlocks()

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1689//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1689//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1689//console

This message is automatically generated.

> HBase Connection leak in Phoenix MR Jobs
> 
>
> Key: PHOENIX-4489
> URL: https://issues.apache.org/jira/browse/PHOENIX-4489
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>Assignee: Karan Mehta
> Attachments: PHOENIX-4489.001.patch
>
>
> Phoenix MR jobs use a custom class {{PhoenixInputFormat}} to determine the 
> splits and the parallelism of the work. The class directly opens an HBase 
> connection, which is never closed after use. Standalone MR jobs are not 
> really affected, but jobs that run through Phoenix-Spark can leak connections 
> if this is left unclosed, since those jobs run as part of the same JVM (see 
> the sketch below). 
> Apart from this, the connection should be instantiated with 
> {{HBaseFactoryProvider.getHConnectionFactory()}} instead of the default 
> factory. That is useful when a separate client runs these jobs and wants to 
> provide a custom implementation of {{HConnection}}. 
> [~jmahonin] Any ideas?
> [~jamestaylor] [~vincentpoon] Any concerns around this?
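> 
> A minimal sketch of the close-after-use idea, not the attached patch: the 
> connection opened for split calculation is scoped to the call with 
> try-with-resources. Class and method names are made up for illustration, and 
> the standard {{ConnectionFactory}} is used here rather than 
> {{HBaseFactoryProvider.getHConnectionFactory()}}.
> {code:java}
> import java.io.IOException;
> import java.util.List;
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.hbase.HRegionLocation;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
> import org.apache.hadoop.hbase.client.RegionLocator;
> 
> public class SplitRegionLookup {
>     // Opens and closes the connection within the same call so long-lived JVMs
>     // (e.g. Phoenix-Spark drivers) do not accumulate leaked connections.
>     static List<HRegionLocation> regionLocations(Configuration conf, String table)
>             throws IOException {
>         try (Connection connection = ConnectionFactory.createConnection(conf);
>              RegionLocator locator = connection.getRegionLocator(TableName.valueOf(table))) {
>             return locator.getAllRegionLocations();
>         } // both resources are closed here, even if an exception is thrown
>     }
> }
> {code}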



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4488) Cache config parameters for MetaDataEndPointImpl during initialization

2017-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16302016#comment-16302016
 ] 

Hadoop QA commented on PHOENIX-4488:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903454/PHOENIX-4488.patch
  against master branch at commit 34693843abe4490b54fbd30512bf7d98d0f59c0d.
  ATTACHMENT ID: 12903454

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1688//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1688//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1688//console

This message is automatically generated.

> Cache config parameters for MetaDataEndPointImpl during initialization
> --
>
> Key: PHOENIX-4488
> URL: https://issues.apache.org/jira/browse/PHOENIX-4488
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4488.patch
>
>
> For example, see this code (which is called often):
> {code}
> boolean blockWriteRebuildIndex = 
> env.getConfiguration().getBoolean(QueryServices.INDEX_FAILURE_BLOCK_WRITE, 
> QueryServicesOptions.DEFAULT_INDEX_FAILURE_BLOCK_WRITE);
> {code}
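> 
> A minimal sketch of the caching idea, not the actual patch: read the flag once 
> at initialization and serve a plain field afterwards. The property name string 
> below is a stand-in for the {{QueryServices}}/{{QueryServicesOptions}} 
> constants, and the real change belongs in MetaDataEndpointImpl's start-up path.
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> 
> public class CachedEndpointConfig {
>     private final boolean blockWriteRebuildIndex;
> 
>     public CachedEndpointConfig(Configuration conf) {
>         // evaluated once, when the coprocessor starts
>         this.blockWriteRebuildIndex = conf.getBoolean(
>                 "phoenix.index.failure.block.write", false); // stand-in key/default
>     }
> 
>     public boolean isBlockWriteRebuildIndex() {
>         return blockWriteRebuildIndex; // hot path: plain field read, no Configuration lookup
>     }
> }
> {code}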



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4487) Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13

2017-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301957#comment-16301957
 ] 

Hadoop QA commented on PHOENIX-4487:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903451/PHOENIX-4487.patch
  against master branch at commit 34693843abe4490b54fbd30512bf7d98d0f59c0d.
  ATTACHMENT ID: 12903451

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+if (currentServerSideTableTimeStamp <= 
MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_10_0 &&
+if (acquiredMutexLock = 
acquireUpgradeMutex(currentServerSideTableTimeStamp, mutexRowKey)) {
+TableName mutexName = 
SchemaUtil.getPhysicalTableName(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME, 
props);
+if 
(PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME.equals(mutexName) || 
!tableNames.contains(mutexName)) {

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1687//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1687//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1687//console

This message is automatically generated.

> Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13
> -
>
> Key: PHOENIX-4487
> URL: https://issues.apache.org/jira/browse/PHOENIX-4487
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.13.1, 4.13.2-cdh5.11.2
>Reporter: Flavio Pompermaier
>Assignee: James Taylor
> Attachments: PHOENIX-4487.patch
>
>
> Upgrading from the official Cloudera parcel that ships Phoenix 4.7 to the 
> latest unofficial parcel (4.13 on CDH 5.11.2), I hit the same error as 
> https://issues.apache.org/jira/browse/PHOENIX-4293 (except that in my case the 
> starting version was 4.7...).
> Creating the SYSTEM.MUTEX table manually fixed the problem.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16301858#comment-16301858
 ] 

Hadoop QA commented on PHOENIX-4466:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903435/PHOENIX-4466-v2.patch
  against master branch at commit 412329a7415302831954891285d291055328c28b.
  ATTACHMENT ID: 12903435

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ConcurrentMutationsIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1686//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1686//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1686//console

This message is automatically generated.

> java.lang.RuntimeException: response code 500 - Executing a spark job to 
> connect to phoenix query server and load data
> --
>
> Key: PHOENIX-4466
> URL: https://issues.apache.org/jira/browse/PHOENIX-4466
> Project: Phoenix
>  Issue Type: Bug
> Environment: HDP-2.6.3
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Minor
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4466-v2.patch, PHOENIX-4466.patch
>
>
> Steps to reproduce are as follows:
> 1. Start spark shell with 
> {code}
> spark-shell --jars /usr/hdp/current/phoenix-client/phoenix-thin-client.jar 
> {code}
> 2. Run the following to load data 
> {code}
> scala> val query = 
> sqlContext.read.format("jdbc").option("driver","org.apache.phoenix.queryserver.client.Driver").option("url","jdbc:phoenix:thin:url=http://<phoenix query server 
> hostname>:8765;serialization=PROTOBUF").option("dbtable","").load 
> {code}
> This failed with the following exception 
> {code:java}
> java.sql.SQLException: While closing connection
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:39)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.close(AvaticaConnection.java:156)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:153)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
>   at 
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:25)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
>   at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
>   at $iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
>   at $iwC$$iwC$$iwC.<init>(<console>:38)
>   at $iwC$$iwC.<init>(<console>:40)
>   at $iwC.<init>(<console>:42)
>   at <init>(<console>:44)
>   at .<init>(<console>:48)
>   at .<clinit>(<console>)
>   at .<init>(<console>:7)
>   at .<clinit>(<console>)
>   at $print(<console>)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>   at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>   at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>   at 
> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>   at 
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>   at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>   at org.apache.spark.r

[jira] [Commented] (PHOENIX-4382) Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator byte return null in query results

2017-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300904#comment-16300904
 ] 

Hadoop QA commented on PHOENIX-4382:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12903332/PHOENIX-4382.v3.master.patch
  against master branch at commit 412329a7415302831954891285d291055328c28b.
  ATTACHMENT ID: 12903332

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+table.getImmutableStorageScheme() == 
ImmutableStorageScheme.SINGLE_CELL_ARRAY_WITH_OFFSETS
+private  void testValues(boolean immutable, PDataType 
dataType, List testData) throws Exception {
+public SingleCellColumnExpression(PColumn column, String displayName, 
QualifierEncodingScheme encodingScheme, ImmutableStorageScheme 
immutableStorageScheme) {
+}, dataColRef.getFamily(), dataColRef.getQualifier(), 
encodingScheme, immutableStorageScheme);
+KeyValueColumnExpression kvExp = scheme != 
PTable.ImmutableStorageScheme.ONE_CELL_PER_COLUMN ? new 
SingleCellColumnExpression(scheme)
+return new PArrayDataTypeEncoder(byteStream, oStream, 
numElements, type, SortOrder.ASC, false, getSerializationVersion());
+// array serialization format where bytes are immutable (does not support 
prepend/append or sorting)
+if (serializationVersion == IMMUTABLE_SERIALIZATION_VERSION || 
serializationVersion == IMMUTABLE_SERIALIZATION_V2) {
+if (isNullValue(arrayIndex, bytes, initPos, 
serializationVersion, useShort, indexOffset, currOffset, elementLength)) {
+int separatorBytes =  serializationVersion == 
PArrayDataType.SORTABLE_SERIALIZATION_VERSION ? 3 : 0;

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1685//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1685//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1685//console

This message is automatically generated.

> Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator 
> byte return null in query results
> ---
>
> Key: PHOENIX-4382
> URL: https://issues.apache.org/jira/browse/PHOENIX-4382
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4382.v1.master.patch, 
> PHOENIX-4382.v2.master.patch, PHOENIX-4382.v3.master.patch, 
> UpsertBigValuesIT.java
>
>
> For immutable tables, upserting some values, such as Short.MAX_VALUE, results 
> in a null value in query result sets.  Mutable tables are not affected.  I 
> tried with BigInt and got the same problem.
> For Short, the breaking point seems to be 32512.
> This is happening because of the way we serialize nulls.  For nulls, we write 
> out [separatorByte, #_of_nulls].  However, since some data values, like 
> Short.MAX_VALUE, start with separatorByte, we can't distinguish between a 
> null and these values.  Currently the code assumes it's a null whenever it 
> sees a leading separatorByte, hence the incorrect query results (illustrated 
> in the sketch below).
> See attached tests - testShort(), testBigInt()
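> 
> A toy illustration of that ambiguity, not Phoenix's actual PArrayDataType 
> code: a run of nulls is written as [separatorByte, count], so a decoder that 
> only checks the first byte cannot tell a null run apart from a value whose 
> encoded form happens to start with the separator byte. The separator value 
> used below is a placeholder, not Phoenix's real constant.
> {code:java}
> public class SeparatorAmbiguity {
>     static final byte SEPARATOR_BYTE = (byte) 0xFF; // placeholder, not Phoenix's constant
> 
>     // nulls are serialized as [separatorByte, number_of_nulls]
>     static byte[] encodeNulls(int count) {
>         return new byte[] { SEPARATOR_BYTE, (byte) count };
>     }
> 
>     // the naive rule from the description: "leading separator byte means nulls"
>     static boolean looksLikeNullRun(byte[] element) {
>         return element.length > 0 && element[0] == SEPARATOR_BYTE;
>     }
> 
>     public static void main(String[] args) {
>         byte[] nulls = encodeNulls(2);
>         byte[] value = { SEPARATOR_BYTE, 0x00 }; // a legitimate value that starts with the separator
>         System.out.println(looksLikeNullRun(nulls)); // true  -> correctly treated as nulls
>         System.out.println(looksLikeNullRun(value)); // true  -> value is misread as null
>     }
> }
> {code}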



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4398) Change QueryCompiler get column expressions process from serial to parallel.

2017-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16300555#comment-16300555
 ] 

Hadoop QA commented on PHOENIX-4398:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903294/PHOENIX-4398_V1.patch
  against master branch at commit 9355a4d262d31d8d65e1467bcc351bb99760e11d.
  ATTACHMENT ID: 12903294

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+private static Configuration config = 
HBaseFactoryProvider.getConfigurationFactory().getConfiguration();
+private static boolean use_compile_parallel = 
config.getBoolean(USE_COMPILE_COLUMN_EXPRESSION_PARALLEL,
+expressions[i++] = ((ProjectedColumn) 
column).getSourceColumnRef().newColumnExpression();
+return new ExpressionOrder(((ProjectedColumn) 
column).getSourceColumnRef().newColumnExpression(), order);
+public static final String USE_COMPILE_COLUMN_EXPRESSION_PARALLEL = 
"phoenix.use.columnexpression.parallel";
+public static final String COMPILE_COLUMN_EXPRESSION_PARALLEL_THREAD = 
"phoenix.columnexpression.parallel.thread";

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1684//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1684//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1684//console

This message is automatically generated.

> Change QueryCompiler get column expressions process from serial to parallel.
> 
>
> Key: PHOENIX-4398
> URL: https://issues.apache.org/jira/browse/PHOENIX-4398
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0, 4.13.0
>Reporter: Albert Lee
> Fix For: 4.11.0, 4.13.0
>
> Attachments: PHOENIX-4398.patch, PHOENIX-4398_V1.patch
>
>
> When QueryCompiler compiles a SELECT statement, the column expressions are 
> built serially. Performance is fine when the table is narrow, but when 
> compiling a wide table (e.g. 130 columns in my use case) this step alone 
> takes over 70ms. So I changed TupleProjector(PTable projectedTable) from a 
> serial for loop to parallel futures (see the sketch below).
> Because this only improves performance and adds no new feature, there is no 
> unit test.
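> 
> A minimal sketch of the parallel idea, not the attached patch: per-column 
> expression construction is submitted to a fixed thread pool and the results 
> are collected in column order. {{Expression}} and the callables are stand-ins 
> for the Phoenix types involved.
> {code:java}
> import java.util.ArrayList;
> import java.util.List;
> import java.util.concurrent.Callable;
> import java.util.concurrent.ExecutionException;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.Future;
> 
> public class ParallelExpressionBuilder {
>     interface Expression {} // stand-in for Phoenix's Expression
> 
>     static List<Expression> buildAll(List<Callable<Expression>> columnTasks, int threads)
>             throws InterruptedException, ExecutionException {
>         ExecutorService pool = Executors.newFixedThreadPool(threads);
>         try {
>             List<Expression> expressions = new ArrayList<>();
>             // invokeAll preserves input order, so expressions.get(i) still matches column i
>             for (Future<Expression> f : pool.invokeAll(columnTasks)) {
>                 expressions.add(f.get());
>             }
>             return expressions;
>         } finally {
>             pool.shutdown();
>         }
>     }
> }
> {code}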



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4437) Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and pull optimize() out of getExplainPlan()

2017-12-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16299850#comment-16299850
 ] 

Hadoop QA commented on PHOENIX-4437:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903178/PHOENIX-4437.patch
  against master branch at commit 9355a4d262d31d8d65e1467bcc351bb99760e11d.
  ATTACHMENT ID: 12903178

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+StatementPlan compilePlan = compilableStmt.compilePlan(stmt, 
Sequence.ValueOp.VALIDATE_SEQUENCE);
+// For a QueryPlan, we need to get its optimized plan; for a 
MutationPlan, its enclosed QueryPlan
+compilePlan = 
stmt.getConnection().getQueryServices().getOptimizer().optimize(stmt, dataPlan);

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1683//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1683//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1683//console

This message is automatically generated.

> Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and 
> pull optimize() out of getExplainPlan()
> 
>
> Key: PHOENIX-4437
> URL: https://issues.apache.org/jira/browse/PHOENIX-4437
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0
>Reporter: Maryann Xue
>Assignee: Maryann Xue
> Attachments: PHOENIX-4437.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4473) Exception when Adding new columns to base table and view diverge

2017-12-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16298493#comment-16298493
 ] 

Hadoop QA commented on PHOENIX-4473:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903026/PHOENIX-4473.patch
  against master branch at commit 9d8be0e9214ba3680a81c399c5da316c1b91c99b.
  ATTACHMENT ID: 12903026

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1682//console

This message is automatically generated.

> Exception when Adding new columns to base table and view diverge
> 
>
> Key: PHOENIX-4473
> URL: https://issues.apache.org/jira/browse/PHOENIX-4473
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4473.patch
>
>
> {code}
> [ERROR] 
> testAddPKColumnToBaseTableWhoseViewsHaveIndices(org.apache.phoenix.end2end.AlterMultiTenantTableWithViewsIT)
>   Time elapsed: 11.102 s  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 4 bytes, but had 1
>   at 
> org.apache.phoenix.end2end.AlterMultiTenantTableWithViewsIT.testAddPKColumnToBaseTableWhoseViewsHaveIndices(AlterMultiTenantTableWithViewsIT.java:371)
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException:
> org.apache.hadoop.hbase.DoNotRetryIOException: T01_VIEW2: 
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 4 bytes, but had 1
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:557)
>   at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7950)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2339)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2321)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:403)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:325)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:305)
> Caused by: java.lang.RuntimeException: java.sql.SQLException: ERROR 201 
> (22000): Illegal data. Expected length of at least 4 bytes, but had 1
>   at 
> org.apache.phoenix.schema.types.PDataType.checkForSufficientLength(PDataType.java:290)
>   at 
> org.apache.phoenix.schema.types.PInteger$IntCodec.decodeInt(PInteger.java:183)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addColumnToTable(MetaDataEndpointImpl.java:705)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1005)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:571)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3197)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3142)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addIndexToTable(MetaDataEndpointImpl.java:656)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:997)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:571)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3197)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3142)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:523)
>   ... 9 more
> Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected 
> len

[jira] [Commented] (PHOENIX-4468) Looking up a parent index table of a child view from a different client fails.

2017-12-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297737#comment-16297737
 ] 

Hadoop QA commented on PHOENIX-4468:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12902948/PHOENIX-4468-v3-4.x-HBase-0.98.patch
  against 4.x-HBase-0.98 branch at commit 
9d8be0e9214ba3680a81c399c5da316c1b91c99b.
  ATTACHMENT ID: 12902948

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+conf.set(QueryServices.EXTRA_JDBC_ARGUMENTS_ATTRIB, 
QueryServicesOptions.DEFAULT_EXTRA_JDBC_ARGUMENTS);
+public Connection createConnection(boolean isMultiTenant, boolean 
isDifferentClient) throws SQLException {
+conn.createStatement().execute("CREATE SEQUENCE " + sequenceName + " 
START WITH 3 INCREMENT BY 2 CACHE 5");
+query = "SELECT CURRENT_VALUE FROM \"SYSTEM\".\"SEQUENCE\" WHERE 
SEQUENCE_SCHEMA=? AND SEQUENCE_NAME=?";
+public void helpTestViewParentIndexLookupMutipleClients(boolean 
isMultiTenant) throws Exception {
+globalConn.createStatement().execute("CREATE INDEX " + 
baseTableIndexName + " ON " + baseTableName + " (V2) INCLUDE (v1, V3)");
+String viewDDL = "CREATE VIEW " + viewName + " AS SELECT * FROM " 
+ baseTableName + " WHERE V1 = 'X'";
+String expectedTableName = baseTableIndexName + 
QueryConstants.CHILD_VIEW_INDEX_NAME_SEPARATOR + viewName;
+   if (c.contains(QueryConstants.CHILD_VIEW_INDEX_NAME_SEPARATOR) ) { 
throw new RuntimeException("Table or schema name cannot contain hash"); }
+tenantId, PNameImpl.EMPTY_NAME, tableName, 
table.getType(), table.getIndexState(), timeStamp,

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.ViewIndexIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexReplicationIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.ChildViewsUseParentViewIndexIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1681//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1681//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1681//console

This message is automatically generated.

> Looking up a parent index table of a child view from a different client 
> fails. 
> ---
>
> Key: PHOENIX-4468
> URL: https://issues.apache.org/jira/browse/PHOENIX-4468
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4468-4.x-HBase-0.98.patch, 
> PHOENIX-4468-v2-4.x-HBase-0.98.patch, PHOENIX-4468-v3-4.x-HBase-0.98.patch
>
>
> When you execute a query on a view, Phoenix will use any index on the base 
> table that has all the required columns. We create a new PTable based on the 
> parent table index and tack on the view statement (to ensure we only see rows 
> that are visible through the view). This PTable is added to the client-side 
> connection metadata cache; it is not available on the server side (in 
> SYSTEM.CATALOG).
> If you look up the parent index table from a different client (one that never 
> ran a query on the view), it fails with a TableNotFoundException, as 
> illustrated below.
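> 
> A toy illustration of that caching behavior, not Phoenix code: the synthesized 
> copy of the parent index exists only in the metadata cache of the client that 
> built it, so another client's lookup misses both its own cache and the server. 
> The "#" separator in the name mirrors CHILD_VIEW_INDEX_NAME_SEPARATOR from the 
> patch output above, but the names themselves are made up.
> {code:java}
> import java.util.HashMap;
> import java.util.Map;
> 
> public class ClientCacheIllustration {
>     static final Map<String, String> serverCatalog = new HashMap<>(); // stands in for SYSTEM.CATALOG
> 
>     static class Client {
>         final Map<String, String> cache = new HashMap<>(); // per-connection metadata cache
> 
>         void queryView() {
>             // compiling the view query synthesizes an index PTable and caches it locally only
>             cache.put("BASE_IDX#MY_VIEW", "synthesized index PTable");
>         }
> 
>         String resolve(String name) {
>             String table = cache.getOrDefault(name, serverCatalog.get(name));
>             if (table == null) throw new RuntimeException("TableNotFoundException: " + name);
>             return table;
>         }
>     }
> 
>     public static void main(String[] args) {
>         Client a = new Client();
>         Client b = new Client();
>         a.queryView();
>         a.resolve("BASE_IDX#MY_VIEW"); // works: found in client A's cache
>         b.resolve("BASE_IDX#MY_VIEW"); // throws: not in B's cache and not in SYSTEM.CATALOG
>     }
> }
> {code}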



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4382) Upsert of some big values not correct for immutable tables

2017-12-19 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297304#comment-16297304
 ] 

Hadoop QA commented on PHOENIX-4382:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12902900/PHOENIX-4382.v2.master.patch
  against master branch at commit 9d8be0e9214ba3680a81c399c5da316c1b91c99b.
  ATTACHMENT ID: 12902900

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+table.getImmutableStorageScheme() == 
ImmutableStorageScheme.SINGLE_CELL_ARRAY_WITH_OFFSETS
+private  void testValues(boolean immutable, PDataType 
dataType, List testData) throws Exception {
+public SingleCellColumnExpression(PColumn column, String displayName, 
QualifierEncodingScheme encodingScheme, ImmutableStorageScheme 
immutableStorageScheme) {
+}, dataColRef.getFamily(), dataColRef.getQualifier(), 
encodingScheme, immutableStorageScheme);
+KeyValueColumnExpression kvExp = scheme != 
PTable.ImmutableStorageScheme.ONE_CELL_PER_COLUMN ? new 
SingleCellColumnExpression(scheme)
+return new PArrayDataTypeEncoder(byteStream, oStream, 
numElements, type, SortOrder.ASC, false, getSerializationVersion());
+// array serialization format where bytes are immutable (does not support 
prepend/append or sorting)
+if (serializationVersion == IMMUTABLE_SERIALIZATION_VERSION || 
serializationVersion == IMMUTABLE_SERIALIZATION_V2) {
+if (isNullValue(arrayIndex, bytes, initPos, 
serializationVersion, useShort, indexOffset, currOffset, elementLength)) {
+int separatorBytes =  serializationVersion == 
PArrayDataType.SORTABLE_SERIALIZATION_VERSION ? 3 : 0;

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.txn.TxWriteFailureIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1679//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1679//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1679//console

This message is automatically generated.

> Upsert of some big values not correct for immutable tables
> --
>
> Key: PHOENIX-4382
> URL: https://issues.apache.org/jira/browse/PHOENIX-4382
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4382.v1.master.patch, 
> PHOENIX-4382.v2.master.patch, UpsertBigValuesIT.java
>
>
> For immutable tables, upserting some values, such as Short.MAX_VALUE, results 
> in a null value in query result sets.  Mutable tables are not affected.  I 
> tried with BigInt and got the same problem.
> For Short, the breaking point seems to be 32512.  Numbers smaller than that 
> are fine (until you get closer to Short.MIN_VALUE...).
> See attached tests - testShort(), testBigInt()



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4437) Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and pull optimize() out of getExplainPlan()

2017-12-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296268#comment-16296268
 ] 

Hadoop QA commented on PHOENIX-4437:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12902776/PHOENIX-4437.patch
  against master branch at commit 9d8be0e9214ba3680a81c399c5da316c1b91c99b.
  ATTACHMENT ID: 12902776

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1678//console

This message is automatically generated.

> Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and 
> pull optimize() out of getExplainPlan()
> 
>
> Key: PHOENIX-4437
> URL: https://issues.apache.org/jira/browse/PHOENIX-4437
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0
>Reporter: Maryann Xue
>Assignee: Maryann Xue
> Attachments: PHOENIX-4437.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4437) Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and pull optimize() out of getExplainPlan()

2017-12-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16296159#comment-16296159
 ] 

Hadoop QA commented on PHOENIX-4437:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12902747/PHOENIX-4437.patch
  against master branch at commit 5cb02da74c15b0ae7c0fb4c880d60a2d1b6d18aa.
  ATTACHMENT ID: 12902747

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+StatementPlan compilePlan = compilableStmt.compilePlan(stmt, 
Sequence.ValueOp.VALIDATE_SEQUENCE);
+compilePlan = 
stmt.getConnection().getQueryServices().getOptimizer().optimize(stmt, dataPlan);

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1677//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1677//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1677//console

This message is automatically generated.

> Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and 
> pull optimize() out of getExplainPlan()
> 
>
> Key: PHOENIX-4437
> URL: https://issues.apache.org/jira/browse/PHOENIX-4437
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0
>Reporter: Maryann Xue
>Assignee: Maryann Xue
> Attachments: PHOENIX-4437.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4468) Looking up a parent index table of a child view from a different client fails.

2017-12-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16295701#comment-16295701
 ] 

Hadoop QA commented on PHOENIX-4468:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12902700/PHOENIX-4468-4.x-HBase-0.98.patch
  against 4.x-HBase-0.98 branch at commit 
5cb02da74c15b0ae7c0fb4c880d60a2d1b6d18aa.
  ATTACHMENT ID: 12902700

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+   if (c.contains(QueryConstants.CHILD_VIEW_INDEX_NAME_SEPARATOR) 
) { throw new RuntimeException("Table or schema name cannot contain hash"); }
+tenantId, PNameImpl.EMPTY_NAME, tableName, 
table.getType(), table.getIndexState(), timeStamp,
+String viewName = SchemaUtil.getTableNameFromFullName(name, 
QueryConstants.CHILD_VIEW_INDEX_NAME_SEPARATOR);
+MetaDataMutationResult result = new 
MetaDataClient(pconn).updateCache(schemaName, tableName);

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.ViewIndexIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.ChildViewsUseParentViewIndexIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1676//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1676//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1676//console

This message is automatically generated.

> Looking up a parent index table of a child view from a different client 
> fails. 
> ---
>
> Key: PHOENIX-4468
> URL: https://issues.apache.org/jira/browse/PHOENIX-4468
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4468-4.x-HBase-0.98.patch
>
>
> When you execute a query on a view, Phoenix will use any index on the base 
> table that has all the required columns. We create a new PTable based on the 
> parent table index and tack on the view statement (to ensure we only see rows 
> that are visible through the view). This PTable is added to the client-side 
> connection metadata cache; it is not available on the server side (in 
> SYSTEM.CATALOG).
> If you look up the parent index table from a different client (one that never 
> ran a query on the view), it fails with a TableNotFoundException.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4466) java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data

2017-12-18 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16294687#comment-16294687
 ] 

Hadoop QA commented on PHOENIX-4466:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12902599/PHOENIX-4466.patch
  against master branch at commit 5cb02da74c15b0ae7c0fb4c880d60a2d1b6d18aa.
  ATTACHMENT ID: 12902599

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.rpc.PhoenixClientRpcIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1675//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1675//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1675//console

This message is automatically generated.

> java.lang.RuntimeException: response code 500 - Executing a spark job to 
> connect to phoenix query server and load data
> --
>
> Key: PHOENIX-4466
> URL: https://issues.apache.org/jira/browse/PHOENIX-4466
> Project: Phoenix
>  Issue Type: Bug
> Environment: HDP-2.6.3
>Reporter: Toshihiro Suzuki
>Assignee: Toshihiro Suzuki
>Priority: Minor
> Attachments: PHOENIX-4466.patch
>
>
> Steps to reproduce are as follows:
> 1. Start spark shell with 
> {code}
> spark-shell --jars /usr/hdp/current/phoenix-client/phoenix-thin-client.jar 
> {code}
> 2. Run the following to load data 
> {code}
> scala> val query = 
> sqlContext.read.format("jdbc").option("driver","org.apache.phoenix.queryserver.client.Driver").option("url","jdbc:phoenix:thin:url=http://<phoenix query server 
> hostname>:8765;serialization=PROTOBUF").option("dbtable","").load 
> {code}
> This failed with the following exception 
> {code:java}
> java.sql.SQLException: While closing connection
>   at org.apache.calcite.avatica.Helper.createException(Helper.java:39)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.close(AvaticaConnection.java:156)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRDD$.resolveTable(JDBCRDD.scala:153)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.JDBCRelation.<init>(JDBCRelation.scala:91)
>   at 
> org.apache.spark.sql.execution.datasources.jdbc.DefaultSource.createRelation(DefaultSource.scala:57)
>   at 
> org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:158)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:119)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:25)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:30)
>   at $iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:32)
>   at $iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:34)
>   at $iwC$$iwC$$iwC$$iwC.<init>(<console>:36)
>   at $iwC$$iwC$$iwC.<init>(<console>:38)
>   at $iwC$$iwC.<init>(<console>:40)
>   at $iwC.<init>(<console>:42)
>   at <init>(<console>:44)
>   at .<init>(<console>:48)
>   at .<clinit>(<console>)
>   at .<init>(<console>:7)
>   at .<clinit>(<console>)
>   at $print(<console>)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
>   at 
> org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1346)
>   at 
> org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
>   at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
>   at 
> org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
>   at 
> org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
>   at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
>   at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
>   at org.apache

[jira] [Commented] (PHOENIX-4370) Surface hbase metrics from perconnection to global metrics

2017-12-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16294413#comment-16294413
 ] 

Hadoop QA commented on PHOENIX-4370:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12902570/PHOENIX-4370-v1.patch
  against master branch at commit 5cb02da74c15b0ae7c0fb4c880d60a2d1b6d18aa.
  ATTACHMENT ID: 12902570

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+
GLOBAL_HBASE_COUNT_REMOTE_RPC_CALLS.update(scanMetricsMap.get(REMOTE_RPC_CALLS_METRIC_NAME));
+
GLOBAL_HBASE_COUNT_MILLS_BETWEEN_NEXTS.update(scanMetricsMap.get(MILLIS_BETWEEN_NEXTS_METRIC_NAME));
+
GLOBAL_HBASE_COUNT_NOT_SERVING_REGION_EXCEPTION.update(scanMetricsMap.get(NOT_SERVING_REGION_EXCEPTION_METRIC_NAME));
+
GLOBAL_HBASE_COUNT_BYTES_REGION_SERVER_RESULTS.update(scanMetricsMap.get(BYTES_IN_RESULTS_METRIC_NAME));
+
GLOBAL_HBASE_COUNT_BYTES_IN_REMOTE_RESULTS.update(scanMetricsMap.get(BYTES_IN_REMOTE_RESULTS_METRIC_NAME));
+
GLOBAL_HBASE_COUNT_SCANNED_REGIONS.update(scanMetricsMap.get(REGIONS_SCANNED_METRIC_NAME));
+
GLOBAL_HBASE_COUNT_REMOTE_RPC_RETRIES.update(scanMetricsMap.get(REMOTE_RPC_RETRIES_METRIC_NAME));
+
GLOBAL_HBASE_COUNT_ROWS_SCANNED.update(scanMetricsMap.get(COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME));
+
GLOBAL_HBASE_COUNT_ROWS_FILTERED.update(scanMetricsMap.get(COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME));

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.DropSchemaIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1673//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1673//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1673//console

This message is automatically generated.

> Surface hbase metrics from perconnection to global metrics
> --
>
> Key: PHOENIX-4370
> URL: https://issues.apache.org/jira/browse/PHOENIX-4370
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Ethan Wang
>Assignee: Ethan Wang
> Attachments: PHOENIX-4370-v1.patch
>
>
> Surface HBase metrics from per-connection to global metrics.
> Currently, on the Phoenix client side, HBase metrics are recorded and surfaced 
> at the per-connection level. PHOENIX-4370 allows them to also be aggregated at 
> the global level, i.e. across all connections within one JVM, so that users 
> can evaluate them periodically as stable metrics (see the sketch after the 
> metric list below).
> COUNT_RPC_CALLS("rp", "Number of RPC calls"),
> COUNT_REMOTE_RPC_CALLS("rr", "Number of remote RPC calls"),
> COUNT_MILLS_BETWEEN_NEXTS("n", "Sum of milliseconds between sequential 
> next calls"),
> COUNT_NOT_SERVING_REGION_EXCEPTION("nsr", "Number of 
> NotServingRegionException caught"),
> COUNT_BYTES_REGION_SERVER_RESULTS("rs", "Number of bytes in Result 
> objects from region servers"),
> COUNT_BYTES_IN_REMOTE_RESULTS("rrs", "Number of bytes in Result objects 
> from remote region servers"),
> COUNT_SCANNED_REGIONS("rg", "Number of regions scanned"),
> COUNT_RPC_RETRIES("rpr", "Number of RPC retries"),
> COUNT_REMOTE_RPC_RETRIES("rrr", "Number of remote RPC retries"),
> COUNT_ROWS_SCANNED("ws", "Number of rows scanned"),
> COUNT_ROWS_FILTERED("wf", "Number of rows filtered");
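
For illustration only, here is a minimal sketch of the aggregation idea described
above: roll each connection's HBase scan-metrics map into JVM-wide counters that
can be read periodically. The class and method names below are hypothetical and
are not the actual PHOENIX-4370 patch (which updates the GLOBAL_HBASE_COUNT_*
metrics shown in the lineLengths excerpt earlier in this report).

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical aggregator: one LongAdder per metric name, shared by all
// connections in the JVM. Each connection pushes its per-scan metric map here,
// so the totals can be read periodically as a stable, JVM-wide view.
public final class GlobalHBaseScanMetrics {
    private static final Map<String, LongAdder> GLOBAL = new ConcurrentHashMap<>();

    private GlobalHBaseScanMetrics() {
    }

    // Called once per completed scan with that scan's metric map.
    public static void update(Map<String, Long> scanMetricsMap) {
        for (Map.Entry<String, Long> e : scanMetricsMap.entrySet()) {
            GLOBAL.computeIfAbsent(e.getKey(), k -> new LongAdder())
                  .add(e.getValue() == null ? 0L : e.getValue());
        }
    }

    // Snapshot of the aggregated value for one metric name.
    public static long get(String metricName) {
        LongAdder adder = GLOBAL.get(metricName);
        return adder == null ? 0L : adder.sum();
    }
}
{code}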



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4464) Sync with branch 4.x-HBase-1.2

2017-12-17 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16294393#comment-16294393
 ] 

Hadoop QA commented on PHOENIX-4464:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org
  against master branch at commit 5cb02da74c15b0ae7c0fb4c880d60a2d1b6d18aa.
  ATTACHMENT ID: http:

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1674//console

This message is automatically generated.

> Sync with branch 4.x-HBase-1.2
> --
>
> Key: PHOENIX-4464
> URL: https://issues.apache.org/jira/browse/PHOENIX-4464
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.13.2-cdh5.11.2
>Reporter: Pedro Boado
>Assignee: Pedro Boado
> Attachments: PHOENIX-4464.tar.gz
>
>
> Sync branch 4.x-cdh5.11.2 with 4.x-HBase-1.2 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4461) Sync branch 4.x-HBase-1.2

2017-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293971#comment-16293971
 ] 

Hadoop QA commented on PHOENIX-4461:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12902525/0001-PHOENIX-4342-Surface-QueryPlan-in-MutationPlan.patch
  against master branch at commit 5cb02da74c15b0ae7c0fb4c880d60a2d1b6d18aa.
  ATTACHMENT ID: 12902525

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1672//console

This message is automatically generated.

> Sync branch 4.x-HBase-1.2
> -
>
> Key: PHOENIX-4461
> URL: https://issues.apache.org/jira/browse/PHOENIX-4461
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
> Attachments: 
> 0001-PHOENIX-4342-Surface-QueryPlan-in-MutationPlan.patch, 
> 0002-PHOENIX-4361-Remove-redundant-argument-in-separateAn.patch, 
> 0003-PHOENIX-4386-Calculate-the-estimatedSize-of-Mutation.patch, 
> 0004-PHOENIX-4386-Calculate-the-estimatedSize-of-Mutation.patch, 
> 0005-Revert-PHOENIX-4386-Calculate-the-estimatedSize-of-M.patch, 
> 0006-PHOENIX-4198-Remove-the-need-for-users-to-have-acces.patch, 
> 0007-PHOENIX-672-Add-GRANT-and-REVOKE-commands-using-HBas.patch, 
> 0008-PHOENIX-4288-Indexes-not-used-when-ordering-by-prima.patch, 
> 0009-PHOENIX-4322-DESC-primary-key-column-with-variable-l.patch, 
> 0010-PHOENIX-3050-Handle-DESC-columns-in-child-parent-joi.patch, 
> 0011-PHOENIX-4386-Calculate-the-estimatedSize-of-Mutation.patch, 
> 0012-PHOENIX-3837-Feature-enabling-to-set-property-on-an-.patch, 
> 0013-PHOENIX-4415-Ignore-CURRENT_SCN-property-if-set-in-P.patch, 
> 0014-PHOENIX-4424-Allow-users-to-create-DEFAULT-and-HBASE.patch
>
>
> Ticket for requesting a full test run for PR #289, syncing 4.x-HBase-1.2 with 
> master.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4462) Fix license issues with rat plugin

2017-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293951#comment-16293951
 ] 

Hadoop QA commented on PHOENIX-4462:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12902511/PHOENIX-4462-4.x-cdh5.11.2-v2.patch
  against master branch at commit 5cb02da74c15b0ae7c0fb4c880d60a2d1b6d18aa.
  ATTACHMENT ID: 12902511

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1671//console

This message is automatically generated.

> Fix license issues with rat plugin
> --
>
> Key: PHOENIX-4462
> URL: https://issues.apache.org/jira/browse/PHOENIX-4462
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Pedro Boado
>Assignee: Pedro Boado
> Attachments: PHOENIX-4462-4.x-cdh5.11.2-v2.patch
>
>
> The RAT plugin is failing on some files (~10) when creating a release package.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4453) [CDH] Thin client fails with missing library

2017-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293894#comment-16293894
 ] 

Hadoop QA commented on PHOENIX-4453:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12902500/PHOENIX-4453.patch
  against master branch at commit 5cb02da74c15b0ae7c0fb4c880d60a2d1b6d18aa.
  ATTACHMENT ID: 12902500

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.rpc.PhoenixClientRpcIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1670//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1670//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1670//console

This message is automatically generated.

> [CDH] Thin client fails with missing library
> 
>
> Key: PHOENIX-4453
> URL: https://issues.apache.org/jira/browse/PHOENIX-4453
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: Centos 6  +  4.13.1-cdh5.11.2 rc0 
>Reporter: Pedro Boado
>Assignee: Pedro Boado
> Attachments: PHOENIX-4453.patch
>
>
> The sqlline-thin client cannot start because of a dependency problem:
> {code}
> [cloudera@quickstart bin]$ ./phoenix-sqlline-thin.py 
> Setting property: [incremental, false]
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> issuing: !connect 
> jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF none none 
> org.apache.phoenix.queryserver.client.Driver
> Connecting to 
> jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.1-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.0-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> java.lang.NoClassDefFoundError: 
> org/apache/phoenix/shaded/org/apache/http/config/Lookup
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:190)
>   at 
> org.apache.calcite.avatica.remote.AvaticaHttpClientFactoryImpl.instantiateClient(AvaticaHttpClientFactoryImpl.java:112)
>   at 
> org.apache.calcite.avatica.remote.AvaticaHttpClientFactoryImpl.getClient(AvaticaHttpClientFactoryImpl.java:68)
>   at 
> org.apache.calcite.avatica.remote.Driver.getHttpClient(Driver.java:160)
>   at 
> org.apache.calcite.avatica.remote.Driver.createService(Driver.java:123)
>   at org.apache.calcite.avatica.remote.Driver.createMeta(Driver.java:97)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.(AvaticaConnection.java:121)
>   at 
> org.apache.calcite.avatica.AvaticaJdbc41Factory$AvaticaJdbc41Connection.(AvaticaJdbc41Factory.java:105)
>   at 
> org.apache.calcite.avatica.AvaticaJdbc41Factory.newConnection(AvaticaJdbc41Factory.java:62)
>   at 
> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
>   at org.apache.calcite.avatica.remote.Driver.connect(Driver.java:165)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>   at sqlline.Commands.connect(Commands.java:1064)
>   at sqlline.Commands.connect(Commands.java:996)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHa

[jira] [Commented] (PHOENIX-4462) Fix license issues with rat plugin

2017-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293794#comment-16293794
 ] 

Hadoop QA commented on PHOENIX-4462:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12902493/PHOENIX-4462.patch
  against master branch at commit 5cb02da74c15b0ae7c0fb4c880d60a2d1b6d18aa.
  ATTACHMENT ID: 12902493

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1669//console

This message is automatically generated.

> Fix license issues with rat plugin
> --
>
> Key: PHOENIX-4462
> URL: https://issues.apache.org/jira/browse/PHOENIX-4462
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Pedro Boado
>Assignee: Pedro Boado
> Attachments: PHOENIX-4462.patch
>
>
> The RAT plugin is failing on some files (~10) when creating a release package.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4454) [CDH] Heavy client fails when used from a standalone machine

2017-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293771#comment-16293771
 ] 

Hadoop QA commented on PHOENIX-4454:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12902490/PHOENIX-4454.patch
  against master branch at commit 5cb02da74c15b0ae7c0fb4c880d60a2d1b6d18aa.
  ATTACHMENT ID: 12902490

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1668//console

This message is automatically generated.

> [CDH] Heavy client fails when used from a standalone machine
> 
>
> Key: PHOENIX-4454
> URL: https://issues.apache.org/jira/browse/PHOENIX-4454
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: Windows 7 + DB Visualizer + Heavy client
>Reporter: Pedro Boado
>Assignee: Pedro Boado
> Attachments: PHOENIX-4454.patch
>
>
> The client provided with the distribution doesn't work when used outside the 
> HBase cluster:
> {code}
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.mapred.JobConf
>    at java.lang.Class.forName0(Native Method)
>    at java.lang.Class.forName(Unknown Source)
>    at 
> org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2138)
>    at 
> org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:91)
>    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
>    at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>    at 
> org.apache.hadoop.hbase.security.UserProvider.instantiate(UserProvider.java:124)
>    at 
> org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:341)
>    at 
> org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:144)
>    at 
> org.apache.phoenix.query.HConnectionFactory$HConnectionFactoryImpl.createConnection(HConnectionFactory.java:47)
>    at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:408)
>    at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.access$400(ConnectionQueryServicesImpl.java:256)
>    at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2408)
>    at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2384)
>    at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>    at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2384)
>    at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>    at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>    at java.lang.reflect.Method.invoke(Unknown Source)
>    at com.onseven.dbvis.g.B.D.ā(Z:1548)
>    at com.onseven.dbvis.g.B.F$A.call(Z:1369)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4461) Test run for Pull Request #289, sync branch 4.x-HBase-1.2

2017-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16293758#comment-16293758
 ] 

Hadoop QA commented on PHOENIX-4461:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12902488/286.patch
  against master branch at commit 5cb02da74c15b0ae7c0fb4c880d60a2d1b6d18aa.
  ATTACHMENT ID: 12902488

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 9 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1667//console

This message is automatically generated.

> Test run for Pull Request #289, sync branch 4.x-HBase-1.2
> -
>
> Key: PHOENIX-4461
> URL: https://issues.apache.org/jira/browse/PHOENIX-4461
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: verify
>Reporter: Pedro Boado
> Attachments: 286.patch
>
>
> Ticket for requesting a full test run for PR #289, syncing 4.x-HBase-1.2 with 
> master.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4460) High GC / RS shutdown when we use select query with "IN" clause using 4.10 phoenix client on 4.13 phoenix server

2017-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16291919#comment-16291919
 ] 

Hadoop QA commented on PHOENIX-4460:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12902178/PHOENIX-4460-v2.patch
  against master branch at commit 5cb02da74c15b0ae7c0fb4c880d60a2d1b6d18aa.
  ATTACHMENT ID: 12902178

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.rpc.PhoenixServerRpcIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1666//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1666//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1666//console

This message is automatically generated.

> High GC / RS shutdown when we use select query with "IN" clause using 4.10 
> phoenix client on 4.13 phoenix server
> 
>
> Key: PHOENIX-4460
> URL: https://issues.apache.org/jira/browse/PHOENIX-4460
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4460-v2.patch, PHOENIX-4460.patch
>
>
> We were able to reproduce the high GC / RegionServer shutdown / Phoenix 
> KeyRange high object count issue on a cluster today.
> The main observation is that it is reproducible when firing lots of queries 
> of the form select ... from xyz where abc in (?, ?, ...) with a 4.10 Phoenix 
> client hitting a 4.13 Phoenix server on the HBase side
> (4.10 client/4.10 server works fine, 4.13 client/4.13 server works fine).
> We wrote a loader client (attached) with the table/query below, upserted 
> ~100 million rows, and fired the query in parallel using 4-5 loader clients 
> with 16 threads each:
> {code}
> TABLE:  = "CREATE TABLE " + TABLE_NAME_TEMPLATE 
>  + " (\n" + " TestKey varchar(255) PRIMARY KEY, TestVal1 varchar(200), 
> TestVal2 varchar(200), "  + "TestValue varchar(1))";
> QUERY: = "SELECT * FROM " +  TABLE_NAME_TEMPLATE + " WHERE TestKey IN (?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?)"
> {code}
> Within a minute or two of running this client we see the 
> phoenix.query.KeyRange object count climb to several hundred thousand and 
> keep increasing continuously. The count doesn't seem to come down even 
> after shutting down the clients:
> {code}
> -bash-4.1$ ~/current/bigdata-util/tools/Linux/jdk/jdk1.8.0_102_x64/bin/jmap 
> -histo:live 90725 | grep KeyRange
>   47:2748526596448  org.apache.phoenix.query.KeyRange
> 1851: 2 48  org.apache.phoenix.query.KeyRange$Bound
> 2434: 1 24  [Lorg.apache.phoenix.query.KeyRange$Bound;
> 3411: 1 16  org.apache.phoenix.query.KeyRange$1
> 3412: 1 16  org.apache.phoenix.query.KeyRange$2
> {code}
> After some time we also started seeing high GC issues and RegionServers 
> crashing.
> Experiment summary:
> - 4.13 client/4.13 server --- Issue not reproducible (we do see the KeyRange 
> count increasing up to a few hundred)
> - 4.10 client/4.10 server --- Issue not reproducible (we do see the KeyRange 
> count increasing up to a few hundred)
> - 4.10 client/4.13 server --- Issue reproducible as described above
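
For readers who want the shape of the reproduction without the attached loader,
here is a minimal JDBC sketch of the IN-clause query described above. The JDBC
URL, table name, and key values are placeholders; the real loader client is the
one attached to the JIRA and runs this from 16 threads per process.

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class InClauseLoaderSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder JDBC URL and table name.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            String sql = "SELECT * FROM MY_TABLE WHERE TestKey IN (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                for (int i = 1; i <= 10; i++) {
                    stmt.setString(i, "key-" + i); // ten point keys per execution, as in the report
                }
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        // consume rows; the reported loader does this concurrently
                    }
                }
            }
        }
    }
}
{code}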



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4460) High GC / RS shutdown when we use select query with "IN" clause using 4.10 phoenix client on 4.13 phoenix server

2017-12-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16291490#comment-16291490
 ] 

Hadoop QA commented on PHOENIX-4460:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12902112/PHOENIX-4460.patch
  against master branch at commit 5cb02da74c15b0ae7c0fb4c880d60a2d1b6d18aa.
  ATTACHMENT ID: 12902112

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.AlterSessionIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.AggregateIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1665//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1665//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1665//console

This message is automatically generated.

> High GC / RS shutdown when we use select query with "IN" clause using 4.10 
> phoenix client on 4.13 phoenix server
> 
>
> Key: PHOENIX-4460
> URL: https://issues.apache.org/jira/browse/PHOENIX-4460
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Blocker
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4460.patch
>
>
> We were able to reproduce the high GC / RegionServer shutdown / Phoenix 
> KeyRange high object count issue on a cluster today.
> The main observation is that it is reproducible when firing lots of queries 
> of the form select ... from xyz where abc in (?, ?, ...) with a 4.10 Phoenix 
> client hitting a 4.13 Phoenix server on the HBase side
> (4.10 client/4.10 server works fine, 4.13 client/4.13 server works fine).
> We wrote a loader client (attached) with the table/query below, upserted 
> ~100 million rows, and fired the query in parallel using 4-5 loader clients 
> with 16 threads each:
> {code}
> TABLE:  = "CREATE TABLE " + TABLE_NAME_TEMPLATE 
>  + " (\n" + " TestKey varchar(255) PRIMARY KEY, TestVal1 varchar(200), 
> TestVal2 varchar(200), "  + "TestValue varchar(1))";
> QUERY: = "SELECT * FROM " +  TABLE_NAME_TEMPLATE + " WHERE TestKey IN (?, ?, 
> ?, ?, ?, ?, ?, ?, ?, ?)"
> {code}
> Within a minute or two of running this client we see the 
> phoenix.query.KeyRange object count climb to several hundred thousand and 
> keep increasing continuously. The count doesn't seem to come down even 
> after shutting down the clients:
> {code}
> -bash-4.1$ ~/current/bigdata-util/tools/Linux/jdk/jdk1.8.0_102_x64/bin/jmap 
> -histo:live 90725 | grep KeyRange
>   47:2748526596448  org.apache.phoenix.query.KeyRange
> 1851: 2 48  org.apache.phoenix.query.KeyRange$Bound
> 2434: 1 24  [Lorg.apache.phoenix.query.KeyRange$Bound;
> 3411: 1 16  org.apache.phoenix.query.KeyRange$1
> 3412: 1 16  org.apache.phoenix.query.KeyRange$2
> {code}
> After some time we also started seeing high GC issues and RegionServers 
> crashing.
> Experiment summary:
> - 4.13 client/4.13 server --- Issue not reproducible (we do see the KeyRange 
> count increasing up to a few hundred)
> - 4.10 client/4.10 server --- Issue not reproducible (we do see the KeyRange 
> count increasing up to a few hundred)
> - 4.10 client/4.13 server --- Issue reproducible as described above



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4457) Account for the Table interface addition of checkAndMutate

2017-12-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16289659#comment-16289659
 ] 

Hadoop QA commented on PHOENIX-4457:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12901922/PHOENIX-4457.patch
  against master branch at commit 1a19d1ecbd38f2b7ee406df8efa05d29f685ef57.
  ATTACHMENT ID: 12901922

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1664//console

This message is automatically generated.

> Account for the Table interface addition of checkAndMutate
> --
>
> Key: PHOENIX-4457
> URL: https://issues.apache.org/jira/browse/PHOENIX-4457
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4457.patch
>
>
> HBASE-19213 added a new method to Table:
> {code}
> +  CheckAndMutateBuilder checkAndMutate(byte[] row, byte[] family);
> {code}
> Need to account for this in our Table implementations.
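
As a rough illustration of what "accounting for" the new method can look like
(this is not the PHOENIX-4457 patch), a wrapper that delegates to an underlying
HBase Table only needs to forward the new builder method. DelegatingTable and
the delegate field are hypothetical names, assuming hbase-client 2.x on the
classpath.

{code}
import org.apache.hadoop.hbase.client.Table;

// Hypothetical wrapper: a Phoenix-side Table implementation that wraps a real
// HBase Table can satisfy the new interface method by simple delegation.
public abstract class DelegatingTable implements Table {
    protected final Table delegate;

    protected DelegatingTable(Table delegate) {
        this.delegate = delegate;
    }

    @Override
    public Table.CheckAndMutateBuilder checkAndMutate(byte[] row, byte[] family) {
        // Forward the builder-style check-and-mutate to the wrapped table.
        return delegate.checkAndMutate(row, family);
    }
}
{code}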



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4456) queryserver script doesn't perform as expected.

2017-12-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16289019#comment-16289019
 ] 

Hadoop QA commented on PHOENIX-4456:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12901838/PHOENIX-4456.patch
  against master branch at commit 1a19d1ecbd38f2b7ee406df8efa05d29f685ef57.
  ATTACHMENT ID: 12901838

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1663//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1663//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1663//console

This message is automatically generated.

> queryserver script doesn't perform as expected.
> ---
>
> Key: PHOENIX-4456
> URL: https://issues.apache.org/jira/browse/PHOENIX-4456
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 5.0.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Minor
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4456.patch
>
>
> Our queryserver.py is using a copy of the daemon module. It has several flaws:
> 1. It forks first, exits the parent process, and only then creates the pid 
> file, so there is a gap between queryserver.py finishing and the pid file 
> being created.
> 2. The check for an existing pid happens in the forked process, so if we start 
> the queryserver while one is already running we won't see the message that 
> the process is already running/started.
> I've checked the more recent version from Python 3.5 and it still uses the 
> same logic.
> For (2) I think we can add an additional check to PidFile.__init__ so that it 
> happens before we fork the daemon. For (1) there is the option to wait until 
> the pid file appears and only then exit the parent process.
> FYI [~elserj]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4449) Incorrect behavior of sqlline after PHOENIX-3567

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288561#comment-16288561
 ] 

Hadoop QA commented on PHOENIX-4449:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12901780/PHOENIX-4449.002.patch
  against master branch at commit 1a19d1ecbd38f2b7ee406df8efa05d29f685ef57.
  ATTACHMENT ID: 12901780

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ReadIsolationLevelIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1662//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1662//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1662//console

This message is automatically generated.

> Incorrect behavior of sqlline after PHOENIX-3567
> 
>
> Key: PHOENIX-4449
> URL: https://issues.apache.org/jira/browse/PHOENIX-4449
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0, 4.13.0, 5.0.0
>Reporter: Sergey Soldatov
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4449.001.patch, PHOENIX-4449.002.patch
>
>
> In PHOENIX-3567 we introduced the use of argparse in sqlline as well as 
> some default values for the parameters. That's not good because:
> 1. argparse is not a default package in old versions of Python such as 2.6.6, 
> which is used in the CentOS 6 distribution.
> 2. We should not set a default value for the ZooKeeper parent, because if it's 
> not specified on the command line, that value should be obtained from the 
> HBase client (hbase-site.xml, or defaults if it isn't on the classpath).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4452) change usage of WALKey to WALKeyImpl due HBASE-19134

2017-12-12 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16288017#comment-16288017
 ] 

Hadoop QA commented on PHOENIX-4452:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12901731/PHOENIX-4452.patch
  against master branch at commit 1a19d1ecbd38f2b7ee406df8efa05d29f685ef57.
  ATTACHMENT ID: 12901731

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1661//console

This message is automatically generated.

> change usage of WALKey to WALKeyImpl due HBASE-19134
> 
>
> Key: PHOENIX-4452
> URL: https://issues.apache.org/jira/browse/PHOENIX-4452
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Blocker
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4452.patch
>
>
> Changes in HBASE-19134 broke our build.
> WALKey is now abstract, so we need to use WALKeyImpl in 
> SystemCatalogWALEntryFilterIT.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4449) Incorrect behavior of sqlline after PHOENIX-3567

2017-12-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16287057#comment-16287057
 ] 

Hadoop QA commented on PHOENIX-4449:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12901577/PHOENIX-4449.001.patch
  against master branch at commit 334eb15b4a7a80ce8d4e1c1dc09b7724663fc4da.
  ATTACHMENT ID: 12901577

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.hadoop.hbase.regionserver.wal.WALReplayWithIndexWritesAndCompressedWALIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1660//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1660//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1660//console

This message is automatically generated.

> Incorrect behavior of sqlline after PHOENIX-3567
> 
>
> Key: PHOENIX-4449
> URL: https://issues.apache.org/jira/browse/PHOENIX-4449
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0, 4.13.0, 5.0.0
>Reporter: Sergey Soldatov
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4449.001.patch
>
>
> In PHOENIX-3567 we introduced the use of argparse in sqlline as well as 
> some default values for the parameters. That's not good because:
> 1. argparse is not a default package in old versions of Python such as 2.6.6, 
> which is used in the CentOS 6 distribution.
> 2. We should not set a default value for the ZooKeeper parent, because if it's 
> not specified on the command line, that value should be obtained from the 
> HBase client (hbase-site.xml, or defaults if it isn't on the classpath).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4397) Incorrect query results when with stats are disabled on a salted table

2017-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16284467#comment-16284467
 ] 

Hadoop QA commented on PHOENIX-4397:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12901315/PHOENIX-4397_v2.patch
  against master branch at commit d6e61af807f7a4e605c61217bac556ffe00ea237.
  ATTACHMENT ID: 12901315

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.iterate.RoundRobinResultIteratorIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ColumnEncodedImmutableTxStatsCollectorIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1659//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1659//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1659//console

This message is automatically generated.

> Incorrect query results when with stats are disabled on a salted table
> --
>
> Key: PHOENIX-4397
> URL: https://issues.apache.org/jira/browse/PHOENIX-4397
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4397.patch, PHOENIX-4397_v2.patch
>
>
> See attached unit test.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16282420#comment-16282420
 ] 

Hadoop QA commented on PHOENIX-3837:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12901107/PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch
  against master branch at commit ee728a4d19c004ad456b24cd228fb2351362472d.
  ATTACHMENT ID: 12901107

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + 
testTable +" ACTIVE SET GUIDE_POSTS_WIDTH = 10");
+"select GUIDE_POSTS_WIDTH from SYSTEM.\"CATALOG\" where 
TABLE_NAME='" + indexName + "'");assertTrue(rs.next());
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + 
testTable +" ACTIVE SET GUIDE_POSTS_WIDTH = 20");
+"select GUIDE_POSTS_WIDTH from SYSTEM.\"CATALOG\" where 
TABLE_NAME='" + indexName + "'");assertTrue(rs.next());
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + 
testTable +" ACTIVE SET DISABLE_WAL=false");
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + 
testTable +" ACTIVE SET DISABLE_WAL=true");
+private static void asssertIsWALDisabled(Connection conn, String 
fullTableName, boolean expectedValue) throws SQLException {
+assertEquals(expectedValue, pconn.getTable(new 
PTableKey(pconn.getTenantId(), fullTableName)).isWALDisabled());
+  ((s=(USABLE | UNUSABLE | REBUILD | DISABLE | ACTIVE)) (async=ASYNC)? 
((SET?)p=fam_properties)?)
+  {ret = factory.alterIndex(factory.namedTable(null, 
TableName.create(t.getSchemaName(), i.getName())), t.getTableName(), ex!=null, 
PIndexState.valueOf(SchemaUtil.normalizeIdentifier(s.getText())), async!=null, 
p); }

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.execute.PartialCommitIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1656//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1656//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1656//console

This message is automatically generated.

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch, 
> PHOENIX-3837.patch
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}
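
Based on the test lines quoted in the lineLengths section of the QA output
above, the patched grammar accepts the property change when an index state
keyword precedes SET. A minimal sketch, assuming a reachable Phoenix cluster
and a hypothetical table T with index IDX_T:

{code}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AlterIndexSetSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder URL; assumes table T and its index IDX_T already exist.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // With the patched grammar, an index state keyword (ACTIVE here)
            // precedes SET when changing a property on an index.
            stmt.execute("ALTER INDEX IDX_T ON T ACTIVE SET GUIDE_POSTS_WIDTH = 10");
        }
    }
}
{code}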



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16282413#comment-16282413
 ] 

Hadoop QA commented on PHOENIX-3837:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12901107/PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch
  against master branch at commit ee728a4d19c004ad456b24cd228fb2351362472d.
  ATTACHMENT ID: 12901107

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + 
testTable +" ACTIVE SET GUIDE_POSTS_WIDTH = 10");
+"select GUIDE_POSTS_WIDTH from SYSTEM.\"CATALOG\" where 
TABLE_NAME='" + indexName + "'");assertTrue(rs.next());
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + 
testTable +" ACTIVE SET GUIDE_POSTS_WIDTH = 20");
+"select GUIDE_POSTS_WIDTH from SYSTEM.\"CATALOG\" where 
TABLE_NAME='" + indexName + "'");assertTrue(rs.next());
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + 
testTable +" ACTIVE SET DISABLE_WAL=false");
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + 
testTable +" ACTIVE SET DISABLE_WAL=true");
+private static void asssertIsWALDisabled(Connection conn, String 
fullTableName, boolean expectedValue) throws SQLException {
+assertEquals(expectedValue, pconn.getTable(new 
PTableKey(pconn.getTenantId(), fullTableName)).isWALDisabled());
+  ((s=(USABLE | UNUSABLE | REBUILD | DISABLE | ACTIVE)) (async=ASYNC)? 
((SET?)p=fam_properties)?)
+  {ret = factory.alterIndex(factory.namedTable(null, 
TableName.create(t.getSchemaName(), i.getName())), t.getTableName(), ex!=null, 
PIndexState.valueOf(SchemaUtil.normalizeIdentifier(s.getText())), async!=null, 
p); }

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ContextClassloaderIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1657//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1657//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1657//console

This message is automatically generated.

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3837-Merged_With_5.x-2.0-and-Master.patch, 
> PHOENIX-3837.patch
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4441) Reflect changes that were made recently in HBase branch-2

2017-12-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16281527#comment-16281527
 ] 

Hadoop QA commented on PHOENIX-4441:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12901017/PHOENIX-4441.patch
  against master branch at commit ee728a4d19c004ad456b24cd228fb2351362472d.
  ATTACHMENT ID: 12901017

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1655//console

This message is automatically generated.

> Reflect changes that were made recently in HBase branch-2 
> --
>
> Key: PHOENIX-4441
> URL: https://issues.apache.org/jira/browse/PHOENIX-4441
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4441.patch
>
>
> There were several changes that broke our build:
> HBASE-19417 changed a method signature in RegionObserver.
> HBASE-19430 removed the setTimestamp(byte[], int) method from ExtendedCell.
> Also, there is another problem: the configuration on the coprocessor 
> environment is now read-only, so we need to find a way to set the index 
> failure policy for the index coprocessor. That will be covered in a separate JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4439) QueryServer pid file name doesn't comply the usual schema we are using in hadoop ecosystem

2017-12-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16280967#comment-16280967
 ] 

Hadoop QA commented on PHOENIX-4439:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12900923/PHOENIX-4439.patch
  against master branch at commit d77c237b560900671c3a9c58f6f2398342655e8a.
  ATTACHMENT ID: 12900923

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.iterate.RoundRobinResultIteratorWithStatsIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1654//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1654//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1654//console

This message is automatically generated.

> QueryServer pid file name doesn't comply the usual schema we are using in 
> hadoop ecosystem
> --
>
> Key: PHOENIX-4439
> URL: https://issues.apache.org/jira/browse/PHOENIX-4439
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Trivial
> Fix For: 4.8.0, 4.13.0, 5.0.0
>
> Attachments: PHOENIX-4439.patch
>
>
> In PHOENIX-2877 we changed the pid file for PQS to user-queryserver.pid, 
> but that isn't consistent with the usual product-user-component.pid scheme 
> we use in most of the products. We need to add the 'product' part to the 
> pid file name.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4382) Upsert of some big values not correct for immutable tables

2017-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279396#comment-16279396
 ] 

Hadoop QA commented on PHOENIX-4382:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12900763/PHOENIX-4382.v1.master.patch
  against master branch at commit d77c237b560900671c3a9c58f6f2398342655e8a.
  ATTACHMENT ID: 12900763

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+private  void testValues(boolean immutable, 
PDataType dataType, List testData) throws Exception {
+(initPos + ptr.getLength() - (Bytes.SIZEOF_BYTE + 2 * 
Bytes.SIZEOF_INT))) + initPos;

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.phoenix.schema.ImmutableStorageSchemeTest

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1653//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1653//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1653//console

This message is automatically generated.

> Upsert of some big values not correct for immutable tables
> --
>
> Key: PHOENIX-4382
> URL: https://issues.apache.org/jira/browse/PHOENIX-4382
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4382.v1.master.patch, UpsertBigValuesIT.java
>
>
> For immutable tables, an upsert of some values such as Short.MAX_VALUE results 
> in a null value in query result sets. Mutable tables are not affected. I tried 
> with BigInt and got the same problem.
> For Short, the breaking point seems to be 32512. Numbers smaller than that 
> are fine (until you get closer to Short.MIN_VALUE...).
> See the attached test: testShort(), testBigInt().



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3050) Handle DESC columns in child/parent join optimization

2017-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279258#comment-16279258
 ] 

Hadoop QA commented on PHOENIX-3050:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12894919/PHOENIX-3050.patch
  against master branch at commit d77c237b560900671c3a9c58f6f2398342655e8a.
  ATTACHMENT ID: 12894919

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+public Pair, List> 
compileJoinConditions(StatementContext lhsCtx, StatementContext rhsCtx, 
Strategy strategy) throws SQLException {
+SortOrder toSortOrder = strategy == Strategy.SORT_MERGE ? 
SortOrder.ASC : (strategy == Strategy.HASH_BUILD_LEFT ? right.getSortOrder() : 
left.getSortOrder());
+right = CoerceExpression.create(right, toType, 
toSortOrder, right.getMaxLength());
+Pair, List> joinConditions = 
joinSpec.compileJoinConditions(context, subContexts[i], 
JoinCompiler.Strategy.HASH_BUILD_RIGHT);
+Pair, List> joinConditions = 
lastJoinSpec.compileJoinConditions(lhsCtx, context, 
JoinCompiler.Strategy.HASH_BUILD_LEFT);
+Pair, List> joinConditions = 
lastJoinSpec.compileJoinConditions(type == JoinType.Right ? rhsCtx : lhsCtx, 
type == JoinType.Right ? lhsCtx : rhsCtx, JoinCompiler.Strategy.SORT_MERGE);

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1652//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1652//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1652//console

This message is automatically generated.

> Handle DESC columns in child/parent join optimization
> -
>
> Key: PHOENIX-3050
> URL: https://issues.apache.org/jira/browse/PHOENIX-3050
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>Priority: Minor
> Attachments: PHOENIX-3050.patch
>
>
> We found that child/parent join optimization would not work with DESC pk 
> columns. So as a quick fix for PHOENIX-3029, we simply avoid DESC columns 
> when optimizing, which would have no impact on the overall approach and no 
> impact on ASC columns.
>  
> But eventually we need to make the optimization work with DESC columns too.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4436) Hive PhoenixStorageHandler doesn't work well with quoted namespace/table name.

2017-12-05 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278430#comment-16278430
 ] 

Hadoop QA commented on PHOENIX-4436:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12900643/PHOENIX-4436.patch
  against master branch at commit 88038a2dacb7aa1a90015163d4d75d04793e4e11.
  ATTACHMENT ID: 12900643

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+tableName = 
StringEscapeUtils.unescapeJava(config.get(PhoenixStorageHandlerConstants.PHOENIX_TABLE_NAME));
+List columnMetadata = 
PhoenixUtil.getColumnInfoList(conn, 
StringEscapeUtils.unescapeJava(tbl.getProperty
+String tableName = 
StringEscapeUtils.unescapeJava(tableProperties.getProperty(PhoenixStorageHandlerConstants
+String tableName = 
StringEscapeUtils.unescapeJava(jobConf.get(PhoenixStorageHandlerConstants.PHOENIX_TABLE_NAME));
+tableName = 
StringEscapeUtils.unescapeJava(config.get(PhoenixStorageHandlerConstants.PHOENIX_TABLE_NAME));
+primaryKeyColumnList = PhoenixUtil.getPrimaryKeyColumnList(config, 
StringEscapeUtils.unescapeJava(config.get
+String tableName = 
StringEscapeUtils.unescapeJava(tableParameterMap.get(PhoenixStorageHandlerConstants

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-hive/target/failsafe-reports/TEST-org.apache.phoenix.hive.HiveMapReduceIT
./phoenix-hive/target/failsafe-reports/TEST-org.apache.phoenix.hive.HiveTezIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1651//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1651//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1651//console

This message is automatically generated.

> Hive PhoenixStorageHandler doesn't work well with quoted namespace/table 
> name. 
> ---
>
> Key: PHOENIX-4436
> URL: https://issues.apache.org/jira/browse/PHOENIX-4436
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>  Labels: HivePhoenix
> Attachments: PHOENIX-4436.patch
>
>
> With a quoted schema name the table is created but is not usable, failing with an 
> exception about the '\' character. The reason is that job properties in Hive are 
> stored as escaped Java strings, so we need to unescape them to handle the name 
> correctly.
> With a quoted table name, creation fails because the table name is used in the PK 
> constraint name, prefixed with 'pk_', so the PK name comes out wrong, e.g. 
> 'pk_"table_name"'. There is no real reason to derive the constraint PK name from 
> the table name; we can simply use a constant.
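> A minimal sketch of the unescaping step described above; the real property key 
> lives in PhoenixStorageHandlerConstants and is truncated in the QA excerpt, so a 
> placeholder key is used here:
> {code}
> // Hive stores job property values as escaped Java strings, so a quoted name such
> // as \"ns\".\"tbl\" must be unescaped before it is handed to Phoenix.
> String raw = jobConf.get("phoenix.table.name");   // placeholder key, see the constants class
> String tableName = org.apache.commons.lang3.StringEscapeUtils.unescapeJava(raw);
> // tableName now holds the quoted form, e.g. "ns"."tbl", that Phoenix expects.
> {code}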



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4424) Allow users to create "DEFAULT" and "HBASE" Schema (Uppercase Schema Names)

2017-12-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16277569#comment-16277569
 ] 

Hadoop QA commented on PHOENIX-4424:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12900529/PHOENIX-4424.002.patch
  against master branch at commit 88038a2dacb7aa1a90015163d4d75d04793e4e11.
  ATTACHMENT ID: 12900529

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+verifyAllowed(grantPermissions("C", regularUser1, "\"" + 
SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE + "\"", true), superUser1);
+verifyAllowed(grantPermissions("C", regularUser1, "\"" + 
SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE + "\"", true), superUser1);
+// ddl2 should create uppercase schemaName since Phoenix upper-cases 
identifiers without quotes
+// Create schema DEFAULT and HBASE (Should allow since they are 
upper-cased) and verify that it exists
+ HBaseAdmin admin = 
conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();) {
+
assertNotNull(admin.getNamespaceDescriptor(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE.toUpperCase()));
+schemaName)) { throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.SCHEMA_NOT_ALLOWED)
+
if(!changePermsStatement.getSchemaName().equals(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE))
 {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ConcurrentMutationsIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1650//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1650//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1650//console

This message is automatically generated.

> Allow users to create "DEFAULT" and "HBASE" Schema (Uppercase Schema Names)
> ---
>
> Key: PHOENIX-4424
> URL: https://issues.apache.org/jira/browse/PHOENIX-4424
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>Assignee: Karan Mehta
> Attachments: PHOENIX-4424.001.patch, PHOENIX-4424.002.patch
>
>
> We currently block users from creating the "DEFAULT" and "HBASE" schemas; however, 
> the blocked names should actually be "default" and "hbase", since HBase namespaces 
> are case sensitive. Hence we should update the check and allow it if users want to 
> create schemas with those names.
> If users want to access the upper-case schema names, they can pass the name in 
> directly (Phoenix automatically upper-cases unquoted identifiers) or pass it in 
> upper-case letters with double quotes.
> FYI.
> [~twdsi...@gmail.com]
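> A small JDBC sketch of the distinction being described (connection URL is 
> illustrative and namespace mapping is assumed to be enabled):
> {code}
> try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
>      Statement stmt = conn.createStatement()) {
>     // Quoted upper-case names map to upper-cased HBase namespaces and should be allowed.
>     stmt.execute("CREATE SCHEMA IF NOT EXISTS \"DEFAULT\"");
>     stmt.execute("CREATE SCHEMA IF NOT EXISTS \"HBASE\"");
>     // The quoted lower-case forms collide with the reserved HBase namespaces
>     // 'default' and 'hbase' and are the ones that should remain blocked:
>     // stmt.execute("CREATE SCHEMA \"default\"");   // expected to be rejected
> }
> {code}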



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4415) Ignore CURRENT_SCN property if set in Pig Storer

2017-12-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16275191#comment-16275191
 ] 

Hadoop QA commented on PHOENIX-4415:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12900278/PHOENIX-4415_v3.patch
  against master branch at commit 88038a2dacb7aa1a90015163d4d75d04793e4e11.
  ATTACHMENT ID: 12900278

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+public PhoenixRecordWriter(final Configuration configuration, 
Set propsToIgnore) throws SQLException {
+this.conn = 
ConnectionUtil.getOutputConnectionWithoutTheseProps(configuration, 
propsToIgnore);
+public static Connection getOutputConnectionWithoutTheseProps(final 
Configuration conf, Set ignoreTheseProps) throws SQLException {
+public static Connection getOutputConnection(final Configuration conf, 
Properties props, Set withoutTheseProps) throws SQLException {
+public static Properties combineProperties(Properties props, final 
Configuration conf, Set withoutTheseProps) {
+if (copy.getProperty(entry.getKey()) == null && 
!withoutTheseProps.contains(entry.getKey())) {
+conf.set(PhoenixRuntime.CURRENT_SCN_ATTRIB, 
Long.toString(System.currentTimeMillis()+QueryConstants.MILLIS_IN_DAY));
+private static final Set PROPS_TO_IGNORE = new 
HashSet<>(Arrays.asList(PhoenixRuntime.CURRENT_SCN_ATTRIB));
+private final PhoenixOutputFormat outputFormat = new 
PhoenixOutputFormat(PROPS_TO_IGNORE);

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1649//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1649//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1649//console

This message is automatically generated.

> Ignore CURRENT_SCN property if set in Pig Storer
> 
>
> Key: PHOENIX-4415
> URL: https://issues.apache.org/jira/browse/PHOENIX-4415
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-4415.patch, PHOENIX-4415_v2.patch, 
> PHOENIX-4415_v3.patch
>
>
> Phoenix does not support back-in-time writes, so to guard against that we'll 
> remove the CURRENT_SCN property if it's set in the job configuration in 
> PhoenixHBaseStorage.
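> A minimal sketch of the property-filtering idea (names are illustrative; per the 
> QA excerpt above, the actual change goes through ConnectionUtil and 
> PhoenixRecordWriter):
> {code}
> // Copy job configuration entries into connection Properties, skipping the ones
> // that must not reach the output connection (here, CURRENT_SCN).
> Set<String> propsToIgnore = Collections.singleton(PhoenixRuntime.CURRENT_SCN_ATTRIB);
> Properties connProps = new Properties();
> for (Map.Entry<String, String> entry : conf) {   // Configuration is Iterable<Map.Entry<String,String>>
>     if (!propsToIgnore.contains(entry.getKey())) {
>         connProps.setProperty(entry.getKey(), entry.getValue());
>     }
> }
> Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", connProps);
> {code}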



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4139) select distinct with identical aggregations return weird values

2017-12-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16274716#comment-16274716
 ] 

Hadoop QA commented on PHOENIX-4139:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12900237/PHOENIX-4139_v2.patch
  against master branch at commit 88038a2dacb7aa1a90015163d4d75d04793e4e11.
  ATTACHMENT ID: 12900237

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+PreparedStatement statement = conn.prepareStatement("UPSERT INTO " 
+ tableName + "(nam, address, id) values (?,?,?)");
+ResultSet rs = stmt.executeQuery("select distinct 'harshit' as 
\"test_column\", trim(nam), trim(nam), lower(nam) from " + tableName);

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-hive/target/failsafe-reports/TEST-org.apache.phoenix.hive.HiveTezIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1648//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1648//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1648//console

This message is automatically generated.

> select distinct with identical aggregations return weird values 
> 
>
> Key: PHOENIX-4139
> URL: https://issues.apache.org/jira/browse/PHOENIX-4139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: minicluster
>Reporter: Csaba Skrabak
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4139.patch, PHOENIX-4139_v2.patch
>
>
> From sme-hbase hipchat room:
> Pulkit Bhardwaj·10:31
> i'm seeing a weird issue with phoenix, appreciate some thoughts
> Created a simple table in phoenix
> {noformat}
> 0: jdbc:phoenix:> create table test_select(nam VARCHAR(20), address 
> VARCHAR(20), id BIGINT
> . . . . . . . . > constraint my_pk primary key (id));
> 0: jdbc:phoenix:> upsert into test_select (nam, address,id) 
> values('pulkit','badaun',1);
> 0: jdbc:phoenix:> select * from test_select;
> +-+--+-+
> |   NAM   | ADDRESS  | ID  |
> +-+--+-+
> | pulkit  | badaun   | 1   |
> +-+--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", nam from 
> test_select;
> +--+-+
> | test_column  |   NAM   |
> +--+-+
> | harshit  | pulkit  |
> +--+-+
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam) from test_select;
> +--+++
> | test_column  |   TRIM(NAM)|   TRIM(NAM)|
> +--+++
> | harshit  | pulkitpulkit  | pulkitpulkit  |
> +--+++
> {noformat}
> When I apply a trim on the nam column and use it multiple times, the output 
> has the cell data duplicated!
> {noformat}
> 0: jdbc:phoenix:> select distinct 'harshit' as "test_column", trim(nam), 
> trim(nam), trim(nam) from test_select;
> +--+---+---+---+
> | test_column  |   TRIM(NAM)   |   TRIM(NAM)   |   
> TRIM(NAM)   |
> +--+---+---+---+
> | harshit  | pulkitpulkitpulkit  | pulkitpulkitpulkit  | 
> pulkitpulkitpulkit  |
> +--+---+---+---+
> {noformat}
> Wondering if someone has seen this before??
> One thing to note is, if I remove the distinct 'harshit' as "test_column", 
> the issue is not seen
> {noformat}
> 0: jdbc:phoenix:> select trim(nam), trim(nam), trim(nam) from test_select;
> ++++
> | TRIM(NAM)  | TRIM(NAM)  | TRIM(NAM)  |
> ++++
> | pulkit | pulkit | pulkit |
> ++--

[jira] [Commented] (PHOENIX-4424) Allow users to create "DEFAULT" and "HBASE" Schema (Uppercase Schema Names)

2017-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273880#comment-16273880
 ] 

Hadoop QA commented on PHOENIX-4424:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12900125/PHOENIX-4424.001.patch
  against master branch at commit 88038a2dacb7aa1a90015163d4d75d04793e4e11.
  ATTACHMENT ID: 12900125

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+verifyAllowed(grantPermissions("C", regularUser1, "\"" + 
SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE + "\"", true), superUser1);
+verifyAllowed(grantPermissions("C", regularUser1, "\"" + 
SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE + "\"", true), superUser1);
+// Create schema DEFAULT and HBASE (Should allow since they are 
upper-cased) and verify that it exists
+ HBaseAdmin admin = 
conn.unwrap(PhoenixConnection.class).getQueryServices().getAdmin();) {
+
assertNotNull(admin.getNamespaceDescriptor(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE.toUpperCase()));
+schemaName)) { throw new 
SQLExceptionInfo.Builder(SQLExceptionCode.SCHEMA_NOT_ALLOWED)
+
if(!changePermsStatement.getSchemaName().equals(SchemaUtil.SCHEMA_FOR_DEFAULT_NAMESPACE))
 {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.DropSchemaIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.rpc.PhoenixServerRpcIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1647//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1647//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1647//console

This message is automatically generated.

> Allow users to create "DEFAULT" and "HBASE" Schema (Uppercase Schema Names)
> ---
>
> Key: PHOENIX-4424
> URL: https://issues.apache.org/jira/browse/PHOENIX-4424
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>Assignee: Karan Mehta
> Attachments: PHOENIX-4424.001.patch
>
>
> We currently block users from creating the "DEFAULT" and "HBASE" schemas; however, 
> the blocked names should actually be "default" and "hbase", since HBase namespaces 
> are case sensitive. Hence we should update the check and allow it if users want to 
> create schemas with those names.
> If users want to access the upper-case schema names, they can pass the name in 
> directly (Phoenix automatically upper-cases unquoted identifiers) or pass it in 
> upper-case letters with double quotes.
> FYI.
> [~twdsi...@gmail.com]



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4322) DESC primary key column with variable length does not work in SkipScanFilter

2017-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16273576#comment-16273576
 ] 

Hadoop QA commented on PHOENIX-4322:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12900077/PHOENIX-4322.patch
  against master branch at commit 88038a2dacb7aa1a90015163d4d75d04793e4e11.
  ATTACHMENT ID: 12900077

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+String ddl = "CREATE table " + table + " (oid VARCHAR NOT NULL, 
code VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC))";
+runQueryTest(ddl, upsert("oid", "code"), insertedRows, new 
Object[][]{{"o2", "2"}, {"o1", "1"}}, new WhereCondition("(oid, code)", "IN", 
"(('o2', '2'), ('o1', '1'))"),
+  && outputBytes[outputSize-1] == 
SchemaUtil.getSeparatorByte(true, false, getChildren().get(k)) ; k--) {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.AlterTableWithViewsIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1646//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1646//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1646//console

This message is automatically generated.

> DESC primary key column with variable length does not work in SkipScanFilter
> 
>
> Key: PHOENIX-4322
> URL: https://issues.apache.org/jira/browse/PHOENIX-4322
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Maryann Xue
>Assignee: Maryann Xue
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4322.patch
>
>
> Example:
> {code}
> @Test
> public void inDescCompositePK3() throws Exception {
> String table = generateUniqueName();
> String ddl = "CREATE table " + table + " (oid VARCHAR NOT NULL, code 
> VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC))";
> Object[][] insertedRows = new Object[][]{{"o1", "1"}, {"o2", "2"}, 
> {"o3", "3"}};
> runQueryTest(ddl, upsert("oid", "code"), insertedRows, new 
> Object[][]{{"o2", "2"}, {"o1", "1"}}, new WhereCondition("(oid, code)", "IN", 
> "(('o2', '2'), ('o1', '1'))"),
> table);
> }
> {code}
> Here the last column in primary key is in DESC order and has variable length, 
> and WHERE clause involves an "IN" operator with RowValueConstructor 
> specifying all PK columns. We get no results.
> This ends up being the root cause for not being able to use child/parent join 
> optimization on DESC pk columns as described in PHOENIX-3050.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4422) Connection to server is very slow.

2017-11-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272409#comment-16272409
 ] 

Hadoop QA commented on PHOENIX-4422:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12899968/PHOENIX-4422.patch
  against master branch at commit 88038a2dacb7aa1a90015163d4d75d04793e4e11.
  ATTACHMENT ID: 12899968

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1645//console

This message is automatically generated.

> Connection to server is very slow.
> --
>
> Key: PHOENIX-4422
> URL: https://issues.apache.org/jira/browse/PHOENIX-4422
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>  Labels: HBase-2.0
> Attachments: PHOENIX-4422.patch
>
>
> The problem affects the HBase 2.0 integration. After the recent refactoring that 
> replaced the HTableDescriptor constructor with TableDescriptorBuilder, the check 
> that decides whether the system catalog table needs to be modified is wrong, so it 
> now always triggers modification of all system tables.
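> A hedged sketch of the kind of guard the description implies, i.e. only calling 
> modifyTable when the rebuilt descriptor actually differs (not necessarily the 
> exact fix in the attached patch):
> {code}
> // HBase 2.0 style: compare the existing descriptor with the newly built one and
> // skip modifyTable when nothing has changed.
> TableDescriptor existing = admin.getDescriptor(TableName.valueOf("SYSTEM.CATALOG"));
> TableDescriptor desired = TableDescriptorBuilder.newBuilder(existing)
>         .setValue("SOME_PROPERTY", "value")   // illustrative change only
>         .build();
> if (!desired.equals(existing)) {
>     admin.modifyTable(desired);
> } // otherwise leave the system table alone
> {code}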



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4415) Ignore CURRENT_SCN property if set in Pig Storer

2017-11-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272213#comment-16272213
 ] 

Hadoop QA commented on PHOENIX-4415:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12899941/PHOENIX-4415_v2.patch
  against master branch at commit 355ee522c1d4ff07cf9fbb0a9a01e43e3f702730.
  ATTACHMENT ID: 12899941

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 2 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+public PhoenixRecordWriter(final Configuration configuration, 
Set propsToIgnore) throws SQLException {
+this.conn = 
ConnectionUtil.getOutputConnectionWithoutTheseProps(configuration, 
propsToIgnore);
+public static Connection getOutputConnectionWithoutTheseProps(final 
Configuration conf, Set ignoreTheseProps) throws SQLException {
+public static Connection getOutputConnection(final Configuration conf, 
Properties props, Set withoutTheseProps) throws SQLException {
+public static Properties combineProperties(Properties props, final 
Configuration conf, Set withoutTheseProps) {
+if (copy.getProperty(entry.getKey()) == null && 
!withoutTheseProps.contains(entry.getKey())) {
+conf.set(PhoenixRuntime.CURRENT_SCN_ATTRIB, 
Long.toString(System.currentTimeMillis()+QueryConstants.MILLIS_IN_DAY));
+private static final Set PROPS_TO_IGNORE = 
Sets.newHashSet(PhoenixRuntime.CURRENT_SCN_ATTRIB);
+private final PhoenixOutputFormat outputFormat = new 
PhoenixOutputFormat(PROPS_TO_IGNORE);

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1644//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1644//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1644//console

This message is automatically generated.

> Ignore CURRENT_SCN property if set in Pig Storer
> 
>
> Key: PHOENIX-4415
> URL: https://issues.apache.org/jira/browse/PHOENIX-4415
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-4415.patch, PHOENIX-4415_v2.patch
>
>
> Phoenix does not support back-in-time writes, so to guard against that we'll 
> remove the CURRENT_SCN property if it's set in the job configuration in 
> PhoenixHBaseStorage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4288) Indexes not used when ordering by primary key

2017-11-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272131#comment-16272131
 ] 

Hadoop QA commented on PHOENIX-4288:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12899924/PHOENIX-4288.patch
  against master branch at commit 355ee522c1d4ff07cf9fbb0a9a01e43e3f702730.
  ATTACHMENT ID: 12899924

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+conn.createStatement().execute("CREATE LOCAL INDEX " + 
tableName + "_idx ON " + tableName + " (c1)");
+String query = "SELECT rowkey, c1, c2 FROM " + tableName + " where 
c1 LIKE 'X0%' ORDER BY rowkey";
+PreparedStatement stmt = conn.prepareStatement("UPSERT INTO " + 
tableName + " (rowkey, c1, c2) VALUES (?, ?, ?)");
+conn.createStatement().execute("CREATE LOCAL INDEX " + tableName + 
"_idx ON " + tableName + " (c1)");
+String query = "SELECT rowkey, max(c1), max(c2) FROM " + tableName 
+ " where c1 LIKE 'X%' GROUP BY rowkey";
+PreparedStatement stmt = conn.prepareStatement("UPSERT INTO " + 
tableName + " (rowkey, c1, c2) VALUES (?, ?, ?)");
+conn.createStatement().execute("CREATE LOCAL INDEX " + tableName + 
"_idx1 ON " + tableName + " (c1) INCLUDE (c2, c3)");
+conn.createStatement().execute("CREATE LOCAL INDEX " + tableName + 
"_idx2 ON " + tableName + " (c2, c3) INCLUDE (c1)");
+String query = "SELECT * FROM " + tableName + " where c1 BETWEEN 
10 AND 20 AND c2 < 9000 AND C3 < 5000";
+"SERVER FILTER BY ((\"C1\" >= 10 AND \"C1\" <= 20) AND 
TO_INTEGER(\"C3\") < 5000)\n" +

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.IndexToolIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1643//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1643//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1643//console

This message is automatically generated.

> Indexes not used when ordering by primary key
> -
>
> Key: PHOENIX-4288
> URL: https://issues.apache.org/jira/browse/PHOENIX-4288
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Marcin Januszkiewicz
>Assignee: Maryann Xue
>  Labels: CostBasedOptimization
> Attachments: PHOENIX-4288.patch
>
>
> We have a table
> CREATE TABLE t (
>   rowkey VARCHAR PRIMARY KEY,
>   c1 VARCHAR,
>   c2 VARCHAR
> )
> which we want to query by doing partial matches on c1, and keep the ordering 
> of the source table:
> SELECT rowkey, c1, c2 FROM t where c1 LIKE 'X0%' ORDER BY rowkey;
> We expect most queries to select a small subset of the table, so we create an 
> index to speed up searches:
> CREATE LOCAL INDEX t_c1_ix ON t (c1);
> However, this index will not be used since Phoenix will always choose not to 
> resort the data.
> In our actual use case, adding index hints is not a practical solution.
> See also discussion at:
> https://lists.apache.org/thread.html/26ab58288eb811d2f074c3f89067163d341e5531fb581f3b2486cf43@%3Cuser.phoenix.apache.org%3E
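> For reference, the existing workaround is an explicit index hint, which as noted 
> above is not practical for this use case (hint syntax per the Phoenix docs; names 
> from the example above):
> {code}
> // Forces the local index that the optimizer otherwise skips.
> String query = "SELECT /*+ INDEX(t t_c1_ix) */ rowkey, c1, c2 FROM t "
>              + "WHERE c1 LIKE 'X0%' ORDER BY rowkey";
> ResultSet rs = conn.createStatement().executeQuery(query);
> {code}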



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4415) Ignore CURRENT_SCN property if set in Pig Storer

2017-11-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272015#comment-16272015
 ] 

Hadoop QA commented on PHOENIX-4415:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12899900/PHOENIX-4415.patch
  against master branch at commit 355ee522c1d4ff07cf9fbb0a9a01e43e3f702730.
  ATTACHMENT ID: 12899900

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 2 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+private static final Set DO_NOT_CLONE_PARAMS = 
Sets.newHashSet(PhoenixRuntime.CURRENT_SCN_ATTRIB);
+if (LOG.isWarnEnabled()) LOG.warn("The " + 
PhoenixRuntime.CURRENT_SCN_ATTRIB + " is unsupported when writing and will be 
ignored");

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1641//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1641//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1641//console

This message is automatically generated.

> Ignore CURRENT_SCN property if set in Pig Storer
> 
>
> Key: PHOENIX-4415
> URL: https://issues.apache.org/jira/browse/PHOENIX-4415
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
> Attachments: PHOENIX-4415.patch
>
>
> Phoenix does not support back-in-time writes, so to guard against that we'll 
> remove the CURRENT_SCN property if it's set in the job configuration in 
> PhoenixHBaseStorage.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4288) Indexes not used when ordering by primary key

2017-11-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16271883#comment-16271883
 ] 

Hadoop QA commented on PHOENIX-4288:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12899912/PHOENIX-4288.patch
  against master branch at commit 355ee522c1d4ff07cf9fbb0a9a01e43e3f702730.
  ATTACHMENT ID: 12899912

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1642//console

This message is automatically generated.

> Indexes not used when ordering by primary key
> -
>
> Key: PHOENIX-4288
> URL: https://issues.apache.org/jira/browse/PHOENIX-4288
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Marcin Januszkiewicz
>Assignee: Maryann Xue
>  Labels: CostBasedOptimization
> Attachments: PHOENIX-4288.patch
>
>
> We have a table
> CREATE TABLE t (
>   rowkey VARCHAR PRIMARY KEY,
>   c1 VARCHAR,
>   c2 VARCHAR
> )
> which we want to query by doing partial matches on c1, and keep the ordering 
> of the source table:
> SELECT rowkey, c1, c2 FROM t where c1 LIKE 'X0%' ORDER BY rowkey;
> We expect most queries to select a small subset of the table, so we create an 
> index to speed up searches:
> CREATE LOCAL INDEX t_c1_ix ON t (c1);
> However, this index will not be used since Phoenix will always choose not to 
> resort the data.
> In our actual use case, adding index hints is not a practical solution.
> See also discussion at:
> https://lists.apache.org/thread.html/26ab58288eb811d2f074c3f89067163d341e5531fb581f3b2486cf43@%3Cuser.phoenix.apache.org%3E



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-672) Add GRANT and REVOKE commands using HBase AccessController

2017-11-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16271848#comment-16271848
 ] 

Hadoop QA commented on PHOENIX-672:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12899867/PHOENIX-672.003.patch
  against master branch at commit 355ee522c1d4ff07cf9fbb0a9a01e43e3f702730.
  ATTACHMENT ID: 12899867

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 3 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+"SYSTEM.\"CATALOG\"", "SYSTEM.\"SEQUENCE\"", 
"SYSTEM.\"STATS\"", "SYSTEM.\"FUNCTION\""));
+QueryConstants.SYSTEM_SCHEMA_NAME + "." + "\"" + 
PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_TABLE+ "\"";
+// DON'T USE HADOOP UserGroupInformation class to create testing users 
since HBase misses some of its functionality
+groupUser = User.createUserForTesting(testUtil.getConfiguration(), 
"groupUser", new String[] {GROUP_SYSTEM_ACCESS});
+unprivilegedUser = User.createUserForTesting(configuration, 
"unprivilegedUser", new String[0]);
+config.set("hbase.regionserver.wal.codec", 
"org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec");
+@Parameterized.Parameters(name = "isNamespaceMapped={0}") // name is used 
by failsafe as file name in reports
+void grantPermissions(String toUser, Set tablesToGrant, 
Permission.Action... actions) throws Throwable {
+AccessControlClient.grant(getUtility().getConnection(), 
TableName.valueOf(table), toUser, null, null,
+void grantPermissions(String toUser, String namespace, 
Permission.Action... actions) throws Throwable {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.execute.PartialCommitIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1640//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1640//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1640//console

This message is automatically generated.

> Add GRANT and REVOKE commands using HBase AccessController
> --
>
> Key: PHOENIX-672
> URL: https://issues.apache.org/jira/browse/PHOENIX-672
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Karan Mehta
>  Labels: namespaces, security
> Fix For: 4.14.0
>
> Attachments: PHOENIX-672.001.patch, PHOENIX-672.002.patch, 
> PHOENIX-672.003.patch
>
>
> In HBase 0.98, cell-level security will be available. Take a look at 
> [this](https://communities.intel.com/community/datastack/blog/2013/10/29/hbase-cell-security)
>  excellent blog post by @apurtell. Once Phoenix works on 0.96, we should add 
> support for security to our SQL grammar.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-672) Add GRANT and REVOKE commands using HBase AccessController

2017-11-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16271447#comment-16271447
 ] 

Hadoop QA commented on PHOENIX-672:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12899865/PHOENIX-672.003.patch
  against master branch at commit 355ee522c1d4ff07cf9fbb0a9a01e43e3f702730.
  ATTACHMENT ID: 12899865

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1638//console

This message is automatically generated.

> Add GRANT and REVOKE commands using HBase AccessController
> --
>
> Key: PHOENIX-672
> URL: https://issues.apache.org/jira/browse/PHOENIX-672
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Karan Mehta
>  Labels: namespaces, security
> Fix For: 4.14.0
>
> Attachments: PHOENIX-672.001.patch, PHOENIX-672.002.patch, 
> PHOENIX-672.003.patch
>
>
> In HBase 0.98, cell-level security will be available. Take a look at 
> [this](https://communities.intel.com/community/datastack/blog/2013/10/29/hbase-cell-security)
>  excellent blog post by @apurtell. Once Phoenix works on 0.96, we should add 
> support for security to our SQL grammar.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-672) Add GRANT and REVOKE commands using HBase AccessController

2017-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269989#comment-16269989
 ] 

Hadoop QA commented on PHOENIX-672:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12899739/PHOENIX-672.002.patch
  against master branch at commit 355ee522c1d4ff07cf9fbb0a9a01e43e3f702730.
  ATTACHMENT ID: 12899739

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1637//console

This message is automatically generated.

> Add GRANT and REVOKE commands using HBase AccessController
> --
>
> Key: PHOENIX-672
> URL: https://issues.apache.org/jira/browse/PHOENIX-672
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Karan Mehta
>  Labels: namespaces, security
> Fix For: 4.14.0
>
> Attachments: PHOENIX-672.001.patch, PHOENIX-672.002.patch
>
>
> In HBase 0.98, cell-level security will be available. Take a look at 
> [this](https://communities.intel.com/community/datastack/blog/2013/10/29/hbase-cell-security)
>  excellent blog post by @apurtell. Once Phoenix works on 0.96, we should add 
> support for security to our SQL grammar.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4389) Flapping tests SystemTablePermissionsIT and MigrateSystemTablesToSystemNamespaceIT

2017-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269968#comment-16269968
 ] 

Hadoop QA commented on PHOENIX-4389:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12899708/PHOENIX-4389.001.patch
  against master branch at commit c216b667a8da568f768c0d26f46fa1a9c0994a04.
  ATTACHMENT ID: 12899708

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 2 release 
audit warnings (more than the master's current 0 warnings).

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1636//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1636//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1636//console

This message is automatically generated.

> Flapping tests SystemTablePermissionsIT and 
> MigrateSystemTablesToSystemNamespaceIT
> --
>
> Key: PHOENIX-4389
> URL: https://issues.apache.org/jira/browse/PHOENIX-4389
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
> Environment: Ubuntu 16.04 LTS, Oracle JDK 1.7.0_80, running long IT 
> for PHOENIX-4372 on 4.x-HBase1.2
>Reporter: Pedro Boado
>Assignee: Karan Mehta
>Priority: Minor
> Attachments: PHOENIX-4389.001.patch
>
>
> While running the long IT suite, {{SystemTablePermissionsIT}} and 
> {{MigrateSystemTablesToSystemNamespaceIT}} are flapping, throwing the same 
> exception. Both tests run OK on their own.
> {code}
> [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 7.848 
> s <<< FAILURE! - in org.apache.phoenix.end2end.SystemTablePermissionsIT
> [ERROR] 
> testNamespaceMappedSystemTables(org.apache.phoenix.end2end.SystemTablePermissionsIT)
>   Time elapsed: 6.713 s  <<< ERROR!
> java.io.IOException: Shutting down
> at 
> org.apache.phoenix.end2end.SystemTablePermissionsIT.testNamespaceMappedSystemTables(SystemTablePermissionsIT.java:162)
> Caused by: java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMasterAddress already in use
> at 
> org.apache.phoenix.end2end.SystemTablePermissionsIT.testNamespaceMappedSystemTables(SystemTablePermissionsIT.java:162)
> Caused by: java.net.BindException: Port in use: 0.0.0.0:60010
> at 
> org.apache.phoenix.end2end.SystemTablePermissionsIT.testNamespaceMappedSystemTables(SystemTablePermissionsIT.java:162)
> Caused by: java.net.BindException: Address already in use
> at 
> org.apache.phoenix.end2end.SystemTablePermissionsIT.testNamespaceMappedSystemTables(SystemTablePermissionsIT.java:162)
> [ERROR] 
> testSystemTablePermissions(org.apache.phoenix.end2end.SystemTablePermissionsIT)
>   Time elapsed: 1.133 s  <<< ERROR!
> java.io.IOException: Shutting down
> at 
> org.apache.phoenix.end2end.SystemTablePermissionsIT.testSystemTablePermissions(SystemTablePermissionsIT.java:104)
> Caused by: java.lang.RuntimeException: Failed construction of Master: class 
> org.apache.hadoop.hbase.master.HMasterAddress already in use
> at 
> org.apache.phoenix.end2end.SystemTablePermissionsIT.testSystemTablePermissions(SystemTablePermissionsIT.java:104)
> Caused by: java.net.BindException: Port in use: 0.0.0.0:60010
> at 
> org.apache.phoenix.end2end.SystemTablePermissionsIT.testSystemTablePermissions(SystemTablePermissionsIT.java:104)
> Caused by: java.net.BindException: Address already in use
> at 
> org.apache.phoenix.end2end.SystemTablePermissionsIT.testSystemTablePermissions(SystemTablePermissionsIT.java:104)
> {code}
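> A common mitigation for this kind of clash in minicluster tests, not necessarily 
> what the attached patch does, is to stop the HBase info servers from binding to 
> the fixed port seen in the trace (0.0.0.0:60010):
> {code}
> // Set before the mini cluster starts; -1 disables the master/regionserver info servers.
> Configuration conf = HBaseConfiguration.create();
> conf.setInt("hbase.master.info.port", -1);
> conf.setInt("hbase.regionserver.info.port", -1);
> {code}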



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269754#comment-16269754
 ] 

Hadoop QA commented on PHOENIX-3837:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12899697/PHOENIX-3837.patch
  against master branch at commit c216b667a8da568f768c0d26f46fa1a9c0994a04.
  ATTACHMENT ID: 12899697

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 2 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + 
testTable +" ACTIVE SET GUIDE_POSTS_WIDTH = 10");
+"select GUIDE_POSTS_WIDTH from SYSTEM.\"CATALOG\" where 
TABLE_NAME='" + indexName + "'");assertTrue(rs.next());
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + 
testTable +" ACTIVE SET GUIDE_POSTS_WIDTH = 20");
+"select GUIDE_POSTS_WIDTH from SYSTEM.\"CATALOG\" where 
TABLE_NAME='" + indexName + "'");assertTrue(rs.next());
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + 
testTable +" ACTIVE SET DISABLE_WAL=false");
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + 
testTable +" ACTIVE SET DISABLE_WAL=true");
+private static void asssertIsWALDisabled(Connection conn, String 
fullTableName, boolean expectedValue) throws SQLException {
+assertEquals(expectedValue, pconn.getTable(new 
PTableKey(pconn.getTenantId(), fullTableName)).isWALDisabled());
+  ((s=(USABLE | UNUSABLE | REBUILD | DISABLE | ACTIVE)) (async=ASYNC)? 
((SET?)p=fam_properties)?)
+  {ret = factory.alterIndex(factory.namedTable(null, 
TableName.create(t.getSchemaName(), i.getName())), t.getTableName(), ex!=null, 
PIndexState.valueOf(SchemaUtil.normalizeIdentifier(s.getText())), async!=null, 
p); }

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1635//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1635//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1635//console

This message is automatically generated.

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3837.patch
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}
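> Per the grammar change in the patch excerpt above, the statement parses once an 
> index state such as ACTIVE precedes SET, e.g.:
> {code}
> // Accepted with the patch applied (mirrors the patch's own test excerpt);
> // the bare "ALTER INDEX ... SET ..." form is what currently fails to parse.
> conn.createStatement().execute(
>         "ALTER INDEX IDX_T ON T ACTIVE SET GUIDE_POSTS_WIDTH = 10");
> {code}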



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3837) Unable to set property on an index with Alter statement

2017-11-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16269644#comment-16269644
 ] 

Hadoop QA commented on PHOENIX-3837:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12899692/PHOENIX-3837.patch
  against master branch at commit c216b667a8da568f768c0d26f46fa1a9c0994a04.
  ATTACHMENT ID: 12899692

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 2 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + 
testTable +" ACTIVE SET GUIDE_POSTS_WIDTH = 10");
+"select GUIDE_POSTS_WIDTH from SYSTEM.\"CATALOG\" where 
TABLE_NAME='" + indexName + "'");assertTrue(rs.next());
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + 
testTable +" ACTIVE SET GUIDE_POSTS_WIDTH = 20");
+"select GUIDE_POSTS_WIDTH from SYSTEM.\"CATALOG\" where 
TABLE_NAME='" + indexName + "'");assertTrue(rs.next());
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + 
testTable +" ACTIVE SET DISABLE_WAL=false");
+conn.createStatement().execute("ALTER INDEX "+indexName+" ON " + 
testTable +" ACTIVE SET DISABLE_WAL=true");
+private static void asssertIsWALDisabled(Connection conn, String 
fullTableName, boolean expectedValue) throws SQLException {
+assertEquals(expectedValue, pconn.getTable(new 
PTableKey(pconn.getTenantId(), fullTableName)).isWALDisabled());
+  ((s=(USABLE | UNUSABLE | REBUILD | DISABLE | ACTIVE)) (async=ASYNC)? 
((SET?)p=fam_properties)?)
+  {ret = factory.alterIndex(factory.namedTable(null, 
TableName.create(t.getSchemaName(), i.getName())), t.getTableName(), ex!=null, 
PIndexState.valueOf(SchemaUtil.normalizeIdentifier(s.getText())), async!=null, 
p); }

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1634//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1634//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1634//console

This message is automatically generated.

> Unable to set property on an index with Alter statement
> ---
>
> Key: PHOENIX-3837
> URL: https://issues.apache.org/jira/browse/PHOENIX-3837
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
>Reporter: Mujtaba Chohan
>Assignee: Ethan Wang
> Fix For: 4.14.0
>
> Attachments: PHOENIX-3837.patch
>
>
> {{ALTER INDEX IDX_T ON T SET GUIDE_POSTS_WIDTH=1}}
> {noformat}
> Error: ERROR 601 (42P00): Syntax error. Encountered "SET" at line 1, column 
> 102. (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "SET" at line 1, column 102.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
> at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
> at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1299)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4405) Compilation error using Hadoop3

2017-11-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16267552#comment-16267552
 ] 

Hadoop QA commented on PHOENIX-4405:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12899487/PHOENIX-4405.001.5.x-HBase-2.0.patch
  against master branch at commit c216b667a8da568f768c0d26f46fa1a9c0994a04.
  ATTACHMENT ID: 12899487

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 7 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1633//console

This message is automatically generated.

> Compilation error using Hadoop3
> ---
>
> Key: PHOENIX-4405
> URL: https://issues.apache.org/jira/browse/PHOENIX-4405
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4405.001.5.x-HBase-2.0.patch
>
>
> {noformat}
> [ERROR] 
> /Users/jelser/projects/phoenix.git/phoenix-core/src/main/java/org/apache/phoenix/trace/PhoenixMetricsSink.java:[37,40]
>  package org.apache.commons.configuration does not exist
> [ERROR] 
> /Users/jelser/projects/phoenix.git/phoenix-core/src/main/java/org/apache/phoenix/trace/PhoenixMetricsSink.java:[110,22]
>  cannot find symbol
>   symbol:   class SubsetConfiguration
>   location: class org.apache.phoenix.trace.PhoenixMetricsSink
> [INFO] 2 errors
> {noformat}
> Flipping over our Hadoop version seems to bring along a commons-configuration 
> dependency change which requires a tweak in Phoenix. 
> Should be trivial.
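> The tweak is most likely just the package rename, since Hadoop 3 moved its 
> metrics2 sink API over to commons-configuration2 (sketch, not the exact patch):
> {code}
> // PhoenixMetricsSink.java, before (compiles against Hadoop 2):
> // import org.apache.commons.configuration.SubsetConfiguration;
> // After, matching Hadoop 3's MetricsSink.init(SubsetConfiguration) signature:
> import org.apache.commons.configuration2.SubsetConfiguration;
> {code}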



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4398) Change QueryCompiler get column expressions process from serial to parallel.

2017-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16263037#comment-16263037
 ] 

Hadoop QA commented on PHOENIX-4398:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12898882/PHOENIX-4398.patch
  against master branch at commit c216b667a8da568f768c0d26f46fa1a9c0994a04.
  ATTACHMENT ID: 12898882

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 2 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+private static Configuration config = 
HBaseFactoryProvider.getConfigurationFactory().getConfiguration();
+private static boolean use_compile_parallel = 
config.getBoolean(USE_COMPILE_COLUMN_EXPRESSION_PARALLEL,
+expressions[i++] = ((ProjectedColumn) 
column).getSourceColumnRef().newColumnExpression();
+return new ExpressionOrder(((ProjectedColumn) 
column).getSourceColumnRef().newColumnExpression(), order);
+public static final String USE_COMPILE_COLUMN_EXPRESSION_PARALLEL = 
"phoenix.use.columnexpression.parallel";
+public static final String COMPILE_COLUMN_EXPRESSION_PARALLEL_THREAD = 
"phoenix.columnexpression.parallel.thread";

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexFailureIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexReplicationIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1632//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1632//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1632//console

This message is automatically generated.

> Change QueryCompiler get column expressions process from serial to parallel.
> 
>
> Key: PHOENIX-4398
> URL: https://issues.apache.org/jira/browse/PHOENIX-4398
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0, 4.13.0
>Reporter: Albert Lee
> Fix For: 4.11.0, 4.13.0
>
> Attachments: PHOENIX-4398.patch
>
>
> When QueryCompiler compiles a SELECT statement, the column expressions are 
> built serially. Performance is fine when the table is narrow, but when 
> compiling a wide table (e.g. 130 columns in my use case) this step becomes 
> very expensive, over 70ms. So I changed TupleProjector(PTable projectedTable) 
> from a serial for loop to parallel futures.
> Because this change only improves performance and adds no new feature, there 
> is no unit test.
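
A minimal sketch of the approach described above (a hypothetical helper, not the attached patch): each column expression is built by its own task and collected back in column order, with the thread count left to a configuration property such as the phoenix.columnexpression.parallel.thread key visible in the QA output.

{noformat}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch only: submit one task per projected column and read the results
// back by index so the expression order matches the column order.
class ParallelExpressionBuilder<E> {
    List<E> build(List<Callable<E>> perColumnTasks, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<E>> futures = new ArrayList<>(perColumnTasks.size());
            for (Callable<E> task : perColumnTasks) {
                futures.add(pool.submit(task));      // one future per column
            }
            List<E> expressions = new ArrayList<>(futures.size());
            for (Future<E> future : futures) {
                expressions.add(future.get());       // preserves submission order
            }
            return expressions;
        } finally {
            pool.shutdown();
        }
    }
}
{noformat}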



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4398) Change QueryCompiler get column expressions process from serial to parallel.

2017-11-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16262415#comment-16262415
 ] 

Hadoop QA commented on PHOENIX-4398:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12898843/PHOENIX-4398_v1.patch
  against master branch at commit c216b667a8da568f768c0d26f46fa1a9c0994a04.
  ATTACHMENT ID: 12898843

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1631//console

This message is automatically generated.

> Change QueryCompiler get column expressions process from serial to parallel.
> 
>
> Key: PHOENIX-4398
> URL: https://issues.apache.org/jira/browse/PHOENIX-4398
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0, 4.13.0
>Reporter: Albert Lee
> Fix For: 4.11.0, 4.13.0
>
> Attachments: PHOENIX-4398_v1.patch
>
>
> When QueryCompiler compiles a SELECT statement, the column expressions are 
> built serially. Performance is fine when the table is narrow, but when 
> compiling a wide table (e.g. 130 columns in my use case) this step becomes 
> very expensive, over 70ms. So I changed TupleProjector(PTable projectedTable) 
> from a serial for loop to parallel futures.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-672) Add GRANT and REVOKE commands using HBase AccessController

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16256432#comment-16256432
 ] 

Hadoop QA commented on PHOENIX-672:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12898104/PHOENIX-672.001.patch
  against master branch at commit ef3bce18fe7373b66136d933cc364001dff2c3f8.
  ATTACHMENT ID: 12898104

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 4 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+"SYSTEM.\"CATALOG\"", "SYSTEM.\"SEQUENCE\"", 
"SYSTEM.\"STATS\"", "SYSTEM.\"FUNCTION\""));
+QueryConstants.SYSTEM_SCHEMA_NAME + "." + "\"" + 
PhoenixDatabaseMetaData.SYSTEM_SEQUENCE_TABLE+ "\"";
+groupUser = User.createUserForTesting(testUtil.getConfiguration(), 
"groupUser", new String[]{GROUP_SYSTEM_ACCESS});
+unprivilegedUser = User.createUserForTesting(configuration, 
"unprivilegedUser", new String[0]);
+config.set("hbase.regionserver.wal.codec", 
"org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec");
+@Parameterized.Parameters(name = "isNamespaceMapped={0}") // name is used 
by failsafe as file name in reports
+void grantPermissions(String toUser, Set tablesToGrant, 
Permission.Action... actions) throws Throwable {
+AccessControlClient.grant(getUtility().getConnection(), 
TableName.valueOf(table), toUser, null, null,
+void grantPermissions(String toUser, String namespace, 
Permission.Action... actions) throws Throwable {
+void grantPermissions(String groupEntry, Permission.Action... actions) 
throws IOException, Throwable {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ColumnEncodedMutableTxStatsCollectorIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1630//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1630//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1630//console

This message is automatically generated.

> Add GRANT and REVOKE commands using HBase AccessController
> --
>
> Key: PHOENIX-672
> URL: https://issues.apache.org/jira/browse/PHOENIX-672
> Project: Phoenix
>  Issue Type: Task
>Reporter: James Taylor
>Assignee: Karan Mehta
>  Labels: namespaces, security
> Fix For: 4.14.0
>
> Attachments: PHOENIX-672.001.patch
>
>
> In HBase 0.98, cell-level security will be available. Take a look at 
> [this](https://communities.intel.com/community/datastack/blog/2013/10/29/hbase-cell-security)
>  excellent blog post by @apurtell. Once Phoenix works on 0.96, we should add 
> support for security to our SQL grammar.
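
The test code in the QA output above exercises the HBase AccessControlClient API directly; a minimal sketch of what a Phoenix-level GRANT ultimately maps to on the HBase side (illustrative only; the SQL grammar itself is what this issue adds):

{noformat}
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.security.access.AccessControlClient;
import org.apache.hadoop.hbase.security.access.Permission;

public class GrantSketch {
    // Grant READ and WRITE on the HBase table backing a Phoenix table.
    static void grantReadWrite(Connection connection, String table, String user)
            throws Throwable {
        AccessControlClient.grant(connection, TableName.valueOf(table), user,
                null /* family */, null /* qualifier */,
                Permission.Action.READ, Permission.Action.WRITE);
    }
}
{noformat}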



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4384) Phoenix server jar doesn't include icu4j jars

2017-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255190#comment-16255190
 ] 

Hadoop QA commented on PHOENIX-4384:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12897951/PHOENIX-4384_v1.patch
  against master branch at commit ef3bce18fe7373b66136d933cc364001dff2c3f8.
  ATTACHMENT ID: 12897951

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.txn.RollbackIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1629//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1629//console

This message is automatically generated.

> Phoenix server jar doesn't include icu4j jars
> -
>
> Key: PHOENIX-4384
> URL: https://issues.apache.org/jira/browse/PHOENIX-4384
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Shehzaad Nakhoda
>Assignee: Shehzaad Nakhoda
> Fix For: 4.13.0
>
> Attachments: PHOENIX-4384_v1.patch
>
>
> The Phoenix server "shaded" jar is supposed to include all (?) of its 
> dependencies. However, icu4j is missing. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4381) Calculate the estimatedSize of MutationState incrementally

2017-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16254760#comment-16254760
 ] 

Hadoop QA commented on PHOENIX-4381:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12897906/PHOENIX-4381.patch
  against master branch at commit 2053905683409225ffdc1c0ae4fc6c759604a80d.
  ATTACHMENT ID: 12897906

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+// here we increment the estimated size by the fraction of new 
rows we added from the newMutationState 
+this.estimatedSize += 
((double)(this.numRows-oldNumRows)/newMutationState.numRows) * 
newMutationState.estimatedSize;
+if (logger.isDebugEnabled()) logger.debug("Sent 
batch of " + mutationBatch.size() + " for " + Bytes.toString(htableName));
+getEstimatedRowSize(TableRef tableRef, Map mutations) {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.AbsFunctionEnd2EndIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ColumnEncodedImmutableNonTxStatsCollectorIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1628//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1628//console

This message is automatically generated.

> Calculate the estimatedSize of MutationState incrementally
> --
>
> Key: PHOENIX-4381
> URL: https://issues.apache.org/jira/browse/PHOENIX-4381
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.13.1
>
> Attachments: PHOENIX-4381.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4379) Upgrade code to create CHILD links should only create the links for views and not for indexes

2017-11-14 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16252846#comment-16252846
 ] 

Hadoop QA commented on PHOENIX-4379:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12897641/PHOENIX-4279.patch
  against master branch at commit b2d5b4d75d4698981b291fecfac3efa3fb6e2649.
  ATTACHMENT ID: 12897641

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1627//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1627//console

This message is automatically generated.

> Upgrade code to create CHILD links should only create the links for views and 
> not for indexes
> -
>
> Key: PHOENIX-4379
> URL: https://issues.apache.org/jira/browse/PHOENIX-4379
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.13.1
>
> Attachments: PHOENIX-4279.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4305) Make use of Cell interface APIs where ever possible.

2017-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250827#comment-16250827
 ] 

Hadoop QA commented on PHOENIX-4305:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12897441/PHOENIX-4305_v2.patch
  against master branch at commit 1d8a6bc3a6a277d9e3201066b753fa9fd7018545.
  ATTACHMENT ID: 12897441

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 55 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1626//console

This message is automatically generated.

> Make use of Cell interface APIs where ever possible.
> 
>
> Key: PHOENIX-4305
> URL: https://issues.apache.org/jira/browse/PHOENIX-4305
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.12.0
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4305.patch, PHOENIX-4305_v2.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4373) Local index variable length key can have trailing nulls while upserting

2017-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16250773#comment-16250773
 ] 

Hadoop QA commented on PHOENIX-4373:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12897415/PHOENIX-4373.v1.master.patch
  against master branch at commit 1d8a6bc3a6a277d9e3201066b753fa9fd7018545.
  ATTACHMENT ID: 12897415

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
++ "(ID,CAR_NUM,CAP_DATE,ORG_ID,ORG_NAME) 
VALUES('1','car1','2016-01-01 00:00:00',11,'orgname1')";
++ " WHERE CAR_NUM='car1' AND 
CAP_DATE>='2016-01-01' AND CAP_DATE<='2016-05-02' LIMIT 10");

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.KeyOnlyIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1625//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1625//console

This message is automatically generated.

> Local index variable length key can have trailing nulls while upserting
> ---
>
> Key: PHOENIX-4373
> URL: https://issues.apache.org/jira/browse/PHOENIX-4373
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4373.v1.master.patch
>
>
> In UpsertCompiler#setValues(), if it's a local index, the key is prefixed 
> with the regionPrefix. During that process, ptr.get() is called to get the 
> base key, and the code assumes the entire array should be used. However, if 
> it's a variable-length key, we could have trailing nulls, since the size of 
> the base key's ptr array is just an estimate. 
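
A minimal illustration of the fix direction (a hypothetical helper, not the attached patch): copy only the valid window of the pointer instead of assuming the whole backing array is the key.

{noformat}
import java.util.Arrays;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;

class LocalIndexKeySketch {
    static byte[] baseKey(ImmutableBytesWritable ptr) {
        // Wrong: ptr.get() returns the whole backing array, which can carry
        // trailing null bytes beyond the valid variable-length key.
        // byte[] key = ptr.get();

        // Right: copy only [offset, offset + length) before prefixing the
        // region start key.
        return Arrays.copyOfRange(ptr.get(), ptr.getOffset(),
                ptr.getOffset() + ptr.getLength());
    }
}
{noformat}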



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4367) Document new COLLATION_KEY function

2017-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16248237#comment-16248237
 ] 

Hadoop QA commented on PHOENIX-4367:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12897159/collation_key_doc.patch
  against master branch at commit 2a8e1c750f081f7f020d4321f8d76ae02c074aa5.
  ATTACHMENT ID: 12897159

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation, build,
or dev patch that doesn't require tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1624//console

This message is automatically generated.

> Document new COLLATION_KEY function
> ---
>
> Key: PHOENIX-4367
> URL: https://issues.apache.org/jira/browse/PHOENIX-4367
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: James Taylor
>Assignee: Shehzaad Nakhoda
>Priority: Minor
> Fix For: 4.13.0
>
> Attachments: collation_key_doc.patch
>
>
> Please add a small entry to phoenix.csv (which lives in 
> https://svn.apache.org/repos/asf/phoenix) to describe how to use the new 
> COLLATION_KEY built-in function. You can copy/paste an existing function 
> description and see examples here: 
> https://phoenix.apache.org/language/functions.html. For directions on 
> updating the website, see here: 
> https://phoenix.apache.org/language/functions.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4321) Replace deprecated HBaseAdmin with Admin

2017-11-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16246607#comment-16246607
 ] 

Hadoop QA commented on PHOENIX-4321:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12896953/PHOENIX-4321.patch
  against master branch at commit 217867c78108b29d991794726c01c1eefb49b828.
  ATTACHMENT ID: 12896953

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 44 new 
or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1622//console

This message is automatically generated.

> Replace deprecated HBaseAdmin with Admin
> 
>
> Key: PHOENIX-4321
> URL: https://issues.apache.org/jira/browse/PHOENIX-4321
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>  Labels: HBase-2.0
> Fix For: 4.14.0
>
> Attachments: PHOENIX-4321.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4358) Case Sensitive String match on SqlType in PDataType

2017-11-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16243561#comment-16243561
 ] 

Hadoop QA commented on PHOENIX-4358:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12896574/caseFix.patch
  against master branch at commit 4a1f0df6143ba705a48b5051aee52dab158afe8d.
  ATTACHMENT ID: 12896574

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.join.HashJoinNoIndexIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ViewIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1621//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1621//console

This message is automatically generated.

> Case Sensitive String match on SqlType in PDataType
> ---
>
> Key: PHOENIX-4358
> URL: https://issues.apache.org/jira/browse/PHOENIX-4358
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
> Environment: OSX and Linux
>Reporter: Dave Angulo
>Priority: Minor
> Attachments: caseFix.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> The fromSqlTypeName() method uses a case-sensitive match on the input 
> SqlType. This causes an issue in Spark's JDBCUtils.makeSetter(), which 
> lower-cases its input. The result is the error _Unsupported sql type: 
> varchar_.
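
A minimal sketch of the fix direction (hypothetical code, not the attached patch): match the SQL type name case-insensitively so that the lower-cased "varchar" coming from Spark resolves the same way as "VARCHAR".

{noformat}
class SqlTypeNameMatchSketch {
    // Case-insensitive comparison of a candidate type name against the
    // canonical Phoenix SQL type name.
    static boolean matches(String candidate, String sqlTypeName) {
        return candidate != null && candidate.equalsIgnoreCase(sqlTypeName);
    }
}
{noformat}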



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4340) Implements Observer interfaces instead of extending base observers classes

2017-11-07 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16241820#comment-16241820
 ] 

Hadoop QA commented on PHOENIX-4340:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12896382/PHOENIX-4340.patch
  against master branch at commit 4a1f0df6143ba705a48b5051aee52dab158afe8d.
  ATTACHMENT ID: 12896382

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1620//console

This message is automatically generated.

> Implements Observer interfaces instead of extending base observers classes
> --
>
> Key: PHOENIX-4340
> URL: https://issues.apache.org/jira/browse/PHOENIX-4340
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: HBase-2.0, JDK1.8
> Attachments: PHOENIX-4340.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-2370) ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and varbinary columns

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238606#comment-16238606
 ] 

Hadoop QA commented on PHOENIX-2370:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12883280/PHOENIX-2370_v2.patch
  against master branch at commit a09cea6bfb94edd95ce06aa2cb7f229227db5666.
  ATTACHMENT ID: 12883280

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1617//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1617//console

This message is automatically generated.

> ResultSetMetaData.getColumnDisplaySize() returns bad value for varchar and 
> varbinary columns
> 
>
> Key: PHOENIX-2370
> URL: https://issues.apache.org/jira/browse/PHOENIX-2370
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.5.0
> Environment: Linux lnxx64r6 2.6.32-131.0.15.el6.x86_64 #1 SMP Tue May 
> 10 15:42:40 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Sergio Lob
>Assignee: Csaba Skrabak
>Priority: Major
>  Labels: newbie, verify
> Fix For: 4.14.0
>
> Attachments: PHOENIX-2370.patch, PHOENIX-2370_v2.patch
>
>
> ResultSetMetaData.getColumnDisplaySize() returns bad values for varchar and 
> varbinary columns. Specifically, for the following table:
> CREATE TABLE SERGIO (I INTEGER, V10 VARCHAR(10),
> VHUGE VARCHAR(2147483647), V VARCHAR, VB10 VARBINARY(10), VBHUGE 
> VARBINARY(2147483647), VB VARBINARY) ;
> 1. getColumnDisplaySize() returns 20 for all varbinary columns, no matter the 
> defined size. This should return the max possible size of the column, so:
>  getColumnDisplaySize() should return 10 for column VB10,
>  getColumnDisplaySize() should return 2147483647 for column VBHUGE,
>  getColumnDisplaySize() should return 2147483647 for column VB, assuming that 
> a column defined with no size should default to the maximum size.
> 2. getColumnDisplaySize() returns 40 for all varchar columns that are not 
> defined with a size, like in column V in the above CREATE TABLE.  I would 
> think that a VARCHAR column defined with no size parameter should default to 
> the maximum size possible, not to a random number like 40.
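
A small JDBC snippet showing how the reported values can be observed (the JDBC URL is a placeholder; the table is the SERGIO table defined above):

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.sql.Statement;

public class DisplaySizeCheck {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT V10, VHUGE, V, VB10, VBHUGE, VB FROM SERGIO")) {
            ResultSetMetaData md = rs.getMetaData();
            for (int i = 1; i <= md.getColumnCount(); i++) {
                // Expected: the declared maximum length (e.g. 10 for V10/VB10);
                // reported: 20 for every varbinary and 40 for unsized varchar.
                System.out.println(md.getColumnName(i) + " -> "
                        + md.getColumnDisplaySize(i));
            }
        }
    }
}
{noformat}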



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-2048) change to_char() function to use HALF_UP rounding mode

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238564#comment-16238564
 ] 

Hadoop QA commented on PHOENIX-2048:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12883121/PHOENIX-2048_v2.patch
  against master branch at commit a09cea6bfb94edd95ce06aa2cb7f229227db5666.
  ATTACHMENT ID: 12883121

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ConcurrentMutationsIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1618//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1618//console

This message is automatically generated.

> change to_char() function to use HALF_UP rounding mode
> --
>
> Key: PHOENIX-2048
> URL: https://issues.apache.org/jira/browse/PHOENIX-2048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: verify
>Reporter: Jonathan Leech
>Assignee: Csaba Skrabak
>Priority: Minor
> Fix For: 4.14.0
>
> Attachments: PHOENIX-2048.patch, PHOENIX-2048_v2.patch
>
>
> The to_char() function uses the default rounding mode of Java's 
> DecimalFormat, HALF_EVEN, which rounds a '5' in the last position either up 
> or down depending on the preceding digit. 
> Change it to HALF_UP so it rounds the same way as the round() function does, 
> or provide a way to override the behavior, e.g. globally, as a client 
> config, or as an argument to the to_char() function.
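
A small, self-contained illustration of the difference (plain Java, not Phoenix code): HALF_EVEN rounds a trailing '5' toward the even neighbour, while HALF_UP always rounds it away from zero, matching Phoenix's round().

{noformat}
import java.math.RoundingMode;
import java.text.DecimalFormat;

public class RoundingDemo {
    public static void main(String[] args) {
        DecimalFormat halfEven = new DecimalFormat("#"); // default mode is HALF_EVEN
        DecimalFormat halfUp = new DecimalFormat("#");
        halfUp.setRoundingMode(RoundingMode.HALF_UP);

        System.out.println(halfEven.format(2.5)); // prints 2 (rounds to even)
        System.out.println(halfUp.format(2.5));   // prints 3 (rounds up)
        System.out.println(halfEven.format(3.5)); // prints 4 (even neighbour)
    }
}
{noformat}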



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4342) Surface QueryPlan in MutationPlan

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238474#comment-16238474
 ] 

Hadoop QA commented on PHOENIX-4342:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12895961/PHOENIX-4342-v4.patch
  against master branch at commit a09cea6bfb94edd95ce06aa2cb7f229227db5666.
  ATTACHMENT ID: 12895961

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+mutationPlans.add(new SingleRowDeleteMutationPlan(plan, 
connection, maxSize, maxSizeBytes));
+return new ServerSelectDeleteMutationPlan(dataPlan, connection, 
aggPlan, projector, maxSize, maxSizeBytes);
+return new ClientSelectDeleteMutationPlan(targetTableRef, 
dataPlan, bestPlan, hasPreOrPostProcessing,
+parallelIteratorFactory, otherTableRefs, 
projectedTableRef, maxSize, maxSizeBytes, connection);
+public SingleRowDeleteMutationPlan(QueryPlan dataPlan, 
PhoenixConnection connection, int maxSize, int maxSizeBytes) {
+Map mutation = 
Maps.newHashMapWithExpectedSize(ranges.getPointLookupCount());
+
statement.getConnection().getStatementExecutionCounter(), 
NULL_ROWTIMESTAMP_INFO, null));
+return new MutationState(dataPlan.getTableRef(), mutation, 0, 
maxSize, maxSizeBytes, connection);
+public ServerSelectDeleteMutationPlan(QueryPlan dataPlan, 
PhoenixConnection connection, QueryPlan aggPlan,
+  RowProjector projector, int 
maxSize, int maxSizeBytes) {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ConcurrentMutationsIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1616//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1616//console

This message is automatically generated.

> Surface QueryPlan in MutationPlan
> -
>
> Key: PHOENIX-4342
> URL: https://issues.apache.org/jira/browse/PHOENIX-4342
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Geoffrey Jacoby
>Priority: Minor
> Attachments: PHOENIX-4342-v2.patch, PHOENIX-4342-v3.patch, 
> PHOENIX-4342-v4.patch, PHOENIX-4342.patch
>
>
> For DELETE statements, it'd be good to be able to get at the QueryPlan 
> through the MutationPlan so we can get more structured information at compile 
> time.
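
A sketch of the API shape this asks for (assumed from the description, not the attached patch): the mutation plan compiled for a DELETE exposes the query plan that locates the rows, so callers can pull structured information out of it at compile time.

{noformat}
import java.sql.SQLException;
import org.apache.phoenix.compile.QueryPlan;
import org.apache.phoenix.execute.MutationState;

// Hypothetical interface sketch; the name and exact method set are
// illustrative, not taken from the attached patch.
interface QueryPlanAwareMutationPlan {
    MutationState execute() throws SQLException; // existing mutation contract
    QueryPlan getQueryPlan();                    // new: plan that finds the rows to mutate
}
{noformat}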



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4342) Surface QueryPlan in MutationPlan

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16238035#comment-16238035
 ] 

Hadoop QA commented on PHOENIX-4342:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12895895/PHOENIX-4342-v3.patch
  against master branch at commit 79eff5f89adb2c05024272203eebf0504f82ee3d.
  ATTACHMENT ID: 12895895

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1615//console

This message is automatically generated.

> Surface QueryPlan in MutationPlan
> -
>
> Key: PHOENIX-4342
> URL: https://issues.apache.org/jira/browse/PHOENIX-4342
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Geoffrey Jacoby
>Priority: Minor
> Attachments: PHOENIX-4342-v2.patch, PHOENIX-4342-v3.patch, 
> PHOENIX-4342.patch
>
>
> For DELETE statements, it'd be good to be able to get at the QueryPlan 
> through the MutationPlan so we can get more structured information at compile 
> time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4237) Allow sorting on (Java) collation keys for non-English locales

2017-11-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16237360#comment-16237360
 ] 

Hadoop QA commented on PHOENIX-4237:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12895584/PHOENIX-4237_v3.patch
  against master branch at commit 1e48eabe4cbf72ce71fb0dbdd6053a9600133ee4.
  ATTACHMENT ID: 12895584

{color:red}-1 @author{color}.  The patch appears to contain 1 @author tags 
which the Hadoop community has agreed to not allow in code contributions.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+   queryWithCollKeyDefaultArgsWithExpectedOrder("zh_TW", 0, 6, new 
Integer[] { 0, 3, 4, 1, 5, 2, 6 });
+   queryWithCollKeyDefaultArgsWithExpectedOrder("zh_TW_STROKE", 0, 
6, new Integer[] { 4, 2, 0, 3, 1, 6, 5 });
+   queryWithCollKeyDefaultArgsWithExpectedOrder("zh__STROKE", 0, 
6, new Integer[] { 0, 1, 3, 4, 6, 2, 5 });
+   queryWithCollKeyDefaultArgsWithExpectedOrder("zh__PINYIN", 0, 
6, new Integer[] { 0, 1, 3, 4, 6, 2, 5 });
+   queryWithCollKeyUpperCaseWithExpectedOrder("en", 7, 13, new 
Integer[] { 7, 10, 11, 13, 9, 12, 8 });
+   private void queryWithCollKeyDefaultArgsWithExpectedOrder(String 
localeString, Integer beginIndex, Integer endIndex,
+   "SELECT id, data FROM %s WHERE ID BETWEEN %d 
AND %d ORDER BY COLLATION_KEY(data, '%s')", tableName,
+   private void queryWithCollKeyUpperCaseWithExpectedOrder(String 
localeString, Integer beginIndex, Integer endIndex,
+   "SELECT id, data FROM %s WHERE ID BETWEEN %d 
AND %d ORDER BY COLLATION_KEY(data, '%s', true), id",
+   private void queryWithCollKeyWithStrengthWithExpectedOrder(String 
localeString, Integer strength, boolean isDescending,

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.MutableIndexFailureIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1614//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1614//console

This message is automatically generated.

> Allow sorting on (Java) collation keys for non-English locales
> --
>
> Key: PHOENIX-4237
> URL: https://issues.apache.org/jira/browse/PHOENIX-4237
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Shehzaad Nakhoda
>Assignee: Shehzaad Nakhoda
>Priority: Major
> Fix For: 4.12.0
>
> Attachments: PHOENIX-4237_v1.patch, PHOENIX-4237_v2.patch, 
> PHOENIX-4237_v3.patch
>
>
> Strings stored via Phoenix can be composed from a subset of the entire set of 
> Unicode characters. The natural sort order for strings for different 
> languages often differs from the order dictated by the binary representation 
> of the characters of these strings. Java provides the idea of a Collator 
> which given an input string and a (language) locale can generate a Collation 
> Key which can then be used to compare strings in that natural order.
> Salesforce has recently open-sourced grammaticus. IBM has open-sourced ICU4J 
> some time ago. These technologies can be combined to provide a robust new 
> Phoenix function that can be used in an ORDER BY clause to sort strings 
> according to the user's locale.
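
A self-contained illustration of the Java Collator idea described above (plain JDK code, not the Phoenix function): collation keys sort strings in the locale's natural order rather than by raw code points.

{noformat}
import java.text.CollationKey;
import java.text.Collator;
import java.util.Arrays;
import java.util.Locale;

public class CollationDemo {
    public static void main(String[] args) {
        Collator collator = Collator.getInstance(Locale.FRENCH);
        String[] words = {"côte", "coté", "cote", "côté"};

        CollationKey[] keys = new CollationKey[words.length];
        for (int i = 0; i < words.length; i++) {
            keys[i] = collator.getCollationKey(words[i]); // precompute sort keys
        }
        Arrays.sort(keys); // compares collation keys, not raw code points

        for (CollationKey key : keys) {
            System.out.println(key.getSourceString()); // locale-aware order
        }
    }
}
{noformat}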



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4348) Point deletes do not work when there are immutable indexes with only row key columns

2017-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16237027#comment-16237027
 ] 

Hadoop QA commented on PHOENIX-4348:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12895545/PHOENIX-4348.patch
  against master branch at commit 895d067974639cd2205b14940e4e46864b4e2060.
  ATTACHMENT ID: 12895545

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+public void testPointDeleteRowFromTableWithImmutableIndex(boolean 
localIndex, boolean addNonPKIndex) throws Exception {
+"CONSTRAINT PK PRIMARY KEY (HOST, DOMAIN, FEATURE, 
\"DATE\")) IMMUTABLE_ROWS=true");
+stm.execute("CREATE " + (localIndex ? "LOCAL" : "") + " INDEX " + 
indexName1 + " ON " + tableName + " (\"DATE\", FEATURE)");
+stm.execute("CREATE " + (localIndex ? "LOCAL" : "") + " INDEX " + 
indexName2 + " ON " + tableName + " (FEATURE, DOMAIN)");
+stm.execute("CREATE " + (localIndex ? "LOCAL" : "") + " INDEX 
" + indexName3 + " ON " + tableName + " (\"DATE\", FEATURE, USAGE.DB)");
+.prepareStatement("UPSERT INTO " + tableName + "(HOST, 
DOMAIN, FEATURE, \"DATE\", CORE, DB, ACTIVE_VISITOR) VALUES(?,?, ? , ?, ?, ?, 
?)");
+String dml = "DELETE FROM " + tableName + " WHERE (HOST, DOMAIN, 
FEATURE, \"DATE\") = (?,?,?,?)";
+return new MutationState(plan.getTableRef(), mutation, 
0, maxSize, maxSizeBytes, connection);

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ConcurrentMutationsIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1613//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1613//console

This message is automatically generated.

> Point deletes do not work when there are immutable indexes with only row key 
> columns
> 
>
> Key: PHOENIX-4348
> URL: https://issues.apache.org/jira/browse/PHOENIX-4348
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.13.0
>
> Attachments: PHOENIX-4348.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4342) Surface QueryPlan in MutationPlan

2017-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16236997#comment-16236997
 ] 

Hadoop QA commented on PHOENIX-4342:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12895538/PHOENIX-4342-v2.patch
  against master branch at commit 895d067974639cd2205b14940e4e46864b4e2060.
  ATTACHMENT ID: 12895538

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+mutationPlans.add(new 
SingleRowDeleteMutationPlan(dataPlan, connection, maxSize, maxSizeBytes));
+return new ServerSelectDeleteMutationPlan(dataPlan, connection, 
aggPlan, projector, maxSize, maxSizeBytes);
+return new ClientSelectDeleteMutationPlan(targetTableRef, 
dataPlan, bestPlan, hasPreOrPostProcessing,
+parallelIteratorFactory, otherTableRefs, 
projectedTableRef, maxSize, maxSizeBytes, connection);
+public SingleRowDeleteMutationPlan(QueryPlan dataPlan, 
PhoenixConnection connection, int maxSize, int maxSizeBytes) {
+Map mutation = 
Maps.newHashMapWithExpectedSize(ranges.getPointLookupCount());
+mutation.put(new 
ImmutableBytesPtr(iterator.next().getLowerRange()), new 
RowMutationState(PRow.DELETE_MARKER, 
statement.getConnection().getStatementExecutionCounter(), 
NULL_ROWTIMESTAMP_INFO, null));
+return new MutationState(context.getCurrentTable(), mutation, 0, 
maxSize, maxSizeBytes, connection);
+public ServerSelectDeleteMutationPlan(QueryPlan dataPlan, 
PhoenixConnection connection, QueryPlan aggPlan,
+  RowProjector projector, int 
maxSize, int maxSizeBytes) {

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.RebuildIndexConnectionPropsIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ConcurrentMutationsIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.SaltedViewIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1612//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1612//console

This message is automatically generated.

> Surface QueryPlan in MutationPlan
> -
>
> Key: PHOENIX-4342
> URL: https://issues.apache.org/jira/browse/PHOENIX-4342
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Geoffrey Jacoby
>Priority: Minor
> Attachments: PHOENIX-4342-v2.patch, PHOENIX-4342.patch
>
>
> For DELETE statements, it'd be good to be able to get at the QueryPlan 
> through the MutationPlan so we can get more structured information at compile 
> time.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4287) Incorrect aggregate query results when stats are disable for parallelization

2017-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16236987#comment-16236987
 ] 

Hadoop QA commented on PHOENIX-4287:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12895529/PHOENIX-4287_addendum7.patch
  against master branch at commit 7d2205d0c9854f61e667a4939eeed645de518f45.
  ATTACHMENT ID: 12895529

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1611//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1611//console

This message is automatically generated.

> Incorrect aggregate query results when stats are disable for parallelization
> 
>
> Key: PHOENIX-4287
> URL: https://issues.apache.org/jira/browse/PHOENIX-4287
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: HBase 1.3.1
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
>Priority: Major
>  Labels: localIndex
> Fix For: 4.13.0, 4.12.1
>
> Attachments: PHOENIX-4287.patch, PHOENIX-4287_addendum.patch, 
> PHOENIX-4287_addendum2.patch, PHOENIX-4287_addendum3.patch, 
> PHOENIX-4287_addendum4.patch, PHOENIX-4287_addendum5.patch, 
> PHOENIX-4287_addendum6.patch, PHOENIX-4287_addendum7.patch, 
> PHOENIX-4287_v2.patch, PHOENIX-4287_v3.patch, PHOENIX-4287_v3_wip.patch, 
> PHOENIX-4287_v4.patch
>
>
> With {{phoenix.use.stats.parallelization}} set to {{false}}, aggregate query 
> returns incorrect results when stats are available.
> With local index and stats disabled for parallelization:
> {noformat}
> explain select count(*) from TABLE_T;
> +---+-++---+
> | PLAN | EST_BYTES_READ  | EST_ROWS_READ  |  EST_INFO |
> +---+-++---+
> | CLIENT 0-CHUNK 332170 ROWS 625043899 BYTES PARALLEL 0-WAY RANGE SCAN OVER TABLE_T [1]  | 625043899   | 332170 | 150792825 |
> | SERVER FILTER BY FIRST KEY ONLY | 625043899   | 332170 | 150792825 |
> | SERVER AGGREGATE INTO SINGLE ROW | 625043899   | 332170 | 150792825 |
> +---+-++---+
> select count(*) from TABLE_T;
> +---+
> | COUNT(1)  |
> +---+
> | 0 |
> +---+
> {noformat}
> Using data table
> {noformat}
> explain select /*+NO_INDEX*/ count(*) from TABLE_T;
> +--+-+++
> |   PLAN | EST_BYTES_READ  | EST_ROWS_READ  |  EST_INFO_TS   |
> +--+-+++
> | CLIENT 2-CHUNK 332151 ROWS 438492470 BYTES PARALLEL 1-WAY FULL SCAN OVER TABLE_T  | 438492470   | 332151 | 1507928257617  |
> | SERVER FILTER BY FIRST KEY ONLY | 438492470   | 332151 | 1507928257617  |
> | SERVER AGGREGATE INTO SINGLE ROW | 438492470   | 332151 | 1507928257617  |
> +-

[jira] [Commented] (PHOENIX-4287) Incorrect aggregate query results when stats are disable for parallelization

2017-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16236864#comment-16236864
 ] 

Hadoop QA commented on PHOENIX-4287:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12895514/PHOENIX-4287_addendum5.patch
  against master branch at commit 8f9356a2bdd6ba603158899eba38750c85e8e574.
  ATTACHMENT ID: 12895514

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.IndexWithTableSchemaChangeIT
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.DropColumnIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1609//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1609//console

This message is automatically generated.

> Incorrect aggregate query results when stats are disable for parallelization
> 
>
> Key: PHOENIX-4287
> URL: https://issues.apache.org/jira/browse/PHOENIX-4287
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: HBase 1.3.1
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
>Priority: Major
>  Labels: localIndex
> Fix For: 4.13.0, 4.12.1
>
> Attachments: PHOENIX-4287.patch, PHOENIX-4287_addendum.patch, 
> PHOENIX-4287_addendum2.patch, PHOENIX-4287_addendum3.patch, 
> PHOENIX-4287_addendum4.patch, PHOENIX-4287_addendum5.patch, 
> PHOENIX-4287_addendum6.patch, PHOENIX-4287_v2.patch, PHOENIX-4287_v3.patch, 
> PHOENIX-4287_v3_wip.patch, PHOENIX-4287_v4.patch
>
>
> With {{phoenix.use.stats.parallelization}} set to {{false}}, aggregate query 
> returns incorrect results when stats are available.
> With local index and stats disabled for parallelization:
> {noformat}
> explain select count(*) from TABLE_T;
> +---+-++---+
> | PLAN | EST_BYTES_READ  | EST_ROWS_READ  |  EST_INFO |
> +---+-++---+
> | CLIENT 0-CHUNK 332170 ROWS 625043899 BYTES PARALLEL 0-WAY RANGE SCAN OVER TABLE_T [1]  | 625043899   | 332170 | 150792825 |
> | SERVER FILTER BY FIRST KEY ONLY | 625043899   | 332170 | 150792825 |
> | SERVER AGGREGATE INTO SINGLE ROW | 625043899   | 332170 | 150792825 |
> +---+-++---+
> select count(*) from TABLE_T;
> +---+
> | COUNT(1)  |
> +---+
> | 0 |
> +---+
> {noformat}
> Using data table
> {noformat}
> explain select /*+NO_INDEX*/ count(*) from TABLE_T;
> +--+-+++
> |   PLAN | EST_BYTES_READ  | EST_ROWS_READ  |  EST_INFO_TS   |
> +--+-+++
> | CLIENT 2-CHUNK 332151 ROWS 438492470 BYTES PARALLEL 1-WAY FULL SCAN OVER TABLE_T  | 438492470   | 332151 | 1507928257617  |
> | SERVER FILTER BY FIRST KEY ONLY | 438492470   | 332151 | 1507928257617  |
> | SERVER AGGREGATE INTO SINGLE ROW
[jira] [Commented] (PHOENIX-4287) Incorrect aggregate query results when stats are disable for parallelization

2017-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16236857#comment-16236857
 ] 

Hadoop QA commented on PHOENIX-4287:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12895524/PHOENIX-4287_addendum6.patch
  against master branch at commit 7d2205d0c9854f61e667a4939eeed645de518f45.
  ATTACHMENT ID: 12895524

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1610//console

This message is automatically generated.

> Incorrect aggregate query results when stats are disable for parallelization
> 
>
> Key: PHOENIX-4287
> URL: https://issues.apache.org/jira/browse/PHOENIX-4287
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.12.0
> Environment: HBase 1.3.1
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
>Priority: Major
>  Labels: localIndex
> Fix For: 4.13.0, 4.12.1
>
> Attachments: PHOENIX-4287.patch, PHOENIX-4287_addendum.patch, 
> PHOENIX-4287_addendum2.patch, PHOENIX-4287_addendum3.patch, 
> PHOENIX-4287_addendum4.patch, PHOENIX-4287_addendum5.patch, 
> PHOENIX-4287_addendum6.patch, PHOENIX-4287_v2.patch, PHOENIX-4287_v3.patch, 
> PHOENIX-4287_v3_wip.patch, PHOENIX-4287_v4.patch
>
>
> With {{phoenix.use.stats.parallelization}} set to {{false}}, aggregate query 
> returns incorrect results when stats are available.
> With local index and stats disabled for parallelization:
> {noformat}
> explain select count(*) from TABLE_T;
> +---+-++---+
> | PLAN | EST_BYTES_READ  | EST_ROWS_READ  |  EST_INFO |
> +---+-++---+
> | CLIENT 0-CHUNK 332170 ROWS 625043899 BYTES PARALLEL 0-WAY RANGE SCAN OVER TABLE_T [1]  | 625043899   | 332170 | 150792825 |
> | SERVER FILTER BY FIRST KEY ONLY | 625043899   | 332170 | 150792825 |
> | SERVER AGGREGATE INTO SINGLE ROW | 625043899   | 332170 | 150792825 |
> +---+-++---+
> select count(*) from TABLE_T;
> +---+
> | COUNT(1)  |
> +---+
> | 0 |
> +---+
> {noformat}
> Using data table
> {noformat}
> explain select /*+NO_INDEX*/ count(*) from TABLE_T;
> +--+-+++
> |   PLAN | EST_BYTES_READ  | EST_ROWS_READ  |  EST_INFO_TS   |
> +--+-+++
> | CLIENT 2-CHUNK 332151 ROWS 438492470 BYTES PARALLEL 1-WAY FULL SCAN OVER TABLE_T  | 438492470   | 332151 | 1507928257617  |
> | SERVER FILTER BY FIRST KEY ONLY | 438492470   | 332151 | 1507928257617  |
> | SERVER AGGREGATE INTO SINGLE ROW | 438492470   | 332151 | 1507928257617  |
> +--+-+++
> select /*+NO_INDEX*/ count(*) from TABLE_T;
> +---+
> | COUNT(1)  |
> +---+
> | 14|
> +---+
> {noformat}
> Without stats available, results are correct:
> {noformat}
> explain select /*+NO_INDEX*/ count(*) from TABLE_T;
> +--+-++--
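A minimal JDBC sketch (not part of the report) of how the configuration above might be reproduced from a client, assuming {{phoenix.use.stats.parallelization}} can be supplied as a client-side connection property rather than only via hbase-site.xml; the JDBC URL and the surrounding harness are illustrative placeholders:

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.util.Properties;

public class StatsParallelizationRepro {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumption: the setting named in the report can be passed as a
        // client-side connection property.
        props.setProperty("phoenix.use.stats.parallelization", "false");

        // Placeholder JDBC URL for a local test cluster.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props);
             ResultSet rs = conn.createStatement()
                     .executeQuery("SELECT /*+NO_INDEX*/ COUNT(*) FROM TABLE_T")) {
            rs.next();
            // With the bug present, the report shows this returning 14 (and 0 via
            // the local index) instead of the actual row count of roughly 332,151.
            System.out.println("COUNT(*) = " + rs.getLong(1));
        }
    }
}
{noformat}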

[jira] [Commented] (PHOENIX-4342) Surface QueryPlan in MutationPlan

2017-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4342?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16236851#comment-16236851
 ] 

Hadoop QA commented on PHOENIX-4342:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12895489/PHOENIX-4342.patch
  against master branch at commit 8f9356a2bdd6ba603158899eba38750c85e8e574.
  ATTACHMENT ID: 12895489

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+List mutationPlans = 
Lists.newArrayListWithExpectedSize(queryPlans.size());
+mutationPlans.add(new SingleRowDeleteMutationPlan(dataPlan, 
connection, maxSize, maxSizeBytes));
+return new ServerSelectDeleteMutationPlan(dataPlan, connection, 
aggPlan, projector, maxSize, maxSizeBytes);
+return new ClientSelectDeleteMutationPlan(targetTableRef, 
dataPlan, bestPlan, hasPreOrPostProcessing,
+parallelIteratorFactory, otherTableRefs, 
projectedTableRef, maxSize, maxSizeBytes, connection);
+public SingleRowDeleteMutationPlan(QueryPlan dataPlan, 
PhoenixConnection connection, int maxSize, int maxSizeBytes) {
+Map mutation = 
Maps.newHashMapWithExpectedSize(ranges.getPointLookupCount());
+mutation.put(new 
ImmutableBytesPtr(iterator.next().getLowerRange()), new 
RowMutationState(PRow.DELETE_MARKER, 
statement.getConnection().getStatementExecutionCounter(), 
NULL_ROWTIMESTAMP_INFO, null));
+return new MutationState(context.getCurrentTable(), mutation, 0, 
maxSize, maxSizeBytes, connection);
+public ServerSelectDeleteMutationPlan(QueryPlan dataPlan, 
PhoenixConnection connection, QueryPlan aggPlan,

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1608//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1608//console

This message is automatically generated.

> Surface QueryPlan in MutationPlan
> -
>
> Key: PHOENIX-4342
> URL: https://issues.apache.org/jira/browse/PHOENIX-4342
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Geoffrey Jacoby
>Priority: Minor
> Attachments: PHOENIX-4342.patch
>
>
> For DELETE statements, it'd be good to be able to get at the QueryPlan 
> through the MutationPlan so we can get more structured information at compile 
> time.
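A minimal sketch of the kind of API the request describes, assuming a getQueryPlan() accessor is added to MutationPlan and that the delete plans named in the QA output above (for example SingleRowDeleteMutationPlan) hold on to the QueryPlan they were compiled from; the method name and the stripped-down interfaces here are illustrative, not the committed Phoenix API:

{noformat}
// Illustrative types only; the real Phoenix interfaces carry many more methods.
interface QueryPlan {
    String getExplainPlan();
}

interface MutationPlan {
    // Proposed accessor: surface the compile-time QueryPlan behind the mutation.
    QueryPlan getQueryPlan();
}

// Sketch of a delete plan that keeps the data-table QueryPlan it was compiled from,
// loosely modeled on the SingleRowDeleteMutationPlan named in the QA output above.
class SingleRowDeleteMutationPlanSketch implements MutationPlan {
    private final QueryPlan dataPlan;

    SingleRowDeleteMutationPlanSketch(QueryPlan dataPlan) {
        this.dataPlan = dataPlan;
    }

    @Override
    public QueryPlan getQueryPlan() {
        // Callers can inspect the underlying plan at compile time,
        // e.g. its explain output or row estimates.
        return dataPlan;
    }
}
{noformat}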



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3460) Namespace separator ":" should not be allowed in table or schema name

2017-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16236705#comment-16236705
 ] 

Hadoop QA commented on PHOENIX-3460:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12895480/PHOENIX-3460-v2.patch
  against master branch at commit 61684c4431d16deff53adfbb91ea76c13642df61.
  ATTACHMENT ID: 12895480

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.index.PartialIndexRebuilderIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1607//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1607//console

This message is automatically generated.

> Namespace separator ":" should not be allowed in table or schema name
> -
>
> Key: PHOENIX-3460
> URL: https://issues.apache.org/jira/browse/PHOENIX-3460
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: HDP 2.5
>Reporter: Xindian Long
>Assignee: Thomas D'Silva
>Priority: Major
>  Labels: namespaces, phoenix, spark
> Fix For: 4.13.0
>
> Attachments: 0001-Phoenix-fix.patch, PHOENIX-3460-v2.patch, 
> PHOENIX-3460-v2.patch, PHOENIX-3460.patch, SchemaUtil.java
>
>
> I am testing some code that uses the Phoenix Spark plugin to read a Phoenix 
> table with a namespace prefix in the table name (the table is created as a 
> Phoenix table, not an HBase table), but it returns a TableNotFoundException.
> The table is obviously there, because I can query it using plain Phoenix SQL 
> through SQuirreL. In addition, querying it through Spark SQL works without 
> any problem.
> I am running on the HDP 2.5 platform, with Phoenix 4.7.0.2.5.0.0-1245.
> The problem does not exist when running the same code on an HDP 2.4 cluster 
> with Phoenix 4.4.
> Nor does the problem occur on HDP 2.5 when I query a table without a 
> namespace prefix in the table name.
> The log is in the attached file: tableNoFound.txt
> My testing code is also attached.
> The weird thing is that, in the attached code, running testSpark alone gives 
> the above exception, but running testJdbc first, followed by testSpark, makes 
> both of them work.
> After changing the table creation to
> create table ACME.ENDPOINT_STATUS
> the phoenix-spark plugin seems to work. I also found some weird behavior. If 
> I create both of the following:
> create table ACME.ENDPOINT_STATUS ...
> create table "ACME:ENDPOINT_STATUS" ...
> both tables show up in Phoenix: the first one shows as schema ACME and table 
> name ENDPOINT_STATUS, while the latter shows as schema none and table name 
> ACME:ENDPOINT_STATUS.
> However, in HBase I only see one table, ACME:ENDPOINT_STATUS. In addition, 
> upserts into ACME.ENDPOINT_STATUS show up in the other table, and vice versa.
>  
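A minimal sketch of the kind of validation the issue title calls for, assuming table and schema names are checked at DDL compile time; the helper name and exception below are illustrative and do not necessarily reflect how the attached patch implements the check:

{noformat}
// Illustrative-only check: reject the HBase namespace separator in Phoenix identifiers.
final class IdentifierValidator {
    private static final String NAMESPACE_SEPARATOR = ":";

    private IdentifierValidator() {}

    static void validateTableOrSchemaName(String name) {
        if (name != null && name.contains(NAMESPACE_SEPARATOR)) {
            throw new IllegalArgumentException(
                "Table or schema name must not contain '" + NAMESPACE_SEPARATOR + "': " + name);
        }
    }

    public static void main(String[] args) {
        validateTableOrSchemaName("ENDPOINT_STATUS");       // accepted
        validateTableOrSchemaName("ACME:ENDPOINT_STATUS");  // throws IllegalArgumentException
    }
}
{noformat}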



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-3460) Namespace separator ":" should not be allowed in table or schema name

2017-11-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16236547#comment-16236547
 ] 

Hadoop QA commented on PHOENIX-3460:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12895471/PHOENIX-3460.patch
  against master branch at commit 61684c4431d16deff53adfbb91ea76c13642df61.
  ATTACHMENT ID: 12895471

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1606//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1606//console

This message is automatically generated.

> Namespace separator ":" should not be allowed in table or schema name
> -
>
> Key: PHOENIX-3460
> URL: https://issues.apache.org/jira/browse/PHOENIX-3460
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.8.0
> Environment: HDP 2.5
>Reporter: Xindian Long
>Assignee: Thomas D'Silva
>Priority: Major
>  Labels: namespaces, phoenix, spark
> Fix For: 4.13.0
>
> Attachments: 0001-Phoenix-fix.patch, PHOENIX-3460-v2.patch, 
> PHOENIX-3460-v2.patch, PHOENIX-3460.patch, SchemaUtil.java
>
>
> I am testing some code that uses the Phoenix Spark plugin to read a Phoenix 
> table with a namespace prefix in the table name (the table is created as a 
> Phoenix table, not an HBase table), but it returns a TableNotFoundException.
> The table is obviously there, because I can query it using plain Phoenix SQL 
> through SQuirreL. In addition, querying it through Spark SQL works without 
> any problem.
> I am running on the HDP 2.5 platform, with Phoenix 4.7.0.2.5.0.0-1245.
> The problem does not exist when running the same code on an HDP 2.4 cluster 
> with Phoenix 4.4.
> Nor does the problem occur on HDP 2.5 when I query a table without a 
> namespace prefix in the table name.
> The log is in the attached file: tableNoFound.txt
> My testing code is also attached.
> The weird thing is that, in the attached code, running testSpark alone gives 
> the above exception, but running testJdbc first, followed by testSpark, makes 
> both of them work.
> After changing the table creation to
> create table ACME.ENDPOINT_STATUS
> the phoenix-spark plugin seems to work. I also found some weird behavior. If 
> I create both of the following:
> create table ACME.ENDPOINT_STATUS ...
> create table "ACME:ENDPOINT_STATUS" ...
> both tables show up in Phoenix: the first one shows as schema ACME and table 
> name ENDPOINT_STATUS, while the latter shows as schema none and table name 
> ACME:ENDPOINT_STATUS.
> However, in HBase I only see one table, ACME:ENDPOINT_STATUS. In addition, 
> upserts into ACME.ENDPOINT_STATUS show up in the other table, and vice versa.
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


  1   2   3   4   5   6   7   8   9   10   >