[jira] [Created] (PHOENIX-4497) Fix Local Index IT tests

2017-12-26 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created PHOENIX-4497:


 Summary: Fix Local Index IT tests
 Key: PHOENIX-4497
 URL: https://issues.apache.org/jira/browse/PHOENIX-4497
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 5.0.0






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4498) Fix MutableIndexIT#testIndexHalfStoreFileReader

2017-12-26 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created PHOENIX-4498:


 Summary: Fix MutableIndexIT#testIndexHalfStoreFileReader
 Key: PHOENIX-4498
 URL: https://issues.apache.org/jira/browse/PHOENIX-4498
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
 Fix For: 5.0.0








[jira] [Updated] (PHOENIX-4440) Local index split/merge IT tests are failing

2017-12-26 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4440:
-
Parent Issue: PHOENIX-4497  (was: PHOENIX-4338)

> Local index split/merge IT tests are failing
> 
>
> Key: PHOENIX-4440
> URL: https://issues.apache.org/jira/browse/PHOENIX-4440
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4440.patch
>
>
> IndexHalfStoreFileReaderGenerator#preStoreFileReaderOpen is not getting 
> called, so the default behaviour is used and split/merge does not work.





[jira] [Commented] (PHOENIX-4278) Implement pure client side transactional index maintenance

2017-12-26 Thread Ohad Shacham (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303843#comment-16303843
 ] 

Ohad Shacham commented on PHOENIX-4278:
---

[~giacomotaylor], could you please direct me to the code that handles the index 
on the client side, for the case of a global index on an immutable table?
Thx!

> Implement pure client side transactional index maintenance
> --
>
> Key: PHOENIX-4278
> URL: https://issues.apache.org/jira/browse/PHOENIX-4278
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Ohad Shacham
>
> The index maintenance for transactions follows the same model as for 
> non-transactional tables - a coprocessor on data table updates that looks up 
> the previous row value to perform maintenance. This is necessary for 
> non-transactional tables to ensure the rows are locked so that a consistent 
> view may be obtained. However, for transactional tables, the timestamp oracle 
> ensures uniqueness of timestamps (via transaction IDs) and the filtering 
> handles a scan seeing the "true" last committed value for a row. Thus, 
> there's no hard dependency to perform this on the server side.
> Moving the index maintenance to the client side would prevent any RS->RS RPC 
> calls (which have proved to be troublesome for HBase). It would require 
> returning more data to the client (i.e. the prior row value), but this seems 
> like a reasonable tradeoff.





[jira] [Created] (PHOENIX-4499) Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and pull optimize() out of getExplainPlan() - HBase 4.x-HBase-1.2

2017-12-26 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-4499:


 Summary: Make QueryPlan.getEstimatedBytesToScan() independent of 
getExplainPlan() and pull optimize() out of getExplainPlan() - HBase 
4.x-HBase-1.2
 Key: PHOENIX-4499
 URL: https://issues.apache.org/jira/browse/PHOENIX-4499
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.11.0
Reporter: Pedro Boado
Assignee: Maryann Xue
 Fix For: 4.14.0








[jira] [Updated] (PHOENIX-4499) Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and pull optimize() out of getExplainPlan() - HBase 4.x-HBase-1.2

2017-12-26 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4499:
-
Description: Cloned for applying patch to 4.x-HBase-1.2

> Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and 
> pull optimize() out of getExplainPlan() - HBase 4.x-HBase-1.2
> --
>
> Key: PHOENIX-4499
> URL: https://issues.apache.org/jira/browse/PHOENIX-4499
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0
>Reporter: Pedro Boado
>Assignee: Maryann Xue
> Fix For: 4.14.0
>
>
> Cloned for applying patch to 4.x-HBase-1.2





[jira] [Updated] (PHOENIX-4499) Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and pull optimize() out of getExplainPlan() - HBase 4.x-HBase-1.2

2017-12-26 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4499:
-
Attachment: PHOENIX4437-4.x-HBase-1.2.patch

Please remember to apply it with "git am" to preserve the original author.
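
A minimal, self-contained sketch of why "git am" matters here (the repo, file, and author/committer names below are invented for the demo): it applies a mailed patch as a commit and keeps the contributor recorded as the author, whereas "git apply" followed by a fresh "git commit" would credit the committer instead.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git config user.name "Committer" && git config user.email "committer@example.com"
echo base > file.txt && git add file.txt && git commit -qm "base"

# Simulate a contributed patch authored by someone else.
echo change >> file.txt
git commit -qam "PHOENIX-4499 example change" --author="Pedro Boado <pboado@example.com>"
git format-patch -1 -o .. >/dev/null     # writes ../0001-*.patch

# Rewind, then apply the patch the recommended way.
git reset -q --hard HEAD~1
git am -q ../0001-*.patch

# The author recorded on the re-created commit is the contributor,
# not the person who ran "git am".
git log -1 --format='%an <%ae>'   # prints: Pedro Boado <pboado@example.com>
```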

> Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and 
> pull optimize() out of getExplainPlan() - HBase 4.x-HBase-1.2
> --
>
> Key: PHOENIX-4499
> URL: https://issues.apache.org/jira/browse/PHOENIX-4499
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0
>Reporter: Pedro Boado
>Assignee: Maryann Xue
> Fix For: 4.14.0
>
> Attachments: PHOENIX4437-4.x-HBase-1.2.patch
>
>
> Cloned for applying patch to 4.x-HBase-1.2





[jira] [Commented] (PHOENIX-4487) Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13

2017-12-26 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303904#comment-16303904
 ] 

Pedro Boado commented on PHOENIX-4487:
--

What's the plan with this patch? Is it ready to be merged to the CDH branch?

> Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13
> -
>
> Key: PHOENIX-4487
> URL: https://issues.apache.org/jira/browse/PHOENIX-4487
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.13.1, 4.13.2-cdh5.11.2
>Reporter: Flavio Pompermaier
>Assignee: James Taylor
> Attachments: PHOENIX-4487.patch
>
>
> Upgrading from the official Cloudera parcel equipped with Phoenix 4.7 to the 
> latest unofficial parcel (4.13 on CDH 5.11.2), I hit the same error as 
> https://issues.apache.org/jira/browse/PHOENIX-4293 (apart from the fact that 
> the starting version was 4.7...).
> Creating the SYSTEM.MUTEX table fixed the problem.





[jira] [Closed] (PHOENIX-4453) [CDH] Thin client fails with missing library

2017-12-26 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado closed PHOENIX-4453.


> [CDH] Thin client fails with missing library
> 
>
> Key: PHOENIX-4453
> URL: https://issues.apache.org/jira/browse/PHOENIX-4453
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: Centos 6  +  4.13.1-cdh5.11.2 rc0 
>Reporter: Pedro Boado
>Assignee: Pedro Boado
> Fix For: 4.13.2-cdh5.11.2
>
> Attachments: PHOENIX-4453.patch
>
>
> The sqlline-thin client cannot start because of a dependency problem.
> {code}
> [cloudera@quickstart bin]$ ./phoenix-sqlline-thin.py 
> Setting property: [incremental, false]
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> issuing: !connect 
> jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF none none 
> org.apache.phoenix.queryserver.client.Driver
> Connecting to 
> jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF
> SLF4J: Class path contains multiple SLF4J bindings.
> SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/APACHE_PHOENIX-4.13.1-cdh5.11.2.p0.0/lib/phoenix/phoenix-4.13.0-cdh5.11.2-thin-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: Found binding in 
> [jar:file:/opt/cloudera/parcels/CDH-5.11.2-1.cdh5.11.2.p0.4/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an 
> explanation.
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
> java.lang.NoClassDefFoundError: 
> org/apache/phoenix/shaded/org/apache/http/config/Lookup
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:190)
>   at 
> org.apache.calcite.avatica.remote.AvaticaHttpClientFactoryImpl.instantiateClient(AvaticaHttpClientFactoryImpl.java:112)
>   at 
> org.apache.calcite.avatica.remote.AvaticaHttpClientFactoryImpl.getClient(AvaticaHttpClientFactoryImpl.java:68)
>   at 
> org.apache.calcite.avatica.remote.Driver.getHttpClient(Driver.java:160)
>   at 
> org.apache.calcite.avatica.remote.Driver.createService(Driver.java:123)
>   at org.apache.calcite.avatica.remote.Driver.createMeta(Driver.java:97)
>   at 
> org.apache.calcite.avatica.AvaticaConnection.(AvaticaConnection.java:121)
>   at 
> org.apache.calcite.avatica.AvaticaJdbc41Factory$AvaticaJdbc41Connection.(AvaticaJdbc41Factory.java:105)
>   at 
> org.apache.calcite.avatica.AvaticaJdbc41Factory.newConnection(AvaticaJdbc41Factory.java:62)
>   at 
> org.apache.calcite.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:138)
>   at org.apache.calcite.avatica.remote.Driver.connect(Driver.java:165)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>   at sqlline.Commands.connect(Commands.java:1064)
>   at sqlline.Commands.connect(Commands.java:996)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
>   at sqlline.SqlLine.dispatch(SqlLine.java:809)
>   at sqlline.SqlLine.initArgs(SqlLine.java:588)
>   at sqlline.SqlLine.begin(SqlLine.java:661)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:291)
>   at 
> org.apache.phoenix.queryserver.client.SqllineWrapper.main(SqllineWrapper.java:93)
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.phoenix.shaded.org.apache.http.config.Lookup
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
>   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
>   ... 27 more
> sqlline version 1.2.0
> 0: jdbc:phoenix:thin:url=http://localhost:876> 
> {code}





[jira] [Closed] (PHOENIX-4454) [CDH] Heavy client fails when used from a standalone machine

2017-12-26 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado closed PHOENIX-4454.


> [CDH] Heavy client fails when used from a standalone machine
> 
>
> Key: PHOENIX-4454
> URL: https://issues.apache.org/jira/browse/PHOENIX-4454
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
> Environment: Windows 7 + DB Visualizer + Heavy client
>Reporter: Pedro Boado
>Assignee: Pedro Boado
> Fix For: 4.13.2-cdh5.11.2
>
> Attachments: PHOENIX-4454.patch
>
>
> The client provided with the distribution doesn't work when used outside the 
> HBase cluster.
> {code}
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.apache.hadoop.mapred.JobConf
>    at java.lang.Class.forName0(Native Method)
>    at java.lang.Class.forName(Unknown Source)
>    at 
> org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2138)
>    at 
> org.apache.hadoop.util.ReflectionUtils.setJobConf(ReflectionUtils.java:91)
>    at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:75)
>    at 
> org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:133)
>    at 
> org.apache.hadoop.hbase.security.UserProvider.instantiate(UserProvider.java:124)
>    at 
> org.apache.hadoop.hbase.client.ConnectionManager.createConnectionInternal(ConnectionManager.java:341)
>    at 
> org.apache.hadoop.hbase.client.HConnectionManager.createConnection(HConnectionManager.java:144)
>    at 
> org.apache.phoenix.query.HConnectionFactory$HConnectionFactoryImpl.createConnection(HConnectionFactory.java:47)
>    at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.openConnection(ConnectionQueryServicesImpl.java:408)
>    at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.access$400(ConnectionQueryServicesImpl.java:256)
>    at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2408)
>    at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2384)
>    at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
>    at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2384)
>    at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
>    at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
>    at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
>    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>    at java.lang.reflect.Method.invoke(Unknown Source)
>    at com.onseven.dbvis.g.B.D.ā(Z:1548)
>    at com.onseven.dbvis.g.B.F$A.call(Z:1369)
> {code}





[jira] [Commented] (PHOENIX-4397) Incorrect query results when stats are disabled on a salted table

2017-12-26 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303914#comment-16303914
 ] 

Pedro Boado commented on PHOENIX-4397:
--

[~jamestaylor] I've just noticed that you've already applied this one.

> Incorrect query results when stats are disabled on a salted table
> --
>
> Key: PHOENIX-4397
> URL: https://issues.apache.org/jira/browse/PHOENIX-4397
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Mujtaba Chohan
>Assignee: Samarth Jain
> Fix For: 4.14.0, 4.13.2
>
> Attachments: PHOENIX-4397.patch, PHOENIX-4397_v2.patch
>
>
> See attached unit test.





[jira] [Commented] (PHOENIX-4382) Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator byte return null in query results

2017-12-26 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303924#comment-16303924
 ] 

Pedro Boado commented on PHOENIX-4382:
--

[~vincentpoon], [~jamestaylor] suggested including this patch in the release 
for 4.x-cdh5.11.2. Is it ready to be merged?

> Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator 
> byte return null in query results
> ---
>
> Key: PHOENIX-4382
> URL: https://issues.apache.org/jira/browse/PHOENIX-4382
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4382.v1.master.patch, 
> PHOENIX-4382.v2.master.patch, PHOENIX-4382.v3.master.patch, 
> UpsertBigValuesIT.java
>
>
> For immutable tables, an upsert of some values, like Short.MAX_VALUE, results 
> in a null value in query result sets.  Mutable tables are not affected.  I 
> tried with BigInt and got the same problem.
> For Short, the breaking point seems to be 32512.
> This is happening because of the way we serialize nulls.  For nulls, we write 
> out [separatorByte, #_of_nulls].  However, some data values, like 
> Short.MAX_VALUE, start with separatorByte, so we can't distinguish between a 
> null and these values.  Currently the code assumes it's a null when it sees a 
> leading separatorByte, hence the incorrect query results.
> See the attached tests: testShort() and testBigInt()
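
The ambiguity described above can be sketched roughly as follows. This is only an illustration, not Phoenix's actual serializer: the sign-bit-flip encoding of SMALLINT and the 0xFF separator byte are assumptions inferred from this thread, chosen because they reproduce the reported "breaking point" of 32512.

```python
# Sketch: Phoenix-style sort-order encoding of a 16-bit signed int flips the
# sign bit so unsigned byte comparison matches numeric order (assumption).
def encode_smallint(v: int) -> bytes:
    return ((v & 0xFFFF) ^ 0x8000).to_bytes(2, "big")

# A null in this column encoding is written as [separatorByte, count]; per the
# SortOrder.invert((byte) 0) call quoted later in this thread, the separator
# here is 0xFF. Any value whose encoding also begins with 0xFF is ambiguous.
SEPARATOR = 0xFF

# Short.MAX_VALUE encodes to ff ff -- it starts with the separator byte.
assert encode_smallint(32767) == b"\xff\xff"

# The "breaking point" 32512 from the report is exactly the smallest short
# whose encoding begins with 0xff (32512 == 0x7F00 -> 0xFF00).
first_bad = min(v for v in range(-32768, 32768)
                if encode_smallint(v)[0] == SEPARATOR)
print(first_bad)  # prints: 32512
```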





[jira] [Created] (PHOENIX-4500) Apply bugfixes from master prior to release 4.13.2

2017-12-26 Thread Pedro Boado (JIRA)
Pedro Boado created PHOENIX-4500:


 Summary: Apply bugfixes from master prior to release 4.13.2
 Key: PHOENIX-4500
 URL: https://issues.apache.org/jira/browse/PHOENIX-4500
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Pedro Boado


Include all bug-fix patches from master prior to the CDH 4.13.2 release.





[jira] [Commented] (PHOENIX-4500) Apply bugfixes from master prior to release 4.13.2

2017-12-26 Thread Pedro Boado (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303933#comment-16303933
 ] 

Pedro Boado commented on PHOENIX-4500:
--

[~jamestaylor] I've been looking at applying only bugfixes as suggested. We 
could cherry-pick PHOENIX-4397, PHOENIX-4446, PHOENIX-4449, PHOENIX-4322 and 
PHOENIX-3050, but PHOENIX-3837 depends on some other non-bugfix changes 
(below).

I'd say PHOENIX-4361, PHOENIX-4386, PHOENIX-4198, PHOENIX-672, PHOENIX-4288, 
PHOENIX-4415 and PHOENIX-4424 are functional changes.

The other (easier) option is to just apply all patches included in 
PHOENIX-4464 (both bug fixes and new functionality; basically all changes 
mentioned).

I've just asked in the PHOENIX-4382 thread whether it's ready to be merged; 
I'm not 100% sure.
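
A self-contained sketch of the cherry-pick route discussed above; the repo contents, commit subjects, and branch layout are invented for the demo, and only one issue key stands in for the full bug-fix list.

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q . && git config user.name "Demo" && git config user.email "demo@example.com"
trunk=$(git symbolic-ref --short HEAD)   # "master" or "main", per git defaults
echo base > f.txt && git add f.txt && git commit -qm "base"
git branch 4.x-cdh5.11.2                 # release branch forks here

# Two commits land on the trunk; only the bug fix should be picked.
echo fix >> f.txt && git commit -qam "PHOENIX-4397 fix salted-table stats bug"
echo feat > g.txt && git add g.txt && git commit -qm "PHOENIX-4361 new feature"

git checkout -q 4.x-cdh5.11.2
for issue in PHOENIX-4397; do            # in practice, the bug-fix list above
    # find the trunk commit whose subject mentions the issue key
    sha=$(git log "$trunk" --grep="$issue" --format=%H -n 1)
    git cherry-pick -x "$sha" >/dev/null # -x records the source commit id
done
git log -1 --format=%s                   # prints: PHOENIX-4397 fix salted-table stats bug
```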

> Apply bugfixes from master prior to release 4.13.2
> --
>
> Key: PHOENIX-4500
> URL: https://issues.apache.org/jira/browse/PHOENIX-4500
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Pedro Boado
>
> Include all bug-fix patches from master prior to the CDH 4.13.2 release.





[jira] [Updated] (PHOENIX-4463) Update pom and org.apache.phoenix.coprocessor.MetaDataProtocol version to 4.13.2

2017-12-26 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado updated PHOENIX-4463:
-
Summary: Update pom and org.apache.phoenix.coprocessor.MetaDataProtocol 
version to 4.13.2  (was: Update org.apache.phoenix.coprocessor.MetaDataProtocol 
version to 4.13.2)

> Update pom and org.apache.phoenix.coprocessor.MetaDataProtocol version to 
> 4.13.2
> 
>
> Key: PHOENIX-4463
> URL: https://issues.apache.org/jira/browse/PHOENIX-4463
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Pedro Boado
>
> We also store the version in org.apache.phoenix.coprocessor.MetaDataProtocol, 
> so this class should be updated as well prior to the release.





[jira] [Commented] (PHOENIX-4499) Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and pull optimize() out of getExplainPlan() - HBase 4.x-HBase-1.2

2017-12-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303963#comment-16303963
 ] 

Hadoop QA commented on PHOENIX-4499:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12903726/PHOENIX4437-4.x-HBase-1.2.patch
  against 4.x-HBase-1.2 branch at commit 
34693843abe4490b54fbd30512bf7d98d0f59c0d.
  ATTACHMENT ID: 12903726

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+StatementPlan compilePlan = compilableStmt.compilePlan(stmt, 
Sequence.ValueOp.VALIDATE_SEQUENCE);
+// For a QueryPlan, we need to get its optimized plan; for a 
MutationPlan, its enclosed QueryPlan
+compilePlan = 
stmt.getConnection().getQueryServices().getOptimizer().optimize(stmt, dataPlan);

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.TransactionalViewIT

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1692//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1692//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1692//console

This message is automatically generated.

> Make QueryPlan.getEstimatedBytesToScan() independent of getExplainPlan() and 
> pull optimize() out of getExplainPlan() - HBase 4.x-HBase-1.2
> --
>
> Key: PHOENIX-4499
> URL: https://issues.apache.org/jira/browse/PHOENIX-4499
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.11.0
>Reporter: Pedro Boado
>Assignee: Maryann Xue
> Fix For: 4.14.0
>
> Attachments: PHOENIX4437-4.x-HBase-1.2.patch
>
>
> Cloned for applying patch to 4.x-HBase-1.2





[jira] [Commented] (PHOENIX-4382) Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator byte return null in query results

2017-12-26 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16303993#comment-16303993
 ] 

James Taylor commented on PHOENIX-4382:
---

Waiting for [~tdsilva] to review as he’s very familiar with this code.

> Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator 
> byte return null in query results
> ---
>
> Key: PHOENIX-4382
> URL: https://issues.apache.org/jira/browse/PHOENIX-4382
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4382.v1.master.patch, 
> PHOENIX-4382.v2.master.patch, PHOENIX-4382.v3.master.patch, 
> UpsertBigValuesIT.java
>
>
> For immutable tables, an upsert of some values, like Short.MAX_VALUE, results 
> in a null value in query result sets.  Mutable tables are not affected.  I 
> tried with BigInt and got the same problem.
> For Short, the breaking point seems to be 32512.
> This is happening because of the way we serialize nulls.  For nulls, we write 
> out [separatorByte, #_of_nulls].  However, some data values, like 
> Short.MAX_VALUE, start with separatorByte, so we can't distinguish between a 
> null and these values.  Currently the code assumes it's a null when it sees a 
> leading separatorByte, hence the incorrect query results.
> See the attached tests: testShort() and testBigInt()





[jira] [Commented] (PHOENIX-4382) Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator byte return null in query results

2017-12-26 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304018#comment-16304018
 ] 

Thomas D'Silva commented on PHOENIX-4382:
-

[~vincentpoon]

Thanks for the patch. Should we add a config option to determine the behavior 
when we see a {separatorByte, 1}? By default, should we assume that 
{separatorByte, 1} represents a column value that is not null? 
I assume in most cases people will not explicitly set a value to null. 

Could you also add a comment explaining how you determine that the column 
value is null when you see {separatorByte, 1}.

{code}
private static boolean isNullValue(int arrayIndex, byte[] bytes, int initPos,
        byte serializationVersion, boolean useShort, int indexOffset,
        int currOffset, int elementLength) {
    if (isSeparatorByte(bytes, initPos, currOffset)) {
        if (isPriorValueZeroLength(arrayIndex, bytes,
                serializationVersion, useShort, indexOffset, currOffset)) {
            return true;
        } else {
            // if there's no prior null, there can be at most 1 null
            if (elementLength == 2) {
                byte nullByte = SortOrder.invert((byte)(0));
                if (bytes[initPos + currOffset + 1] == nullByte) {
                    return true;
                }
            }
        }
    }
    return false;
}
{code}

> Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator 
> byte return null in query results
> ---
>
> Key: PHOENIX-4382
> URL: https://issues.apache.org/jira/browse/PHOENIX-4382
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4382.v1.master.patch, 
> PHOENIX-4382.v2.master.patch, PHOENIX-4382.v3.master.patch, 
> UpsertBigValuesIT.java
>
>
> For immutable tables, an upsert of some values, like Short.MAX_VALUE, results 
> in a null value in query result sets.  Mutable tables are not affected.  I 
> tried with BigInt and got the same problem.
> For Short, the breaking point seems to be 32512.
> This is happening because of the way we serialize nulls.  For nulls, we write 
> out [separatorByte, #_of_nulls].  However, some data values, like 
> Short.MAX_VALUE, start with separatorByte, so we can't distinguish between a 
> null and these values.  Currently the code assumes it's a null when it sees a 
> leading separatorByte, hence the incorrect query results.
> See the attached tests: testShort() and testBigInt()





[jira] [Updated] (PHOENIX-4487) Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13

2017-12-26 Thread James Taylor (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4487:
--
Attachment: PHOENIX-4487_v2.patch

Added comment to explain why we're checking isNamespaceEnabled. Please review, 
[~tdsilva] or [~karanmehta93].

> Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13
> -
>
> Key: PHOENIX-4487
> URL: https://issues.apache.org/jira/browse/PHOENIX-4487
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.13.1, 4.13.2-cdh5.11.2
>Reporter: Flavio Pompermaier
>Assignee: James Taylor
> Attachments: PHOENIX-4487.patch, PHOENIX-4487_v2.patch
>
>
> Upgrading from the official Cloudera parcel equipped with Phoenix 4.7 to the 
> latest unofficial parcel (4.13 on CDH 5.11.2), I hit the same error as 
> https://issues.apache.org/jira/browse/PHOENIX-4293 (apart from the fact that 
> the starting version was 4.7...).
> Creating the SYSTEM.MUTEX table fixed the problem.





[jira] [Assigned] (PHOENIX-4500) Apply bugfixes from master prior to release 4.13.2

2017-12-26 Thread Pedro Boado (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pedro Boado reassigned PHOENIX-4500:


Assignee: Pedro Boado

> Apply bugfixes from master prior to release 4.13.2
> --
>
> Key: PHOENIX-4500
> URL: https://issues.apache.org/jira/browse/PHOENIX-4500
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Pedro Boado
>Assignee: Pedro Boado
>
> Include all bug-fix patches from master prior to the CDH 4.13.2 release.





[jira] [Commented] (PHOENIX-4487) Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13

2017-12-26 Thread Karan Mehta (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304033#comment-16304033
 ] 

Karan Mehta commented on PHOENIX-4487:
--

Understood your comment. Thanks [~jamestaylor], +1 for the patch. Can [~pboado] 
or [~f.pompermaier] confirm that it works after applying the patch locally?

> Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13
> -
>
> Key: PHOENIX-4487
> URL: https://issues.apache.org/jira/browse/PHOENIX-4487
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.13.1, 4.13.2-cdh5.11.2
>Reporter: Flavio Pompermaier
>Assignee: James Taylor
> Attachments: PHOENIX-4487.patch, PHOENIX-4487_v2.patch
>
>
> Upgrading from the official Cloudera parcel equipped with Phoenix 4.7 to the 
> latest unofficial parcel (4.13 on CDH 5.11.2), I hit the same error as 
> https://issues.apache.org/jira/browse/PHOENIX-4293 (apart from the fact that 
> the starting version was 4.7...).
> Creating the SYSTEM.MUTEX table fixed the problem.





[jira] [Commented] (PHOENIX-4278) Implement pure client side transactional index maintenance

2017-12-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304047#comment-16304047
 ] 

Lars Hofhansl commented on PHOENIX-4278:


Just in case it matters... +1


> Implement pure client side transactional index maintenance
> --
>
> Key: PHOENIX-4278
> URL: https://issues.apache.org/jira/browse/PHOENIX-4278
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Ohad Shacham
>
> The index maintenance for transactions follows the same model as for 
> non-transactional tables - a coprocessor on data table updates that looks up 
> the previous row value to perform maintenance. This is necessary for 
> non-transactional tables to ensure the rows are locked so that a consistent 
> view may be obtained. However, for transactional tables, the timestamp oracle 
> ensures uniqueness of timestamps (via transaction IDs) and the filtering 
> handles a scan seeing the "true" last committed value for a row. Thus, 
> there's no hard dependency to perform this on the server side.
> Moving the index maintenance to the client side would prevent any RS->RS RPC 
> calls (which have proved to be troublesome for HBase). It would require 
> returning more data to the client (i.e. the prior row value), but this seems 
> like a reasonable tradeoff.





[jira] [Commented] (PHOENIX-4487) Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13

2017-12-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304095#comment-16304095
 ] 

Hadoop QA commented on PHOENIX-4487:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903737/PHOENIX-4487_v2.patch
  against master branch at commit 34693843abe4490b54fbd30512bf7d98d0f59c0d.
  ATTACHMENT ID: 12903737

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+if (currentServerSideTableTimeStamp <= MetaDataProtocol.MIN_SYSTEM_TABLE_TIMESTAMP_4_10_0 &&
+if (acquiredMutexLock = acquireUpgradeMutex(currentServerSideTableTimeStamp, mutexRowKey)) {
+TableName mutexName = SchemaUtil.getPhysicalTableName(PhoenixDatabaseMetaData.SYSTEM_MUTEX_NAME, props);
+if (PhoenixDatabaseMetaData.SYSTEM_MUTEX_HBASE_TABLE_NAME.equals(mutexName) || !tableNames.contains(mutexName)) {

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1693//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1693//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1693//console

This message is automatically generated.

> Missing SYSTEM.MUTEX table upgrading from 4.7 to 4.13
> -
>
> Key: PHOENIX-4487
> URL: https://issues.apache.org/jira/browse/PHOENIX-4487
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 4.13.1, 4.13.2-cdh5.11.2
>Reporter: Flavio Pompermaier
>Assignee: James Taylor
> Attachments: PHOENIX-4487.patch, PHOENIX-4487_v2.patch
>
>
> Upgrading from the official Cloudera Parcel equipped with Phoenix 4.7 to the 
> latest unofficial parcel (4.13 on CDH 5.11.2), I hit the same error as in 
> https://issues.apache.org/jira/browse/PHOENIX-4293 (apart from the fact that 
> the version being upgraded from was 4.7...).
> The creation of the system.mutex table fixed the problem.
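
For context, the upgrade path serializes concurrent upgraders by taking a row-level mutex in SYSTEM.MUTEX (the `acquireUpgradeMutex` call flagged in the QA report above), which is why a missing SYSTEM.MUTEX table breaks the upgrade. A minimal in-memory sketch of that check-and-put pattern, with purely illustrative names:

```python
import threading

class UpgradeMutex:
    """Toy stand-in for the SYSTEM.MUTEX pattern: acquisition succeeds only
    if no other client currently holds the same row key (analogous to an
    HBase checkAndPut that requires the cell to be absent)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._held = set()

    def acquire(self, row_key):
        # Succeed only if the mutex row is not already present.
        with self._lock:
            if row_key in self._held:
                return False
            self._held.add(row_key)
            return True

    def release(self, row_key):
        # Delete the mutex row so the next upgrader can proceed.
        with self._lock:
            self._held.discard(row_key)

m = UpgradeMutex()
first = m.acquire("SYSTEM_CATALOG_upgrade")    # first caller wins
second = m.acquire("SYSTEM_CATALOG_upgrade")   # concurrent caller is refused
m.release("SYSTEM_CATALOG_upgrade")
```

If the table backing the mutex does not exist, the acquire step fails outright rather than returning False, which matches the symptom described in this issue.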



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4382) Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator byte return null in query results

2017-12-26 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304113#comment-16304113
 ] 

James Taylor commented on PHOENIX-4382:
---

bq. Should we add a config option to determine the behavior when we see a 
{separatorByte, 1}?
Not sure we need this. Probably more likely it’d be null as opposed to -32,767. 
A global config won’t help much since it could differ on a table-by-table or 
column-by-column basis.

+1 for a few more comments.

> Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator 
> byte return null in query results
> ---
>
> Key: PHOENIX-4382
> URL: https://issues.apache.org/jira/browse/PHOENIX-4382
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4382.v1.master.patch, 
> PHOENIX-4382.v2.master.patch, PHOENIX-4382.v3.master.patch, 
> UpsertBigValuesIT.java
>
>
> For immutable tables, upsert of some values like Short.MAX_VALUE results in a 
> null value in query resultsets.  Mutable tables are not affected.  I tried 
> with BigInt and got the same problem.
> For Short, the breaking point seems to be 32512.
> This is happening because of the way we serialize nulls.  For nulls, we write 
> out [separatorByte, #_of_nulls].  However, some data values, like 
> Short.MAX_VALUE, start with separatorByte, we can't distinguish between a 
> null and these values.  Currently the code assumes it's a null when it sees a 
> leading separatorByte, hence the incorrect query results.
> See attached test - testShort() , testBigInt()
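
The collision described above can be illustrated with a small sketch. Two assumptions are made here for illustration only: that Phoenix encodes a SMALLINT by flipping the sign bit so byte-wise order matches numeric order, and that the separator byte in this storage format is 0xFF. Under those assumptions, the reported breaking point of 32512 falls out directly:

```python
SEPARATOR_BYTE = 0xFF  # assumed separator value in this storage format

def encode_smallint(v):
    # Assumed Phoenix-style fixed-width encoding: flip the sign bit so that
    # unsigned byte-wise comparison matches signed numeric order.
    u = (v & 0xFFFF) ^ 0x8000
    return bytes([u >> 8, u & 0xFF])

# Values from 32512 (0x7F00) up to Short.MAX_VALUE (0x7FFF) encode with a
# leading 0xFF byte, indistinguishable from a null marker's first byte.
leading_separator = [v for v in (32511, 32512, 32767)
                     if encode_smallint(v)[0] == SEPARATOR_BYTE]
```

Here `leading_separator` contains 32512 and 32767 but not 32511, matching the observation that the breaking point is 32512: every such value is misread as a serialized null.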



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4382) Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator byte return null in query results

2017-12-26 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304193#comment-16304193
 ] 

Thomas D'Silva commented on PHOENIX-4382:
-

Since we don't consider the schema of the column, I think you are right. It's 
more likely that a column value is null than one of the two values (either a 
short or a varbinary).

+1 

[~vincentpoon] Thanks for fixing this bug.

> Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator 
> byte return null in query results
> ---
>
> Key: PHOENIX-4382
> URL: https://issues.apache.org/jira/browse/PHOENIX-4382
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4382.v1.master.patch, 
> PHOENIX-4382.v2.master.patch, PHOENIX-4382.v3.master.patch, 
> UpsertBigValuesIT.java
>
>
> For immutable tables, upsert of some values like Short.MAX_VALUE results in a 
> null value in query resultsets.  Mutable tables are not affected.  I tried 
> with BigInt and got the same problem.
> For Short, the breaking point seems to be 32512.
> This is happening because of the way we serialize nulls.  For nulls, we write 
> out [separatorByte, #_of_nulls].  However, some data values, like 
> Short.MAX_VALUE, start with separatorByte, we can't distinguish between a 
> null and these values.  Currently the code assumes it's a null when it sees a 
> leading separatorByte, hence the incorrect query results.
> See attached test - testShort() , testBigInt()



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4382) Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator byte return null in query results

2017-12-26 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304218#comment-16304218
 ] 

James Taylor commented on PHOENIX-4382:
---

Please commit to master and all 4.x branches (including the cdh one), 
[~vincentpoon].

> Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator 
> byte return null in query results
> ---
>
> Key: PHOENIX-4382
> URL: https://issues.apache.org/jira/browse/PHOENIX-4382
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4382.v1.master.patch, 
> PHOENIX-4382.v2.master.patch, PHOENIX-4382.v3.master.patch, 
> UpsertBigValuesIT.java
>
>
> For immutable tables, upsert of some values like Short.MAX_VALUE results in a 
> null value in query resultsets.  Mutable tables are not affected.  I tried 
> with BigInt and got the same problem.
> For Short, the breaking point seems to be 32512.
> This is happening because of the way we serialize nulls.  For nulls, we write 
> out [separatorByte, #_of_nulls].  However, some data values, like 
> Short.MAX_VALUE, start with separatorByte, we can't distinguish between a 
> null and these values.  Currently the code assumes it's a null when it sees a 
> leading separatorByte, hence the incorrect query results.
> See attached test - testShort() , testBigInt()



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (PHOENIX-4501) Fix IndexUsageIT

2017-12-26 Thread Ankit Singhal (JIRA)
Ankit Singhal created PHOENIX-4501:
--

 Summary: Fix IndexUsageIT
 Key: PHOENIX-4501
 URL: https://issues.apache.org/jira/browse/PHOENIX-4501
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Ankit Singhal
Assignee: Ankit Singhal
 Fix For: 5.0.0






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4496) Fix RowValueConstructorIT

2017-12-26 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304227#comment-16304227
 ] 

Ankit Singhal commented on PHOENIX-4496:


Actually, I already found the issue for this.

> Fix RowValueConstructorIT
> -
>
> Key: PHOENIX-4496
> URL: https://issues.apache.org/jira/browse/PHOENIX-4496
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>
> {noformat}
> [ERROR] Tests run: 46, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 117.444 s <<< FAILURE! - in org.apache.phoenix.end2end.RowValueConstructorIT
> [ERROR] 
> testRVCLastPkIsTable1stPkIndex(org.apache.phoenix.end2end.RowValueConstructorIT)
>   Time elapsed: 4.516 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.RowValueConstructorIT.testRVCLastPkIsTable1stPkIndex(RowValueConstructorIT.java:1584)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4501) Fix IndexUsageIT

2017-12-26 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4501:
---
Attachment: PHOENIX-4501.patch

[~chrajeshbab...@gmail.com], please review.

> Fix IndexUsageIT
> 
>
> Key: PHOENIX-4501
> URL: https://issues.apache.org/jira/browse/PHOENIX-4501
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4501.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4399) Remove explicit abort on RegionServerServices

2017-12-26 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304233#comment-16304233
 ] 

Ankit Singhal commented on PHOENIX-4399:


Thanks [~anoopsamjohn] and @stack. So, as per the comment on HBASE-19341, are 
we good with the original patch?

> Remove explicit abort on RegionServerServices
> -
>
> Key: PHOENIX-4399
> URL: https://issues.apache.org/jira/browse/PHOENIX-4399
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Attachments: PHOENIX-4399.patch
>
>
> Suggestions from @stack and [~anoop.hbase] from HBASE-18298
> Though regionServerServices are not visible from the environment, we can 
> still let CoprocessorHost abort the RS if observer/coprocessor throws 
> exception other than IOException and hbase.coprocessor.abortonerror is not 
> set to false. 
> But how do we ensure that ABORT_ON_ERROR is never set to false in the case 
> of Phoenix, or should we have some special exception in HBase for a forceful 
> abort?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (PHOENIX-4496) Fix RowValueConstructorIT

2017-12-26 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304227#comment-16304227
 ] 

Ankit Singhal edited comment on PHOENIX-4496 at 12/27/17 5:25 AM:
--

Actually, I already found the issue for this. This is related to 
https://issues.apache.org/jira/browse/HBASE-19640.


was (Author: an...@apache.org):
Actually, I already found the issue for this.

> Fix RowValueConstructorIT
> -
>
> Key: PHOENIX-4496
> URL: https://issues.apache.org/jira/browse/PHOENIX-4496
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>
> {noformat}
> [ERROR] Tests run: 46, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 117.444 s <<< FAILURE! - in org.apache.phoenix.end2end.RowValueConstructorIT
> [ERROR] 
> testRVCLastPkIsTable1stPkIndex(org.apache.phoenix.end2end.RowValueConstructorIT)
>   Time elapsed: 4.516 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.RowValueConstructorIT.testRVCLastPkIsTable1stPkIndex(RowValueConstructorIT.java:1584)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (PHOENIX-4474) Found some hanging tests

2017-12-26 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal resolved PHOENIX-4474.

Resolution: Fixed

With the latest fixes, these tests are no longer hanging.

> Found some hanging tests
> 
>
> Key: PHOENIX-4474
> URL: https://issues.apache.org/jira/browse/PHOENIX-4474
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>
> * ExplainPlanWithStatsDisabledIT
> * ConcurrentMutationsIT
> * NumericArithmeticIT
> * AggregateQueryIT
> AggregateQueryIT
> {code}
> Mon Dec 18 23:49:20 IST 2017, 
> RpcRetryingCaller{globalStartTime=1513621155916, pause=100, maxAttempts=7}, 
> java.net.ConnectException: Call to /192.168.1.3:56675 failed on connection 
> exception: 
> org.apache.hadoop.hbase.shaded.io.netty.channel.AbstractChannel$AnnotatedConnectException:
>  Connection refused: /192.168.1.3:56675
>   at 
> org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithRetries(RpcRetryingCallerImpl.java:145)
>   at org.apache.hadoop.hbase.client.HTable.get(HTable.java:388)
>   at org.apache.hadoop.hbase.client.HTable.get(HTable.java:362)
>   at 
> org.apache.hadoop.hbase.MetaTableAccessor.getTableState(MetaTableAccessor.java:1118)
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.readMetaState(TableStateManager.java:190)
>   at 
> org.apache.hadoop.hbase.master.TableStateManager.isTablePresent(TableStateManager.java:147)
>   at 
> org.apache.hadoop.hbase.master.HMaster.getTableDescriptors(HMaster.java:3135)
>   at 
> org.apache.hadoop.hbase.master.HMaster.listTableDescriptors(HMaster.java:3079)
>   at 
> org.apache.hadoop.hbase.master.MasterRpcServices.getTableDescriptors(MasterRpcServices.java:999)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:403)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:325)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:305)
> Caused by: java.net.ConnectException: Call to /192.168.1.3:56675 failed on 
> connection exception: 
> org.apache.hadoop.hbase.shaded.io.netty.channel.AbstractChannel$AnnotatedConnectException:
>  Connection refused: /192.168.1.3:56675
>   at org.apache.hadoop.hbase.ipc.IPCUtil.wrapException(IPCUtil.java:165)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.onCallFinished(AbstractRpcClient.java:390)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient.access$100(AbstractRpcClient.java:95)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:410)
>   at 
> org.apache.hadoop.hbase.ipc.AbstractRpcClient$3.run(AbstractRpcClient.java:406)
>   at org.apache.hadoop.hbase.ipc.Call.callComplete(Call.java:103)
>   at org.apache.hadoop.hbase.ipc.Call.setException(Call.java:118)
>   at 
> org.apache.hadoop.hbase.ipc.BufferCallBeforeInitHandler.userEventTriggered(BufferCallBeforeInitHandler.java:92)
>   at 
> org.apache.hadoop.hbase.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:329)
>   at 
> org.apache.hadoop.hbase.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:315)
>   at 
> org.apache.hadoop.hbase.shaded.io.netty.channel.AbstractChannelHandlerContext.fireUserEventTriggered(AbstractChannelHandlerContext.java:307)
>   at 
> org.apache.hadoop.hbase.shaded.io.netty.channel.DefaultChannelPipeline$HeadContext.userEventTriggered(DefaultChannelPipeline.java:1352)
>   at 
> org.apache.hadoop.hbase.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:329)
>   at 
> org.apache.hadoop.hbase.shaded.io.netty.channel.AbstractChannelHandlerContext.invokeUserEventTriggered(AbstractChannelHandlerContext.java:315)
>   at 
> org.apache.hadoop.hbase.shaded.io.netty.channel.DefaultChannelPipeline.fireUserEventTriggered(DefaultChannelPipeline.java:920)
>   at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection.failInit(NettyRpcConnection.java:179)
>   at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection.access$500(NettyRpcConnection.java:71)
>   at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:269)
>   at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:263)
>   at 
> org.apache.hadoop.hbase.shaded.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:50

[jira] [Assigned] (PHOENIX-4496) Fix RowValueConstructorIT

2017-12-26 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal reassigned PHOENIX-4496:
--

Assignee: Ankit Singhal  (was: Rajeshbabu Chintaguntla)

> Fix RowValueConstructorIT
> -
>
> Key: PHOENIX-4496
> URL: https://issues.apache.org/jira/browse/PHOENIX-4496
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>
> {noformat}
> [ERROR] Tests run: 46, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 117.444 s <<< FAILURE! - in org.apache.phoenix.end2end.RowValueConstructorIT
> [ERROR] 
> testRVCLastPkIsTable1stPkIndex(org.apache.phoenix.end2end.RowValueConstructorIT)
>   Time elapsed: 4.516 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.RowValueConstructorIT.testRVCLastPkIsTable1stPkIndex(RowValueConstructorIT.java:1584)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (PHOENIX-4382) Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator byte return null in query results

2017-12-26 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304218#comment-16304218
 ] 

James Taylor edited comment on PHOENIX-4382 at 12/27/17 5:51 AM:
-

Please commit to master, 5.x and all 4.x branches (including the cdh one), 
[~vincentpoon].


was (Author: jamestaylor):
Please commit to master and all 4.x branches (including the cdh one), 
[~vincentpoon].

> Immutable table SINGLE_CELL_ARRAY_WITH_OFFSETS values starting with separator 
> byte return null in query results
> ---
>
> Key: PHOENIX-4382
> URL: https://issues.apache.org/jira/browse/PHOENIX-4382
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
> Attachments: PHOENIX-4382.v1.master.patch, 
> PHOENIX-4382.v2.master.patch, PHOENIX-4382.v3.master.patch, 
> UpsertBigValuesIT.java
>
>
> For immutable tables, upsert of some values like Short.MAX_VALUE results in a 
> null value in query resultsets.  Mutable tables are not affected.  I tried 
> with BigInt and got the same problem.
> For Short, the breaking point seems to be 32512.
> This is happening because of the way we serialize nulls.  For nulls, we write 
> out [separatorByte, #_of_nulls].  However, some data values, like 
> Short.MAX_VALUE, start with separatorByte, we can't distinguish between a 
> null and these values.  Currently the code assumes it's a null when it sees a 
> leading separatorByte, hence the incorrect query results.
> See attached test - testShort() , testBigInt()



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4493) Fix DropColumnIT

2017-12-26 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4493?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4493:
---
Attachment: PHOENIX-4493_v1.patch

[~rajeshbabu], can you please review.

> Fix DropColumnIT
> 
>
> Key: PHOENIX-4493
> URL: https://issues.apache.org/jira/browse/PHOENIX-4493
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4493.patch, PHOENIX-4493_v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4501) Fix IndexUsageIT

2017-12-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304253#comment-16304253
 ] 

Hadoop QA commented on PHOENIX-4501:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903762/PHOENIX-4501.patch
  against master branch at commit 34693843abe4490b54fbd30512bf7d98d0f59c0d.
  ATTACHMENT ID: 12903762

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1694//console

This message is automatically generated.

> Fix IndexUsageIT
> 
>
> Key: PHOENIX-4501
> URL: https://issues.apache.org/jira/browse/PHOENIX-4501
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4501.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4493) Fix DropColumnIT

2017-12-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304255#comment-16304255
 ] 

Hadoop QA commented on PHOENIX-4493:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12903764/PHOENIX-4493_v1.patch
  against master branch at commit 34693843abe4490b54fbd30512bf7d98d0f59c0d.
  ATTACHMENT ID: 12903764

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/1695//console

This message is automatically generated.

> Fix DropColumnIT
> 
>
> Key: PHOENIX-4493
> URL: https://issues.apache.org/jira/browse/PHOENIX-4493
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4493.patch, PHOENIX-4493_v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4493) Fix DropColumnIT

2017-12-26 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304262#comment-16304262
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-4493:
--

+1 [~an...@apache.org].

> Fix DropColumnIT
> 
>
> Key: PHOENIX-4493
> URL: https://issues.apache.org/jira/browse/PHOENIX-4493
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4493.patch, PHOENIX-4493_v1.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4278) Implement pure client side transactional index maintenance

2017-12-26 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304273#comment-16304273
 ] 

James Taylor commented on PHOENIX-4278:
---

[~ohads] - MutationState calls IndexUtil.generateIndexData() which eventually 
calls IndexMaintainer.buildUpdateMutation(). The IndexMaintainer code is used 
for both immutable and mutable indexes.

> Implement pure client side transactional index maintenance
> --
>
> Key: PHOENIX-4278
> URL: https://issues.apache.org/jira/browse/PHOENIX-4278
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Assignee: Ohad Shacham
>
> The index maintenance for transactions follows the same model as 
> non-transactional tables: a coprocessor, triggered by data table updates, 
> looks up the previous row value to perform maintenance. This is necessary 
> for non-transactional tables to ensure the rows are locked so that a 
> consistent view may be obtained. However, for transactional tables, the 
> timestamp oracle ensures uniqueness of timestamps (via transaction IDs) and 
> the filtering handles a scan seeing the "true" last committed value for a 
> row. Thus, there's no hard dependency to perform this on the server side.
> Moving the index maintenance to the client side would prevent any RS->RS RPC 
> calls (which have proved to be troublesome for HBase). It would require 
> returning more data to the client (i.e. the prior row value), but this seems 
> like a reasonable tradeoff.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (PHOENIX-4399) Remove explicit abort on RegionServerServices

2017-12-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304281#comment-16304281
 ] 

Anoop Sam John commented on PHOENIX-4399:
-

Changing IndexBuildingFailureException to be an RTE is fine, with the 
assumption that it is thrown only from this one place.
bq. throw new RuntimeException(errormsg);
In the same way, can you throw a Phoenix-specific RTE here? If it cannot be 
IndexBuildingFailureException, then a new one which is an RTE.
Otherwise, the patch looks fine.

> Remove explicit abort on RegionServerServices
> -
>
> Key: PHOENIX-4399
> URL: https://issues.apache.org/jira/browse/PHOENIX-4399
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Attachments: PHOENIX-4399.patch
>
>
> Suggestions from @stack and [~anoop.hbase] from HBASE-18298
> Though regionServerServices are not visible from the environment, we can 
> still let CoprocessorHost abort the RS if observer/coprocessor throws 
> exception other than IOException and hbase.coprocessor.abortonerror is not 
> set to false. 
> But how do we ensure that ABORT_ON_ERROR is never set to false in the case 
> of Phoenix, or should we have some special exception in HBase for a forceful 
> abort?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4496) Fix RowValueConstructorIT and IndexMetadataIT

2017-12-26 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4496:
---
Summary: Fix RowValueConstructorIT and IndexMetadataIT  (was: Fix 
RowValueConstructorIT)

> Fix RowValueConstructorIT and IndexMetadataIT
> -
>
> Key: PHOENIX-4496
> URL: https://issues.apache.org/jira/browse/PHOENIX-4496
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>
> {noformat}
> [ERROR] Tests run: 46, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 117.444 s <<< FAILURE! - in org.apache.phoenix.end2end.RowValueConstructorIT
> [ERROR] 
> testRVCLastPkIsTable1stPkIndex(org.apache.phoenix.end2end.RowValueConstructorIT)
>   Time elapsed: 4.516 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.RowValueConstructorIT.testRVCLastPkIsTable1stPkIndex(RowValueConstructorIT.java:1584)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (PHOENIX-4496) Fix RowValueConstructorIT and IndexMetadataIT

2017-12-26 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4496:
---
Description: 
{noformat}
[ERROR] Tests run: 46, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
117.444 s <<< FAILURE! - in org.apache.phoenix.end2end.RowValueConstructorIT
[ERROR] 
testRVCLastPkIsTable1stPkIndex(org.apache.phoenix.end2end.RowValueConstructorIT)
  Time elapsed: 4.516 s  <<< FAILURE!
java.lang.AssertionError
at 
org.apache.phoenix.end2end.RowValueConstructorIT.testRVCLastPkIsTable1stPkIndex(RowValueConstructorIT.java:1584)
{noformat}

{noformat}
[ERROR] Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 79.381 
s <<< FAILURE! - in org.apache.phoenix.end2end.index.IndexMetadataIT
[ERROR] 
testMutableTableOnlyHasPrimaryKeyIndex(org.apache.phoenix.end2end.index.IndexMetadataIT)
  Time elapsed: 4.504 s  <<< FAILURE!
java.lang.AssertionError
at 
org.apache.phoenix.end2end.index.IndexMetadataIT.helpTestTableOnlyHasPrimaryKeyIndex(IndexMetadataIT.java:662)
at 
org.apache.phoenix.end2end.index.IndexMetadataIT.testMutableTableOnlyHasPrimaryKeyIndex(IndexMetadataIT.java:623)
{noformat}

  was:
{noformat}
[ERROR] Tests run: 46, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
117.444 s <<< FAILURE! - in org.apache.phoenix.end2end.RowValueConstructorIT
[ERROR] 
testRVCLastPkIsTable1stPkIndex(org.apache.phoenix.end2end.RowValueConstructorIT)
  Time elapsed: 4.516 s  <<< FAILURE!
java.lang.AssertionError
at 
org.apache.phoenix.end2end.RowValueConstructorIT.testRVCLastPkIsTable1stPkIndex(RowValueConstructorIT.java:1584)
{noformat}


> Fix RowValueConstructorIT and IndexMetadataIT
> -
>
> Key: PHOENIX-4496
> URL: https://issues.apache.org/jira/browse/PHOENIX-4496
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>
> {noformat}
> [ERROR] Tests run: 46, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 117.444 s <<< FAILURE! - in org.apache.phoenix.end2end.RowValueConstructorIT
> [ERROR] 
> testRVCLastPkIsTable1stPkIndex(org.apache.phoenix.end2end.RowValueConstructorIT)
>   Time elapsed: 4.516 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.RowValueConstructorIT.testRVCLastPkIsTable1stPkIndex(RowValueConstructorIT.java:1584)
> {noformat}
> {noformat}
> [ERROR] Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 79.381 s <<< FAILURE! - in org.apache.phoenix.end2end.index.IndexMetadataIT
> [ERROR] 
> testMutableTableOnlyHasPrimaryKeyIndex(org.apache.phoenix.end2end.index.IndexMetadataIT)
>   Time elapsed: 4.504 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.index.IndexMetadataIT.helpTestTableOnlyHasPrimaryKeyIndex(IndexMetadataIT.java:662)
> at 
> org.apache.phoenix.end2end.index.IndexMetadataIT.testMutableTableOnlyHasPrimaryKeyIndex(IndexMetadataIT.java:623)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[ANNOUNCE] New Phoenix committer: Karan Mehta

2017-12-26 Thread Thomas D'Silva
On behalf of the Apache Phoenix PMC, I am pleased to announce that Karan Mehta
has accepted our invitation to become a committer. He has done excellent work
on enhancing the metrics framework [1,2] and adding support for granting and
revoking permissions on tables and namespaces [3].

Looking forward to more great contributions!

Thanks,
Thomas


[1] https://issues.apache.org/jira/browse/PHOENIX-3752
[2] https://issues.apache.org/jira/browse/PHOENIX-3248
[3] https://issues.apache.org/jira/browse/PHOENIX-672


[jira] [Commented] (PHOENIX-4496) Fix RowValueConstructorIT and IndexMetadataIT

2017-12-26 Thread Ankit Singhal (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304287#comment-16304287
 ] 

Ankit Singhal commented on PHOENIX-4496:


Another test is failing because of the same issue:
{code}
[ERROR] Tests run: 24, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 76.881 s <<< FAILURE! - in org.apache.phoenix.end2end.DefaultColumnValueIT
[ERROR] testDefaultIndexed(org.apache.phoenix.end2end.DefaultColumnValueIT)  Time elapsed: 4.627 s  <<< FAILURE!
java.lang.AssertionError
	at org.apache.phoenix.end2end.DefaultColumnValueIT.testDefaultIndexed(DefaultColumnValueIT.java:978)
{code}

> Fix RowValueConstructorIT and IndexMetadataIT
> -
>
> Key: PHOENIX-4496
> URL: https://issues.apache.org/jira/browse/PHOENIX-4496
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
>
> {noformat}
> [ERROR] Tests run: 46, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 117.444 s <<< FAILURE! - in org.apache.phoenix.end2end.RowValueConstructorIT
> [ERROR] 
> testRVCLastPkIsTable1stPkIndex(org.apache.phoenix.end2end.RowValueConstructorIT)
>   Time elapsed: 4.516 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.RowValueConstructorIT.testRVCLastPkIsTable1stPkIndex(RowValueConstructorIT.java:1584)
> {noformat}
> {noformat}
> [ERROR] Tests run: 14, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 79.381 s <<< FAILURE! - in org.apache.phoenix.end2end.index.IndexMetadataIT
> [ERROR] 
> testMutableTableOnlyHasPrimaryKeyIndex(org.apache.phoenix.end2end.index.IndexMetadataIT)
>   Time elapsed: 4.504 s  <<< FAILURE!
> java.lang.AssertionError
> at 
> org.apache.phoenix.end2end.index.IndexMetadataIT.helpTestTableOnlyHasPrimaryKeyIndex(IndexMetadataIT.java:662)
> at 
> org.apache.phoenix.end2end.index.IndexMetadataIT.testMutableTableOnlyHasPrimaryKeyIndex(IndexMetadataIT.java:623)
> {noformat}





[jira] [Updated] (PHOENIX-4498) Fix MutableIndexIT#testIndexHalfStoreFileReader

2017-12-26 Thread Ankit Singhal (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-4498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4498:
---
Attachment: PHOENIX-4498.patch

This test passes for me with the attached patch, but the patch relies on the 
private API RegionInfo.toByteArray to handle the RegionInfo.

> Fix MutableIndexIT#testIndexHalfStoreFileReader
> ---
>
> Key: PHOENIX-4498
> URL: https://issues.apache.org/jira/browse/PHOENIX-4498
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4498.patch
>
>






[jira] [Commented] (PHOENIX-4473) Exception when Adding new columns to base table and view diverge

2017-12-26 Thread Thomas D'Silva (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-4473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304302#comment-16304302
 ] 

Thomas D'Silva commented on PHOENIX-4473:
-

[~ankit.singhal] 
I think this patch should be applied to the other branches as well. The tests 
are probably passing even though we serialize a boolean but read an integer, 
because we decode directly from the cell's backing value array. It might have 
failed in your environment because that particular value happened to sit at 
the end of the array.
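
A minimal, hypothetical sketch (not Phoenix's actual PDataType code) of the 
effect described above: a 4-byte int decode over a 1-byte serialized boolean 
only trips the length check when the value sits at the very end of the backing 
array; anywhere else, the read silently consumes the neighboring bytes.

```java
// Hypothetical illustration of the serialize-boolean/read-integer mismatch.
// The real bounds check lives in PDataType.checkForSufficientLength; this
// sketch only mimics its behavior.
public class SerializationMismatchSketch {

    // Decode a big-endian 4-byte int starting at offset, with a length check
    // analogous to "Expected length of at least 4 bytes, but had N".
    static int decodeInt(byte[] backing, int offset) {
        int remaining = backing.length - offset;
        if (remaining < 4) {
            throw new IllegalStateException(
                "Expected length of at least 4 bytes, but had " + remaining);
        }
        return ((backing[offset] & 0xFF) << 24)
             | ((backing[offset + 1] & 0xFF) << 16)
             | ((backing[offset + 2] & 0xFF) << 8)
             |  (backing[offset + 3] & 0xFF);
    }

    public static void main(String[] args) {
        // A boolean serialized as one byte at offset 0, with more cell data
        // after it in the same backing array: the 4-byte read "succeeds" and
        // silently returns a garbage value, so the test still passes.
        byte[] middleOfArray = {1, 9, 9, 9, 9};
        System.out.println(decodeInt(middleOfArray, 0));

        // The same 1-byte value at the end of the array: the length check
        // now fires, matching the reported AssertionError environments.
        byte[] endOfArray = {9, 9, 9, 9, 1};
        try {
            decodeInt(endOfArray, 4);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

This is why the failure looks environment-dependent: it hinges on where the 
mis-typed value lands in the backing array, not on the mismatch itself.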

> Exception when Adding new columns to base table and view diverge
> 
>
> Key: PHOENIX-4473
> URL: https://issues.apache.org/jira/browse/PHOENIX-4473
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>  Labels: HBase-2.0
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4473.patch
>
>
> {code}
> [ERROR] 
> testAddPKColumnToBaseTableWhoseViewsHaveIndices(org.apache.phoenix.end2end.AlterMultiTenantTableWithViewsIT)
>   Time elapsed: 11.102 s  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 4 bytes, but had 1
>   at 
> org.apache.phoenix.end2end.AlterMultiTenantTableWithViewsIT.testAddPKColumnToBaseTableWhoseViewsHaveIndices(AlterMultiTenantTableWithViewsIT.java:371)
> Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException:
> org.apache.hadoop.hbase.DoNotRetryIOException: T01_VIEW2: 
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least 4 bytes, but had 1
>   at 
> org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:98)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:557)
>   at 
> org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:16267)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:7950)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2339)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2321)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41556)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:403)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:325)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:305)
> Caused by: java.lang.RuntimeException: java.sql.SQLException: ERROR 201 
> (22000): Illegal data. Expected length of at least 4 bytes, but had 1
>   at 
> org.apache.phoenix.schema.types.PDataType.checkForSufficientLength(PDataType.java:290)
>   at 
> org.apache.phoenix.schema.types.PInteger$IntCodec.decodeInt(PInteger.java:183)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addColumnToTable(MetaDataEndpointImpl.java:705)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:1005)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:571)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3197)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3142)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.addIndexToTable(MetaDataEndpointImpl.java:656)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:997)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:571)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3197)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doGetTable(MetaDataEndpointImpl.java:3142)
>   at 
> org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:523)
>   ... 9 more
> Caused by: java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected 
> length of at least 4 bytes, but had 1
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:488)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:150)
>   ... 22 more
> [ERROR] 
> testAddingPkAndKeyValueColumnsToBaseTableWithDivergedView(org.apache.phoenix.end2end.AlterMultiTenantTableWithViewsIT)
>   Time elapsed: 10.795 s  <<< ERROR!
> java.sql.SQLException: ERROR 201 (22000): Illegal data. Expected length of at 
> least