[jira] [Updated] (PHOENIX-2664) Upgrade from 4.1.0 to 4.6.0 fails

2016-02-08 Thread Jan Van Besien (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Van Besien updated PHOENIX-2664:

Attachment: PHOENIX-2664.patch

Patch against master.

Apart from the UpgradeIT I couldn't find any tests related to this upgrade 
scenario. Is UpgradeIT the place to add one?

> Upgrade from 4.1.0 to 4.6.0 fails
> -
>
> Key: PHOENIX-2664
> URL: https://issues.apache.org/jira/browse/PHOENIX-2664
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jan Van Besien
> Attachments: PHOENIX-2664.patch
>
>
> Upgrade from 4.1.0 to 4.6.0 fails with the following exception.
> {code}
> org.apache.phoenix.query.ConnectionQueryServicesImpl: Add column failed due 
> to:org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "," at line 1, column 49.
> Error: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 49. 
> (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "," at line 1, column 49.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
>   at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1285)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1366)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1416)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumn(ConnectionQueryServicesImpl.java:1866)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumnsIfNotExists(ConnectionQueryServicesImpl.java:1892)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.access$500(ConnectionQueryServicesImpl.java:179)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1978)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1898)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1898)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:180)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:132)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:151)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>   at sqlline.Commands.connect(Commands.java:1064)
>   at sqlline.Commands.connect(Commands.java:996)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
>   at sqlline.SqlLine.dispatch(SqlLine.java:804)
>   at sqlline.SqlLine.initArgs(SqlLine.java:588)
>   at sqlline.SqlLine.begin(SqlLine.java:656)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: NoViableAltException(28@[])
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_name(PhoenixSQLParser.java:2397)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_def(PhoenixSQLParser.java:3707)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_defs(PhoenixSQLParser.java:3631)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.alter_table_node(PhoenixSQLParser.java:3337)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:847)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:500)
>   at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
>   ... 27 more
> {code}
> Looking at the code in ConnectionQueryServicesImpl#init, it seems there are 
> multiple places where string concatenation on the columnsToAdd string happens 
> without checking what the current content of that string is, resulting in a 
> string that starts with a comma.
> I should add that our 4.1.0 version actually has some custom patches, but the 
> code seems to suggest it will fail with a vanilla 4.1.0 as well.
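
For illustration, a self-contained sketch of the suspected pattern (column names are hypothetical, not the actual SYSTEM.CATALOG columns):

{code}
// Unconditional ", "-prefixed concatenation yields a leading comma whenever
// the first branch is skipped, producing unparseable DDL like
// "ALTER TABLE ... ADD IF NOT EXISTS , COL2 BIGINT".
public class ColumnsToAddSketch {
    public static void main(String[] args) {
        boolean needsCol1 = false, needsCol2 = true;  // e.g. upgrading from 4.1.0

        String columnsToAdd = "";
        if (needsCol1) columnsToAdd += "COL1 BIGINT";
        if (needsCol2) columnsToAdd += ", COL2 BIGINT";  // BUG: leading comma
        System.out.println("ALTER TABLE SYSTEM.CATALOG ADD IF NOT EXISTS " + columnsToAdd);

        // Guarded concatenation avoids the dangling comma:
        StringBuilder fixed = new StringBuilder();
        if (needsCol1) fixed.append("COL1 BIGINT");
        if (needsCol2) {
            if (fixed.length() > 0) fixed.append(", ");
            fixed.append("COL2 BIGINT");
        }
        System.out.println("ALTER TABLE SYSTEM.CATALOG ADD IF NOT EXISTS " + fixed);
    }
}
{code}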



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2663) Phoenix View is not Updating

2016-02-08 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15136823#comment-15136823
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-2663:
--

[~ankitbeohar90] It should work fine. Can you provide your schemas and sample 
data for which it's not working? 

> Phoenix View is not Updating
> 
>
> Key: PHOENIX-2663
> URL: https://issues.apache.org/jira/browse/PHOENIX-2663
> Project: Phoenix
>  Issue Type: Bug
> Environment: HDP
>Reporter: Ankit
>  Labels: features
>
> Hi All,
> I have an HBase table which I am accessing through a Phoenix view. 
> But if I ingest data into the HBase table through MR, it is not reflected in 
> the Phoenix view. 
> Please advise on a resolution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2664) Upgrade from 4.1.0 to 4.6.0 fails

2016-02-08 Thread Jan Van Besien (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Van Besien updated PHOENIX-2664:

Affects Version/s: 4.6.0

> Upgrade from 4.1.0 to 4.6.0 fails
> -
>
> Key: PHOENIX-2664
> URL: https://issues.apache.org/jira/browse/PHOENIX-2664
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0
>Reporter: Jan Van Besien
> Attachments: PHOENIX-2664.patch
>
>
> Upgrade from 4.1.0 to 4.6.0 fails with the following exception.
> {code}
> org.apache.phoenix.query.ConnectionQueryServicesImpl: Add column failed due 
> to:org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "," at line 1, column 49.
> Error: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 49. 
> (state=42P00,code=601)
> org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
> Syntax error. Encountered "," at line 1, column 49.
> at 
> org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
>   at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1285)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1366)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1416)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumn(ConnectionQueryServicesImpl.java:1866)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumnsIfNotExists(ConnectionQueryServicesImpl.java:1892)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.access$500(ConnectionQueryServicesImpl.java:179)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1978)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1898)
>   at 
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1898)
>   at 
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:180)
>   at 
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:132)
>   at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:151)
>   at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
>   at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
>   at sqlline.Commands.connect(Commands.java:1064)
>   at sqlline.Commands.connect(Commands.java:996)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:497)
>   at 
> sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
>   at sqlline.SqlLine.dispatch(SqlLine.java:804)
>   at sqlline.SqlLine.initArgs(SqlLine.java:588)
>   at sqlline.SqlLine.begin(SqlLine.java:656)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> Caused by: NoViableAltException(28@[])
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_name(PhoenixSQLParser.java:2397)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_def(PhoenixSQLParser.java:3707)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.column_defs(PhoenixSQLParser.java:3631)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.alter_table_node(PhoenixSQLParser.java:3337)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:847)
>   at 
> org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:500)
>   at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
>   ... 27 more
> {code}
> Looking at the code in ConnectionQueryServicesImpl#init, it seems there are 
> multiple places where string concatenation on the columnsToAdd string happens 
> without checking what the current content of that string is, resulting in a 
> string that starts with a comma.
> I should add that our 4.1.0 version actually has some custom patches, but the 
> code seems to suggest it will fail with a vanilla 4.1.0 as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2607) PhoenixMapReduceUtil Upserts with earlier ts (relative to latest data ts) slower by 25x after stats collection

2016-02-08 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-2607:
-
Fix Version/s: (was: 4.4.0)

> PhoenixMapReduceUtil Upserts with earlier ts (relative to latest data ts) 
> slower by 25x after stats collection
> --
>
> Key: PHOENIX-2607
> URL: https://issues.apache.org/jira/browse/PHOENIX-2607
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.4.0
>Reporter: Arun Thangamani
>  Labels: patch
> Attachments: PHOENIX-2607.patch, hbase-master-fast-upload.log, 
> hbase-master-slow-upload.log, hbase-rs01-fast-upload.log, 
> hbase-rs01-slow-upload.log, hbase-rs02-fast-upload.log, 
> hbase-rs02-slow-upload.log, hbase-rs03-fast-upload.log, 
> hbase-rs03-slow-upload.log, hbase-rs04-fast-upload.log, 
> hbase-rs04-slow-upload.log, phoenix_slow_map_process_jstack.txt, 
> region_server_2_jstack.txt
>
>
> Description of the problem:
> 1) We face a 25x slowdown when we go back in time to load data into a table 
> (when specific timestamps are set on connections during upserts).
> 2) We set phoenix.stats.useCurrentTime=false (and 
> phoenix.stats.guidepost.per.region=1), which at least makes the 
> forward-timestamp upserts perform correctly.
> 3) From what I can tell from the Phoenix source code, the attached logs, and 
> jstacks from the region servers -- we continuously try to look up the 
> uncached definition of the table when the client timestamp is earlier than 
> the table's last modified timestamp in stats.
> 4) To reproduce, create a table with timestamp=100, load 10M rows with 
> PhoenixMapReduceUtil and timestamps=144757440,144809280, and wait for 20 
> minutes (15+ minutes; phoenix.stats.updateFrequency is 15 minutes).
> After 20 minutes, load 10M rows with an earlier timestamp compared to the 
> latest data (timestamp=144766080) and observe the 25x slowness; after this, 
> once again load a forward timestamp (144817920) and observe the quickness.
> 5) I was not able to reproduce this issue with simple multi-threaded upserts 
> from a JDBC connection; with simple multi-threaded upserts the stats table 
> never gets populated, unlike with PhoenixMapReduceUtil.
> We are trying to use Phoenix as a cache store to do analytics with the last 
> 60 days of data, a total of about 1.5 billion rows.
> The table has a composite key and the data arrives at different times from 
> different sources, so it is easier to maintain the timestamps of the data and 
> expire the data automatically. This performance difference means inserting 
> the data in 10 minutes versus 2 hours, with the 2-hour inserts blocking up 
> the cluster that we have.
> We are even talking about our use cases at the upcoming Strata conference in 
> March (thanks to an excellent community).
> Steps to reproduce:
> Source code is available at 
> https://github.com/athangamani/phoenix_mapreduce_timestamp_upsert and the 
> jar it produces is attached and readily runnable.
> 1) We use the following params to keep the stats collection happy and isolate 
> the specific issue:
>  phoenix.stats.useCurrentTime false
>  phoenix.stats.guidepost.per.region 1
> 2) Create a table in Phoenix: 
>    run the main class StatPhoenixTableCreationTest from the project. It will 
> create a table with timestamp=100:
>   CREATE TABLE stat_table ( 
>     pk1 VARCHAR NOT NULL, 
>     pk2 VARCHAR NOT NULL, 
>     pk3 UNSIGNED_LONG NOT NULL, 
>     stat1 UNSIGNED_LONG, 
>     stat2 UNSIGNED_LONG, 
>     stat3 UNSIGNED_LONG, 
>     CONSTRAINT pk PRIMARY KEY (pk1, pk2, pk3) 
>   ) SALT_BUCKETS=32, COMPRESSION='LZ4'
> 3) Open the code base to look at the sample for PhoenixMapReduceUtil with 
> DBWritable.
> 4) Within the codebase, we get Phoenix connections for the mappers using the 
> following setting in order to have a fixed client timestamp (a standalone 
> sketch follows these steps):
>  conf.set(PhoenixRuntime.CURRENT_SCN_ATTRIB, ""+(timestamp));
> 5) Fix the hbase-site.xml in the codebase for the ZooKeeper quorum and HBase 
> parent znode info.
> 6) Simply run StatDataCreatorTest to create the data for the run and load the 
> 10M records into HDFS.
> 7) To run the ready-made jar attached, use the following commands: 
>hadoop jar phoenix_mr_ts_upsert-jar-with-dependencies.jar 
> statPhoenixLoader hdfs:///user/*/stat-data-1.txt STAT_TABLE 144757440
>hadoop jar phoenix_mr_ts_upsert-jar-with-dependencies.jar 
> statPhoenixLoader hdfs:///user/*/stat-data-1.txt STAT_TABLE 144809280 
> After 20 mins… 
>hadoop jar 
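
For reference, a minimal standalone sketch (hypothetical JDBC URL and timestamp; not the project's MR code) of how the fixed client timestamp from step 4 is applied through the CurrentSCN connection property:

{code}
// A minimal sketch of upserting with a fixed client timestamp via the
// CurrentSCN property; the URL, timestamp, and values are illustrative.
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;
import org.apache.phoenix.util.PhoenixRuntime;

public class FixedScnUpsert {
    public static void main(String[] args) throws Exception {
        long timestamp = 1447574400000L;  // illustrative epoch millis
        Properties props = new Properties();
        props.setProperty(PhoenixRuntime.CURRENT_SCN_ATTRIB, Long.toString(timestamp));
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:zkhost:2181", props)) {
            conn.createStatement().executeUpdate(
                "UPSERT INTO STAT_TABLE (pk1, pk2, pk3, stat1) VALUES ('a', 'b', 1, 42)");
            conn.commit();  // cells are written with the fixed SCN as their timestamp
        }
    }
}
{code}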

[jira] [Created] (PHOENIX-2664) Upgrade from 4.1.0 to 4.6.0 fails

2016-02-08 Thread Jan Van Besien (JIRA)
Jan Van Besien created PHOENIX-2664:
---

 Summary: Upgrade from 4.1.0 to 4.6.0 fails
 Key: PHOENIX-2664
 URL: https://issues.apache.org/jira/browse/PHOENIX-2664
 Project: Phoenix
  Issue Type: Bug
Reporter: Jan Van Besien


Upgrade from 4.1.0 to 4.6.0 fails with the following exception.

{code}
org.apache.phoenix.query.ConnectionQueryServicesImpl: Add column failed due 
to:org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): 
Syntax error. Encountered "," at line 1, column 49.
Error: ERROR 601 (42P00): Syntax error. Encountered "," at line 1, column 49. 
(state=42P00,code=601)
org.apache.phoenix.exception.PhoenixParserException: ERROR 601 (42P00): Syntax 
error. Encountered "," at line 1, column 49.
at 
org.apache.phoenix.exception.PhoenixParserException.newException(PhoenixParserException.java:33)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:111)
at 
org.apache.phoenix.jdbc.PhoenixStatement$PhoenixStatementParser.parseStatement(PhoenixStatement.java:1285)
at 
org.apache.phoenix.jdbc.PhoenixStatement.parseStatement(PhoenixStatement.java:1366)
at 
org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1416)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumn(ConnectionQueryServicesImpl.java:1866)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.addColumnsIfNotExists(ConnectionQueryServicesImpl.java:1892)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.access$500(ConnectionQueryServicesImpl.java:179)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1978)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:1898)
at 
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:78)
at 
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:1898)
at 
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:180)
at 
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.connect(PhoenixEmbeddedDriver.java:132)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:151)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:804)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:656)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:292)
Caused by: NoViableAltException(28@[])
at 
org.apache.phoenix.parse.PhoenixSQLParser.column_name(PhoenixSQLParser.java:2397)
at 
org.apache.phoenix.parse.PhoenixSQLParser.column_def(PhoenixSQLParser.java:3707)
at 
org.apache.phoenix.parse.PhoenixSQLParser.column_defs(PhoenixSQLParser.java:3631)
at 
org.apache.phoenix.parse.PhoenixSQLParser.alter_table_node(PhoenixSQLParser.java:3337)
at 
org.apache.phoenix.parse.PhoenixSQLParser.oneStatement(PhoenixSQLParser.java:847)
at 
org.apache.phoenix.parse.PhoenixSQLParser.statement(PhoenixSQLParser.java:500)
at org.apache.phoenix.parse.SQLParser.parseStatement(SQLParser.java:108)
... 27 more
{code}

Looking at the code in ConnectionQueryServicesImpl#init, it seems there are 
multiple places where string concatenation on the columnsToAdd string happens 
without checking what the current content of that string is, resulting in a 
string that starts with a comma.

I should add that our 4.1.0 version actually has some custom patches, but the 
code seems to suggest it will fail with a vanilla 4.1.0 as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2667) Race condition between IndexBuilder and Split for region lock

2016-02-08 Thread Enis Soztutar (JIRA)
Enis Soztutar created PHOENIX-2667:
--

 Summary: Race condition between IndexBuilder and Split for region 
lock
 Key: PHOENIX-2667
 URL: https://issues.apache.org/jira/browse/PHOENIX-2667
 Project: Phoenix
  Issue Type: Bug
Reporter: Enis Soztutar


In a production cluster, we have seen a condition where the split did not 
finish for 30+ minutes. Also due to this, no request was being serviced in this 
time frame, effectively making the region offline. 

The jstack shows 3 types of threads waiting on the region's read or write 
locks. 
First, the handlers are all blocked trying to acquire the read lock on the 
region in multi(); most of the handlers look like this:
{code}
Thread 2328: (state = BLOCKED)
 - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may 
be imprecise)
 - java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long) 
@bci=20, line=226 (Compiled frame)
 - 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(int, 
long) @bci=122, line=1033 (Compiled frame)
 - 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(int,
 long) @bci=25, line=1326 (Compiled frame)
 - java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(long, 
java.util.concurrent.TimeUnit) @bci=10, line=873 (Compiled frame)
 - 
org.apache.hadoop.hbase.regionserver.HRegion.lock(java.util.concurrent.locks.Lock,
 int) @bci=27, line=7754 (Interpreted frame)
 - 
org.apache.hadoop.hbase.regionserver.HRegion.lock(java.util.concurrent.locks.Lock)
 @bci=3, line=7741 (Interpreted frame)
 - 
org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(org.apache.hadoop.hbase.regionserver.Region$Operation)
 @bci=211, line=7650 (Interpreted frame)
 - 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(org.apache.hadoop.hbase.regionserver.HRegion$BatchOperationInProgress)
 @bci=21, line=2803 (Interpreted frame)
 - 
org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(org.apache.hadoop.hbase.client.Mutation[],
 long, long) @bci=12, line=2760 (Compiled frame)
 - 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$RegionActionResult$Builder,
 org.apache.hadoop.hbase.regionserver.Region, org.apache.
 - 
org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(org.apache.hadoop.hbase.regionserver.Region,
 org.apache.hadoop.hbase.quotas.OperationQuota, org.apache.hadoop.hbase.protobuf
 - 
org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(com.google.protobuf.RpcController,
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$MultiRequest) 
@bci=407, line=2032 (Compiled frame)
 - 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(com.google.protobuf.Descriptors$MethodDescriptor,
 com.google.protobuf.RpcController, com.google.protobuf.Messa
 - 
org.apache.hadoop.hbase.ipc.RpcServer.call(com.google.protobuf.BlockingService, 
com.google.protobuf.Descriptors$MethodDescriptor, com.google.protobuf.Message, 
org.apache.hadoop.hbase.CellScanner, long,
 - org.apache.hadoop.hbase.ipc.CallRunner.run() @bci=345, line=101 (Compiled 
frame)
 - 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(java.util.concurrent.BlockingQueue)
 @bci=54, line=130 (Compiled frame)
 - org.apache.hadoop.hbase.ipc.RpcExecutor$1.run() @bci=20, line=107 
(Interpreted frame)
 - java.lang.Thread.run() @bci=11, line=745 (Interpreted frame)
{code}

Second, the IndexBuilder threads from the Phoenix index are also blocked 
waiting on the region read lock: 
{code}
Thread 17566: (state = BLOCKED)
 - sun.misc.Unsafe.park(boolean, long) @bci=0 (Compiled frame; information may 
be imprecise)
 - java.util.concurrent.locks.LockSupport.parkNanos(java.lang.Object, long) 
@bci=20, line=226 (Compiled frame)
 - 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(int, 
long) @bci=122, line=1033 (Compiled frame)
 - 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(int,
 long) @bci=25, line=1326 (Compiled frame)
 - java.util.concurrent.locks.ReentrantReadWriteLock$ReadLock.tryLock(long, 
java.util.concurrent.TimeUnit) @bci=10, line=873 (Compiled frame)
 - 
org.apache.hadoop.hbase.regionserver.HRegion.lock(java.util.concurrent.locks.Lock,
 int) @bci=27, line=7754 (Interpreted frame)
 - 
org.apache.hadoop.hbase.regionserver.HRegion.lock(java.util.concurrent.locks.Lock)
 @bci=3, line=7741 (Interpreted frame)
 - 
org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(org.apache.hadoop.hbase.regionserver.Region$Operation)
 @bci=211, line=7650 (Interpreted frame)
 - 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(org.apache.hadoop.hbase.client.Scan,
 java.util.List) @bci=4, line=2484 (Interpreted frame)
 - 

[jira] [Commented] (PHOENIX-2667) Race condition between IndexBuilder and Split for region lock

2016-02-08 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138214#comment-15138214
 ] 

Enis Soztutar commented on PHOENIX-2667:


I think the events are happening something like this: 
 (1) A multi() request for a bunch of writes arrives at the region. We acquire 
the region read lock, then process the mutation in the primary table. 

(2) We write the edits to the WAL. In the WAL path, we prepare the index 
mutations and send them to be committed to the index table. However, this 
prepare step also has to read from the primary table region:
{code}
at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:7755)
at org.apache.hadoop.hbase.regionserver.HRegion.lock(HRegion.java:7741)
at 
org.apache.hadoop.hbase.regionserver.HRegion.startRegionOperation(HRegion.java:7650)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2484)
at 
org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:2479)
at 
org.apache.phoenix.hbase.index.covered.data.LocalTable.getCurrentRowState(LocalTable.java:63)
at 
org.apache.phoenix.hbase.index.covered.LocalTableState.ensureLocalStateInitialized(LocalTableState.java:158)
at 
org.apache.phoenix.hbase.index.covered.LocalTableState.getIndexedColumnsTableState(LocalTableState.java:126)
at 
org.apache.phoenix.index.PhoenixIndexCodec.getIndexUpdates(PhoenixIndexCodec.java:162)
at 
org.apache.phoenix.index.PhoenixIndexCodec.getIndexDeletes(PhoenixIndexCodec.java:120)
at 
org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.addDeleteUpdatesToMap(CoveredColumnsIndexBuilder.java:403)
at 
org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.addCleanupForCurrentBatch(CoveredColumnsIndexBuilder.java:287)
at 
org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.addMutationsForBatch(CoveredColumnsIndexBuilder.java:239)
at 
org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.batchMutationAndAddUpdates(CoveredColumnsIndexBuilder.java:136)
at 
org.apache.phoenix.hbase.index.covered.CoveredColumnsIndexBuilder.getIndexUpdate(CoveredColumnsIndexBuilder.java:99)
at 
org.apache.phoenix.hbase.index.builder.IndexBuildManager$1.call(IndexBuildManager.java:133)
at 
org.apache.phoenix.hbase.index.builder.IndexBuildManager$1.call(IndexBuildManager.java:129)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
{code}

(3) Concurrently, we have decided to split the region, and now we are waiting 
to acquire the write lock. 
(4) The reads from the IndexBuilder to the primary region execute; however, 
they are blocked since they cannot acquire the region read lock due to the 
RWLock heuristics. Although ReentrantReadWriteLock is not fair by default, it 
implements this heuristic: 
{code}
static final class NonfairSync extends Sync {
    private static final long serialVersionUID = -8159625535654395037L;
    final boolean writerShouldBlock() {
        return false; // writers can always barge
    }
    final boolean readerShouldBlock() {
        /* As a heuristic to avoid indefinite writer starvation,
         * block if the thread that momentarily appears to be head
         * of queue, if one exists, is a waiting writer.  This is
         * only a probabilistic effect since a new reader will not
         * block if there is a waiting writer behind other enabled
         * readers that have not yet drained from the queue.
         */
        return apparentlyFirstQueuedIsExclusive();
    }
}
{code} 

The above heuristic means that if (1) and (2) happen and the write-lock request 
from (3) is enqueued before all the reads for the index updates have finished, 
we are in a kind of (soft) deadlock. 

Notice that the RPC handler thread holds the read lock and waits on the index 
builder thread pool. The index builder thread tries to acquire the read lock 
again, but ends up queued behind the write lock. The write lock in turn just 
waits for the handler thread to finish, resulting in this sophisticated 
deadlock. 
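
To make the interaction concrete, here is a self-contained JDK-only demo (no Phoenix code; the handler's sleep stands in for its wait on the index builder, which in the real scenario never completes):

{code}
// Demo: with a non-fair ReentrantReadWriteLock, a second reader parks behind
// a queued writer even though another thread already holds the read lock.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteLockBargeDemo {
    public static void main(String[] args) throws InterruptedException {
        final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(); // non-fair
        final CountDownLatch readHeld = new CountDownLatch(1);

        Thread handler = new Thread(() -> {   // (1)+(2): RPC handler holds read lock
            lock.readLock().lock();
            readHeld.countDown();
            try { Thread.sleep(2000); } catch (InterruptedException ignored) { }
            lock.readLock().unlock();         // in the real bug this never happens:
        });                                   // the handler is waiting on (4)

        Thread split = new Thread(() -> {     // (3): split queues for the write lock
            lock.writeLock().lock();
            lock.writeLock().unlock();
        });

        handler.start();
        readHeld.await();
        split.start();
        Thread.sleep(100);                    // let the writer reach the queue head

        Thread indexBuilder = new Thread(() -> { // (4): reader blocked by the
            lock.readLock().lock();              // readerShouldBlock() heuristic
            lock.readLock().unlock();
        });
        indexBuilder.start();
        indexBuilder.join(1000);
        System.out.println("index builder still blocked: " + indexBuilder.isAlive());
        handler.join(); split.join(); indexBuilder.join();
    }
}
{code}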


> Race condition between IndexBuilder and Split for region lock
> -
>
> Key: PHOENIX-2667
> URL: https://issues.apache.org/jira/browse/PHOENIX-2667
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Enis Soztutar
>
> In a production cluster, we have seen a condition where the split did not 
> finish for 30+ minutes. Also due to this, no request was being serviced in 
> this time frame 

[jira] [Commented] (PHOENIX-2130) Can't connect to HBase cluster

2016-02-08 Thread Sourabh Jain (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138440#comment-15138440
 ] 

Sourabh Jain commented on PHOENIX-2130:
---

You can set this property in hbase-site.xml and restart your region servers.
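
For reference, a sketch of the entry, assuming the hbase.table.sanity.checks property named in the error below is the one meant:

{code}
<!-- hbase-site.xml: illustrative entry; restart the region servers afterwards -->
<property>
  <name>hbase.table.sanity.checks</name>
  <value>false</value>
</property>
{code}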

> Can't connect to HBase cluster
> -
>
> Key: PHOENIX-2130
> URL: https://issues.apache.org/jira/browse/PHOENIX-2130
> Project: Phoenix
>  Issue Type: Bug
> Environment: ubuntu 14.0
>Reporter: BerylLin
>
> I have a Hadoop cluster which has 6 nodes; the Hadoop version is 2.2.0.
> A ZooKeeper cluster is installed on 
> datanode1,datanode2,datanode3,datanode4,datanode5.
> The HBase cluster is installed in the environment above; its version is 0.98.13.
> HBase can be started and used successfully.
> The Phoenix version is 4.3.0 (4.4.0 has also been tried).
> When I use "sqlline.py datanode1:2181", I got the error below:
> Setting property: [isolation, TRANSACTION_READ_COMMITTED]
> issuing: !connect jdbc:phoenix:datanode1:2181 none none 
> org.apache.phoenix.jdbc.PhoenixDriver
> Connecting to jdbc:phoenix:datanode1:2181
> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further 
> details.
> 15/07/18 20:55:39 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Error: org.apache.hadoop.hbase.DoNotRetryIOException: Class 
> org.apache.phoenix.coprocessor.MetaDataRegionObserver cannot be loaded Set 
> hbase.table.sanity.checks to false at conf or table descriptor if you want to 
> bypass sanity checks
>   at 
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1978)
>   at 
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1910)
>   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1849)
>   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2025)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42280)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2107)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745) (state=08000,code=101)
> org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: Class 
> org.apache.phoenix.coprocessor.MetaDataRegionObserver cannot be loaded Set 
> hbase.table.sanity.checks to false at conf or table descriptor if you want to 
> bypass sanity checks
>   at 
> org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1978)
>   at 
> org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1910)
>   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1849)
>   at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:2025)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:42280)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2107)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
>   at 
> org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:74)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
>   at 
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:108)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:870)
>   at 
> org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1194)
>   at 
> org.apache.phoenix.query.DelegateConnectionQueryServices.createTable(DelegateConnectionQueryServices.java:111)
>   at 
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:1682)
>   at 
> 

[jira] [Commented] (PHOENIX-2656) Shield Phoenix from Tephra repackaging

2016-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138282#comment-15138282
 ] 

Hudson commented on PHOENIX-2656:
-

SUCCESS: Integrated in Phoenix-master #1127 (See 
[https://builds.apache.org/job/Phoenix-master/1127/])
PHOENIX-2656 Shield Phoenix from Tephra repackaging (tdsilva: rev 
d5518f02d85e2cd92955377fc3934a266eaa1fa6)
* phoenix-core/src/it/java/org/apache/phoenix/tx/TransactionIT.java
* 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/DelegateRegionObserver.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/PhoenixTransactionalProcessor.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java


> Shield Phoenix from Tephra repackaging
> --
>
> Key: PHOENIX-2656
> URL: https://issues.apache.org/jira/browse/PHOENIX-2656
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2656.patch
>
>
> When TEPHRA-151 happens, the Tephra coprocessors will get repackaged from 
> co.cask.tephra.hbase11.coprocessor to org.apache.tephra. This would force us 
> to modify the metadata of existing users since we attach this coprocessor to 
> transactional Phoenix tables.
> At a minimum, we should create our own PhoenixTransactionProcessor which 
> delegates to Tephra's TransactionProcessor. If there are other touch points 
> like this (I'm not aware of others), we should do the same. I think we're ok 
> for the Transaction Manager since we have our own startup script we could 
> muck with (plus this is really a test-only script and deployment would be 
> different).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2665) index split while running group by query is returning duplicate results

2016-02-08 Thread Rajeshbabu Chintaguntla (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138435#comment-15138435
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-2665:
--

[~jamestaylor]
Here are the schemas; I uploaded some random data:
{noformat}
CREATE TABLE IF NOT EXISTS test (ID INTEGER PRIMARY KEY,unsig_id UNSIGNED_INT, 
big_id BIGINT,unsig_long_id UNSIGNED_LONG)
{noformat}
{noformat}
create index idx on test(unsig_id);
{noformat}

The explain plan is this: 
{noformat}
0: jdbc:phoenix:localhost> explain select unsig_id,id from test group by id, 
unsig_id;
+--------------------------------------------------------------------+
|                                PLAN                                |
+--------------------------------------------------------------------+
| CLIENT 2-CHUNK PARALLEL 1-WAY FULL SCAN OVER IDX                   |
| SERVER FILTER BY FIRST KEY ONLY                                    |
| SERVER AGGREGATE INTO ORDERED DISTINCT ROWS BY ["ID", "UNSIG_ID"]  |
| CLIENT MERGE SORT                                                  |
+--------------------------------------------------------------------+
4 rows selected (0.041 seconds)
{noformat}

The problem is this:
-
After creating iterators in BaseResultIterators, we fetch the first set of rows 
from the server. The HBase client scanner maintains the last fetched row so 
that if anything like a split happens, it sets the last fetched row as the 
start row for the scan and recreates the scanners. Since the scan ranges are 
no longer proper, we throw StaleRegionBoundaryException and then try to create 
two parallel scans for the boundaries [last_fetched_row, actual_scan_stop_row), 
but we need to create scanners for the boundaries [actual_scan_start_row, 
actual_scan_stop_row). The last_fetched_row may not be a proper row key in the 
index for aggregate queries.

The solution is to keep a copy of the scan so that we use the proper start and 
stop rows when preparing the parallel scans.
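
A small standalone sketch of that fix idea (names are illustrative, not the actual patch):

{code}
// Hypothetical sketch: save a copy of the original scan boundaries before
// iterating, so a retry after StaleRegionBoundaryException restarts from
// [actual_scan_start_row, actual_scan_stop_row) rather than from
// [last_fetched_row, actual_scan_stop_row).
import java.util.Arrays;

public class ScanBoundaryRetrySketch {
    static byte[] copy(byte[] row) { return Arrays.copyOf(row, row.length); }

    public static void main(String[] args) {
        byte[] startRow = "A".getBytes(), stopRow = "Z".getBytes();
        byte[] originalStart = copy(startRow);    // copies taken up front
        byte[] originalStop  = copy(stopRow);

        byte[] lastFetchedRow = "M42".getBytes(); // client-tracked position; not a
                                                  // valid index row key for an
                                                  // aggregate query
        // Buggy retry range, built from the client-tracked position:
        System.out.printf("buggy: [%s, %s)%n",
                new String(lastFetchedRow), new String(stopRow));
        // Fixed retry range, rebuilt from the saved copy of the scan:
        System.out.printf("fixed: [%s, %s)%n",
                new String(originalStart), new String(originalStop));
    }
}
{code}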

> index split while running group by query is returning duplicate results
> ---
>
> Key: PHOENIX-2665
> URL: https://issues.apache.org/jira/browse/PHOENIX-2665
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 4.7.0
>
>
> When there is an index split while running a group by query, duplicate 
> results are returned.
> Instead of returning 500,000 records it returns 729,500 records.
> {noformat}
> +-------+-----+
> | 4999  | 49  |
> +-------+-----+
> 500,000 rows selected (11.996 seconds)
> {noformat}
> {noformat}
> +-------+-----+
> | 4999  | 49  |
> +-------+-----+
> 729,500 rows selected (15.291 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2666) Performance regression: Aggregate query with filter on table with multiple column families

2016-02-08 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138321#comment-15138321
 ] 

ramkrishna.s.vasudevan commented on PHOENIX-2666:
-

[~mujtabachohan]
Seems to be a straightforward query. I went through the code once again. Were 
the guideposts collected for this last_column? Maybe the number of guideposts 
was not as high as it was for the default family? 

> Performance regression: Aggregate query with filter on table with multiple 
> column families
> --
>
> Key: PHOENIX-2666
> URL: https://issues.apache.org/jira/browse/PHOENIX-2666
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Mujtaba Chohan
>
> In the test, table contains total of 6 columns with one column per column 
> family.
> Running a query  {code}select count(*) from T where last_column < ?{code} is 
> 4x slower after commit for PHOENIX-1312 
> (https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=3fdaecdaaa2a2f07070df67f861252fd44e338c3)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2656) Shield Phoenix from Tephra repackaging

2016-02-08 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-2656.
-
Resolution: Fixed

> Shield Phoenix from Tephra repackaging
> --
>
> Key: PHOENIX-2656
> URL: https://issues.apache.org/jira/browse/PHOENIX-2656
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2656.patch
>
>
> When TEPHRA-151 happens, the Tephra coprocessors will get repackaged from 
> co.cask.tephra.hbase11.coprocessor to org.apache.tephra. This would force us 
> to modify the metadata of existing users since we attach this coprocessor to 
> transactional Phoenix tables.
> At a minimum, we should create our own PhoenixTransactionProcessor which 
> delegates to Tephra's TransactionProcessor. If there are other touch points 
> like this (I'm not aware of others), we should do the same. I think we're ok 
> for the Transaction Manager since we have our own startup script we could 
> muck with (plus this is really a test-only script and deployment would be 
> different).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2665) index split while running group by query is returning duplicate results

2016-02-08 Thread Rajeshbabu Chintaguntla (JIRA)
Rajeshbabu Chintaguntla created PHOENIX-2665:


 Summary: index split while running group by query is returning 
duplicate results
 Key: PHOENIX-2665
 URL: https://issues.apache.org/jira/browse/PHOENIX-2665
 Project: Phoenix
  Issue Type: Bug
Reporter: Rajeshbabu Chintaguntla
Assignee: Rajeshbabu Chintaguntla
Priority: Blocker
 Fix For: 4.7.0


When there is an index split while running a group by query, duplicate results 
are returned.
Instead of returning 500,000 records it returns 729,500 records.
{noformat}
+-------+-----+
| 4999  | 49  |
+-------+-----+
500,000 rows selected (11.996 seconds)
{noformat}
{noformat}
+-------+-----+
| 4999  | 49  |
+-------+-----+
729,500 rows selected (15.291 seconds)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2153) Fix a couple of Null pointer dereferences

2016-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15136971#comment-15136971
 ] 

Hudson commented on PHOENIX-2153:
-

FAILURE: Integrated in Phoenix-master #1123 (See 
[https://builds.apache.org/job/Phoenix-master/1123/])
PHOENIX-2153 Fix a couple of Null pointer dereferences(Alicia Ying Shu) 
(rajeshbabu: rev e4d569cd8bda5e7c828d3bae9b12165b0272b67a)
* phoenix-core/src/main/java/org/apache/phoenix/expression/InListExpression.java


> Fix a couple of Null pointer dereferences
> -
>
> Key: PHOENIX-2153
> URL: https://issues.apache.org/jira/browse/PHOENIX-2153
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Alicia Ying Shu
>Assignee: Alicia Ying Shu
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2153-v1.patch, PHOENIX-2153.patch
>
>
> New Defects reported by Coverity Scan for Apache Phoenix
> CID 98770: Null pointer dereferences (FORWARD_NULL)
> /phoenix-core/src/main/java/org/apache/phoenix/expression/InListExpression.java: 
> 90 in org.apache.phoenix.expression.InListExpression.create(java.util.List, 
> boolean, org.apache.hadoop.hbase.io.ImmutableBytesWritable, boolean)()
> CID 98771: Null pointer dereferences (FORWARD_NULL)
> /phoenix-pherf/src/main/java/org/apache/phoenix/pherf/util/PhoenixUtil.java: 112 
> in
> org.apache.phoenix.pherf.util.PhoenixUtil.executeStatementThrowException(java.lang.String,
>  java.sql.Connection)()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2663) Phoenix View is not Updating

2016-02-08 Thread Rajeshbabu Chintaguntla (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2663?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-2663.
--
Resolution: Invalid

If you create a view to map a table that already exists in HBase, changes made 
to the table after that won't be visible. To make the updates visible you need 
to create a table for the mapping and create the view with a select query on 
that table.

You can see more info in the "Mapping to an Existing HBase Table" section on 
phoenix.apache.org: 
http://phoenix.apache.org/language/index.html#create_view

For any other questions please use the user mailing list.
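
A minimal sketch of that approach (connection URL, table, and column names are illustrative):

{code}
// Illustrative only: map the existing HBase table as a Phoenix TABLE, then
// define the view via a SELECT over it, so rows written to HBase outside
// Phoenix remain visible through the view.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MappedViewSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE IF NOT EXISTS \"t1\" "
                + "(\"pk\" VARCHAR PRIMARY KEY, \"cf\".\"col1\" VARCHAR)");
            stmt.execute("CREATE VIEW IF NOT EXISTS v1 AS SELECT * FROM \"t1\"");
        }
    }
}
{code}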


> Phoenix View is not Updating
> 
>
> Key: PHOENIX-2663
> URL: https://issues.apache.org/jira/browse/PHOENIX-2663
> Project: Phoenix
>  Issue Type: Bug
> Environment: HDP
>Reporter: Ankit
>  Labels: features
>
> Hi All,
> I have an HBase table which I am accessing through a Phoenix view. 
> But if I ingest data into the HBase table through MR, it is not reflected in 
> the Phoenix view. 
> Please advise on a resolution.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2605) Enhance IndexToolIT to test transactional tables

2016-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137688#comment-15137688
 ] 

Hudson commented on PHOENIX-2605:
-

SUCCESS: Integrated in Phoenix-master #1124 (See 
[https://builds.apache.org/job/Phoenix-master/1124/])
PHOENIX-2605 Enhance IndexToolIT to test transactional tables (tdsilva: rev 
b0122a541325fd7e40e62e3602eb0ad748b94a4f)
* 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportMapper.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/ContextClassloaderIT.java
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/UserDefinedFunctionsIT.java
* phoenix-core/src/test/java/org/apache/phoenix/query/BaseTest.java
* phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/IndexTool.java
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexReplicationIT.java
* 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/DropIndexDuringUpsertIT.java
* 
phoenix-core/src/it/java/org/apache/phoenix/hbase/index/covered/example/EndToEndCoveredIndexingIT.java
* 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/PhoenixInputFormat.java
* 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/index/PhoenixIndexImportDirectMapper.java
* phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/MutableIndexToolIT.java
* 
phoenix-core/src/main/java/org/apache/phoenix/mapreduce/util/PhoenixConfigurationUtil.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/CsvBulkLoadToolIT.java
* phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
* 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java


> Enhance IndexToolIT to test transactional tables
> 
>
> Key: PHOENIX-2605
> URL: https://issues.apache.org/jira/browse/PHOENIX-2605
> Project: Phoenix
>  Issue Type: Test
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2605-final.patch, PHOENIX-2605-v2.patch, 
> PHOENIX-2605-v3.patch, PHOENIX-2605-v4.patch, PHOENIX-2605.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2605) Enhance IndexToolIT to test transactional tables

2016-02-08 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-2605.
-
Resolution: Fixed

> Enhance IndexToolIT to test transactional tables
> 
>
> Key: PHOENIX-2605
> URL: https://issues.apache.org/jira/browse/PHOENIX-2605
> Project: Phoenix
>  Issue Type: Test
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2605-final.patch, PHOENIX-2605-v2.patch, 
> PHOENIX-2605-v3.patch, PHOENIX-2605-v4.patch, PHOENIX-2605.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2655) In MetadataClient creatTableInternal if NEWER_TABLE_FOUND swallow NewerTableAlreadyExistsException if the ifNotExists flag is true

2016-02-08 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2655:

Attachment: PHOENIX-2655.patch

[~jamestaylor]

Can you please review?

Thanks,
Thomas

> In MetadataClient creatTableInternal if NEWER_TABLE_FOUND swallow 
> NewerTableAlreadyExistsException if the ifNotExists flag is true
> --
>
> Key: PHOENIX-2655
> URL: https://issues.apache.org/jira/browse/PHOENIX-2655
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2655.patch
>
>
> We already do this for TABLE_ALREADY_EXISTS
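
A toy sketch of the proposed behavior (simplified control flow mirroring the issue description, not the actual MetaDataClient code):

{code}
// Simplified: with IF NOT EXISTS, swallow NewerTableAlreadyExistsException
// the same way TABLE_ALREADY_EXISTS is already swallowed.
public class IfNotExistsSketch {
    static class NewerTableAlreadyExistsException extends RuntimeException { }

    static void createTable(boolean ifNotExists, boolean newerTableFound) {
        try {
            if (newerTableFound) throw new NewerTableAlreadyExistsException();
            // ... actually create the table ...
        } catch (NewerTableAlreadyExistsException e) {
            if (!ifNotExists) {
                throw e;  // surface the conflict only without IF NOT EXISTS
            }
            // IF NOT EXISTS: a newer table already exists, so treat as success
        }
    }

    public static void main(String[] args) {
        createTable(true, true);   // swallowed: no exception
        System.out.println("IF NOT EXISTS: swallowed");
        try {
            createTable(false, true);
        } catch (NewerTableAlreadyExistsException e) {
            System.out.println("without IF NOT EXISTS: thrown");
        }
    }
}
{code}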



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2605) Enhance IndexToolIT to test transactional tables

2016-02-08 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2605:

Attachment: PHOENIX-2605-final.patch

Attaching final patch.

> Enhance IndexToolIT to test transactional tables
> 
>
> Key: PHOENIX-2605
> URL: https://issues.apache.org/jira/browse/PHOENIX-2605
> Project: Phoenix
>  Issue Type: Test
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2605-final.patch, PHOENIX-2605-v2.patch, 
> PHOENIX-2605-v3.patch, PHOENIX-2605-v4.patch, PHOENIX-2605.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2653) Use data.tx.zookeeper.quorum property to initialize TransactionServiceClient falling back to HBase ZK quorum setting

2016-02-08 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2653:

Attachment: PHOENIX-2653-4.x-HBase-0.98.patch

[~jamestaylor]

Can you please review?

Thanks,
Thomas

> Use data.tx.zookeeper.quorum property to initialize TransactionServiceClient 
> falling back to HBase ZK quorum setting
> 
>
> Key: PHOENIX-2653
> URL: https://issues.apache.org/jira/browse/PHOENIX-2653
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2653-4.x-HBase-0.98.patch
>
>
> From an email discussion with [~poornachandra] [~gokulavasan]
> CDAP's transaction manager's discovery information in zookeeper uses a 
> namespace. The regular znode to discover tx manager is 
> /discoverable/transaction, but for CDAP's tx manager it is 
> /cdap/discoverable/transaction, and can change based on CDAP's root.namespace 
> value.
> Picking up the ZK connection string from the connection info is fine in most 
> cases. We'll just need a way for users to override that by setting 
> "data.tx.zookeeper.quorum" in a config file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2653) Use data.tx.zookeeper.quorum property to initialize TransactionServiceClient falling back to HBase ZK quorum setting

2016-02-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137621#comment-15137621
 ] 

James Taylor commented on PHOENIX-2653:
---

+1

> Use data.tx.zookeeper.quorum property to initialize TransactionServiceClient 
> falling back to HBase ZK quorum setting
> 
>
> Key: PHOENIX-2653
> URL: https://issues.apache.org/jira/browse/PHOENIX-2653
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2653-4.x-HBase-0.98.patch
>
>
> From an email discussion with [~poornachandra] [~gokulavasan]
> CDAP's transaction manager's discovery information in zookeeper uses a 
> namespace. The regular znode to discover tx manager is 
> /discoverable/transaction, but for CDAP's tx manager it is 
> /cdap/discoverable/transaction, and can change based on CDAP's root.namespace 
> value.
> Picking up the ZK connection string from the connection info is fine in most 
> cases. We'll just need a way for users to override that by setting 
> "data.tx.zookeeper.quorum" in a config file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2653) Use data.tx.zookeeper.quorum property to initialize TransactionServiceClient falling back to HBase ZK quorum setting

2016-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137615#comment-15137615
 ] 

Hadoop QA commented on PHOENIX-2653:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12786867/PHOENIX-2653-4.x-HBase-0.98.patch
  against 4.x-HBase-0.98 branch at commit 
b0122a541325fd7e40e62e3602eb0ad748b94a4f.
  ATTACHMENT ID: 12786867

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-PHOENIX-Build/248//console

This message is automatically generated.

> Use data.tx.zookeeper.quorum property to initialize TransactionServiceClient 
> falling back to HBase ZK quorum setting
> 
>
> Key: PHOENIX-2653
> URL: https://issues.apache.org/jira/browse/PHOENIX-2653
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2653-4.x-HBase-0.98.patch
>
>
> From an email discussion with [~poornachandra] [~gokulavasan]
> CDAP's transaction manager's discovery information in zookeeper uses a 
> namespace. The regular znode to discover tx manager is 
> /discoverable/transaction, but for CDAP's tx manager it is 
> /cdap/discoverable/transaction, and can change based on CDAP's root.namespace 
> value.
> Picking up the ZK connection string from the connection info is fine in most 
> cases. We'll just need a way for users to override that by setting 
> "data.tx.zookeeper.quorum" in a config file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2665) index split while running group by query is returning duplicate results

2016-02-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137566#comment-15137566
 ] 

James Taylor commented on PHOENIX-2665:
---

[~rajeshbabu] - can you post the schema, query, and explain plan too? I 
suspect it's when the GroupedAggregateRegionObserver.scanOrdered() code path is 
taken, since the region lock is not taken for the entire traversal of rows as 
it is in the scanUnordered code path.


> index split while running group by query is returning duplicate results
> ---
>
> Key: PHOENIX-2665
> URL: https://issues.apache.org/jira/browse/PHOENIX-2665
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 4.7.0
>
>
> When there is an index split while running a group by query, duplicate 
> results are returned.
> Instead of returning 500,000 records it returns 729,500 records.
> {noformat}
> +-------+-----+
> | 4999  | 49  |
> +-------+-----+
> 500,000 rows selected (11.996 seconds)
> {noformat}
> {noformat}
> +-------+-----+
> | 4999  | 49  |
> +-------+-----+
> 729,500 rows selected (15.291 seconds)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2655) In MetadataClient creatTableInternal if NEWER_TABLE_FOUND swallow NewerTableAlreadyExistsException if the ifNotExists flag is true

2016-02-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137837#comment-15137837
 ] 

James Taylor commented on PHOENIX-2655:
---

+1

> In MetadataClient creatTableInternal if NEWER_TABLE_FOUND swallow 
> NewerTableAlreadyExistsException if the ifNotExists flag is true
> --
>
> Key: PHOENIX-2655
> URL: https://issues.apache.org/jira/browse/PHOENIX-2655
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2655.patch
>
>
> We already do this for TABLE_ALREADY_EXISTS



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2653) Use data.tx.zookeeper.quorum property to initialize TransactionServiceClient falling back to HBase ZK quorum setting

2016-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137891#comment-15137891
 ] 

Hudson commented on PHOENIX-2653:
-

SUCCESS: Integrated in Phoenix-master #1125 (See 
[https://builds.apache.org/job/Phoenix-master/1125/])
PHOENIX-2653 Use data.tx.zookeeper.quorum property to initialize (tdsilva: rev 
39a982db98f52b33decb30ec51ca4b92a230abd2)
* 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java
PHOENIX-2653 Use data.tx.zookeeper.quorum property to initialize (tdsilva: rev 
e5e9144f4e98803902174858051be58e9edcca11)
* 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java


> Use data.tx.zookeeper.quorum property to initialize TransactionServiceClient 
> falling back to HBase ZK quorum setting
> 
>
> Key: PHOENIX-2653
> URL: https://issues.apache.org/jira/browse/PHOENIX-2653
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2653-4.x-HBase-0.98.patch
>
>
> From an email discussion with [~poornachandra] [~gokulavasan]
> CDAP's transaction manager's discovery information in zookeeper uses a 
> namespace. The regular znode to discover the tx manager is 
> /discoverable/transaction, but for CDAP's tx manager it is 
> /cdap/discoverable/transaction, and can change based on CDAP's root.namespace 
> value.
> Picking up the zk connection string from connection info is fine in most 
> cases. We'll just need a way for users to override that by setting 
> "data.tx.zookeeper.quorum" in a config file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PHOENIX-2656) Shield Phoenix from Tephra repackaging

2016-02-08 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva updated PHOENIX-2656:

Attachment: PHOENIX-2656.patch

[~jamestaylor]

Can you please review?

Thanks,
Thomas

> Shield Phoenix from Tephra repackaging
> --
>
> Key: PHOENIX-2656
> URL: https://issues.apache.org/jira/browse/PHOENIX-2656
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2656.patch
>
>
> When TEPHRA-151 happens, the Tephra coprocessors will get repackaged from 
> co.cask.tephra.hbase11.coprocessor to org.apache.tephra. This would force us 
> to modify the metadata of existing users since we attach this coprocessor to 
> transactional Phoenix tables.
> At a minimum, we should create our own PhoenixTransactionProcessor which 
> delegates to Tephra's TransactionProcessor. If there are other touch points 
> like this (I'm not aware of others), we should do the same. I think we're ok 
> for the Transaction Manager since we have our own startup script we could 
> muck with (plus this is really a test-only script and deployment would be 
> different).
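
A sketch of the delegation idea (assuming Tephra's current package name from 
the description above; this is not the attached patch): Phoenix attaches a 
class name it owns to table metadata, and that class merely extends Tephra's 
coprocessor, so a Tephra repackage means changing one import rather than 
rewriting existing users' metadata.
{code}
import co.cask.tephra.hbase11.coprocessor.TransactionProcessor;

// The stable, Phoenix-owned class name is the whole point: table metadata
// references this class, and only its import/extends clause has to change
// if Tephra moves to org.apache.tephra.
public class PhoenixTransactionProcessor extends TransactionProcessor {
}
{code}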



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2655) In MetadataClient creatTableInternal if NEWER_TABLE_FOUND swallow NewerTableAlreadyExistsException if the ifNotExists flag is true

2016-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138096#comment-15138096
 ] 

Hudson commented on PHOENIX-2655:
-

SUCCESS: Integrated in Phoenix-master #1126 (See 
[https://builds.apache.org/job/Phoenix-master/1126/])
PHOENIX-2655 In MetadataClient creatTableInternal if NEWER_TABLE_FOUND 
(tdsilva: rev 1c3a86d3139804c5c2e8a51a8e02bd3ecbd59515)
* phoenix-core/src/main/java/org/apache/phoenix/schema/MetaDataClient.java


> In MetadataClient creatTableInternal if NEWER_TABLE_FOUND swallow 
> NewerTableAlreadyExistsException if the ifNotExists flag is true
> --
>
> Key: PHOENIX-2655
> URL: https://issues.apache.org/jira/browse/PHOENIX-2655
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2655.patch
>
>
> We already do this for TABLE_ALREADY_EXISTS



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2655) In MetadataClient creatTableInternal if NEWER_TABLE_FOUND swallow NewerTableAlreadyExistsException if the ifNotExists flag is true

2016-02-08 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-2655.
-
Resolution: Fixed

> In MetadataClient creatTableInternal if NEWER_TABLE_FOUND swallow 
> NewerTableAlreadyExistsException if the ifNotExists flag is true
> --
>
> Key: PHOENIX-2655
> URL: https://issues.apache.org/jira/browse/PHOENIX-2655
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2655.patch
>
>
> We already do this for TABLE_ALREADY_EXISTS



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PHOENIX-2654) Add test for connection rollback with external tx context set.

2016-02-08 Thread Thomas D'Silva (JIRA)

 [ 
https://issues.apache.org/jira/browse/PHOENIX-2654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva resolved PHOENIX-2654.
-
Resolution: Not A Problem

> Add test for connection rollback with external tx context set.
> --
>
> Key: PHOENIX-2654
> URL: https://issues.apache.org/jira/browse/PHOENIX-2654
> Project: Phoenix
>  Issue Type: Test
>Reporter: Thomas D'Silva
>Assignee: Thomas D'Silva
> Fix For: 4.7.0
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PHOENIX-2666) Performance regression: Aggregate query with filter on table with multiple column families

2016-02-08 Thread Mujtaba Chohan (JIRA)
Mujtaba Chohan created PHOENIX-2666:
---

 Summary: Performance regression: Aggregate query with filter on 
table with multiple column families
 Key: PHOENIX-2666
 URL: https://issues.apache.org/jira/browse/PHOENIX-2666
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.7.0
Reporter: Mujtaba Chohan


In the test, the table contains a total of 6 columns, with one column per 
column family.

Running the query {code}select count(*) from T where last_column < ?{code} is 
4x slower after the commit for PHOENIX-1312 
(https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=3fdaecdaaa2a2f07070df67f861252fd44e338c3)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2666) Performance regression: Aggregate query with filter on table with multiple column families

2016-02-08 Thread Mujtaba Chohan (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15137972#comment-15137972
 ] 

Mujtaba Chohan commented on PHOENIX-2666:
-

[~ram_krish]

> Performance regression: Aggregate query with filter on table with multiple 
> column families
> --
>
> Key: PHOENIX-2666
> URL: https://issues.apache.org/jira/browse/PHOENIX-2666
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Mujtaba Chohan
>
> In the test, the table contains a total of 6 columns, with one column per 
> column family.
> Running the query {code}select count(*) from T where last_column < ?{code} is 
> 4x slower after the commit for PHOENIX-1312 
> (https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=3fdaecdaaa2a2f07070df67f861252fd44e338c3)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2631) Exception when parsing boundary timestamp values

2016-02-08 Thread Sergey Soldatov (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138016#comment-15138016
 ] 

Sergey Soldatov commented on PHOENIX-2631:
--

The problem is that we treat the timestamp as just a single 12-byte array, and 
SortOrder.invert is applied without taking into account that the int (nanos) 
part of Timestamp is bounded to 0-999999999. From my PoV the right way to fix 
it is to write a separate handler for inverting PTimestamp which inverts the 
int part as 999999999 - nanos. But that leads to writing a separate 
coerceBytes and handling Timestamp in KeyRange (when it sets the nextKey). 
Possibly there are other places that can be affected. 
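
A small runnable illustration of why the byte-wise inversion breaks (the 
class below is illustrative only; a Phoenix TIMESTAMP is 8 bytes of millis 
followed by a 4-byte nanos int):
{code}
class NanosInvert {
    static final int MAX_NANOS = 999999999; // upper bound of the nanos part

    // Domain-aware inversion for DESC sort order: stays within [0, MAX_NANOS].
    static int invertNanos(int nanos) {
        return MAX_NANOS - nanos;
    }

    // What a blind bit flip (SortOrder.invert over all 12 bytes) does to the
    // nanos int: 0 becomes -1, which UnsignedIntCodec.decodeInt rejects.
    static int bytewiseInvert(int nanos) {
        return ~nanos;
    }

    public static void main(String[] args) {
        System.out.println(bytewiseInvert(0)); // -1: the illegal value seen
        System.out.println(invertNanos(0));    // 999999999: still decodable
    }
}
{code}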

> Exception when parsing boundary timestamp values
> 
>
> Key: PHOENIX-2631
> URL: https://issues.apache.org/jira/browse/PHOENIX-2631
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.6.0, 4.7.0
>Reporter: Nick Dimiduk
> Attachments: 2631-workaround.patch
>
>
> I get a stack trace when querying or explaining a query that contains a 
> timestamp value on the boundary of the day.
> {noformat}
> > CREATE TABLE FOO(
>   a VARCHAR NOT NULL,
>   b TIMESTAMP NOT NULL,
>   c VARCHAR,
>   CONSTRAINT pk PRIMARY KEY (a, b DESC ROW_TIMESTAMP, c)
> ) IMMUTABLE_ROWS=true,
>   SALT_BUCKETS=20
> ;
> No rows affected (1.532 seconds)
> > explain select * from foo where a = 'a' and b >= timestamp '2016-01-28 
> > 00:00:00' and b < timestamp '2016-01-29 00:00:00';
> Error: ERROR 201 (22000): Illegal data. (state=22000,code=201)
> java.sql.SQLException: ERROR 201 (22000): Illegal data.
>   at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:419)
>   at 
> org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:145)
>   at 
> org.apache.phoenix.schema.types.PDataType.newIllegalDataException(PDataType.java:286)
>   at 
> org.apache.phoenix.schema.types.PUnsignedInt$UnsignedIntCodec.decodeInt(PUnsignedInt.java:165)
>   at 
> org.apache.phoenix.schema.types.PTimestamp.toObject(PTimestamp.java:108)
>   at 
> org.apache.phoenix.schema.types.PTimestamp.toObject(PTimestamp.java:32)
>   at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:968)
>   at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:972)
>   at 
> org.apache.phoenix.schema.types.PDataType.toObject(PDataType.java:1001)
>   at 
> org.apache.phoenix.schema.types.PDataType.toStringLiteral(PDataType.java:1074)
>   at 
> org.apache.phoenix.schema.types.PDataType.toStringLiteral(PDataType.java:1070)
>   at 
> org.apache.phoenix.iterate.ExplainTable.appendPKColumnValue(ExplainTable.java:194)
>   at 
> org.apache.phoenix.iterate.ExplainTable.appendScanRow(ExplainTable.java:270)
>   at 
> org.apache.phoenix.iterate.ExplainTable.appendKeyRanges(ExplainTable.java:282)
>   at 
> org.apache.phoenix.iterate.ExplainTable.explain(ExplainTable.java:125)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.explain(BaseResultIterators.java:830)
>   at 
> org.apache.phoenix.iterate.RoundRobinResultIterator.explain(RoundRobinResultIterator.java:153)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.getPlanSteps(BaseQueryPlan.java:468)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:322)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.iterator(BaseQueryPlan.java:193)
>   at 
> org.apache.phoenix.execute.BaseQueryPlan.getExplainPlan(BaseQueryPlan.java:463)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:459)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableExplainStatement.compilePlan(PhoenixStatement.java:438)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:266)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:261)
>   at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:260)
>   at 
> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1349)
>   at sqlline.Commands.execute(Commands.java:822)
>   at sqlline.Commands.sql(Commands.java:732)
>   at sqlline.SqlLine.dispatch(SqlLine.java:808)
>   at sqlline.SqlLine.begin(SqlLine.java:681)
>   at sqlline.SqlLine.start(SqlLine.java:398)
>   at sqlline.SqlLine.main(SqlLine.java:292)
> {noformat}
> In this case, down in {{PUnsignedInt$UnsignedIntCodec#decodeInt}}, I see the 
> parsed {{v}} is {{-1}}, and the if clause throws.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PHOENIX-2656) Shield Phoenix from Tephra repackaging

2016-02-08 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-2656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15138081#comment-15138081
 ] 

James Taylor commented on PHOENIX-2656:
---

+1


> Shield Phoenix from Tephra repackaging
> --
>
> Key: PHOENIX-2656
> URL: https://issues.apache.org/jira/browse/PHOENIX-2656
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Thomas D'Silva
> Fix For: 4.7.0
>
> Attachments: PHOENIX-2656.patch
>
>
> When TEPHRA-151 happens, the Tephra coprocessors will get repackaged from 
> co.cask.tephra.hbase11.coprocessor to org.apache.tephra. This would force us 
> to modify the metadata of existing users since we attach this coprocessor to 
> transactional Phoenix tables.
> At a minimum, we should create our own PhoenixTransactionProcessor which 
> delegates to Tephra's TransactionProcessor. If there are other touch points 
> like this (I'm not aware of others), we should do the same. I think we're ok 
> for the Transaction Manager since we have our own startup script we could 
> muck with (plus this is really a test-only script and deployment would be 
> different).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)