[jira] [Assigned] (PHOENIX-6291) Change Presto connector link to Trino link

2020-12-31 Thread Vincent Poon (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reassigned PHOENIX-6291:
-

Assignee: Vincent Poon

> Change Presto connector link to Trino link
> --
>
> Key: PHOENIX-6291
> URL: https://issues.apache.org/jira/browse/PHOENIX-6291
> Project: Phoenix
>  Issue Type: Task
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Trivial
> Attachments: PHOENIX-6291-trino.diff
>
>
> Presto SQL has been 
> [renamed|https://trino.io/blog/2020/12/27/announcing-trino.html] to Trino.
> This updates the Presto Phoenix connector link on the website to point to 
> Trino.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6291) Change Presto connector link to Trino link

2020-12-31 Thread Vincent Poon (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-6291:
--
Attachment: PHOENIX-6291-trino.diff

> Change Presto connector link to Trino link
> --
>
> Key: PHOENIX-6291
> URL: https://issues.apache.org/jira/browse/PHOENIX-6291
> Project: Phoenix
>  Issue Type: Task
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Trivial
> Attachments: PHOENIX-6291-trino.diff
>
>
> Presto SQL has been 
> [renamed|https://trino.io/blog/2020/12/27/announcing-trino.html] to Trino.
> This updates the Presto Phoenix connector link on the website to point to 
> Trino.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-6291) Change Presto connector link to Trino link

2020-12-31 Thread Vincent Poon (Jira)
Vincent Poon created PHOENIX-6291:
-

 Summary: Change Presto connector link to Trino link
 Key: PHOENIX-6291
 URL: https://issues.apache.org/jira/browse/PHOENIX-6291
 Project: Phoenix
  Issue Type: Task
Reporter: Vincent Poon


Presto SQL has been 
[renamed|https://trino.io/blog/2020/12/27/announcing-trino.html] to Trino.
This updates the Presto Phoenix connector link on the website to point to Trino.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5819) Remove unused presto-phoenix-shaded connector module

2020-04-03 Thread Vincent Poon (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5819:
--
Attachment: PHOENIX-5819.master.v1.patch

> Remove unused presto-phoenix-shaded connector module
> 
>
> Key: PHOENIX-5819
> URL: https://issues.apache.org/jira/browse/PHOENIX-5819
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5819.master.v1.patch
>
>
> No longer needed, as Presto is using the embedded classifier in 
> phoenix-client instead



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5819) Remove unused presto-phoenix-shaded connector module

2020-04-03 Thread Vincent Poon (Jira)
Vincent Poon created PHOENIX-5819:
-

 Summary: Remove unused presto-phoenix-shaded connector module
 Key: PHOENIX-5819
 URL: https://issues.apache.org/jira/browse/PHOENIX-5819
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.16.0
Reporter: Vincent Poon
Assignee: Vincent Poon


No longer needed, as Presto is using the embedded classifier in phoenix-client 
instead



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5620) Phoenix-client embedded does not properly shade slf4j classes

2019-12-23 Thread Vincent Poon (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reassigned PHOENIX-5620:
-

Assignee: Vincent Poon

> Phoenix-client embedded does not properly shade slf4j classes
> ---
>
> Key: PHOENIX-5620
> URL: https://issues.apache.org/jira/browse/PHOENIX-5620
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Grzegorz Kokosinski
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5620.4.x-HBase-1.4.v1.patch
>
>
> Phoenix-client should either shade slf4j or require the slf4j dependency.
> Currently, projects that use slf4j internally conflict with the phoenix-client
> embedded jar:
> $ unzip -l 
> ~/.m2/repository/org/apache/phoenix/phoenix-client/4.14.1-HBase-1.4/phoenix-client-4.14.1-HBase-1.4-embedded.jar
>  | grep slf4j
>  0 04-12-2019 13:18 org/slf4j/
>  0 04-12-2019 13:18 org/slf4j/helpers/
>  3366 04-12-2019 13:18 org/slf4j/helpers/BasicMarker.class
>  1427 04-12-2019 13:18 org/slf4j/helpers/BasicMarkerFactory.class
>  2660 04-12-2019 13:18 org/slf4j/helpers/BasicMDCAdapter.class
>  1521 04-12-2019 13:18 org/slf4j/helpers/FormattingTuple.class
>  4704 04-12-2019 13:18 org/slf4j/helpers/MarkerIgnoringBase.class
>  6607 04-12-2019 13:18 org/slf4j/helpers/MessageFormatter.class
>  823 04-12-2019 13:18 org/slf4j/helpers/NamedLoggerBase.class
>  3267 04-12-2019 13:18 org/slf4j/helpers/NOPLogger.class
>  584 04-12-2019 13:18 org/slf4j/helpers/NOPLoggerFactory.class
>  1005 04-12-2019 13:18 org/slf4j/helpers/NOPMDCAdapter.class
>  1047 04-12-2019 13:18 org/slf4j/helpers/SubstituteLoggerFactory.class
>  931 04-12-2019 13:18 org/slf4j/helpers/Util.class
>  180 04-12-2019 13:18 org/slf4j/ILoggerFactory.class
>  272 04-12-2019 13:18 org/slf4j/IMarkerFactory.class
>  1375 04-12-2019 13:18 org/slf4j/Logger.class
>  7940 04-12-2019 13:18 org/slf4j/LoggerFactory.class
>  601 04-12-2019 13:18 org/slf4j/Marker.class
>  1325 04-12-2019 13:18 org/slf4j/MarkerFactory.class
>  2807 04-12-2019 13:18 org/slf4j/MDC.class
>  0 04-12-2019 13:18 org/slf4j/spi/
>  455 04-12-2019 13:18 org/slf4j/spi/LocationAwareLogger.class
>  249 04-12-2019 13:18 org/slf4j/spi/LoggerFactoryBinder.class
>  249 04-12-2019 13:18 org/slf4j/spi/MarkerFactoryBinder.class
>  384 04-12-2019 13:18 org/slf4j/spi/MDCAdapter.class
>  0 04-12-2019 13:18 META-INF/maven/org.slf4j/
>  0 04-12-2019 13:18 META-INF/maven/org.slf4j/slf4j-api/
>  2689 04-12-2019 13:18 META-INF/maven/org.slf4j/slf4j-api/pom.xml
>  108 04-12-2019 13:18 META-INF/maven/org.slf4j/slf4j-api/pom.properties
> Maven complains:
> [WARNING] Found duplicate and different classes in 
> [org.apache.phoenix:phoenix-client:4.14.1-HBase-1.4:jar:embedded, 
> org.slf4j:slf4j-api:1.7.28]:
> [WARNING] org.slf4j.ILoggerFactory
> [WARNING] org.slf4j.IMarkerFactory
> [WARNING] org.slf4j.Logger
> [WARNING] org.slf4j.LoggerFactory
> [WARNING] org.slf4j.MDC
> [WARNING] org.slf4j.Marker
> [WARNING] org.slf4j.MarkerFactory
> [WARNING] org.slf4j.helpers.BasicMDCAdapter
> [WARNING] org.slf4j.helpers.BasicMarker
> [WARNING] org.slf4j.helpers.BasicMarkerFactory
> [WARNING] org.slf4j.helpers.FormattingTuple
> [WARNING] org.slf4j.helpers.MarkerIgnoringBase
> [WARNING] org.slf4j.helpers.MessageFormatter
> [WARNING] org.slf4j.helpers.NOPLogger
> [WARNING] org.slf4j.helpers.NOPLoggerFactory
> [WARNING] org.slf4j.helpers.NOPMDCAdapter
> [WARNING] org.slf4j.helpers.NamedLoggerBase
> [WARNING] org.slf4j.helpers.SubstituteLoggerFactory
> [WARNING] org.slf4j.helpers.Util
> [WARNING] org.slf4j.spi.LocationAwareLogger
> [WARNING] org.slf4j.spi.LoggerFactoryBinder
> [WARNING] org.slf4j.spi.MDCAdapter
> [WARNING] org.slf4j.spi.MarkerFactoryBinder
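
A small diagnostic sketch (not part of the report): it prints which jar on the
classpath actually supplies org.slf4j.LoggerFactory, which helps confirm whether
the unshaded copy inside phoenix-client-*-embedded.jar wins over slf4j-api. The
class name is hypothetical.

{code:java}
import org.slf4j.LoggerFactory;

public class Slf4jSourceCheck {
    public static void main(String[] args) {
        // Prints the code source (jar) the JVM loaded org.slf4j.LoggerFactory from,
        // e.g. .../phoenix-client-4.14.1-HBase-1.4-embedded.jar vs .../slf4j-api-1.7.28.jar
        System.out.println(LoggerFactory.class
                .getProtectionDomain()
                .getCodeSource()
                .getLocation());
    }
}
{code}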



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5528) Race condition in index verification causes multiple index rows to be returned for single data table row

2019-10-15 Thread Vincent Poon (Jira)
Vincent Poon created PHOENIX-5528:
-

 Summary: Race condition in index verification causes multiple 
index rows to be returned for single data table row
 Key: PHOENIX-5528
 URL: https://issues.apache.org/jira/browse/PHOENIX-5528
 Project: Phoenix
  Issue Type: Bug
Reporter: Vincent Poon


Warning: this is an artificially generated scenario that likely has a very low 
probability of happening in practice, but it is a race condition nevertheless.  
Unfortunately I don't have a test case, but I was able to produce this by 
debugging a local regionserver and adding breakpoints at the right places to 
force the ordering described here.

The core problem is that when we update the data table, we first produce two 
unverified index rows.  When we scan both of these index rows and attempt to 
verify them by rebuilding the data table row, we cannot guarantee that both 
verifications happen before the data table update, or that both happen after 
it.

I use multiple index regions here to demonstrate, but I believe it could happen 
within a single region as well.

Steps (a JDBC sketch of the setup follows the list):
1) Create a test table with "pk" and "indexed_val" columns, and a global index 
on "indexed_val".
2) upsert into test values ('test_pk', 'test_val');
3) Split the index table on 'test_pk':
   hbase shell: split 'test_index', 'test_pk'.
   This creates two regions, call them regionA and regionB (which holds the 
existing index row).
4) Start an update: upsert into test values ('test_pk', 'new_val');
   The first thing the indexing code does is create two unverified index rows: 
one is a new version of the existing index row, and the other is for the new 
indexed value.
   We pause the thread after this is done, before the row locks and the data 
table write happen.
5) select indexed_val from test;
   This scans both index regions in parallel.  Each scan picks up an unverified 
row in its region.  We pause in GlobalIndexChecker.
   Let the regionB scan proceed.  It will attempt to rebuild the data table 
row.  The data table still has 'test_val' as the indexed value.  The rebuild 
succeeds.
   The scan on regionA is still paused.
6) The original update proceeds to update the data table indexed value to 
'new_val'.
7) The scan on regionA proceeds and attempts to rebuild the data table row.  
The rebuild succeeds with 'new_val' as the indexed value.
8) Both 'test_val' and 'new_val' are returned to the client, because both 
rebuilds succeeded.
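
For concreteness, below is a minimal JDBC sketch of the setup and the conflicting 
statements from the steps above. The class name, JDBC URL, and commit points are 
assumptions for illustration only; the pauses, breakpoints, and the hbase shell 
split that actually produce the race are noted only in comments.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IndexRaceSetup {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Step 1: data table plus a global index on indexed_val
            stmt.execute("CREATE TABLE test (pk VARCHAR PRIMARY KEY, indexed_val VARCHAR)");
            stmt.execute("CREATE INDEX test_index ON test (indexed_val)");
            // Step 2: initial row
            stmt.execute("UPSERT INTO test VALUES ('test_pk', 'test_val')");
            conn.commit();
            // Step 3 happens outside JDBC: hbase shell> split 'test_index', 'test_pk'
            // Step 4: the concurrent update (paused mid-write in the scenario)
            stmt.execute("UPSERT INTO test VALUES ('test_pk', 'new_val')");
            conn.commit();
            // Step 5: the scan that should return exactly one row, but returns two
            // when the rebuild ordering described above occurs
            try (ResultSet rs = stmt.executeQuery("SELECT indexed_val FROM test")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
{code}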



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (PHOENIX-5515) Able to write indexed value to data table without writing to index table

2019-10-14 Thread Vincent Poon (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reopened PHOENIX-5515:
---

> Able to write indexed value to data table without writing to index table
> 
>
> Key: PHOENIX-5515
> URL: https://issues.apache.org/jira/browse/PHOENIX-5515
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.3
>Reporter: Vincent Poon
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5515.master.001.patch, 
> PHOENIX-5515.master.002.patch
>
>  Time Spent: 1h 50m
>  Remaining Estimate: 0h
>
> Using the 4.14.3 client, it seems the IndexFailurePolicy is still kicking in, 
> which disables the index on write failure.  This means that while 
> the index is in 'disabled' state, writes to the data table can happen without 
> any writes to the index table.  While in theory this might be ok since the 
> rebuilder should eventually kick in and rebuild from the disable_timestamp, 
> this breaks the new indexing design invariant that there should be no data 
> table rows without a corresponding index row (potentially unverified), so 
> this could potentially cause some unexpected behavior.
> Steps to repro:
> 1) Create data table
> 2) Create index table
> 3) "close_region" on index region from hbase shell
> 4) Upsert to data table
> Eventually after some number of retries, the index will get disabled, which 
> means any other client can write to the data table without writing to the 
> index table.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5515) Able to write indexed value to data table without writing to index table

2019-10-10 Thread Vincent Poon (Jira)
Vincent Poon created PHOENIX-5515:
-

 Summary: Able to write indexed value to data table without writing 
to index table
 Key: PHOENIX-5515
 URL: https://issues.apache.org/jira/browse/PHOENIX-5515
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.3
Reporter: Vincent Poon


Using the 4.14.3 client, it seems the IndexFailurePolicy is still kicking in, 
which disables the index on write failure.  This means that while the index 
is in 'disabled' state, writes to the data table can happen without any writes 
to the index table.  While in theory this might be ok since the rebuilder 
should eventually kick in and rebuild from the disable_timestamp, this breaks 
the new indexing design invariant that there should be no data table rows 
without a corresponding index row (potentially unverified), so this could 
potentially cause some unexpected behavior.

Steps to repro:
1) Create data table
2) Create index table
3) "close_region" on index region from hbase shell
4) Upsert to data table
Eventually after some number of retries, the index will get disabled, which 
means any other client can write to the data table without writing to the index 
table.
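
As a rough way to observe the 'disabled' state described above, here is a hedged 
JDBC sketch that lists index states from SYSTEM.CATALOG. The column names, class 
name, and JDBC URL are assumptions for illustration, not taken from this report.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IndexStateCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT TABLE_NAME, DATA_TABLE_NAME, INDEX_STATE "
                     + "FROM SYSTEM.CATALOG WHERE INDEX_STATE IS NOT NULL")) {
            while (rs.next()) {
                // INDEX_STATE is a single-character state code; a disabled index
                // shows a non-active code here while data table writes continue.
                System.out.println(rs.getString(1) + " on " + rs.getString(2)
                        + " state=" + rs.getString(3));
            }
        }
    }
}
{code}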



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5430) Update Generate a patch section on contributing page

2019-09-16 Thread Vincent Poon (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reassigned PHOENIX-5430:
-

Assignee: Christine Feng

> Update Generate a patch section on contributing page
> 
>
> Key: PHOENIX-5430
> URL: https://issues.apache.org/jira/browse/PHOENIX-5430
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Xinyi Yan
>Assignee: Christine Feng
>Priority: Trivial
> Attachments: PHOENIX-5430.patch
>
>
> Our Contributing page is out of date, and I saw new contributors try to work 
> on the 4.x-HBase-1.2 branch. For this reason, we need to remove `4.x-HBase-1.2` 
> and add `4.x-HBase-1.5` info in the "Generate a patch" section.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Created] (PHOENIX-5456) IndexScrutinyTool slow for indexes on multitenant tables

2019-08-27 Thread Vincent Poon (Jira)
Vincent Poon created PHOENIX-5456:
-

 Summary: IndexScrutinyTool slow for indexes on multitenant tables
 Key: PHOENIX-5456
 URL: https://issues.apache.org/jira/browse/PHOENIX-5456
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0, 5.1.0
Reporter: Vincent Poon


The IndexScrutinyTool does full scans of index tables on multitenant tables.
This is due to a change in PHOENIX-5089, where we added code to 
IndexColumnNames to skip a portion of the PK if the table is multitenant:

https://github.com/apache/phoenix/commit/cc9754fff840f38d13c3adb0c963296959fff3fa#diff-0705645faa779229d00792c032ed377fR64

This causes scrutinies for global, non-view indexes on multitenant tables to 
skip the tenantId when generating the pk column list, thereby causing a full 
scan.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


[jira] [Resolved] (PHOENIX-5314) Add Presto connector link to website

2019-06-03 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-5314.
---
Resolution: Fixed

> Add Presto connector link to website
> 
>
> Key: PHOENIX-5314
> URL: https://issues.apache.org/jira/browse/PHOENIX-5314
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 5.1.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Minor
> Attachments: publish.diff, website_presto_connector.patch
>
>
> Add a link under Addons



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5314) Add Presto connector link to website

2019-05-31 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5314:
--
Attachment: website_presto_connector.patch

> Add Presto connector link to website
> 
>
> Key: PHOENIX-5314
> URL: https://issues.apache.org/jira/browse/PHOENIX-5314
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 5.1.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Minor
> Attachments: publish.diff, website_presto_connector.patch
>
>
> Add a link under Addons



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5314) Add Presto connector link to website

2019-05-31 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5314:
--
Attachment: publish.diff

> Add Presto connector link to website
> 
>
> Key: PHOENIX-5314
> URL: https://issues.apache.org/jira/browse/PHOENIX-5314
> Project: Phoenix
>  Issue Type: Task
>Affects Versions: 5.1.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Minor
> Attachments: publish.diff, website_presto_connector.patch
>
>
> Add a link under Addons



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5314) Add Presto connector link to website

2019-05-31 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5314:
-

 Summary: Add Presto connector link to website
 Key: PHOENIX-5314
 URL: https://issues.apache.org/jira/browse/PHOENIX-5314
 Project: Phoenix
  Issue Type: Task
Affects Versions: 5.1.0
Reporter: Vincent Poon
Assignee: Vincent Poon
 Attachments: publish.diff, website_presto_connector.patch

Add a link under Addons



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5312) Publish official Phoenix docker image

2019-05-31 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5312:
-

 Summary: Publish official Phoenix docker image
 Key: PHOENIX-5312
 URL: https://issues.apache.org/jira/browse/PHOENIX-5312
 Project: Phoenix
  Issue Type: Wish
Affects Versions: 5.1.0
Reporter: Vincent Poon


Provide a canonical image to make it easy for new users to download and 
immediately run and play around with Phoenix.
This is also the first step in using tools like 
[docker-client|https://github.com/spotify/docker-client] to run integration 
tests against a docker image.
Other projects like the Presto-phoenix connector could then also execute tests 
against released images.

Ideally, we publish the image on docker hub as an ["Official 
Image"|https://docs.docker.com/docker-hub/official_images/]




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5231) Configurable Stats Cache

2019-05-29 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-5231.
---
Resolution: Fixed

Pushed addendum to 4.x and master branches

> Configurable Stats Cache
> 
>
> Key: PHOENIX-5231
> URL: https://issues.apache.org/jira/browse/PHOENIX-5231
> Project: Phoenix
>  Issue Type: Test
>Reporter: Daniel Wong
>Assignee: Daniel Wong
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: 5231-quickfix-v2.txt, 5231-quickfix.txt, 
> 5231-services-fix.patch, PHOENIX-5231.4.x-HBase-1.3.patch, 
> PHOENIX-5231.4.x-HBase-1.3.v2.patch, PHOENIX-5231.4.x-HBase-1.3.v3.patch, 
> PHOENIX-5231.master.v3.patch, PHOENIX-5231.master.v4.patch
>
>  Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> Currently, the Phoenix stats cache is per 
> ConnectionQueryService/ConnectionProfile, which leads to duplicated cache 
> entries (the guideposts) and wasted resources if these separate connections 
> are querying the same underlying table. It would be good to be able to provide 
> a configurable stats cache and control the cache level, so it could be per JVM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5231) Configurable Stats Cache

2019-05-28 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5231:
--
Attachment: 5231-services-fix.patch

> Configurable Stats Cache
> 
>
> Key: PHOENIX-5231
> URL: https://issues.apache.org/jira/browse/PHOENIX-5231
> Project: Phoenix
>  Issue Type: Test
>Reporter: Daniel Wong
>Assignee: Daniel Wong
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: 5231-quickfix-v2.txt, 5231-quickfix.txt, 
> 5231-services-fix.patch, PHOENIX-5231.4.x-HBase-1.3.patch, 
> PHOENIX-5231.4.x-HBase-1.3.v2.patch, PHOENIX-5231.4.x-HBase-1.3.v3.patch, 
> PHOENIX-5231.master.v3.patch, PHOENIX-5231.master.v4.patch
>
>  Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> Currently, the Phoenix stats cache is per 
> ConnectionQueryService/ConnectionProfile, which leads to duplicated cache 
> entries (the guideposts) and wasted resources if these separate connections 
> are querying the same underlying table. It would be good to be able to provide 
> a configurable stats cache and control the cache level, so it could be per JVM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5231) Configurable Stats Cache

2019-05-28 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5231:
--
Attachment: (was: 5231-services-fix.patch)

> Configurable Stats Cache
> 
>
> Key: PHOENIX-5231
> URL: https://issues.apache.org/jira/browse/PHOENIX-5231
> Project: Phoenix
>  Issue Type: Test
>Reporter: Daniel Wong
>Assignee: Daniel Wong
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: 5231-quickfix-v2.txt, 5231-quickfix.txt, 
> 5231-services-fix.patch, PHOENIX-5231.4.x-HBase-1.3.patch, 
> PHOENIX-5231.4.x-HBase-1.3.v2.patch, PHOENIX-5231.4.x-HBase-1.3.v3.patch, 
> PHOENIX-5231.master.v3.patch, PHOENIX-5231.master.v4.patch
>
>  Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> Currently, the Phoenix stats cache is per 
> ConnectionQueryService/ConnectionProfile, which leads to duplicated cache 
> entries (the guideposts) and wasted resources if these separate connections 
> are querying the same underlying table. It would be good to be able to provide 
> a configurable stats cache and control the cache level, so it could be per JVM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5231) Configurable Stats Cache

2019-05-28 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5231:
--
Attachment: 5231-services-fix.patch

> Configurable Stats Cache
> 
>
> Key: PHOENIX-5231
> URL: https://issues.apache.org/jira/browse/PHOENIX-5231
> Project: Phoenix
>  Issue Type: Test
>Reporter: Daniel Wong
>Assignee: Daniel Wong
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: 5231-quickfix-v2.txt, 5231-quickfix.txt, 
> 5231-services-fix.patch, PHOENIX-5231.4.x-HBase-1.3.patch, 
> PHOENIX-5231.4.x-HBase-1.3.v2.patch, PHOENIX-5231.4.x-HBase-1.3.v3.patch, 
> PHOENIX-5231.master.v3.patch, PHOENIX-5231.master.v4.patch
>
>  Time Spent: 8h 40m
>  Remaining Estimate: 0h
>
> Currently, the Phoenix stats cache is per 
> ConnectionQueryService/ConnectionProfile, which leads to duplicated cache 
> entries (the guideposts) and wasted resources if these separate connections 
> are querying the same underlying table. It would be good to be able to provide 
> a configurable stats cache and control the cache level, so it could be per JVM.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5302) Different isNamespaceMappingEnabled for server / client causes TableNotFoundException

2019-05-24 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5302:
-

 Summary: Different isNamespaceMappingEnabled for server / client 
causes TableNotFoundException
 Key: PHOENIX-5302
 URL: https://issues.apache.org/jira/browse/PHOENIX-5302
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.2
Reporter: Vincent Poon


Scenario (a client-side JDBC sketch follows):
1)  Fresh cluster start with server isNamespaceMappingEnabled=true.
2)  Client connects with isNamespaceMappingEnabled=false.  The expected 
exception is thrown ("Inconsistent namespace mapping").
3)  Client connects with isNamespaceMappingEnabled=true.  Exception: "Table 
undefined. tableName=SYSTEM.CATALOG"
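
A minimal client-side sketch of step 3, assuming the standard 
phoenix.schema.isNamespaceMappingEnabled client property; the class name and the 
ZooKeeper quorum in the JDBC URL are placeholders.

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class NamespaceMappingRepro {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Matches the server-side setting from step 1
        props.setProperty("phoenix.schema.isNamespaceMappingEnabled", "true");
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            // Step 2 (property set to "false") fails with "Inconsistent namespace mapping";
            // step 3 (property set to "true") unexpectedly fails with
            // "Table undefined. tableName=SYSTEM.CATALOG".
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}
{code}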



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5213) Phoenix-client improvements: add more relocations, exclude log binding, add source jar

2019-04-26 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-5213.
---
   Resolution: Fixed
Fix Version/s: 5.1.0
   4.15.0

Pushed to master and 4.x branches

> Phoenix-client improvements:  add more relocations, exclude log binding, add 
> source jar
> ---
>
> Key: PHOENIX-5213
> URL: https://issues.apache.org/jira/browse/PHOENIX-5213
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5213.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v2.patch, PHOENIX-5213.4.x-HBase-1.4.v3.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v4.patch
>
>
> To improve the existing phoenix-client, I'm proposing the following changes:
> 1)  Add additional relocations of some packages
> Add a new "embedded" classifier to phoenix-client that does the following: 
> 2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
> directly from phoenix-core itself, but transitively from other projects.  
> It's generally considered best practice to not impose a log binding on 
> downstream projects.  The slf4j-log4j12 jar will still be in the phoenix 
> tarball's /lib folder.
> 3)  Create a source jar for phoenix-client embedded.
> 4)  Create a dependency-reduced pom, so that the client can be used directly 
> in downstream projects without having to exclude transitive artifacts.
> 5) rename the jar to match the final name in the repository:  
> phoenix-client-{version}.jar  There is now a symlink 
> phoenix-{version}-client.jar to maintain backwards compatibility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5213) Phoenix-client improvements: add more relocations, exclude log binding, add source jar

2019-04-17 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5213:
--
Attachment: (was: PHOENIX-5213.4.x-HBase-1.4.v4.patch)

> Phoenix-client improvements:  add more relocations, exclude log binding, add 
> source jar
> ---
>
> Key: PHOENIX-5213
> URL: https://issues.apache.org/jira/browse/PHOENIX-5213
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5213.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v2.patch, PHOENIX-5213.4.x-HBase-1.4.v3.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v4.patch
>
>
> To improve the existing phoenix-client, I'm proposing the following changes:
> 1)  Add additional relocations of some packages
> Add a new "embedded" classifier to phoenix-client that does the following: 
> 2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
> directly from phoenix-core itself, but transitively from other projects.  
> It's generally considered best practice to not impose a log binding on 
> downstream projects.  The slf4j-log4j12 jar will still be in the phoenix 
> tarball's /lib folder.
> 3)  Create a source jar for phoenix-client embedded.
> 4)  Create a dependency-reduced pom, so that the client can be used directly 
> in downstream projects without having to exclude transitive artifacts.
> 5) rename the jar to match the final name in the repository:  
> phoenix-client-{version}.jar  There is now a symlink 
> phoenix-{version}-client.jar to maintain backwards compatibility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5213) Phoenix-client improvements: add more relocations, exclude log binding, add source jar

2019-04-17 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5213:
--
Attachment: PHOENIX-5213.4.x-HBase-1.4.v4.patch

> Phoenix-client improvements:  add more relocations, exclude log binding, add 
> source jar
> ---
>
> Key: PHOENIX-5213
> URL: https://issues.apache.org/jira/browse/PHOENIX-5213
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5213.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v2.patch, PHOENIX-5213.4.x-HBase-1.4.v3.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v4.patch
>
>
> To improve the existing phoenix-client, I'm proposing the following changes:
> 1)  Add additional relocations of some packages
> Add a new "embedded" classifier to phoenix-client that does the following: 
> 2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
> directly from phoenix-core itself, but transitively from other projects.  
> It's generally considered best practice to not impose a log binding on 
> downstream projects.  The slf4j-log4j12 jar will still be in the phoenix 
> tarball's /lib folder.
> 3)  Create a source jar for phoenix-client embedded.
> 4)  Create a dependency-reduced pom, so that the client can be used directly 
> in downstream projects without having to exclude transitive artifacts.
> 5) rename the jar to match the final name in the repository:  
> phoenix-client-{version}.jar  There is now a symlink 
> phoenix-{version}-client.jar to maintain backwards compatibility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5213) Phoenix-client improvements: add more relocations, exclude log binding, add source jar

2019-04-14 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5213:
--
Attachment: PHOENIX-5213.4.x-HBase-1.4.v4.patch

> Phoenix-client improvements:  add more relocations, exclude log binding, add 
> source jar
> ---
>
> Key: PHOENIX-5213
> URL: https://issues.apache.org/jira/browse/PHOENIX-5213
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5213.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v2.patch, PHOENIX-5213.4.x-HBase-1.4.v3.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v4.patch
>
>
> To improve the existing phoenix-client, I'm proposing the following changes:
> 1)  Add additional relocations of some packages
> Add a new "embedded" classifier to phoenix-client that does the following: 
> 2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
> directly from phoenix-core itself, but transitively from other projects.  
> It's generally considered best practice to not impose a log binding on 
> downstream projects.  The slf4j-log4j12 jar will still be in the phoenix 
> tarball's /lib folder.
> 3)  Create a source jar for phoenix-client embedded.
> 4)  Create a dependency-reduced pom, so that the client can be used directly 
> in downstream projects without having to exclude transitive artifacts.
> 5) rename the jar to match the final name in the repository:  
> phoenix-client-{version}.jar  There is now a symlink 
> phoenix-{version}-client.jar to maintain backwards compatibility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5213) Phoenix-client improvements: add more relocations, exclude log binding, add source jar

2019-04-11 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5213:
--
Description: 
To improve the existing phoenix-client, I'm proposing the following changes:
1)  Add additional relocations of some packages

Add a new "embedded" classifier to phoenix-client that does the following: 
2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
directly from phoenix-core itself, but transitively from other projects.  It's 
generally considered best practice to not impose a log binding on downstream 
projects.  The slf4j-log4j12 jar will still be in the phoenix tarball's /lib 
folder.

3)  Create a source jar for phoenix-client embedded.

4)  Create a dependency-reduced pom, so that the client can be used directly in 
downstream projects without having to exclude transitive artifacts.

5) rename the jar to match the final name in the repository:  
phoenix-client-{version}.jar  There is now a symlink 
phoenix-{version}-client.jar to maintain backwards compatibility.

  was:
To improve the existing phoenix-client, I'm proposing the following changes:
1)  Add additional relocations of some packages

Add a new "embedded" classifier to phoenix-client that does the following: 
2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
directly from phoenix-core itself, but transitively from other projects.  It's 
generally considered best practice to not impose a log binding on downstream 
projects.  The slf4j-log4j12 jar will still be in the phoenix tarball's /lib 
folder.

3)  Create a source jar for phoenix-client embedded.

4)  Create a dependency-reduced pom, so that the client can be used directly in 
downstream projects without having to exclude transitive artifacts.


> Phoenix-client improvements:  add more relocations, exclude log binding, add 
> source jar
> ---
>
> Key: PHOENIX-5213
> URL: https://issues.apache.org/jira/browse/PHOENIX-5213
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5213.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v2.patch, PHOENIX-5213.4.x-HBase-1.4.v3.patch
>
>
> To improve the existing phoenix-client, I'm proposing the following changes:
> 1)  Add additional relocations of some packages
> Add a new "embedded" classifier to phoenix-client that does the following: 
> 2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
> directly from phoenix-core itself, but transitively from other projects.  
> It's generally considered best practice to not impose a log binding on 
> downstream projects.  The slf4j-log4j12 jar will still be in the phoenix 
> tarball's /lib folder.
> 3)  Create a source jar for phoenix-client embedded.
> 4)  Create a dependency-reduced pom, so that the client can be used directly 
> in downstream projects without having to exclude transitive artifacts.
> 5) rename the jar to match the final name in the repository:  
> phoenix-client-{version}.jar  There is now a symlink 
> phoenix-{version}-client.jar to maintain backwards compatibility.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5213) Phoenix-client improvements: add more relocations, exclude log binding, add source jar

2019-04-11 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5213:
--
Attachment: PHOENIX-5213.4.x-HBase-1.4.v3.patch

> Phoenix-client improvements:  add more relocations, exclude log binding, add 
> source jar
> ---
>
> Key: PHOENIX-5213
> URL: https://issues.apache.org/jira/browse/PHOENIX-5213
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5213.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v2.patch, PHOENIX-5213.4.x-HBase-1.4.v3.patch
>
>
> To improve the existing phoenix-client, I'm proposing the following changes:
> 1)  Add additional relocations of some packages
> Add a new "embedded" classifier to phoenix-client that does the following: 
> 2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
> directly from phoenix-core itself, but transitively from other projects.  
> It's generally considered best practice to not impose a log binding on 
> downstream projects.  The slf4j-log4j12 jar will still be in the phoenix 
> tarball's /lib folder.
> 3)  Create a source jar for phoenix-client embedded.
> 4)  Create a dependency-reduced pom, so that the client can be used directly 
> in downstream projects without having to exclude transitive artifacts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5213) Phoenix-client improvements: add more relocations, exclude log binding, change naming

2019-04-10 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5213:
--
Description: 
To improve the existing phoenix-client, I'm proposing the following changes:
1)  Add additional relocations of some packages

Add a new "embedded" classifier to phoenix-client that does the following: 
2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
directly from phoenix-core itself, but transitively from other projects.  It's 
generally considered best practice to not impose a log binding on downstream 
projects.  The slf4j-log4j12 jar will still be in the phoenix tarball's /lib 
folder.

3)  Create a source jar for phoenix-client embedded.

4)  Create a dependency-reduced pom, so that the client can be used directly in 
downstream projects without having to exclude transitive artifacts.

  was:
To improve the existing phoenix-client, I'm proposing the following changes:
1)  Add additional relocations of some packages

2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
directly from phoenix-core itself, but transitively from other projects.  It's 
generally considered best practice to not impose a log binding on downstream 
projects.  The slf4j-log4j12 jar will still be in the phoenix tarball's /lib 
folder.

3)  Changing the jar naming from phoenix\-\[version\]\-client.jar to 
phoenix-client-\[version\].jar
The reason for this is that there is no way, AFAIK, to change the naming 
convention in maven's repo.  You can change the jar name locally, but when it 
gets installed to the repo, it always has to follow the artifactname-version 
naming convention.  To avoid confusion of having two separate jar file names, I 
propose we just change it to Maven's convention so we can publish releases of 
phoenix-client.

4)  Create a source jar for phoenix-client.

5)  Create a dependency-reduced pom, so that the client can be used directly in 
downstream projects without having to exclude transitive artifacts.


> Phoenix-client improvements:  add more relocations, exclude log binding, 
> change naming
> --
>
> Key: PHOENIX-5213
> URL: https://issues.apache.org/jira/browse/PHOENIX-5213
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5213.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v2.patch
>
>
> To improve the existing phoenix-client, I'm proposing the following changes:
> 1)  Add additional relocations of some packages
> Add a new "embedded" classifier to phoenix-client that does the following: 
> 2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
> directly from phoenix-core itself, but transitively from other projects.  
> It's generally considered best practice to not impose a log binding on 
> downstream projects.  The slf4j-log4j12 jar will still be in the phoenix 
> tarball's /lib folder.
> 3)  Create a source jar for phoenix-client embedded.
> 4)  Create a dependency-reduced pom, so that the client can be used directly 
> in downstream projects without having to exclude transitive artifacts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5213) Phoenix-client improvements: add more relocations, exclude log binding, add source jar

2019-04-10 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5213:
--
Summary: Phoenix-client improvements:  add more relocations, exclude log 
binding, add source jar  (was: Phoenix-client improvements:  add more 
relocations, exclude log binding, change naming)

> Phoenix-client improvements:  add more relocations, exclude log binding, add 
> source jar
> ---
>
> Key: PHOENIX-5213
> URL: https://issues.apache.org/jira/browse/PHOENIX-5213
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5213.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v2.patch
>
>
> To improve the existing phoenix-client, I'm proposing the following changes:
> 1)  Add additional relocations of some packages
> Add a new "embedded" classifier to phoenix-client that does the following: 
> 2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
> directly from phoenix-core itself, but transitively from other projects.  
> It's generally considered best practice to not impose a log binding on 
> downstream projects.  The slf4j-log4j12 jar will still be in the phoenix 
> tarball's /lib folder.
> 3)  Create a source jar for phoenix-client embedded.
> 4)  Create a dependency-reduced pom, so that the client can be used directly 
> in downstream projects without having to exclude transitive artifacts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5213) Phoenix-client improvements: add more relocations, exclude log binding, change naming

2019-04-03 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5213:
--
Attachment: PHOENIX-5213.4.x-HBase-1.4.v2.patch

> Phoenix-client improvements:  add more relocations, exclude log binding, 
> change naming
> --
>
> Key: PHOENIX-5213
> URL: https://issues.apache.org/jira/browse/PHOENIX-5213
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5213.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v2.patch
>
>
> To improve the existing phoenix-client, I'm proposing the following changes:
> 1)  Add additional relocations of some packages
> 2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
> directly from phoenix-core itself, but transitively from other projects.  
> It's generally considered best practice to not impose a log binding on 
> downstream projects.  The slf4j-log4j12 jar will still be in the phoenix 
> tarball's /lib folder.
> 3)  Changing the jar naming from phoenix\-\[version\]\-client.jar to 
> phoenix-client-\[version\].jar
> The reason for this is that there is no way, AFAIK, to change the naming 
> convention in maven's repo.  You can change the jar name locally, but when it 
> gets installed to the repo, it always has to follow the artifactname-version 
> naming convention.  To avoid confusion of having two separate jar file names, 
> I propose we just change it to Maven's convention so we can publish releases 
> of phoenix-client.
> 4)  Create a source jar for phoenix-client.
> 5)  Create a dependency-reduced pom, so that the client can be used directly 
> in downstream projects without having to exclude transitive artifacts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5213) Phoenix-client improvements: add more relocations, exclude log binding, change naming

2019-04-03 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5213:
--
Attachment: (was: PHOENIX-5213.4.x-HBase-1.4.v2.patch)

> Phoenix-client improvements:  add more relocations, exclude log binding, 
> change naming
> --
>
> Key: PHOENIX-5213
> URL: https://issues.apache.org/jira/browse/PHOENIX-5213
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5213.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v2.patch
>
>
> To improve the existing phoenix-client, I'm proposing the following changes:
> 1)  Add additional relocations of some packages
> 2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
> directly from phoenix-core itself, but transitively from other projects.  
> It's generally considered best practice to not impose a log binding on 
> downstream projects.  The slf4j-log4j12 jar will still be in the phoenix 
> tarball's /lib folder.
> 3)  Changing the jar naming from phoenix\-\[version\]\-client.jar to 
> phoenix-client-\[version\].jar
> The reason for this is that there is no way, AFAIK, to change the naming 
> convention in maven's repo.  You can change the jar name locally, but when it 
> gets installed to the repo, it always has to follow the artifactname-version 
> naming convention.  To avoid confusion of having two separate jar file names, 
> I propose we just change it to Maven's convention so we can publish releases 
> of phoenix-client.
> 4)  Create a source jar for phoenix-client.
> 5)  Create a dependency-reduced pom, so that the client can be used directly 
> in downstream projects without having to exclude transitive artifacts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5213) Phoenix-client improvements: add more relocations, exclude log binding, change naming

2019-04-03 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5213:
--
Attachment: PHOENIX-5213.4.x-HBase-1.4.v2.patch

> Phoenix-client improvements:  add more relocations, exclude log binding, 
> change naming
> --
>
> Key: PHOENIX-5213
> URL: https://issues.apache.org/jira/browse/PHOENIX-5213
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5213.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5213.4.x-HBase-1.4.v2.patch
>
>
> To improve the existing phoenix-client, I'm proposing the following changes:
> 1)  Add additional relocations of some packages
> 2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
> directly from phoenix-core itself, but transitively from other projects.  
> It's generally considered best practice to not impose a log binding on 
> downstream projects.  The slf4j-log4j12 jar will still be in the phoenix 
> tarball's /lib folder.
> 3)  Changing the jar naming from phoenix\-\[version\]\-client.jar to 
> phoenix-client-\[version\].jar
> The reason for this is that there is no way, AFAIK, to change the naming 
> convention in maven's repo.  You can change the jar name locally, but when it 
> gets installed to the repo, it always has to follow the artifactname-version 
> naming convention.  To avoid confusion of having two separate jar file names, 
> I propose we just change it to Maven's convention so we can publish releases 
> of phoenix-client.
> 4)  Create a source jar for phoenix-client.
> 5)  Create a dependency-reduced pom, so that the client can be used directly 
> in downstream projects without having to exclude transitive artifacts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5214) Cleanup phoenix-core pom

2019-03-26 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5214:
--
Attachment: PHOENIX-5214.4.x-HBase-1.4.v1.patch

> Cleanup phoenix-core pom
> 
>
> Key: PHOENIX-5214
> URL: https://issues.apache.org/jira/browse/PHOENIX-5214
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5214.4.x-HBase-1.4.v1.patch
>
>
> 1) Remove version numbers when already specified in base pom's 
> dependencyManagement section
> 2) Change sqlline scope to 'runtime' since we don't need it to actually 
> compile.  (technically this shouldn't be in the pom and should be an assembly 
> thing that gets added to a tarball, but that's another jira for another 
> day...)
> 3) Change findbugs scope to 'provided' - we don't need this beyond 
> compiling/testing
> 4)  Remove hbase-annotations and hadoop-annotations.  We don't seem to be 
> using these anywhere.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5214) Cleanup phoenix-core pom

2019-03-26 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reassigned PHOENIX-5214:
-

Assignee: Vincent Poon

> Cleanup phoenix-core pom
> 
>
> Key: PHOENIX-5214
> URL: https://issues.apache.org/jira/browse/PHOENIX-5214
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
>
> 1) Remove version numbers when already specified in base pom's 
> dependencyManagement section
> 2) Change sqlline scope to 'runtime' since we don't need it to actually 
> compile.  (technically this shouldn't be in the pom and should be an assembly 
> thing that gets added to a tarball, but that's another jira for another 
> day...)
> 3) Change findbugs scope to 'provided' - we don't need this beyond 
> compiling/testing
> 4)  Remove hbase-annotations and hadoop-annotations.  We don't seem to be 
> using these anywhere.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5214) Cleanup phoenix-core pom

2019-03-26 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5214:
-

 Summary: Cleanup phoenix-core pom
 Key: PHOENIX-5214
 URL: https://issues.apache.org/jira/browse/PHOENIX-5214
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.0.0, 4.15.0
Reporter: Vincent Poon


1) Remove version numbers when already specified in base pom's 
dependencyManagement section

2) Change sqlline scope to 'runtime' since we don't need it to actually 
compile.  (technically this shouldn't be in the pom and should be an assembly 
thing that gets added to a tarball, but that's another jira for another day...)

3) Change findbugs scope to 'provided' - we don't need this beyond 
compiling/testing

4)  Remove hbase-annotations and hadoop-annotations.  We don't seem to be using 
these anywhere.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5213) Phoenix-client improvements: add more relocations, exclude log binding, change naming

2019-03-26 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5213:
--
Attachment: PHOENIX-5213.4.x-HBase-1.4.v1.patch

> Phoenix-client improvements:  add more relocations, exclude log binding, 
> change naming
> --
>
> Key: PHOENIX-5213
> URL: https://issues.apache.org/jira/browse/PHOENIX-5213
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5213.4.x-HBase-1.4.v1.patch
>
>
> To improve the existing phoenix-client, I'm proposing the following changes:
> 1)  Add additional relocations of some packages
> 2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
> directly from phoenix-core itself, but transitively from other projects.  
> It's generally considered best practice to not impose a log binding on 
> downstream projects.  The slf4j-log4j12 jar will still be in the phoenix 
> tarball's /lib folder.
> 3)  Changing the jar naming from phoenix-[version]-client.jar to 
> phoenix-client-[version].jar
> The reason for this is that there is no way, AFAIK, to change the naming 
> convention in maven's repo.  You can change the jar name locally, but when it 
> gets installed to the repo, it always has to follow the artifactname-version 
> naming convention.  To avoid confusion of having two separate jar file names, 
> I propose we just change it to Maven's convention so we can publish releases 
> of phoenix-client.
> 4)  Create a source jar for phoenix-client.
> 5)  Create a dependency-reduced pom, so that the client can be used directly 
> in downstream projects without having to exclude transitive artifacts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5213) Phoenix-client improvements: add more relocations, exclude log binding, change naming

2019-03-26 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5213:
-

 Summary: Phoenix-client improvements:  add more relocations, 
exclude log binding, change naming
 Key: PHOENIX-5213
 URL: https://issues.apache.org/jira/browse/PHOENIX-5213
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 5.0.0, 4.15.0
Reporter: Vincent Poon
Assignee: Vincent Poon


To improve the existing phoenix-client, I'm proposing the following changes:
1)  Add additional relocations of some packages

2)  Exclude the slf4j-log4j12 binding.  Apparently this isn't pulled in 
directly from phoenix-core itself, but transitively from other projects.  It's 
generally considered best practice to not impose a log binding on downstream 
projects.  The slf4j-log4j12 jar will still be in the phoenix tarball's /lib 
folder.

3)  Changing the jar naming from phoenix-[version]-client.jar to 
phoenix-client-[version].jar
The reason for this is that there is no way, AFAIK, to change the naming 
convention in maven's repo.  You can change the jar name locally, but when it 
gets installed to the repo, it always has to follow the artifactname-version 
naming convention.  To avoid confusion of having two separate jar file names, I 
propose we just change it to Maven's convention so we can publish releases of 
phoenix-client.

4)  Create a source jar for phoenix-client.

5)  Create a dependency-reduced pom, so that the client can be used directly in 
downstream projects without having to exclude transitive artifacts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-3082) timestamp function display wrong output

2019-03-22 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-3082.
---
Resolution: Cannot Reproduce

Can't repro this, feel free to reopen if you can repro it

> timestamp function display wrong output
> ---
>
> Key: PHOENIX-3082
> URL: https://issues.apache.org/jira/browse/PHOENIX-3082
> Project: Phoenix
>  Issue Type: Bug
>Reporter: qinzl
>Priority: Major
>
> When I run: select now(); or select CURRENT_DATE()
> the result is:
> 292278994-08-17 07:12:55.807
> The year displayed is not correct.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5139) PhoenixDriver lockInterruptibly usage could unlock without locking

2019-02-13 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reassigned PHOENIX-5139:
-

Assignee: Vincent Poon

> PhoenixDriver lockInterruptibly usage could unlock without locking
> --
>
> Key: PHOENIX-5139
> URL: https://issues.apache.org/jira/browse/PHOENIX-5139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5139.4.x-HBase-1.4.v1.patch
>
>
> We have calls to lockInterruptibly surrounded by a finally call to unlock, 
> but there's a chance InterruptedException was thrown and we didn't obtain the 
> lock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5139) PhoenixDriver lockInterruptibly usage could unlock without locking

2019-02-13 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5139:
--
Attachment: PHOENIX-5139.4.x-HBase-1.4.v1.patch

> PhoenixDriver lockInterruptibly usage could unlock without locking
> --
>
> Key: PHOENIX-5139
> URL: https://issues.apache.org/jira/browse/PHOENIX-5139
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, 5.1.0
>Reporter: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5139.4.x-HBase-1.4.v1.patch
>
>
> We have calls to lockInterruptibly surrounded by a finally call to unlock, 
> but there's a chance InterruptedException was thrown and we didn't obtain the 
> lock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5139) PhoenixDriver lockInterruptibly usage could unlock without locking

2019-02-13 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5139:
-

 Summary: PhoenixDriver lockInterruptibly usage could unlock 
without locking
 Key: PHOENIX-5139
 URL: https://issues.apache.org/jira/browse/PHOENIX-5139
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0, 5.1.0
Reporter: Vincent Poon


We have calls to lockInterruptibly surrounded by a finally call to unlock, but 
there's a chance InterruptedException was thrown and we didn't obtain the lock.
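
For illustration only (this is not the attached patch), a minimal sketch of the
problematic pattern and of a variant that only unlocks once the lock has actually
been acquired:

{code:java}
import java.util.concurrent.locks.ReentrantLock;

public class LockPatternSketch {
    private final ReentrantLock lock = new ReentrantLock();

    // Problematic: if lockInterruptibly() throws InterruptedException, the finally
    // block still runs and unlock() is called on a lock we never held.
    public void unsafe() throws InterruptedException {
        try {
            lock.lockInterruptibly();
            // ... critical section ...
        } finally {
            lock.unlock();
        }
    }

    // Safer: acquire the lock first, and only enter the try/finally once it is held.
    public void safe() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            // ... critical section ...
        } finally {
            lock.unlock();
        }
    }
}
{code}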



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5094) Index can transition from INACTIVE to ACTIVE via Phoenix Client

2019-02-01 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-5094.
---
   Resolution: Resolved
Fix Version/s: 5.1.0
   4.15.0

Pushed to master and 4.x branches.  Thanks for the patch [~kiran.maturi] and 
[~mihir6692]

> Index can transition from INACTIVE to ACTIVE via Phoenix Client
> ---
>
> Key: PHOENIX-5094
> URL: https://issues.apache.org/jira/browse/PHOENIX-5094
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Monani Mihir
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5094-4.14-HBase-1.3.01.patch, 
> PHOENIX-5094-4.14-HBase-1.3.02.patch, PHOENIX-5094-4.14-HBase-1.3.03.patch, 
> PHOENIX-5094-4.14-HBase-1.3.04.patch, PHOENIX-5094-4.14-HBase-1.3.05.patch, 
> PHOENIX-5094-master.01.patch, PHOENIX-5094-master.02.patch, 
> PHOENIX-5094-master.03.patch
>
>
> Suppose the index is in the INACTIVE state and client load is running continuously. 
> While the index is INACTIVE, the client will keep maintaining it.
> Before the rebuilder can run and bring the index back in sync with the data table, if 
> some mutation for the index fails on the client side, the client will transition the 
> index state (from INACTIVE --> PENDING_DISABLE).
> If the client succeeds in writing the mutation in subsequent retries, it will 
> transition the index state again (from PENDING_DISABLE --> ACTIVE).
> The above scenario leaves part of the index out of sync with the data table.
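
As a rough, hypothetical sketch of the guard implied above (the names are illustrative
and do not correspond to the committed patch), the client-side transition out of
PENDING_DISABLE would need to take the pre-failure state into account:

{code:java}
// Hypothetical sketch; the enum values mirror the states discussed above, the rest is illustrative.
enum IndexState { ACTIVE, INACTIVE, PENDING_DISABLE }

final class IndexStateTransitions {
    // After a failed index write is successfully retried, only return to ACTIVE if the
    // index was ACTIVE before the failure. If it was INACTIVE, the rebuilder still has
    // to catch it up, so it must go back to INACTIVE rather than ACTIVE.
    static IndexState stateAfterSuccessfulRetry(IndexState stateBeforeFailure) {
        return stateBeforeFailure == IndexState.INACTIVE ? IndexState.INACTIVE : IndexState.ACTIVE;
    }
}
{code}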



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2019-02-01 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reopened PHOENIX-4993:
---

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-4.x-HBase-1.3.02.patch, PHOENIX-4993-4.x-HBase-1.3.03.patch, 
> PHOENIX-4993-4.x-HBase-1.3.04.patch, PHOENIX-4993-4.x-HBase-1.3.05.patch, 
> PHOENIX-4993-master.01.patch, PHOENIX-4993-master.02.patch, 
> PHOENIX-4993-master.addendum-1.patch
>
>
> Issue is related to Region Server being killed when one region is closing and 
> another region is trying to write index updates.
> When the data table region closes it will close region server level 
> cached/shared connections and it could interrupt other region 
> index/index-state update.
> -- Region1: Closing
> {code:java}
> TrackingParallellWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> --Region2: Writing index updates
> Index updates fail as connections are closed, which leads to 
> RejectedExecutionException/Connection being null. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get the 
> syscat table using the cached connections. Here it will not be able to 
> reach SYSCAT, so we will trigger KillServerFailurePolicy.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  
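
A hedged sketch of the general idea, with illustrative names only (this is not the
committed fix): a closing region should drop only its own reference to the
region-server-level connection, and the shared connection should be torn down only
once no region still uses it.

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical reference-counted wrapper; class and method names are illustrative only.
final class SharedConnectionHolder {
    private final AtomicInteger refCount = new AtomicInteger();
    private volatile AutoCloseable connection; // stands in for the RS-level shared connection

    void retain() {
        refCount.incrementAndGet();
    }

    // Called from a region's close path: only the last release actually closes the
    // shared connection, so other regions' index writes are not interrupted.
    void release() throws Exception {
        if (refCount.decrementAndGet() == 0 && connection != null) {
            connection.close();
            connection = null;
        }
    }
}
{code}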



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2019-02-01 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-4993.
---
Resolution: Resolved

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-4.x-HBase-1.3.02.patch, PHOENIX-4993-4.x-HBase-1.3.03.patch, 
> PHOENIX-4993-4.x-HBase-1.3.04.patch, PHOENIX-4993-4.x-HBase-1.3.05.patch, 
> PHOENIX-4993-master.01.patch, PHOENIX-4993-master.02.patch, 
> PHOENIX-4993-master.addendum-1.patch
>
>
> Issue is related to Region Server being killed when one region is closing and 
> another region is trying to write index updates.
> When the data table region closes it will close region server level 
> cached/shared connections and it could interrupt other region 
> index/index-state update.
> -- Region1: Closing
> {code:java}
> TrackingParallellWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> --Region2: Writing index updates
> Index updates fail as connections are closed, which leads to 
> RejectedExecutionException/Connection being null. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get the 
> syscat table using the cached connections. Here it will not be able to 
> reach SYSCAT, so we will trigger KillServerFailurePolicy.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2019-02-01 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-4993.
---
Resolution: Fixed

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-4.x-HBase-1.3.02.patch, PHOENIX-4993-4.x-HBase-1.3.03.patch, 
> PHOENIX-4993-4.x-HBase-1.3.04.patch, PHOENIX-4993-4.x-HBase-1.3.05.patch, 
> PHOENIX-4993-master.01.patch, PHOENIX-4993-master.02.patch, 
> PHOENIX-4993-master.addendum-1.patch
>
>
> Issue is related to Region Server being killed when one region is closing and 
> another region is trying to write index updates.
> When the data table region closes it will close region server level 
> cached/shared connections and it could interrupt other region 
> index/index-state update.
> -- Region1: Closing
> {code:java}
> TrackingParallellWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> --Region2: Writing index updates
> Index updates fail as connections are closed, which leads to 
> RejectedExecutionException/Connection being null. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get the 
> syscat table using the cached connections. Here it will not be able to 
> reach SYSCAT, so we will trigger KillServerFailurePolicy.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2019-01-30 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-4993.
---
Resolution: Fixed

Pushed to master

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-4.x-HBase-1.3.02.patch, PHOENIX-4993-4.x-HBase-1.3.03.patch, 
> PHOENIX-4993-4.x-HBase-1.3.04.patch, PHOENIX-4993-4.x-HBase-1.3.05.patch, 
> PHOENIX-4993-master.01.patch, PHOENIX-4993-master.02.patch
>
>
> Issue is related to Region Server being killed when one region is closing and 
> another region is trying to write index updates.
> When the data table region closes it will close region server level 
> cached/shared connections and it could interrupt other region 
> index/index-state update.
> -- Region1: Closing
> {code:java}
> TrackingParallellWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> --Region2: Writing index updates
> Index updates fail as connections are closed, which leads to 
> RejectedExecutionException/Connection being null. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get the 
> syscat table using the cached connections. Here it will not be able to 
> reach SYSCAT, so we will trigger KillServerFailurePolicy.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2019-01-28 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reopened PHOENIX-4993:
---

[~kiran.maturi] The master patch caused a build failure.  I've reverted the 
commit for now - can you look into it?

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4993-4.x-HBase-1.3.01.patch, 
> PHOENIX-4993-4.x-HBase-1.3.02.patch, PHOENIX-4993-4.x-HBase-1.3.03.patch, 
> PHOENIX-4993-4.x-HBase-1.3.04.patch, PHOENIX-4993-4.x-HBase-1.3.05.patch, 
> PHOENIX-4993-master.01.patch
>
>
> Issue is related to Region Server being killed when one region is closing and 
> another region is trying to write index updates.
> When the data table region closes it will close region server level 
> cached/shared connections and it could interrupt other region 
> index/index-state update.
> -- Region1: Closing
> {code:java}
> TrackingParallellWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> --Region2: Writing index updates
> Index updates fail as connections are closed, which leads to 
> RejectedExecutionException/Connection being null. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get the 
> syscat table using the cached connections. Here it will not be able to 
> reach SYSCAT, so we will trigger KillServerFailurePolicy.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5103) Can't create/drop table using 4.14 client against 4.15 server

2019-01-18 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5103:
--
Priority: Blocker  (was: Major)

> Can't create/drop table using 4.14 client against 4.15 server
> -
>
> Key: PHOENIX-5103
> URL: https://issues.apache.org/jira/browse/PHOENIX-5103
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0
>Reporter: Vincent Poon
>Priority: Blocker
>
> server is running 4.15 commit e3280f
> Connect with 4.14.1 client.  Create table gives this:
> Caused by: 
> org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.TableNotFoundException):
>  org.apache.hadoop.hbase.TableNotFoundException: Table 'SYSTEM:CHILD_LINK' 
> was not found, got: SYSTEM:CATALOG.
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1362)
>   at 
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1230)
>   at 
> org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5103) Can't create/drop table using 4.14 client against 4.15 server

2019-01-18 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5103:
-

 Summary: Can't create/drop table using 4.14 client against 4.15 
server
 Key: PHOENIX-5103
 URL: https://issues.apache.org/jira/browse/PHOENIX-5103
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0
Reporter: Vincent Poon


server is running 4.15 commit e3280f
Connect with 4.14.1 client.  Create table gives this:

Caused by: 
org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.TableNotFoundException):
 org.apache.hadoop.hbase.TableNotFoundException: Table 'SYSTEM:CHILD_LINK' was 
not found, got: SYSTEM:CATALOG.
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1362)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1230)
at 
org.apache.hadoop.hbase.client.CoprocessorHConnection.locateRegion(CoprocessorHConnection.java:41)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5095) Support INTERLEAVE of parent and child tables

2019-01-10 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5095:
-

 Summary: Support INTERLEAVE of parent and child tables
 Key: PHOENIX-5095
 URL: https://issues.apache.org/jira/browse/PHOENIX-5095
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.15.0
Reporter: Vincent Poon


Spanner has a concept of [interleaved 
tables|https://cloud.google.com/spanner/docs/schema-and-data-model#creating-interleaved-tables]

I'd like to brainstorm here how to implement this in Phoenix.  In general we 
want a design that can have
1) Fast queries against the parent table PK
2) Fast queries against the child table PK
3) Fast joins between the parent and child

It seems we can get pretty close to this with views.  Views can have their own 
PK which adds to the rowkey of the base table.  However, there doesn't seem to 
be a delimiter to distinguish PKs of different views on the base table.  The 
> closest I could come up with is adding a delimiter to the base table PK, something 
like:
CREATE TABLE IF NOT EXISTS Singers (
SingerId BIGINT NOT NULL,
Delimiter CHAR(10) NOT NULL,
FirstName VARCHAR,
CONSTRAINT PK PRIMARY KEY
(
SingerId,
Delimiter
)
);
CREATE VIEW Albums (AlbumId BIGINT PRIMARY KEY, AlbumTitle VARCHAR) AS SELECT * 
from Singers where Delimiter = 'Albums';

We also need to make the JOIN on these tables more intelligent, such that a 
single scan can join across parent-child.  Perhaps by reading metadata created 
during INTERLEAVE table creation, so we know we are joining across interleaved 
tables.

We could also have a custom split policy to avoid splitting in the middle of an 
interleaved table (though this might restrict how large your interleaved child 
table can be).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5088) Generate source jar and reduced pom for phoenix-client

2019-01-03 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5088:
--
Attachment: PHOENIX-5088.4.x-HBase-1.4.v1.patch

> Generate source jar and reduced pom for phoenix-client
> --
>
> Key: PHOENIX-5088
> URL: https://issues.apache.org/jira/browse/PHOENIX-5088
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.15.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Minor
> Attachments: PHOENIX-5088.4.x-HBase-1.4.v1.patch
>
>
> Other projects may include phoenix-client as a dependency.  For convenience, 
> we should generate the sources jar and dependency-reduced-pom.xml.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5088) Generate source jar and reduced pom for phoenix-client

2019-01-02 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5088:
-

 Summary: Generate source jar and reduced pom for phoenix-client
 Key: PHOENIX-5088
 URL: https://issues.apache.org/jira/browse/PHOENIX-5088
 Project: Phoenix
  Issue Type: Improvement
Affects Versions: 4.15.0
Reporter: Vincent Poon
Assignee: Vincent Poon


Other projects may include phoenix-client as a dependency.  For convenience, we 
should generate the sources jar and dependency-reduced-pom.xml.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4832) Add Canary Test Tool for Phoenix Query Server

2018-12-07 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4832:
--
Affects Version/s: 5.0.0
   4.14.1

> Add Canary Test Tool for Phoenix Query Server
> -
>
> Key: PHOENIX-4832
> URL: https://issues.apache.org/jira/browse/PHOENIX-4832
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Ashutosh Parekh
>Assignee: Swaroopa Kadam
>Priority: Minor
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4832-4.x-HBase-1.4.patch, 
> PHOENIX-4832-4.x-HBase-1.4.patch, PHOENIX-4832-master.patch, 
> PHOENIX-4832.patch
>
>
> A suggested improvement is to add a Canary Test tool to the Phoenix Query 
> Server. It will execute a set of Basic Tests (CRUD) against a PQS end-point 
> and report on the proper functioning and testing results. A configurable Log 
> Sink can help to publish the results as required.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4781) Phoenix client project's jar naming convention causes maven-deploy-plugin to fail

2018-12-04 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-4781.
---
Resolution: Resolved

Pushed v5 to master and 4.x branches

> Phoenix client project's jar naming convention causes maven-deploy-plugin to 
> fail
> -
>
> Key: PHOENIX-4781
> URL: https://issues.apache.org/jira/browse/PHOENIX-4781
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.15.0, 5.1
>
> Attachments: PHOENIX-4781.001.patch, PHOENIX-4781.002.patch, 
> PHOENIX-4781.4.x-HBase-1.4.v3.patch, PHOENIX-4781.4.x-HBase-1.4.v4.patch, 
> PHOENIX-4781.4.x-HBase-1.4.v5.patch, PHOENIX-4781.addendum.patch
>
>
> `maven-deploy-plugin` is used for deploying built artifacts to repository 
> provided by `distributionManagement` tag. The name of files that need to be 
> uploaded are either derived from the pom file of the project or it generates a 
> temporary one on its own.
> For `phoenix-client` project, we essentially create a shaded uber jar that 
> contains all dependencies and provide the project pom file for the plugin to 
> work. `maven-jar-plugin` is disabled for the project, hence the shade plugin 
> essentially packages the jar. The final name of the shaded jar is defined as 
> `phoenix-${project.version}\-client`, which is different from how the 
> standard maven convention based on pom file (artifact and group id) is 
> `phoenix-client-${project.version}`
> This causes `maven-deploy-plugin` to fail since it is unable to find any 
> artifacts to be published.
> `maven-install-plugin` works correctly and hence it installs correct jar in 
> local repo.
> The same is effective for `phoenix-pig` project as well. However we require 
> the require jar for that project in the repo. I am not even sure why we 
> create shaded jar for that project.
> I will put up a 3 liner patch for the same.
> Any thoughts? [~sergey.soldatov] [~elserj]
> Files before change (first col is size):
> {code:java}
> 103487701 Jun 13 22:47 
> phoenix-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT-client.jar{code}
> Files after change (first col is size):
> {code:java}
> 3640 Jun 13 21:23 
> original-phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar
> 103487702 Jun 13 21:24 
> phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4781) Phoenix client project's jar naming convention causes maven-deploy-plugin to fail

2018-11-30 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4781:
--
Attachment: PHOENIX-4781.4.x-HBase-1.4.v5.patch

> Phoenix client project's jar naming convention causes maven-deploy-plugin to 
> fail
> -
>
> Key: PHOENIX-4781
> URL: https://issues.apache.org/jira/browse/PHOENIX-4781
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.15.0, 5.1
>
> Attachments: PHOENIX-4781.001.patch, PHOENIX-4781.002.patch, 
> PHOENIX-4781.4.x-HBase-1.4.v3.patch, PHOENIX-4781.4.x-HBase-1.4.v4.patch, 
> PHOENIX-4781.4.x-HBase-1.4.v5.patch, PHOENIX-4781.addendum.patch
>
>
> `maven-deploy-plugin` is used for deploying built artifacts to repository 
> provided by `distributionManagement` tag. The name of files that need to be 
> uploaded are either derived from the pom file of the project or it generates a 
> temporary one on its own.
> For `phoenix-client` project, we essentially create a shaded uber jar that 
> contains all dependencies and provide the project pom file for the plugin to 
> work. `maven-jar-plugin` is disabled for the project, hence the shade plugin 
> essentially packages the jar. The final name of the shaded jar is defined as 
> `phoenix-${project.version}\-client`, which is different from how the 
> standard maven convention based on pom file (artifact and group id) is 
> `phoenix-client-${project.version}`
> This causes `maven-deploy-plugin` to fail since it is unable to find any 
> artifacts to be published.
> `maven-install-plugin` works correctly and hence it installs correct jar in 
> local repo.
> The same is effective for `phoenix-pig` project as well. However we require 
> the require jar for that project in the repo. I am not even sure why we 
> create shaded jar for that project.
> I will put up a 3 liner patch for the same.
> Any thoughts? [~sergey.soldatov] [~elserj]
> Files before change (first col is size):
> {code:java}
> 103487701 Jun 13 22:47 
> phoenix-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT-client.jar{code}
> Files after change (first col is size):
> {code:java}
> 3640 Jun 13 21:23 
> original-phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar
> 103487702 Jun 13 21:24 
> phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4781) Phoenix client project's jar naming convention causes maven-deploy-plugin to fail

2018-11-30 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4781:
--
Attachment: PHOENIX-4781.4.x-HBase-1.4.v4.patch

> Phoenix client project's jar naming convention causes maven-deploy-plugin to 
> fail
> -
>
> Key: PHOENIX-4781
> URL: https://issues.apache.org/jira/browse/PHOENIX-4781
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.15.0, 5.1
>
> Attachments: PHOENIX-4781.001.patch, PHOENIX-4781.002.patch, 
> PHOENIX-4781.4.x-HBase-1.4.v3.patch, PHOENIX-4781.4.x-HBase-1.4.v4.patch, 
> PHOENIX-4781.addendum.patch
>
>
> `maven-deploy-plugin` is used for deploying built artifacts to repository 
> provided by `distributionManagement` tag. The name of files that need to be 
> uploaded are either derived from the pom file of the project or it generates a 
> temporary one on its own.
> For `phoenix-client` project, we essentially create a shaded uber jar that 
> contains all dependencies and provide the project pom file for the plugin to 
> work. `maven-jar-plugin` is disabled for the project, hence the shade plugin 
> essentially packages the jar. The final name of the shaded jar is defined as 
> `phoenix-${project.version}\-client`, which is different from how the 
> standard maven convention based on pom file (artifact and group id) is 
> `phoenix-client-${project.version}`
> This causes `maven-deploy-plugin` to fail since it is unable to find any 
> artifacts to be published.
> `maven-install-plugin` works correctly and hence it installs correct jar in 
> local repo.
> The same is effective for `phoenix-pig` project as well. However we require 
> the require jar for that project in the repo. I am not even sure why we 
> create shaded jar for that project.
> I will put up a 3 liner patch for the same.
> Any thoughts? [~sergey.soldatov] [~elserj]
> Files before change (first col is size):
> {code:java}
> 103487701 Jun 13 22:47 
> phoenix-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT-client.jar{code}
> Files after change (first col is size):
> {code:java}
> 3640 Jun 13 21:23 
> original-phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar
> 103487702 Jun 13 21:24 
> phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4781) Phoenix client project's jar naming convention causes maven-deploy-plugin to fail

2018-11-30 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4781:
--
Attachment: PHOENIX-4781.addendum.patch

> Phoenix client project's jar naming convention causes maven-deploy-plugin to 
> fail
> -
>
> Key: PHOENIX-4781
> URL: https://issues.apache.org/jira/browse/PHOENIX-4781
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Attachments: PHOENIX-4781.001.patch, PHOENIX-4781.002.patch, 
> PHOENIX-4781.4.x-HBase-1.4.v3.patch, PHOENIX-4781.addendum.patch
>
>
> `maven-deploy-plugin` is used for deploying built artifacts to repository 
> provided by `distributionManagement` tag. The name of files that need to be 
> uploaded are either derived from the pom file of the project or it generates a 
> temporary one on its own.
> For `phoenix-client` project, we essentially create a shaded uber jar that 
> contains all dependencies and provide the project pom file for the plugin to 
> work. `maven-jar-plugin` is disabled for the project, hence the shade plugin 
> essentially packages the jar. The final name of the shaded jar is defined as 
> `phoenix-${project.version}\-client`, which is different from how the 
> standard maven convention based on pom file (artifact and group id) is 
> `phoenix-client-${project.version}`
> This causes `maven-deploy-plugin` to fail since it is unable to find any 
> artifacts to be published.
> `maven-install-plugin` works correctly and hence it installs correct jar in 
> local repo.
> The same is effective for `phoenix-pig` project as well. However we require 
> the require jar for that project in the repo. I am not even sure why we 
> create shaded jar for that project.
> I will put up a 3 liner patch for the same.
> Any thoughts? [~sergey.soldatov] [~elserj]
> Files before change (first col is size):
> {code:java}
> 103487701 Jun 13 22:47 
> phoenix-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT-client.jar{code}
> Files after change (first col is size):
> {code:java}
> 3640 Jun 13 21:23 
> original-phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar
> 103487702 Jun 13 21:24 
> phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-11-29 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-1567.
---
   Resolution: Fixed
Fix Version/s: 4.14.1

After applying PHOENIX-4781, I was able to publish the client and server jars 
for 4.14.1 here:
https://repository.apache.org/content/repositories/releases/org/apache/phoenix/phoenix-client/

Marking this resolved.

> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> 
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.14.1
>
> Attachments: PHOENIX-1567.patch
>
>
> Phoenix doesn't publish Phoenix Client & Server jars into Maven repository. 
> This makes things quite hard for downstream projects/applications to use Maven 
> to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly, but it shows the 
> following. 
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
> {noformat}
> Basically the jar published to maven repo will become  
> phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
> phoenix-assembly-4.3.0-SNAPSHOT-server.jar
> The artifact id "phoenix-assembly" has to be the prefix of the names of jars.
> Therefore, the possible solutions are:
> 1) rename current client & server jar to phoenix-assembly-client/server.jar 
> to match the jars published to maven repo.
> 2) rename phoenix-assembly to something more meaningful and rename our client 
> & server jars accordingly
> 3) split phoenix-assembly and move the corresponding artifacts into 
> phoenix-client & phoenix-server folders. Phoenix-assembly will only create 
> tar ball files.
> [~giacomotaylor], [~apurtell] or other maven experts: Any suggestion on this? 
> Thanks.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4781) Phoenix client project's jar naming convention causes maven-deploy-plugin to fail

2018-11-29 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4781:
--
Attachment: PHOENIX-4781.4.x-HBase-1.4.v3.patch

> Phoenix client project's jar naming convention causes maven-deploy-plugin to 
> fail
> -
>
> Key: PHOENIX-4781
> URL: https://issues.apache.org/jira/browse/PHOENIX-4781
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Attachments: PHOENIX-4781.001.patch, PHOENIX-4781.002.patch, 
> PHOENIX-4781.4.x-HBase-1.4.v3.patch
>
>
> `maven-deploy-plugin` is used for deploying built artifacts to repository 
> provided by `distributionManagement` tag. The name of files that need to be 
> uploaded are either derived from the pom file of the project or it generates a 
> temporary one on its own.
> For `phoenix-client` project, we essentially create a shaded uber jar that 
> contains all dependencies and provide the project pom file for the plugin to 
> work. `maven-jar-plugin` is disabled for the project, hence the shade plugin 
> essentially packages the jar. The final name of the shaded jar is defined as 
> `phoenix-${project.version}\-client`, which is different from how the 
> standard maven convention based on pom file (artifact and group id) is 
> `phoenix-client-${project.version}`
> This causes `maven-deploy-plugin` to fail since it is unable to find any 
> artifacts to be published.
> `maven-install-plugin` works correctly and hence it installs correct jar in 
> local repo.
> The same is effective for `phoenix-pig` project as well. However we require 
> the require jar for that project in the repo. I am not even sure why we 
> create shaded jar for that project.
> I will put up a 3 liner patch for the same.
> Any thoughts? [~sergey.soldatov] [~elserj]
> Files before change (first col is size):
> {code:java}
> 103487701 Jun 13 22:47 
> phoenix-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT-client.jar{code}
> Files after change (first col is size):
> {code:java}
> 3640 Jun 13 21:23 
> original-phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar
> 103487702 Jun 13 21:24 
> phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4781) Phoenix client project's jar naming convention causes maven-deploy-plugin to fail

2018-11-29 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4781:
--
Attachment: (was: PHOENIX-4781.4.x-HBase-1.4.v3.patch)

> Phoenix client project's jar naming convention causes maven-deploy-plugin to 
> fail
> -
>
> Key: PHOENIX-4781
> URL: https://issues.apache.org/jira/browse/PHOENIX-4781
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Attachments: PHOENIX-4781.001.patch, PHOENIX-4781.002.patch, 
> PHOENIX-4781.4.x-HBase-1.4.v3.patch
>
>
> `maven-deploy-plugin` is used for deploying built artifacts to repository 
> provided by `distributionManagement` tag. The name of files that need to be 
> uploaded are either derived from the pom file of the project or it generates a 
> temporary one on its own.
> For `phoenix-client` project, we essentially create a shaded uber jar that 
> contains all dependencies and provide the project pom file for the plugin to 
> work. `maven-jar-plugin` is disabled for the project, hence the shade plugin 
> essentially packages the jar. The final name of the shaded jar is defined as 
> `phoenix-${project.version}\-client`, which is different from how the 
> standard maven convention based on pom file (artifact and group id) is 
> `phoenix-client-${project.version}`
> This causes `maven-deploy-plugin` to fail since it is unable to find any 
> artifacts to be published.
> `maven-install-plugin` works correctly and hence it installs correct jar in 
> local repo.
> The same is effective for `phoenix-pig` project as well. However we require 
> the require jar for that project in the repo. I am not even sure why we 
> create shaded jar for that project.
> I will put up a 3 liner patch for the same.
> Any thoughts? [~sergey.soldatov] [~elserj]
> Files before change (first col is size):
> {code:java}
> 103487701 Jun 13 22:47 
> phoenix-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT-client.jar{code}
> Files after change (first col is size):
> {code:java}
> 3640 Jun 13 21:23 
> original-phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar
> 103487702 Jun 13 21:24 
> phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4781) Phoenix client project's jar naming convention causes maven-deploy-plugin to fail

2018-11-29 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4781:
--
Attachment: PHOENIX-4781.4.x-HBase-1.4.v3.patch

> Phoenix client project's jar naming convention causes maven-deploy-plugin to 
> fail
> -
>
> Key: PHOENIX-4781
> URL: https://issues.apache.org/jira/browse/PHOENIX-4781
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Attachments: PHOENIX-4781.001.patch, PHOENIX-4781.002.patch, 
> PHOENIX-4781.4.x-HBase-1.4.v3.patch
>
>
> `maven-deploy-plugin` is used for deploying built artifacts to repository 
> provided by `distributionManagement` tag. The name of files that need to be 
> uploaded are either derived from the pom file of the project or it generates a 
> temporary one on its own.
> For `phoenix-client` project, we essentially create a shaded uber jar that 
> contains all dependencies and provide the project pom file for the plugin to 
> work. `maven-jar-plugin` is disabled for the project, hence the shade plugin 
> essentially packages the jar. The final name of the shaded jar is defined as 
> `phoenix-${project.version}\-client`, which is different from how the 
> standard maven convention based on pom file (artifact and group id) is 
> `phoenix-client-${project.version}`
> This causes `maven-deploy-plugin` to fail since it is unable to find any 
> artifacts to be published.
> `maven-install-plugin` works correctly and hence it installs correct jar in 
> local repo.
> The same is effective for `phoenix-pig` project as well. However we require 
> the require jar for that project in the repo. I am not even sure why we 
> create shaded jar for that project.
> I will put up a 3 liner patch for the same.
> Any thoughts? [~sergey.soldatov] [~elserj]
> Files before change (first col is size):
> {code:java}
> 103487701 Jun 13 22:47 
> phoenix-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT-client.jar{code}
> Files after change (first col is size):
> {code:java}
> 3640 Jun 13 21:23 
> original-phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar
> 103487702 Jun 13 21:24 
> phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5050) Maven install of phoenix-client overwrites phoenix-core jar

2018-11-29 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-5050.
---
Resolution: Not A Problem

Was on an older branch, looks like this is fixed in PHOENIX-1567 , which is 
still unresolved.  Will see if I can finish any remaining work there.

> Maven install of phoenix-client overwrites phoenix-core jar
> ---
>
> Key: PHOENIX-5050
> URL: https://issues.apache.org/jira/browse/PHOENIX-5050
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Vincent Poon
>Priority: Major
>
> If I run `mvn install -pl phoenix-client`, I get:
> [INFO] --- maven-install-plugin:2.5.2:install-file (default-install) @ 
> phoenix-client ---
> [INFO] Installing 
> /Users/vincent.poon/git_public/apache_git_phoenix/phoenix-client/target/phoenix-4.13.1-HBase-1.3-client.jar
>  to 
> /Users/vincent.poon/.m2/repository/org/apache/phoenix/phoenix-core/4.13.1-HBase-1.3/phoenix-core-4.13.1-HBase-1.3.jar



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5050) Maven install of phoenix-client overwrites phoenix-core jar

2018-11-29 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5050:
-

 Summary: Maven install of phoenix-client overwrites phoenix-core 
jar
 Key: PHOENIX-5050
 URL: https://issues.apache.org/jira/browse/PHOENIX-5050
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1, 5.0.0
Reporter: Vincent Poon


If I run `mvn install -pl phoenix-client`, I get:
[INFO] --- maven-install-plugin:2.5.2:install-file (default-install) @ 
phoenix-client ---
[INFO] Installing 
/Users/vincent.poon/git_public/apache_git_phoenix/phoenix-client/target/phoenix-4.13.1-HBase-1.3-client.jar
 to 
/Users/vincent.poon/.m2/repository/org/apache/phoenix/phoenix-core/4.13.1-HBase-1.3/phoenix-core-4.13.1-HBase-1.3.jar



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5048) Index Rebuilder does not handle INDEX_STATE timestamp check for all index

2018-11-28 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5048?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reassigned PHOENIX-5048:
-

Assignee: Monani Mihir

> Index Rebuilder does not handle INDEX_STATE timestamp check for all index
> -
>
> Key: PHOENIX-5048
> URL: https://issues.apache.org/jira/browse/PHOENIX-5048
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0, 4.14.1
>Reporter: Monani Mihir
>Assignee: Monani Mihir
>Priority: Major
> Attachments: PHOENIX-5048.patch
>
>
> After the rebuilder has finished a partial index rebuild, it will check whether the index 
> state has been updated after the upper bound of the scan we use in the partial index 
> rebuild. If that happens, it will fail the index rebuild, because an index write 
> failure occurred while we were rebuilding the index.
> {code:java}
> MetaDataEndpointImpl.java#updateIndexState()
> public void updateIndexState(RpcController controller, 
> UpdateIndexStateRequest request,
> RpcCallback done) {
> ...
> // If the index status has been updated after the upper bound of the scan we 
> use
> // to partially rebuild the index, then we need to fail the rebuild because an
> // index write failed before the rebuild was complete.
> if (actualTimestamp > expectedTimestamp) {
> builder.setReturnCode(MetaDataProtos.MutationCode.UNALLOWED_TABLE_MUTATION);
> builder.setMutationTime(EnvironmentEdgeManager.currentTimeMillis());
> done.run(builder.build());
> return;
> }
> ...
> }{code}
> After the introduction of TrackingParallelWriterIndexCommitter 
> [PHOENIX-3815|https://issues.apache.org/jira/browse/PHOENIX-3815], we only 
> disable the index which got the failure. Before that, in 
> ParallelWriterIndexCommitter, we were disabling all indexes even if the index 
> failure happened for one index only. 
> Suppose the data table has 3 indexes and the above condition becomes true for the first 
> index; then we won't even check the remaining two indexes.
> {code:java}
> MetaDataRegionObserver.java#BuildIndexScheduleTask.java#run()
> for (PTable indexPTable : indexesToPartiallyRebuild) {
> String indexTableFullName = SchemaUtil.getTableName(
> indexPTable.getSchemaName().getString(),
> indexPTable.getTableName().getString());
> if (scanEndTime == latestUpperBoundTimestamp) {
> IndexUtil.updateIndexState(conn, indexTableFullName, PIndexState.ACTIVE, 0L, 
> latestUpperBoundTimestamp);
> batchExecutedPerTableMap.remove(dataPTable.getName());
> LOG.info("Making Index:" + indexPTable.getTableName() + " active after 
> rebuilding");
> } else {
> // Increment timestamp so that client sees updated disable timestamp
> IndexUtil.updateIndexState(conn, indexTableFullName, 
> indexPTable.getIndexState(), scanEndTime * signOfDisableTimeStamp, 
> latestUpperBoundTimestamp);
> Long noOfBatches = batchExecutedPerTableMap.get(dataPTable.getName());
> if (noOfBatches == null) {
> noOfBatches = 0l;
> }
> batchExecutedPerTableMap.put(dataPTable.getName(), ++noOfBatches);
> LOG.info("During Round-robin build: Successfully updated index disabled 
> timestamp for "
> + indexTableFullName + " to " + scanEndTime);
> }
> }
> {code}
>  
>  
>  
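
One possible shape of the improvement, sketched against the loop quoted above (it
reuses the snippet's names but is not the actual patch): handle the state-update
failure per index, so that one index hitting the timestamp check does not prevent the
remaining indexes from being processed.

{code:java}
// Hedged sketch only; names mirror the quoted snippet, and the batch bookkeeping is omitted.
for (PTable indexPTable : indexesToPartiallyRebuild) {
    String indexTableFullName = SchemaUtil.getTableName(
            indexPTable.getSchemaName().getString(),
            indexPTable.getTableName().getString());
    try {
        if (scanEndTime == latestUpperBoundTimestamp) {
            IndexUtil.updateIndexState(conn, indexTableFullName, PIndexState.ACTIVE, 0L,
                    latestUpperBoundTimestamp);
        } else {
            IndexUtil.updateIndexState(conn, indexTableFullName, indexPTable.getIndexState(),
                    scanEndTime * signOfDisableTimeStamp, latestUpperBoundTimestamp);
        }
    } catch (SQLException e) {
        // An UNALLOWED_TABLE_MUTATION for one index (its state changed after the scan's
        // upper bound) should not stop the remaining indexes from being checked.
        LOG.warn("Unable to update index state for " + indexTableFullName, e);
    }
}
{code}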



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5046) Race condition in disabling an index can cause an index to get out of sync

2018-11-27 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-5046.
---
Resolution: Not A Problem

Nevermind, after looking more closely with [~tdsilva], the rowlock is happening 
correctly.

> Race condition in disabling an index can cause an index to get out of sync
> --
>
> Key: PHOENIX-5046
> URL: https://issues.apache.org/jira/browse/PHOENIX-5046
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Vincent Poon
>Priority: Major
>
> Assume a row R at T0.
> If two index updates for R at T1 and T2 fail, the index might get marked 
> disabled as of T2 due to a race condition.  The partial rebuilder will 
> then rebuild as of T2.  Since T1 was never replayed, the index row at T0 is 
> not deleted, leaving an extra orphan row in the index.
> This is because in MetaDataEndpointImpl#updateIndexState, we update the 
> index state without any rowlocking, so even though we take the min of the new 
> disable timestamp and the current disable timestamp, two concurrent requests 
> can be in a race condition and succeed in disabling the index with different 
> timestamps.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5046) Race condition in disabling an index can cause an index to get out of sync

2018-11-27 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5046:
-

 Summary: Race condition in disabling an index can cause an index 
to get out of sync
 Key: PHOENIX-5046
 URL: https://issues.apache.org/jira/browse/PHOENIX-5046
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1, 5.0.0
Reporter: Vincent Poon


Assume a row R at T0.
If two index updates for R at T1 and T2 fail, the index might get marked 
disabled as of T2 due to a race condition.  The partial rebuilder will 
then rebuild as of T2.  Since T1 was never replayed, the index row at T0 is not 
deleted, leaving an extra orphan row in the index.

This is because in MetaDataEndpointImpl#updateIndexState, we update the index 
state without any rowlocking, so even though we take the min of the new disable 
timestamp and the current disable timestamp, two concurrent requests can be in 
a race condition and succeed in disabling the index with different timestamps.
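
For illustration, a minimal sketch of why a row lock matters here (the helper methods
are hypothetical, this is not Phoenix code): the disable-timestamp update is a
read-modify-write, so reading the current value and writing the merged minimum need
to happen under the same row lock.

{code:java}
import java.io.IOException;
import org.apache.hadoop.hbase.regionserver.Region;

// Illustrative sketch only. readDisableTimestamp and writeDisableTimestamp are
// hypothetical helpers standing in for the SYSTEM.CATALOG header-row access.
final class DisableTimestampUpdater {
    void mergeDisableTimestamp(Region region, byte[] indexHeaderRowKey, long newDisableTs)
            throws IOException {
        // Hold the row lock so the read and the write of the merged minimum are atomic
        // with respect to a concurrent disable request for the same index.
        Region.RowLock rowLock = region.getRowLock(indexHeaderRowKey, false);
        try {
            long current = readDisableTimestamp(region, indexHeaderRowKey);
            writeDisableTimestamp(region, indexHeaderRowKey, Math.min(current, newDisableTs));
        } finally {
            rowLock.release();
        }
    }

    private long readDisableTimestamp(Region region, byte[] row) throws IOException {
        throw new UnsupportedOperationException("hypothetical helper");
    }

    private void writeDisableTimestamp(Region region, byte[] row, long ts) throws IOException {
        throw new UnsupportedOperationException("hypothetical helper");
    }
}
{code}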



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5005) Server-side delete / upsert-select potentially blocked after a split

2018-11-16 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-5005.
---
   Resolution: Resolved
Fix Version/s: 4.15.0

Pushed to 4.x branches.  It seems the 5.x branch no longer has a preSplit hook, 
so not applying it there.

> Server-side delete / upsert-select potentially blocked after a split
> 
>
> Key: PHOENIX-5005
> URL: https://issues.apache.org/jira/browse/PHOENIX-5005
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-5005.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5005.4.x-HBase-1.4.v2.patch, PHOENIX-5005.4.x-HBase-1.4.v3.patch
>
>
> After PHOENIX-4214, we stop inbound writes after a split is requested, to 
> avoid split starvation.
> However, it seems there can be edge cases, depending on the split policy, 
> where a split is not retried.  For example, IncreasingToUpperBoundSplitPolicy 
> relies on the count of regions, and balancer movement of regions at t1 could 
> make it such that the SplitPolicy triggers at t0 but not t2.
> However, after the first split request, in UngroupedAggregateRegionObserver 
> the flag to block inbound writes is flipped indefinitely.
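
A simplified sketch of the idea behind the fix (field and method names are
illustrative only, this is not the patch itself): the write-blocking flag needs a path
that clears it when the split fails or is never retried.

{code:java}
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical gate; the real logic lives in UngroupedAggregateRegionObserver.
public class SplitWriteGate {
    private final AtomicBoolean splitRequested = new AtomicBoolean(false);

    void onSplitRequested() {
        // Stop accepting server-side writes so the requested split can proceed.
        splitRequested.set(true);
    }

    // If the split is rolled back or never retried (e.g. the split policy no longer
    // triggers after the balancer moved regions), the gate must be reopened; otherwise
    // server-side deletes / upsert-selects stay blocked indefinitely.
    void onSplitFailedOrAbandoned() {
        splitRequested.set(false);
    }

    boolean writesBlocked() {
        return splitRequested.get();
    }
}
{code}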



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5019) Index mutations created by synchronous index builds will have wrong timestamps

2018-11-14 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5019:
-

 Summary: Index mutations created by synchronous index builds will 
have wrong timestamps
 Key: PHOENIX-5019
 URL: https://issues.apache.org/jira/browse/PHOENIX-5019
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 4.14.1
Reporter: Vincent Poon


Similar to PHOENIX-5018, if we do a synchronous index build, since it's doing 
an UpsertSelect, the timestamp of an index mutation will have the current 
wallclock time instead of matching up with the data table counterpart's timestamp.
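
A hedged illustration of the intended behaviour (assumed class and method names, not
the actual rebuild code path): the index mutation should carry the data cell's
timestamp rather than the current wallclock time.

{code:java}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Put;

// Illustrative only; the real index maintenance code differs.
final class IndexPutBuilder {
    static Put buildIndexPut(byte[] indexRowKey, byte[] family, byte[] qualifier,
                             byte[] value, Cell dataCell) {
        long ts = dataCell.getTimestamp();  // preserve the data table cell's timestamp
        Put put = new Put(indexRowKey, ts); // rather than defaulting to the current time
        put.addColumn(family, qualifier, ts, value);
        return put;
    }
}
{code}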



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5005) Server-side delete / upsert-select potentially blocked after a split

2018-11-08 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5005:
--
Attachment: PHOENIX-5005.4.x-HBase-1.4.v3.patch

> Server-side delete / upsert-select potentially blocked after a split
> 
>
> Key: PHOENIX-5005
> URL: https://issues.apache.org/jira/browse/PHOENIX-5005
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5005.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5005.4.x-HBase-1.4.v2.patch, PHOENIX-5005.4.x-HBase-1.4.v3.patch
>
>
> After PHOENIX-4214, we stop inbound writes after a split is requested, to 
> avoid split starvation.
> However, it seems there can be edge cases, depending on the split policy, 
> where a split is not retried.  For example, IncreasingToUpperBoundSplitPolicy 
> relies on the count of regions, and balancer movement of regions at t1 could 
> make it such that the SplitPolicy triggers at t0 but not t2.
> However, after the first split request, in UngroupedAggregateRegionObserver 
> the flag to block inbound writes is flipped indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5005) Server-side delete / upsert-select potentially blocked after a split

2018-11-08 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5005:
--
Attachment: (was: PHOENIX-5005.4.x-HBase-1.4.v2.patch)

> Server-side delete / upsert-select potentially blocked after a split
> 
>
> Key: PHOENIX-5005
> URL: https://issues.apache.org/jira/browse/PHOENIX-5005
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5005.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5005.4.x-HBase-1.4.v2.patch
>
>
> After PHOENIX-4214, we stop inbound writes after a split is requested, to 
> avoid split starvation.
> However, it seems there can be edge cases, depending on the split policy, 
> where a split is not retried.  For example, IncreasingToUpperBoundSplitPolicy 
> relies on the count of regions, and balancer movement of regions at t1 could 
> make it such that the SplitPolicy triggers at t0 but not t2.
> However, after the first split request, in UngroupedAggregateRegionObserver 
> the flag to block inbound writes is flipped indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5005) Server-side delete / upsert-select potentially blocked after a split

2018-11-08 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5005:
--
Attachment: PHOENIX-5005.4.x-HBase-1.4.v2.patch

> Server-side delete / upsert-select potentially blocked after a split
> 
>
> Key: PHOENIX-5005
> URL: https://issues.apache.org/jira/browse/PHOENIX-5005
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5005.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5005.4.x-HBase-1.4.v2.patch
>
>
> After PHOENIX-4214, we stop inbound writes after a split is requested, to 
> avoid split starvation.
> However, it seems there can be edge cases, depending on the split policy, 
> where a split is not retried.  For example, IncreasingToUpperBoundSplitPolicy 
> relies on the count of regions, and balancer movement of regions at t1 could 
> make it such that the SplitPolicy triggers at t0 but not t2.
> However, after the first split request, in UngroupedAggregateRegionObserver 
> the flag to block inbound writes is flipped indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5005) Server-side delete / upsert-select potentially blocked after a split

2018-11-08 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5005:
--
Attachment: PHOENIX-5005.4.x-HBase-1.4.v2.patch

> Server-side delete / upsert-select potentially blocked after a split
> 
>
> Key: PHOENIX-5005
> URL: https://issues.apache.org/jira/browse/PHOENIX-5005
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5005.4.x-HBase-1.4.v1.patch, 
> PHOENIX-5005.4.x-HBase-1.4.v2.patch
>
>
> After PHOENIX-4214, we stop inbound writes after a split is requested, to 
> avoid split starvation.
> However, it seems there can be edge cases, depending on the split policy, 
> where a split is not retried.  For example, IncreasingToUpperBoundSplitPolicy 
> relies on the count of regions, and balancer movement of regions at t1 could 
> make it such that the SplitPolicy triggers at t0 but not t2.
> However, after the first split request, in UngroupedAggregateRegionObserver 
> the flag to block inbound writes is flipped indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5005) Server-side delete / upsert-select potentially blocked after a split

2018-11-08 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reassigned PHOENIX-5005:
-

Assignee: Vincent Poon

> Server-side delete / upsert-select potentially blocked after a split
> 
>
> Key: PHOENIX-5005
> URL: https://issues.apache.org/jira/browse/PHOENIX-5005
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5005.4.x-HBase-1.4.v1.patch
>
>
> After PHOENIX-4214, we stop inbound writes after a split is requested, to 
> avoid split starvation.
> However, it seems there can be edge cases, depending on the split policy, 
> where a split is not retried.  For example, IncreasingToUpperBoundSplitPolicy 
> relies on the count of regions, and balancer movement of regions at t1 could 
> make it such that the SplitPolicy triggers at t0 but not t2.
> However, after the first split request, in UngroupedAggregateRegionObserver 
> the flag to block inbound writes is flipped indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5005) Server-side delete / upsert-select potentially blocked after a split

2018-11-08 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5005?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5005:
--
Attachment: PHOENIX-5005.4.x-HBase-1.4.v1.patch

> Server-side delete / upsert-select potentially blocked after a split
> 
>
> Key: PHOENIX-5005
> URL: https://issues.apache.org/jira/browse/PHOENIX-5005
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-5005.4.x-HBase-1.4.v1.patch
>
>
> After PHOENIX-4214, we stop inbound writes after a split is requested, to 
> avoid split starvation.
> However, it seems there can be edge cases, depending on the split policy, 
> where a split is not retried.  For example, IncreasingToUpperBoundSplitPolicy 
> relies on the count of regions, and balancer movement of regions at t1 could 
> make it such that the SplitPolicy triggers at t0 but not t2.
> However, after the first split request, in UngroupedAggregateRegionObserver 
> the flag to block inbound writes is flipped indefinitely.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5005) Server-side delete / upsert-select potentially blocked after a split

2018-11-07 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5005:
-

 Summary: Server-side delete / upsert-select potentially blocked 
after a split
 Key: PHOENIX-5005
 URL: https://issues.apache.org/jira/browse/PHOENIX-5005
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1
Reporter: Vincent Poon


After PHOENIX-4214, we stop inbound writes after a split is requested, to avoid 
split starvation.

However, it seems there can be edge cases, depending on the split policy, where 
a split is not retried.  For example, IncreasingToUpperBoundSplitPolicy relies 
on the count of regions, and balancer movement of regions at t1 could make it 
such that the SplitPolicy triggers at t0 but not t2.

However, after the first split request, in UngroupedAggregateRegionObserver the 
flag to block inbound writes is flipped indefinitely.
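
A minimal illustrative sketch of the pattern described above; the class, field, and method names are made up for illustration and are not the actual UngroupedAggregateRegionObserver code. The point is that once the split-requested flag is set, nothing ever clears it if the split is abandoned, so writes stay blocked; clearing it when the split attempt finishes or is abandoned would be one way out.

{code:java}
// Illustrative sketch only: not the real Phoenix coprocessor code.
import java.util.concurrent.atomic.AtomicBoolean;

public class SplitAwareRegionObserver {
    // Set when a split has been requested so server-side writes back off.
    private final AtomicBoolean splitRequested = new AtomicBoolean(false);

    void onSplitRequested() {
        splitRequested.set(true);   // blocks inbound server-side writes
    }

    // Problem: if the SplitPolicy never fires again (e.g. the region count changed),
    // the flag stays true forever and writes stay blocked.
    boolean writesBlocked() {
        return splitRequested.get();
    }

    // One possible fix: clear the flag when the split attempt completes or is abandoned.
    void onSplitFinishedOrAbandoned() {
        splitRequested.set(false);
    }
}
{code}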



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5002) Don't load or disable Indexer coprocessor for non-indexed tables

2018-11-05 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-5002.
---
Resolution: Won't Fix

> Don't load or disable Indexer coprocessor for non-indexed tables
> 
>
> Key: PHOENIX-5002
> URL: https://issues.apache.org/jira/browse/PHOENIX-5002
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.1
>Reporter: Vincent Poon
>Priority: Major
>
> It seems the Indexer coprocessor is loaded for tables even if they have no 
> indexes.
> There is some overhead such as write locking within Phoenix - we should 
> investigate whether we can avoid loading the Indexer coproc or disable it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5002) Don't load Indexer coprocessor for non-indexed tables

2018-10-31 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5002:
--
Issue Type: Improvement  (was: Bug)

> Don't load Indexer coprocessor for non-indexed tables
> -
>
> Key: PHOENIX-5002
> URL: https://issues.apache.org/jira/browse/PHOENIX-5002
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.1
>Reporter: Vincent Poon
>Priority: Major
>
> It seems the Indexer coprocessor is loaded for tables even if they have no 
> indexes.
> There is some overhead such as write locking within Phoenix - we should 
> investigate whether we can avoid loading the Indexer coproc or disable it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5002) Don't load or disable Indexer coprocessor for non-indexed tables

2018-10-31 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-5002:
--
Summary: Don't load or disable Indexer coprocessor for non-indexed tables  
(was: Don't load Indexer coprocessor for non-indexed tables)

> Don't load or disable Indexer coprocessor for non-indexed tables
> 
>
> Key: PHOENIX-5002
> URL: https://issues.apache.org/jira/browse/PHOENIX-5002
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.1
>Reporter: Vincent Poon
>Priority: Major
>
> It seems the Indexer coprocessor is loaded for tables even if they have no 
> indexes.
> There is some overhead such as write locking within Phoenix - we should 
> investigate whether we can avoid loading the Indexer coproc or disable it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5002) Don't load Indexer coprocessor for non-indexed tables

2018-10-30 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5002:
-

 Summary: Don't load Indexer coprocessor for non-indexed tables
 Key: PHOENIX-5002
 URL: https://issues.apache.org/jira/browse/PHOENIX-5002
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1
Reporter: Vincent Poon


It seems the Indexer coprocessor is loaded for tables even if they have no 
indexes.
There is some overhead such as write locking within Phoenix - we should 
investigate whether we can avoid loading the Indexer coproc or disable it.
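
A rough sketch of the improvement suggested above, assuming some lookup that can tell whether a table has index metadata; all names here are hypothetical and this is not the actual Indexer coprocessor code.

{code:java}
// Illustrative sketch only: hypothetical names, not the actual Indexer coprocessor.
public class ConditionalIndexer {

    // Hypothetical lookup of index definitions for a table.
    interface IndexMetadataCache {
        boolean hasIndexes(String tableName);
    }

    private final IndexMetadataCache metadataCache;

    ConditionalIndexer(IndexMetadataCache metadataCache) {
        this.metadataCache = metadataCache;
    }

    void preBatchMutate(String tableName, Runnable indexMaintenance) {
        // Skip the write locking and index building entirely for tables with no indexes.
        if (!metadataCache.hasIndexes(tableName)) {
            return;
        }
        indexMaintenance.run();
    }
}
{code}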



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5001) Don't issue deletes for ALTER INDEX REBUILD

2018-10-30 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-5001:
-

 Summary: Don't issue deletes for ALTER INDEX REBUILD
 Key: PHOENIX-5001
 URL: https://issues.apache.org/jira/browse/PHOENIX-5001
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.1
Reporter: Vincent Poon


For large tables, it'd be a lot more efficient to have HBase truncate the table 
instead of issuing a delete for every existing index row
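
For reference, a minimal sketch of what that could look like with the stock HBase Admin API (the index table name is just an example); truncateTable keeps the table schema and, with preserveSplits set, the existing region boundaries, so no per-row deletes are needed.

{code:java}
// Illustrative sketch: truncate the index table instead of deleting every index row.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class TruncateIndexExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            TableName index = TableName.valueOf("TEST_IND"); // example index table name
            admin.disableTable(index);          // truncate requires a disabled table
            admin.truncateTable(index, true);   // preserveSplits = true keeps region boundaries
        }
    }
}
{code}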



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4989) Include disruptor jar in shaded dependency

2018-10-30 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4989?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4989:
--
Fix Version/s: (was: 4.14.1)
   4.15.0
   4.14.2

> Include disruptor jar in shaded dependency
> --
>
> Key: PHOENIX-4989
> URL: https://issues.apache.org/jira/browse/PHOENIX-4989
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Aman Poonia
>Assignee: Aman Poonia
>Priority: Major
> Fix For: 4.15.0, 4.14.2
>
> Attachments: PHOENIX-4989-4.x-HBase-1.3.patch
>
>
> Include disruptor jar in shaded dependency as hbase has a different version 
> of the same



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4990) When stopping IndexWriter, give tasks a chance to complete

2018-10-25 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4990:
--
Attachment: PHOENIX-4990.v3.4.x-HBase-1.4.patch

> When stopping IndexWriter, give tasks a chance to complete
> --
>
> Key: PHOENIX-4990
> URL: https://issues.apache.org/jira/browse/PHOENIX-4990
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-4990.v1.4.x-HBase-1.4.patch, 
> PHOENIX-4990.v2.4.x-HBase-1.4.patch, PHOENIX-4990.v3.4.x-HBase-1.4.patch
>
>
> We've seen a race condition where an index write failure happens when a 
> coprocessor is shutting down.  Since we don't give the index writer threads a 
> chance to complete when shutdownNow() is called, there is a chance the 
> coprocessor shuts down its HTables while an index writer thread is in 
> PhoenixIndexFailurePolicy trying to disable an index, which ends up using a 
> closed HTable and receiving a RejectedExecutionException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4993) Data table region should not close RS level shared/cached connections like IndexWriter, RecoveryIndexWriter

2018-10-25 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reassigned PHOENIX-4993:
-

Assignee: Kiran Kumar Maturi

> Data table region should not close RS level shared/cached connections like 
> IndexWriter, RecoveryIndexWriter
> ---
>
> Key: PHOENIX-4993
> URL: https://issues.apache.org/jira/browse/PHOENIX-4993
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Kiran Kumar Maturi
>Assignee: Kiran Kumar Maturi
>Priority: Major
>
> Issue is related to Region Server being killed when one region is closing and 
> another region is trying to write index updates.
> When the data table region closes it will close region server level 
> cached/shared connections and it could interrupt other region 
> index/index-state update.
> -- Region1: Closing
> {code:java}
> TrackingParallellWriterIndexCommitter#stop() {
> this.retryingFactory.shutdown();
> this.noRetriesFactory.shutdown();
> }{code}
> closes the cached connections calling 
> CoprocessorHConnectionTableFactory#shutdown() in ServerUtil.java
>  
> -- Region2: Writing index updates
> Index updates fail as connections are closed, which leads to 
> RejectedExecutionException/Connection being null. This triggers 
> PhoenixIndexFailurePolicy#handleFailureWithExceptions, which tries to get the 
> syscat table using the cached connections. Here it will not be able to 
> reach SYSCAT, so we will trigger KillServerOnFailurePolicy.
> CoprocessorHConnectionTableFactory#getTable()
>  
>  
> {code:java}
> if (connection == null || connection.isClosed()) {
> throw new IllegalArgumentException("Connection is null or closed.");
> }{code}
>  
>  
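
One generic way to avoid the scenario above is to reference-count the region-server-level shared connection so that a single region closing does not tear it down for everyone else. This is only a sketch of the idea with made-up names, not the actual ServerUtil / CoprocessorHConnectionTableFactory code.

{code:java}
// Illustrative sketch only: a ref-counted holder so one region's close() does not
// close a shared connection that other regions still use. Names are hypothetical.
import java.util.function.Supplier;

public class SharedConnectionHolder {
    private int refCount;
    private AutoCloseable connection; // stands in for the cached region-server connection

    synchronized AutoCloseable acquire(Supplier<AutoCloseable> factory) {
        if (connection == null) {
            connection = factory.get();
        }
        refCount++;
        return connection;
    }

    synchronized void release() throws Exception {
        // Only the last region to release actually closes the shared connection.
        if (--refCount == 0 && connection != null) {
            connection.close();
            connection = null;
        }
    }
}
{code}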



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4992) Handle StaleRegionBoundaryException in PhoenixRecordReader for MR jobs

2018-10-24 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4992:
--
Description: 
Found this when running IndexToolIT:
If a table region is offline while the PhoenixRecordReader runs, it hits a 
StaleRegionBoundaryException and the job fails.


2018-10-24 16:02:07,043 ERROR [LocalJobRunner Map Task Executor #0] 
org.apache.phoenix.mapreduce.PhoenixRecordReader(177):  Error [ERROR 1108 
(XCL08): Cache of region boundaries are out of date.] occurred while iterating 
over the resultset. 
2018-10-24 16:02:07,047 WARN  [Thread-992] 
org.apache.hadoop.mapred.LocalJobRunner$Job(560): job_local2024601432_0002
java.lang.Exception: java.lang.RuntimeException: 
org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 
(XCL08): Cache of region boundaries are out of date.
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.RuntimeException: 
org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 
(XCL08): Cache of region boundaries are out of date.
at 
org.apache.phoenix.mapreduce.PhoenixRecordReader.nextKeyValue(PhoenixRecordReader.java:178)
at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)

  was:
If a table region is offline while the PhoenixRecordReader runs, it hits a 
StaleRegionBoundaryException and the job fails.

2018-10-24 16:02:07,043 ERROR [LocalJobRunner Map Task Executor #0] 
org.apache.phoenix.mapreduce.PhoenixRecordReader(177):  Error [ERROR 1108 
(XCL08): Cache of region boundaries are out of date.] occurred while iterating 
over the resultset. 
2018-10-24 16:02:07,047 WARN  [Thread-992] 
org.apache.hadoop.mapred.LocalJobRunner$Job(560): job_local2024601432_0002
java.lang.Exception: java.lang.RuntimeException: 
org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 
(XCL08): Cache of region boundaries are out of date.
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.RuntimeException: 
org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 
(XCL08): Cache of region boundaries are out of date.
at 
org.apache.phoenix.mapreduce.PhoenixRecordReader.nextKeyValue(PhoenixRecordReader.java:178)
at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)


> Handle StaleRegionBoundaryException in PhoenixRecordReader for MR jobs
> --
>
> Key: PHOENIX-4992
> URL: https://issues.apache.org/jira/browse/PHOENIX-4992
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Priority: Major
>
> Found this when running IndexToolIT:
> If a table region is offline while the PhoenixRecordReader runs, it hits a 
> StaleRegionBoundaryException and the job fails.
> 2018-10-24 16:02:07,043 ERROR [LocalJobRunner Map Task Executor #0] 
> org.apache.phoenix.mapreduce.PhoenixRecordReader(177):  Error [ERROR 1108 
> (XCL08): Cache of region boundaries are out of date.] occurred while 
> iterating over the resultset. 
> 2018-10-24 16:02:07,047 WARN  [Thread-992] 
> org.apache.hadoop.mapred.LocalJobRunner$Job(560): job_local2024601432_0002
> java.lang.Exception: java.lang.RuntimeException: 
> org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 
> (XCL08): Cache of region boundaries are out of date.
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
> Caused by: java.lang.RuntimeException: 
> org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 
> (XCL08): Cache of region boundaries are out of date.
>   at 
> org.apache.phoenix.mapreduce.PhoenixRecordReader.nextKeyValue(PhoenixRecordReader.java:178)
>   at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4992) Handle StaleRegionBoundaryException in PhoenixRecordReader for MR jobs

2018-10-24 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4992?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4992:
--
Summary: Handle StaleRegionBoundaryException in PhoenixRecordReader for MR 
jobs  (was: Handle StaleRegionBoundaryException in PhoenixRecordReader)

> Handle StaleRegionBoundaryException in PhoenixRecordReader for MR jobs
> --
>
> Key: PHOENIX-4992
> URL: https://issues.apache.org/jira/browse/PHOENIX-4992
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Priority: Major
>
> If a table region is offline while the PhoenixRecordReader runs, it hits a 
> StaleRegionBoundaryException and the job fails.
> 2018-10-24 16:02:07,043 ERROR [LocalJobRunner Map Task Executor #0] 
> org.apache.phoenix.mapreduce.PhoenixRecordReader(177):  Error [ERROR 1108 
> (XCL08): Cache of region boundaries are out of date.] occurred while 
> iterating over the resultset. 
> 2018-10-24 16:02:07,047 WARN  [Thread-992] 
> org.apache.hadoop.mapred.LocalJobRunner$Job(560): job_local2024601432_0002
> java.lang.Exception: java.lang.RuntimeException: 
> org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 
> (XCL08): Cache of region boundaries are out of date.
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
>   at 
> org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
> Caused by: java.lang.RuntimeException: 
> org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 
> (XCL08): Cache of region boundaries are out of date.
>   at 
> org.apache.phoenix.mapreduce.PhoenixRecordReader.nextKeyValue(PhoenixRecordReader.java:178)
>   at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4992) Handle StaleRegionBoundaryException in PhoenixRecordReader

2018-10-24 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-4992:
-

 Summary: Handle StaleRegionBoundaryException in PhoenixRecordReader
 Key: PHOENIX-4992
 URL: https://issues.apache.org/jira/browse/PHOENIX-4992
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Vincent Poon


If a table region is offline while the PhoenixRecordReader runs, it hits a 
StaleRegionBoundaryException and the job fails.

2018-10-24 16:02:07,043 ERROR [LocalJobRunner Map Task Executor #0] 
org.apache.phoenix.mapreduce.PhoenixRecordReader(177):  Error [ERROR 1108 
(XCL08): Cache of region boundaries are out of date.] occurred while iterating 
over the resultset. 
2018-10-24 16:02:07,047 WARN  [Thread-992] 
org.apache.hadoop.mapred.LocalJobRunner$Job(560): job_local2024601432_0002
java.lang.Exception: java.lang.RuntimeException: 
org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 
(XCL08): Cache of region boundaries are out of date.
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at 
org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Caused by: java.lang.RuntimeException: 
org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108 
(XCL08): Cache of region boundaries are out of date.
at 
org.apache.phoenix.mapreduce.PhoenixRecordReader.nextKeyValue(PhoenixRecordReader.java:178)
at 
org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
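
A self-contained sketch of the kind of handling the title asks for: catch the stale-boundary failure in the record reader's read loop and refresh the scanner instead of failing the whole MR task. The helper names and the retry budget are assumptions, not the actual PhoenixRecordReader code; the exception is matched by simple class name so the sketch compiles without Phoenix on the classpath.

{code:java}
// Illustrative sketch only: retry a record-reader step when the region boundary
// cache is stale, instead of failing the MR task.
import java.io.IOException;
import java.util.concurrent.Callable;

public final class StaleBoundaryRetry {

    public static boolean nextWithRetry(Callable<Boolean> readNext,
                                        Runnable refreshScanner) throws IOException {
        final int maxRetries = 3; // example retry budget
        for (int attempt = 0; attempt < maxRetries; attempt++) {
            try {
                return readNext.call();
            } catch (Exception e) {
                Throwable cause = e.getCause() != null ? e.getCause() : e;
                if ("StaleRegionBoundaryCacheException".equals(cause.getClass().getSimpleName())) {
                    refreshScanner.run(); // re-resolve region boundaries, reopen the scan
                    continue;
                }
                throw new IOException(e);
            }
        }
        throw new IOException("Region boundaries still stale after " + maxRetries + " retries");
    }
}
{code}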



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4977) Make KillServerOnFailurePolicy a configurable option in PhoenixIndexFailurePolicy

2018-10-23 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4977:
--
Attachment: PHOENIX-4977.v2.4.x-HBase-1.4.patch

> Make KillServerOnFailurePolicy a configurable option in 
> PhoenixIndexFailurePolicy
> -
>
> Key: PHOENIX-4977
> URL: https://issues.apache.org/jira/browse/PHOENIX-4977
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-4977.v1.4.x-HBase-1.4.patch, 
> PHOENIX-4977.v2.4.x-HBase-1.4.patch
>
>
> Currently PhoenixIndexFailurePolicy, which is the default policy, delegates 
> to KillServerOnFailurePolicy.  This is hardcoded in the constructor.  
> Apparently this was added for a specific use case, 
> BLOCK_DATA_TABLE_WRITES_ON_WRITE_FAILURE, and the policy itself derives from 
> the days where forcing a RS kill was in effect the way to 'rebuild' the index 
> via WAL replay.
> There are still cases where it's applicable, such as when Syscat itself 
> cannot be updated in order to e.g. disable an index.  However, killing the RS 
> may be too aggressive for some, who might prefer a temporarily out of sync 
> index to a potentially cascading wave of aborts.
> We should add a config option to control this.
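
A small sketch of what the config option could look like; the property name "phoenix.index.failure.policy.killserver" and the policy interface here are hypothetical, chosen only to illustrate selecting the delegate behavior from configuration rather than hard-coding it.

{code:java}
// Illustrative sketch only: choose the delegate failure behavior from configuration
// instead of hard-coding the kill-server policy. The property name is hypothetical.
import org.apache.hadoop.conf.Configuration;

public class ConfigurableFailurePolicyChooser {

    interface FailurePolicy {
        void handleFailure(Throwable cause);
    }

    static FailurePolicy choose(Configuration conf) {
        boolean killServer = conf.getBoolean("phoenix.index.failure.policy.killserver", true);
        if (killServer) {
            // Old behavior: abort the region server so WAL replay rebuilds the index.
            return cause -> { throw new RuntimeException("Aborting region server", cause); };
        }
        // Softer behavior: log and accept a temporarily out-of-sync index.
        return cause -> System.err.println("Index write failed, index left for rebuild: " + cause);
    }
}
{code}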



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4990) When stopping IndexWriter, give tasks a chance to complete

2018-10-23 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4990:
--
Attachment: PHOENIX-4990.v2.4.x-HBase-1.4.patch

> When stopping IndexWriter, give tasks a chance to complete
> --
>
> Key: PHOENIX-4990
> URL: https://issues.apache.org/jira/browse/PHOENIX-4990
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-4990.v1.4.x-HBase-1.4.patch, 
> PHOENIX-4990.v2.4.x-HBase-1.4.patch
>
>
> We've seen a race condition where an index write failure happens when a 
> coprocessor is shutting down.  Since we don't give the index writer threads a 
> chance to complete when shutdownNow() is called, there is a chance the 
> coprocessor shuts down its HTables while an index writer thread is in 
> PhoenixIndexFailurePolicy trying to disable an index, which ends up using a 
> closed HTable and receiving a RejectedExecutionException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4977) Make KillServerOnFailurePolicy a configurable option in PhoenixIndexFailurePolicy

2018-10-23 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4977:
--
Attachment: PHOENIX-4977.v1.4.x-HBase-1.4.patch

> Make KillServerOnFailurePolicy a configurable option in 
> PhoenixIndexFailurePolicy
> -
>
> Key: PHOENIX-4977
> URL: https://issues.apache.org/jira/browse/PHOENIX-4977
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-4977.v1.4.x-HBase-1.4.patch
>
>
> Currently PhoenixIndexFailurePolicy, which is the default policy, delegates 
> to KillServerOnFailurePolicy.  This is hardcoded in the constructor.  
> Apparently this was added for a specific use case, 
> BLOCK_DATA_TABLE_WRITES_ON_WRITE_FAILURE, and the policy itself derives from 
> the days where forcing a RS kill was in effect the way to 'rebuild' the index 
> via WAL replay.
> There are still cases where it's applicable, such as when Syscat itself 
> cannot be updated in order to e.g. disable an index.  However, killing the RS 
> may be too aggressive for some, who might prefer a temporarily out of sync 
> index to a potentially cascading wave of aborts.
> We should add a config option to control this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4977) Make KillServerOnFailurePolicy a configurable option in PhoenixIndexFailurePolicy

2018-10-23 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reassigned PHOENIX-4977:
-

Assignee: Vincent Poon

> Make KillServerOnFailurePolicy a configurable option in 
> PhoenixIndexFailurePolicy
> -
>
> Key: PHOENIX-4977
> URL: https://issues.apache.org/jira/browse/PHOENIX-4977
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
>
> Currently PhoenixIndexFailurePolicy, which is the default policy, delegates 
> to KillServerOnFailurePolicy.  This is hardcoded in the constructor.  
> Apparently this was added for a specific use case, 
> BLOCK_DATA_TABLE_WRITES_ON_WRITE_FAILURE, and the policy itself derives from 
> the days where forcing a RS kill was in effect the way to 'rebuild' the index 
> via WAL replay.
> There are still cases where it's applicable, such as when Syscat itself 
> cannot be updated in order to e.g. disable an index.  However, killing the RS 
> may be too aggressive for some, who might prefer a temporarily out of sync 
> index to a potentially cascading wave of aborts.
> We should add a config option to control this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4990) When stopping IndexWriter, give tasks a chance to complete

2018-10-23 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4990?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4990:
--
Attachment: PHOENIX-4990.v1.4.x-HBase-1.4.patch

> When stopping IndexWriter, give tasks a chance to complete
> --
>
> Key: PHOENIX-4990
> URL: https://issues.apache.org/jira/browse/PHOENIX-4990
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Attachments: PHOENIX-4990.v1.4.x-HBase-1.4.patch
>
>
> We've seen a race condition where an index write failure happens when a 
> coprocessor is shutting down.  Since we don't give the index writer threads a 
> chance to complete when shutdownNow() is called, there is a chance the 
> coprocessor shuts down its HTables while an index writer thread is in 
> PhoenixIndexFailurePolicy trying to disable an index, which ends up using a 
> closed HTable and receiving a RejectedExecutionException.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4990) When stopping IndexWriter, give tasks a chance to complete

2018-10-23 Thread Vincent Poon (JIRA)
Vincent Poon created PHOENIX-4990:
-

 Summary: When stopping IndexWriter, give tasks a chance to complete
 Key: PHOENIX-4990
 URL: https://issues.apache.org/jira/browse/PHOENIX-4990
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.0
Reporter: Vincent Poon
Assignee: Vincent Poon


We've seen a race condition where an index write failure happens when a 
coprocessor is shutting down.  Since we don't give the index writer threads a 
chance to complete when shutdownNow() is called, there is a chance the 
coprocessor shuts down its HTables while an index writer thread is in 
PhoenixIndexFailurePolicy trying to disable an index, which ends up using a 
closed HTable and receiving a RejectedExecutionException.
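
A minimal sketch of the graceful-stop pattern the summary asks for, independent of the actual IndexWriter internals: drain the writer pool with a bounded wait before falling back to shutdownNow(). The 30-second timeout is just an example value.

{code:java}
// Illustrative sketch: let in-flight index writes finish before forcing shutdownNow().
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public final class GracefulStop {

    static void stop(ExecutorService writerPool) {
        writerPool.shutdown(); // stop accepting new tasks, let running ones finish
        try {
            // Example timeout; a real value would come from configuration.
            if (!writerPool.awaitTermination(30, TimeUnit.SECONDS)) {
                writerPool.shutdownNow(); // give up and interrupt the stragglers
            }
        } catch (InterruptedException e) {
            writerPool.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}
{code}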



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4844) Refactor queryserver tests to use QueryServerTestUtil

2018-10-22 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4844:
--
Fix Version/s: (was: 4.14.1)

> Refactor queryserver tests to use QueryServerTestUtil
> -
>
> Key: PHOENIX-4844
> URL: https://issues.apache.org/jira/browse/PHOENIX-4844
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Minor
> Fix For: 4.15.0, 5.1.0
>
>
> See related JIRA: PHOENIX-4750



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4960) Write to table with global index failed if meta of index changed (split, move, etc)

2018-10-22 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4960:
--
Attachment: PHOENIX-4960.v1.master.patch

> Write to table with global index failed if meta of index changed (split, 
> move, etc)
> ---
>
> Key: PHOENIX-4960
> URL: https://issues.apache.org/jira/browse/PHOENIX-4960
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Alex Batyrshin
>Assignee: Vincent Poon
>Priority: Critical
> Attachments: PHOENIX-4960.v1.4.x-HBase-1.3.patch, 
> PHOENIX-4960.v1.master.patch, hbase-site-stage.xml
>
>
> Tested on released 4.14.0 and on commit 
> [https://github.com/apache/phoenix/commit/52893c240e4f24e2bfac0834d35205f866c16ed8]
> Reproduce steps:
> 1. Create table with global index
> {code:sql}
> 0: jdbc:phoenix:127.0.0.1> CREATE TABLE test_meta_change (
> . . . . . . . . . . . . .> pk VARCHAR NOT NULL PRIMARY KEY,
> . . . . . . . . . . . . .> data VARCHAR
> . . . . . . . . . . . . .> );
> No rows affected (1.359 seconds)
> 0: jdbc:phoenix:127.0.0.1> CREATE INDEX test_meta_change_idx ON 
> test_meta_change (data);
> No rows affected (6.333 seconds)
> {code}
> 2. Check that upsert works
> {code:sql}
> 0: jdbc:phoenix:127.0.0.1> UPSERT INTO test_meta_change VALUES ('a', 'foo');
> 1 row affected (0.186 seconds)
> {code}
> 3. Move index region via HBase shell
> {code:java}
> hbase(main):005:0> move 'index-region-hash'
> 0 row(s) in 0.1330 seconds
> {code}
> 4. Try to upsert again
> {code:sql}
> 0: jdbc:phoenix:127.0.0.1> UPSERT INTO test_meta_change VALUES ('b', 'bar');
> 18/10/08 03:32:10 WARN client.AsyncProcess: #1, table=TEST_META_CHANGE, 
> attempt=1/35 failed=1ops, last exception: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: ERROR 1121 (XCL21): Write to 
> the index failed.  disableIndexOnFailure=true, Failed
> to write to multiple index tables: [TEST_META_CHANGE_IDX] 
> ,serverTimestamp=1538958729912,
> at 
> org.apache.phoenix.util.ServerUtil.wrapInDoNotRetryIOException(ServerUtil.java:265)
> at 
> org.apache.phoenix.index.PhoenixIndexFailurePolicy.handleFailure(PhoenixIndexFailurePolicy.java:165)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:161)
> at 
> org.apache.phoenix.hbase.index.write.IndexWriter.writeAndKillYourselfOnFailure(IndexWriter.java:145)
> at 
> org.apache.phoenix.hbase.index.Indexer.doPostWithExceptions(Indexer.java:620)
> at org.apache.phoenix.hbase.index.Indexer.doPost(Indexer.java:595)
> at 
> org.apache.phoenix.hbase.index.Indexer.postBatchMutateIndispensably(Indexer.java:578)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$37.call(RegionCoprocessorHost.java:1048)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1711)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1789)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1745)
> at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postBatchMutateIndispensably(RegionCoprocessorHost.java:1044)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchMutation(HRegion.java:3646)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3108)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.batchMutate(HRegion.java:3050)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:916)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:844)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2405)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:36621)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2359)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:124)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:297)
> at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:277)
> Caused by: java.sql.SQLException: ERROR 1121 (XCL21): Write to the index 
> failed.  disableIndexOnFailure=true, Failed to write to multiple index 
> tables: [TEST_META_CHANGE_IDX]
> at 
> org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:494)
> at 
> 

[jira] [Resolved] (PHOENIX-4980) Mismatch in row counts between data and index tables while multiple clients try to upsert data

2018-10-22 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon resolved PHOENIX-4980.
---
   Resolution: Duplicate
Fix Version/s: 4.14.1

> Mismatch in row counts between data and index tables while multiple clients 
> try to upsert data
> --
>
> Key: PHOENIX-4980
> URL: https://issues.apache.org/jira/browse/PHOENIX-4980
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Abhishek Talluri
>Assignee: Vincent Poon
>Priority: Critical
>  Labels: LocalIndex, globalMutableSecondaryIndex, secondaryIndex
> Fix For: 4.14.1
>
> Attachments: TestSecIndex.java
>
>
> Phoenix table has A,B,C,D,E as its columns and A as the primary key for the 
> table.
> CREATE TABLE TEST (A VARCHAR NOT NULL PRIMARY KEY, B VARCHAR, C VARCHAR, D 
> VARCHAR , E VARCHAR);
> Global index is built on D & E
> CREATE INDEX TEST_IND on TEST (D,E);
> Client 1 updates A,B,C whereas client 2 updates A,B,D,E
> I used the phoenix 5.14.2-1.cdh5.14.2.p0.3 parcel to test this issue. Ran with 
> two threads that load data using upserts, reading from the csv file. Within 10 
> iterations, I could observe the difference in the row counts between the data 
> table and the index table. Attaching the code used to test this behavior. This 
> issue exists in both Global and Local indexes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4988) Incorrect index rowkey generated when updating only non-indexed columns after a delete

2018-10-22 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4988:
--
Attachment: PHOENIX-4988.v1.4.x-HBase-1.4.patch

> Incorrect index rowkey generated when updating only non-indexed columns after 
> a delete
> --
>
> Key: PHOENIX-4988
> URL: https://issues.apache.org/jira/browse/PHOENIX-4988
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Critical
> Attachments: PHOENIX-4988.v1.4.x-HBase-1.4.patch, 
> PHOENIX-4988.v1.master.patch
>
>
> The following steps result in an incorrect index rowkey being generated after 
> an index update to a non-indexed column. 
> create table test (k VARCHAR NOT NULL PRIMARY KEY, v1 VARCHAR, v2 VARCHAR)
> create index test_ind ON test (v2)
> upsert into test (k,v1,v2) VALUES ('testKey','v1_1','v2_1');
> delete from test;
> upsert into test (k,v1,v2) VALUES ('testKey','v1_2','v2_2');
> delete from test;
> upsert into test (k,v1) VALUES ('testKey','v1_3');



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4988) Incorrect index rowkey generated when updating only non-indexed columns after a delete

2018-10-22 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4988:
--
Attachment: PHOENIX-4988.v1.master.patch

> Incorrect index rowkey generated when updating only non-indexed columns after 
> a delete
> --
>
> Key: PHOENIX-4988
> URL: https://issues.apache.org/jira/browse/PHOENIX-4988
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Critical
> Attachments: PHOENIX-4988.v1.master.patch
>
>
> The following steps result in an incorrect index rowkey being generated after 
> an index update to a non-indexed column. 
> create table test (k VARCHAR NOT NULL PRIMARY KEY, v1 VARCHAR, v2 VARCHAR)
> create index test_ind ON test (v2)
> upsert into test (k,v1,v2) VALUES ('testKey','v1_1','v2_1');
> delete from test;
> upsert into test (k,v1,v2) VALUES ('testKey','v1_2','v2_2');
> delete from test;
> upsert into test (k,v1) VALUES ('testKey','v1_3');



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4980) Mismatch in row counts between data and index tables while multiple clients try to upsert data

2018-10-22 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4980:
--
Priority: Critical  (was: Major)

> Mismatch in row counts between data and index tables while multiple clients 
> try to upsert data
> --
>
> Key: PHOENIX-4980
> URL: https://issues.apache.org/jira/browse/PHOENIX-4980
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Abhishek Talluri
>Assignee: Vincent Poon
>Priority: Critical
>  Labels: LocalIndex, globalMutableSecondaryIndex, secondaryIndex
> Attachments: TestSecIndex.java
>
>
> Phoenix table has A,B,C,D,E as its columns and A as the primary key for the 
> table.
> CREATE TABLE TEST (A VARCHAR NOT NULL PRIMARY KEY, B VARCHAR, C VARCHAR, D 
> VARCHAR , E VARCHAR);
> Global index is built on D & E
> CREATE INDEX TEST_IND on TEST (D,E);
> Client 1 updates A,B,C whereas client 2 updates A,B,D,E
> I used the phoenix 5.14.2-1.cdh5.14.2.p0.3 parcel to test this issue. Ran with 
> two threads that load data using upserts, reading from the csv file. Within 10 
> iterations, I could observe the difference in the row counts between the data 
> table and the index table. Attaching the code used to test this behavior. This 
> issue exists in both Global and Local indexes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4988) Incorrect index rowkey generated when updating only non-indexed columns after a delete

2018-10-22 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4988:
--
Priority: Critical  (was: Major)

> Incorrect index rowkey generated when updating only non-indexed columns after 
> a delete
> --
>
> Key: PHOENIX-4988
> URL: https://issues.apache.org/jira/browse/PHOENIX-4988
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Critical
> Attachments: PHOENIX-4988.v1.master.patch
>
>
> The following steps result in an incorrect index rowkey being generated after 
> an index update to a non-indexed column. 
> create table test (k VARCHAR NOT NULL PRIMARY KEY, v1 VARCHAR, v2 VARCHAR)
> create index test_ind ON test (v2)
> upsert into test (k,v1,v2) VALUES ('testKey','v1_1','v2_1');
> delete from test;
> upsert into test (k,v1,v2) VALUES ('testKey','v1_2','v2_2');
> delete from test;
> upsert into test (k,v1) VALUES ('testKey','v1_3');



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4988) Incorrect index rowkey generated when updating only non-indexed columns after a delete

2018-10-19 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reassigned PHOENIX-4988:
-

Assignee: Vincent Poon

> Incorrect index rowkey generated when updating only non-indexed columns after 
> a delete
> --
>
> Key: PHOENIX-4988
> URL: https://issues.apache.org/jira/browse/PHOENIX-4988
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
>
> The following steps result in an incorrect index rowkey being generated after 
> an index update to a non-indexed column. 
> create table test (k VARCHAR NOT NULL PRIMARY KEY, v1 VARCHAR, v2 VARCHAR)
> create index test_ind ON test (v2)
> upsert into test (k,v1,v2) VALUES ('testKey','v1_1','v2_1');
> delete from test;
> upsert into test (k,v1,v2) VALUES ('testKey','v1_2','v2_2');
> delete from test;
> upsert into test (k,v1) VALUES ('testKey','v1_3');



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

