[jira] [Commented] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread James Taylor (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513500#comment-16513500
 ] 

James Taylor commented on PHOENIX-4785:
---

Is this an issue for 4.x branches too?

> Unable to write to table if index is made active during retry
> -
>
> Key: PHOENIX-4785
> URL: https://issues.apache.org/jira/browse/PHOENIX-4785
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Romil Choksi
>Priority: Blocker
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4785_test.patch
>
>
> After PHOENIX-4130, we are unable to write to a table if an index is made 
> ACTIVE during the retry, because the client timestamp is not cleared when the 
> table state changes from PENDING_DISABLE to ACTIVE, even if our policy is not 
> to block writes on the data table when an index write fails.
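The failure mode described above can be illustrated with a small sketch. All names below (IndexMeta, onRetrySuccess, and so on) are hypothetical, not Phoenix's actual code; the point is that the PENDING_DISABLE to ACTIVE transition must also reset the recorded disable timestamp, or later writes still see a non-zero timestamp and are rejected.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical index states, mirroring the ones named in the issue.
enum IndexState { ACTIVE, PENDING_DISABLE, DISABLE }

class IndexMeta {
    IndexState state = IndexState.ACTIVE;
    final AtomicLong disableTimestamp = new AtomicLong(0L);

    // An index write failed: mark the index pending-disable and record when.
    void onWriteFailure(long ts) {
        state = IndexState.PENDING_DISABLE;
        disableTimestamp.set(ts);
    }

    // The retry succeeded: make the index ACTIVE again.
    void onRetrySuccess() {
        state = IndexState.ACTIVE;
        // The bug amounts to omitting this reset: a stale non-zero
        // disableTimestamp then blocks subsequent writes to the data table.
        disableTimestamp.set(0L);
    }

    boolean writesAllowed() {
        return state == IndexState.ACTIVE && disableTimestamp.get() == 0L;
    }
}
```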



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4784) Downloads page on website should list xsums/sigs for "active" releases

2018-06-15 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4784?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514072#comment-16514072
 ] 

Josh Elser commented on PHOENIX-4784:
-

Need to use [https://www.apache.org/dist/phoenix/] and not dist.a.o for the 
link to sigs/xsums

> Downloads page on website should list xsums/sigs for "active" releases
> --
>
> Key: PHOENIX-4784
> URL: https://issues.apache.org/jira/browse/PHOENIX-4784
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Blocker
>
> I was made aware that our downloads page only links to the closer.cgi script 
> and does not proactively point users towards the xsums+sigs hosted (only) on 
> dist.a.o.
> We need to update our website so that we can be confident we have shown users 
> how to validate the releases they download from third-party mirrors.
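A sketch of the verification steps such a page might walk users through. Artifact and release names are placeholders, not actual Phoenix release files; the checksum step is demonstrated on a local stand-in file.

```shell
# Checksums and signatures must come from https://www.apache.org/dist/phoenix/
# (never from a third-party mirror). Placeholder fetches:
#   curl -LO https://www.apache.org/dist/phoenix/<release>/src/<artifact>.tar.gz.sha512
#   curl -LO https://www.apache.org/dist/phoenix/<release>/src/<artifact>.tar.gz.asc

# Checksum verification, demonstrated on a local stand-in file:
printf 'release bytes' > artifact.tar.gz
sha512sum artifact.tar.gz > artifact.tar.gz.sha512
sha512sum -c artifact.tar.gz.sha512    # prints "artifact.tar.gz: OK" on success

# Signature verification against the project KEYS file:
#   gpg --import KEYS
#   gpg --verify <artifact>.tar.gz.asc <artifact>.tar.gz
```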



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[CANCEL][VOTE] Release of Apache Phoenix 5.0.0 RC0

2018-06-15 Thread rajeshb...@apache.org
Since PHOENIX-4785 is a blocker, we will spin up another RC soon.

Thanks,
Rajeshbabu.
-- Forwarded message --
From: Ankit Singhal 
Date: Thu, Jun 14, 2018 at 4:00 PM
Subject: Re: [VOTE] Release of Apache Phoenix 5.0.0 RC0
To: dev@phoenix.apache.org


https://issues.apache.org/jira/browse/PHOENIX-4785 seems to be a blocker
for the release.

On Wed, Jun 13, 2018 at 3:37 AM rajeshb...@apache.org <
chrajeshbab...@gmail.com> wrote:

> Hello Everyone,
>
> This is a call for a vote on Apache Phoenix 5.0.0 RC0. This is the next
> major release of Phoenix, compatible with the 2.0 branch of Apache HBase
> (2.0.0+). The release includes both a source-only release and a convenience
> binary release.
>
> This release has feature parity with HBase 2.0.0.
>
> Here are a few highlights of Phoenix 5.0.0 over the recently released
> Phoenix 4.14.0:
> 1) Refactored coprocessor implementations for the new coprocessor design
> changes in HBase 2.0 [1].
> 2) Replaced many deprecated classes/interfaces related to admin, table,
> descriptor, region, regioninfo, connection, cell, scan and mapreduce with
> new ones [2].
> 3) Hive and Spark integration works with the latest versions of Hive (3.0.0)
> and Spark respectively [3][4].
>
> The source tarball, including signatures, digests, etc. can be found at:
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-5.0.0-HBase-2.0-rc0/src
>
> The binary artifacts can be found at:
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-5.0.0-HBase-2.0-rc0/bin
>
> Release artifacts are signed with the following key:
> https://pgp.mit.edu/pks/lookup?op=get&search=0x318FD86BAAEDBD7B
> https://dist.apache.org/repos/dist/dev/phoenix/KEYS
>
> The hash and tag to be voted upon:
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v5.0.0-HBase-2.0-rc0
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=shortlog;h=refs/tags/v5.0.0-HBase-2.0-rc0
>
> Vote will be open for at least 72 hours. Please vote:
>
> [ ] +1 approve
> [ ] +0 no opinion
> [ ] -1 disapprove (and reason why)
>
> [1] https://issues.apache.org/jira/browse/PHOENIX-4338
> [2] https://issues.apache.org/jira/browse/PHOENIX-4297
> [3] https://issues.apache.org/jira/browse/PHOENIX-4423
> [4] https://issues.apache.org/jira/browse/PHOENIX-4527
>
> Thanks,
> The Apache Phoenix Team
>


[jira] [Commented] (PHOENIX-4528) PhoenixAccessController checks permissions only at table level when creating views

2018-06-15 Thread Rajeshbabu Chintaguntla (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514147#comment-16514147
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-4528:
--

Committed the v3 patch to 5.x-HBase-2.0.

> PhoenixAccessController checks permissions only at table level when creating 
> views
> --
>
> Key: PHOENIX-4528
> URL: https://issues.apache.org/jira/browse/PHOENIX-4528
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Karan Mehta
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.14.0, 5.0.0
>
> Attachments: PHOENIX-4528.001.patch, PHOENIX-4528.master.001.patch, 
> PHOENIX-4528.repro-test.diff, PHOENIX-4528_5.x-HBase-2.0.patch, 
> PHOENIX-4528_5.x-HBase-2.0_v2.patch, PHOENIX-4528_5.x-HBase-2.0_v3.patch
>
>
> The {{PhoenixAccessController#preCreateTable()}} method is invoked every time 
> a user wants to create a view on a base table. The {{requireAccess()}} method 
> takes tableName as the parameter and checks for user permissions only at 
> that table level. The correct approach is to also check permissions at the 
> namespace level, since it is a larger scope than the per-table level.
> For example, if the table name is {{TEST_SCHEMA.TEST_TABLE}}, it will be 
> created as the {{TEST_SCHEMA:TEST_TABLE}} HBase table if namespace mapping is 
> enabled. View creation on this table would fail if permissions are granted 
> just on {{TEST_SCHEMA}} and not on {{TEST_TABLE}}. It works correctly if the 
> same permissions are granted at the table level too.
> FYI. [~ankit.singhal] [~twdsi...@gmail.com]
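A minimal sketch of the check the description argues for, assuming hypothetical names (this is not the PhoenixAccessController API): a namespace-level grant should satisfy a table-level requirement.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative access checker: permissions are keyed either by
// "ns:<namespace>" or by "table:<namespace>:<table>".
class AccessChecker {
    private final Map<String, Set<String>> grants = new HashMap<>();

    void grant(String scopeKey, String perm) {
        grants.computeIfAbsent(scopeKey, k -> new HashSet<>()).add(perm);
    }

    // Accept if the user holds the permission at EITHER the namespace level
    // or the table level, rather than checking the table level only.
    boolean requireAccess(String namespace, String table, String perm) {
        Set<String> nsPerms = grants.getOrDefault("ns:" + namespace, Set.of());
        Set<String> tblPerms =
                grants.getOrDefault("table:" + namespace + ":" + table, Set.of());
        return nsPerms.contains(perm) || tblPerms.contains(perm);
    }
}
```

With this shape, granting CREATE on {{TEST_SCHEMA}} alone is enough to create a view on {{TEST_SCHEMA.TEST_TABLE}}, matching the behavior the issue asks for.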



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Ankit Singhal (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514154#comment-16514154
 ] 

Ankit Singhal commented on PHOENIX-4785:


bq. Is this an issue for 4.x branches too?
Yes, I checked with master as well, so it will be an issue on all branches.

> Unable to write to table if index is made active during retry
> -
>
> Key: PHOENIX-4785
> URL: https://issues.apache.org/jira/browse/PHOENIX-4785
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Romil Choksi
>Priority: Blocker
> Fix For: 5.0.0
>
> Attachments: PHOENIX-4785_test.patch
>
>
> After PHOENIX-4130, we are unable to write to a table if an index is made 
> ACTIVE during the retry, because the client timestamp is not cleared when the 
> table state changes from PENDING_DISABLE to ACTIVE, even if our policy is not 
> to block writes on the data table when an index write fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Ankit Singhal (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ankit Singhal updated PHOENIX-4785:
---
Fix Version/s: 4.14.1

> Unable to write to table if index is made active during retry
> -
>
> Key: PHOENIX-4785
> URL: https://issues.apache.org/jira/browse/PHOENIX-4785
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Romil Choksi
>Priority: Blocker
> Fix For: 5.0.0, 4.14.1
>
> Attachments: PHOENIX-4785_test.patch
>
>
> After PHOENIX-4130, we are unable to write to a table if an index is made 
> ACTIVE during the retry, because the client timestamp is not cleared when the 
> table state changes from PENDING_DISABLE to ACTIVE, even if our policy is not 
> to block writes on the data table when an index write fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4786) Reduce log level to debug when logging new aggregate row key found and added results for scan ordered queries

2018-06-15 Thread Rajeshbabu Chintaguntla (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514169#comment-16514169
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-4786:
--

Trivial patch.

> Reduce log level to debug when logging new aggregate row key found and added 
> results for scan ordered queries
> -
>
> Key: PHOENIX-4786
> URL: https://issues.apache.org/jira/browse/PHOENIX-4786
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.0.0, 4.15.0
>
> Attachments: PHOENIX-4786.patch
>
>
> Currently we log the key value at INFO level when a new aggregate row is 
> found for scan-ordered queries. This adds a lot of overhead to the queries 
> because sometimes we may write almost all the rows into the log.
> {noformat}
> results.add(keyValue);
> if (logger.isInfoEnabled()) {
>     logger.info(LogUtil.addCustomAnnotations("Adding new aggregate row: "
>             + keyValue
>             + ",for current key "
>             + Bytes.toStringBinary(currentKey.get(), currentKey.getOffset(),
>                 currentKey.getLength()) + ", aggregated values: "
>             + Arrays.asList(rowAggregators), ScanUtil.getCustomAnnotations(scan)));
> }
> {noformat}
> {noformat}
> [root@ctr-e138-1518143905142-358323-01-10 hbase]# grep "Adding new aggregate row: " hbase-hbase-regionserver-ctr-e138-1518143905142-358323-01-10.log.* | wc -l
> 19082854
> {noformat}
> It was changed recently as part of PHOENIX-4742, so it's better to make it debug-only.
> {noformat}
> -if (logger.isDebugEnabled()) {
> -    logger.debug(LogUtil.addCustomAnnotations("Adding new aggregate row: "
> +if (logger.isInfoEnabled()) {
> +    logger.info(LogUtil.addCustomAnnotations("Adding new aggregate row: "
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4786) Reduce log level to debug when logging new aggregate row key found and added results for scan ordered queries

2018-06-15 Thread Rajeshbabu Chintaguntla (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4786:
-
Attachment: PHOENIX-4786.patch

> Reduce log level to debug when logging new aggregate row key found and added 
> results for scan ordered queries
> -
>
> Key: PHOENIX-4786
> URL: https://issues.apache.org/jira/browse/PHOENIX-4786
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.0.0, 4.15.0
>
> Attachments: PHOENIX-4786.patch
>
>
> Currently we log the key value at INFO level when a new aggregate row is 
> found for scan-ordered queries. This adds a lot of overhead to the queries 
> because sometimes we may write almost all the rows into the log.
> {noformat}
> results.add(keyValue);
> if (logger.isInfoEnabled()) {
>     logger.info(LogUtil.addCustomAnnotations("Adding new aggregate row: "
>             + keyValue
>             + ",for current key "
>             + Bytes.toStringBinary(currentKey.get(), currentKey.getOffset(),
>                 currentKey.getLength()) + ", aggregated values: "
>             + Arrays.asList(rowAggregators), ScanUtil.getCustomAnnotations(scan)));
> }
> {noformat}
> {noformat}
> [root@ctr-e138-1518143905142-358323-01-10 hbase]# grep "Adding new aggregate row: " hbase-hbase-regionserver-ctr-e138-1518143905142-358323-01-10.log.* | wc -l
> 19082854
> {noformat}
> It was changed recently as part of PHOENIX-4742, so it's better to make it debug-only.
> {noformat}
> -if (logger.isDebugEnabled()) {
> -    logger.debug(LogUtil.addCustomAnnotations("Adding new aggregate row: "
> +if (logger.isInfoEnabled()) {
> +    logger.info(LogUtil.addCustomAnnotations("Adding new aggregate row: "
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4786) Reduce log level to debug when logging new aggregate row key found and added results for scan ordered queries

2018-06-15 Thread Geoffrey Jacoby (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514173#comment-16514173
 ] 

Geoffrey Jacoby commented on PHOENIX-4786:
--

Could this be either moved to TRACE, or put behind a flag to turn it off 
completely? Some organizations forbid any customer data in logs for compliance 
reasons.
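The two options above can be sketched as follows, using java.util.logging as a stand-in and a hypothetical logRowContents flag (this is not Phoenix's actual logging code). The message supplier is only invoked when both the flag and the level allow it, so the common path pays no string-construction cost and no row data reaches the logs unless an operator opts in.

```java
import java.util.function.Supplier;
import java.util.logging.Level;
import java.util.logging.Logger;

class AggregateLogging {
    private static final Logger LOG = Logger.getLogger("phoenix.aggregate");

    // Hypothetical config flag, defaulting to "never log row contents".
    static boolean logRowContents = false;

    static void logNewAggregateRow(Supplier<String> expensiveMessage) {
        // Both conditions must hold before the message is even built:
        // the operator opted in AND the logger is at TRACE (FINEST) level.
        if (logRowContents && LOG.isLoggable(Level.FINEST)) {
            LOG.finest(expensiveMessage.get());
        }
    }
}
```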

> Reduce log level to debug when logging new aggregate row key found and added 
> results for scan ordered queries
> -
>
> Key: PHOENIX-4786
> URL: https://issues.apache.org/jira/browse/PHOENIX-4786
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.0.0, 4.15.0
>
> Attachments: PHOENIX-4786.patch
>
>
> Currently we log the key value at INFO level when a new aggregate row is 
> found for scan-ordered queries. This adds a lot of overhead to the queries 
> because sometimes we may write almost all the rows into the log.
> {noformat}
> results.add(keyValue);
> if (logger.isInfoEnabled()) {
>     logger.info(LogUtil.addCustomAnnotations("Adding new aggregate row: "
>             + keyValue
>             + ",for current key "
>             + Bytes.toStringBinary(currentKey.get(), currentKey.getOffset(),
>                 currentKey.getLength()) + ", aggregated values: "
>             + Arrays.asList(rowAggregators), ScanUtil.getCustomAnnotations(scan)));
> }
> {noformat}
> {noformat}
> [root@ctr-e138-1518143905142-358323-01-10 hbase]# grep "Adding new aggregate row: " hbase-hbase-regionserver-ctr-e138-1518143905142-358323-01-10.log.* | wc -l
> 19082854
> {noformat}
> It was changed recently as part of PHOENIX-4742, so it's better to make it debug-only.
> {noformat}
> -if (logger.isDebugEnabled()) {
> -    logger.debug(LogUtil.addCustomAnnotations("Adding new aggregate row: "
> +if (logger.isInfoEnabled()) {
> +    logger.info(LogUtil.addCustomAnnotations("Adding new aggregate row: "
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4781) Phoenix client project's jar naming convention causes maven-deploy-plugin to fail

2018-06-15 Thread Karan Mehta (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514193#comment-16514193
 ] 

Karan Mehta commented on PHOENIX-4781:
--

[~apurtell] FYI.

> Phoenix client project's jar naming convention causes maven-deploy-plugin to 
> fail
> -
>
> Key: PHOENIX-4781
> URL: https://issues.apache.org/jira/browse/PHOENIX-4781
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Karan Mehta
>Priority: Major
> Attachments: PHOENIX-4781.001.patch
>
>
> `maven-deploy-plugin` is used for deploying built artifacts to the repository 
> provided by the `distributionManagement` tag. The names of the files to be 
> uploaded are either derived from the project's pom file, or the plugin 
> generates a temporary one on its own.
> For the `phoenix-client` project, we essentially create a shaded uber jar that 
> contains all dependencies and provide the project pom file for the plugin to 
> work. `maven-jar-plugin` is disabled for the project, hence the shade plugin 
> essentially packages the jar. The final name of the shaded jar is defined as 
> `phoenix-${project.version}-client`, which differs from the standard maven 
> convention based on the pom file (artifact and group id), which would be 
> `phoenix-client-${project.version}`.
> This causes `maven-deploy-plugin` to fail, since it is unable to find any 
> artifacts to be published.
> `maven-install-plugin` works correctly, and hence installs the correct jar in 
> the local repo.
> The same applies to the `phoenix-pig` project as well; however, we do require 
> that project's jar in the repo. I am not even sure why we create a shaded jar 
> for that project.
> I will put up a 3-line patch for the same.
> Any thoughts? [~sergey.soldatov] [~elserj]
> Files before change (first col is size):
> {code:java}
> 103487701 Jun 13 22:47 
> phoenix-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT-client.jar{code}
> Files after change (first col is size):
> {code:java}
> 3640 Jun 13 21:23 
> original-phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar
> 103487702 Jun 13 21:24 
> phoenix-client-4.14.0-HBase-1.3-sfdc-1.0.14-SNAPSHOT.jar{code}
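A hedged sketch of the kind of change described above (the actual patch may differ): point the shade plugin's finalName at the standard artifactId-version form so that maven-deploy-plugin can locate the artifact it expects.

```xml
<!-- Illustrative phoenix-client pom fragment, not the actual patch: let the
     shaded jar's name follow the standard <artifactId>-<version> convention
     that maven-deploy-plugin expects, instead of phoenix-<version>-client. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <!-- Was: phoenix-${project.version}-client -->
    <finalName>phoenix-client-${project.version}</finalName>
  </configuration>
</plugin>
```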



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4786) Reduce log level to debug when logging new aggregate row key found and added results for scan ordered queries

2018-06-15 Thread Rajeshbabu Chintaguntla (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514205#comment-16514205
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-4786:
--

bq.Could this be either put in TRACE, or with a flag to turn off completely? 
Some organizations forbid any customer data in logs for compliance reasons. 
Agree. I can remove it.

> Reduce log level to debug when logging new aggregate row key found and added 
> results for scan ordered queries
> -
>
> Key: PHOENIX-4786
> URL: https://issues.apache.org/jira/browse/PHOENIX-4786
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.0.0, 4.15.0
>
> Attachments: PHOENIX-4786.patch
>
>
> Currently we log the key value at INFO level when a new aggregate row is 
> found for scan-ordered queries. This adds a lot of overhead to the queries 
> because sometimes we may write almost all the rows into the log.
> {noformat}
> results.add(keyValue);
> if (logger.isInfoEnabled()) {
>     logger.info(LogUtil.addCustomAnnotations("Adding new aggregate row: "
>             + keyValue
>             + ",for current key "
>             + Bytes.toStringBinary(currentKey.get(), currentKey.getOffset(),
>                 currentKey.getLength()) + ", aggregated values: "
>             + Arrays.asList(rowAggregators), ScanUtil.getCustomAnnotations(scan)));
> }
> {noformat}
> {noformat}
> [root@ctr-e138-1518143905142-358323-01-10 hbase]# grep "Adding new aggregate row: " hbase-hbase-regionserver-ctr-e138-1518143905142-358323-01-10.log.* | wc -l
> 19082854
> {noformat}
> It was changed recently as part of PHOENIX-4742, so it's better to make it debug-only.
> {noformat}
> -if (logger.isDebugEnabled()) {
> -    logger.debug(LogUtil.addCustomAnnotations("Adding new aggregate row: "
> +if (logger.isInfoEnabled()) {
> +    logger.info(LogUtil.addCustomAnnotations("Adding new aggregate row: "
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon reassigned PHOENIX-4785:
-

Assignee: Vincent Poon

> Unable to write to table if index is made active during retry
> -
>
> Key: PHOENIX-4785
> URL: https://issues.apache.org/jira/browse/PHOENIX-4785
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Romil Choksi
>Assignee: Vincent Poon
>Priority: Blocker
> Fix For: 5.0.0, 4.14.1
>
> Attachments: PHOENIX-4785_test.patch
>
>
> After PHOENIX-4130, we are unable to write to a table if an index is made 
> ACTIVE during the retry, because the client timestamp is not cleared when the 
> table state changes from PENDING_DISABLE to ACTIVE, even if our policy is not 
> to block writes on the data table when an index write fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Vincent Poon (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514239#comment-16514239
 ] 

Vincent Poon commented on PHOENIX-4785:
---

Thanks for the test [~an...@apache.org]

I've attached a patch for review.

> Unable to write to table if index is made active during retry
> -
>
> Key: PHOENIX-4785
> URL: https://issues.apache.org/jira/browse/PHOENIX-4785
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Romil Choksi
>Assignee: Vincent Poon
>Priority: Blocker
> Fix For: 5.0.0, 4.14.1
>
> Attachments: PHOENIX-4785.v1.master.patch, PHOENIX-4785_test.patch
>
>
> After PHOENIX-4130, we are unable to write to a table if an index is made 
> ACTIVE during the retry, because the client timestamp is not cleared when the 
> table state changes from PENDING_DISABLE to ACTIVE, even if our policy is not 
> to block writes on the data table when an index write fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Vincent Poon (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vincent Poon updated PHOENIX-4785:
--
Attachment: PHOENIX-4785.v1.master.patch

> Unable to write to table if index is made active during retry
> -
>
> Key: PHOENIX-4785
> URL: https://issues.apache.org/jira/browse/PHOENIX-4785
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Romil Choksi
>Assignee: Vincent Poon
>Priority: Blocker
> Fix For: 5.0.0, 4.14.1
>
> Attachments: PHOENIX-4785.v1.master.patch, PHOENIX-4785_test.patch
>
>
> After PHOENIX-4130, we are unable to write to a table if an index is made 
> ACTIVE during the retry, because the client timestamp is not cleared when the 
> table state changes from PENDING_DISABLE to ACTIVE, even if our policy is not 
> to block writes on the data table when an index write fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Ankit Singhal (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514248#comment-16514248
 ] 

Ankit Singhal commented on PHOENIX-4785:


[~vincentpoon], we can't simply reset the disabled timestamp, as another client 
may fail a write during the same window.

Shouldn't we DISABLE the index if we find the current state is PENDING_DISABLE 
when another client fails to write to the index?

> Unable to write to table if index is made active during retry
> -
>
> Key: PHOENIX-4785
> URL: https://issues.apache.org/jira/browse/PHOENIX-4785
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Romil Choksi
>Assignee: Vincent Poon
>Priority: Blocker
> Fix For: 5.0.0, 4.14.1
>
> Attachments: PHOENIX-4785.v1.master.patch, PHOENIX-4785_test.patch
>
>
> After PHOENIX-4130, we are unable to write to a table if an index is made 
> ACTIVE during the retry, because the client timestamp is not cleared when the 
> table state changes from PENDING_DISABLE to ACTIVE, even if our policy is not 
> to block writes on the data table when an index write fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Ankit Singhal (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514248#comment-16514248
 ] 

Ankit Singhal edited comment on PHOENIX-4785 at 6/15/18 10:02 PM:
--

[~vincentpoon], we can't simply reset the disabled timestamp, as another client 
may fail a write during the same window.

Shouldn't we DISABLE the index if we find the current state is PENDING_DISABLE 
when another client fails to write to the index?

And one more thing: why are we allowing a user to use an index in the 
PENDING_DISABLE state?


was (Author: an...@apache.org):
[~vincentpoon], we can't simply reset the disabled timestamp, as another client 
may fail a write during the same window.

Shouldn't we DISABLE the index if we find the current state is PENDING_DISABLE 
when another client fails to write to the index?

> Unable to write to table if index is made active during retry
> -
>
> Key: PHOENIX-4785
> URL: https://issues.apache.org/jira/browse/PHOENIX-4785
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Romil Choksi
>Assignee: Vincent Poon
>Priority: Blocker
> Fix For: 5.0.0, 4.14.1
>
> Attachments: PHOENIX-4785.v1.master.patch, PHOENIX-4785_test.patch
>
>
> After PHOENIX-4130, we are unable to write to a table if an index is made 
> ACTIVE during the retry, because the client timestamp is not cleared when the 
> table state changes from PENDING_DISABLE to ACTIVE, even if our policy is not 
> to block writes on the data table when an index write fails.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4786) Reduce log level to debug when logging new aggregate row key found and added results for scan ordered queries

2018-06-15 Thread Rajeshbabu Chintaguntla (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla updated PHOENIX-4786:
-
Attachment: PHOENIX-4786_v2.patch

> Reduce log level to debug when logging new aggregate row key found and added 
> results for scan ordered queries
> -
>
> Key: PHOENIX-4786
> URL: https://issues.apache.org/jira/browse/PHOENIX-4786
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.0.0, 4.15.0
>
> Attachments: PHOENIX-4786.patch, PHOENIX-4786_v2.patch
>
>
> Currently we log the key value at INFO level when a new aggregate row is 
> found for scan-ordered queries. This adds a lot of overhead to the queries 
> because sometimes we may write almost all the rows into the log.
> {noformat}
> results.add(keyValue);
> if (logger.isInfoEnabled()) {
>     logger.info(LogUtil.addCustomAnnotations("Adding new aggregate row: "
>             + keyValue
>             + ",for current key "
>             + Bytes.toStringBinary(currentKey.get(), currentKey.getOffset(),
>                 currentKey.getLength()) + ", aggregated values: "
>             + Arrays.asList(rowAggregators), ScanUtil.getCustomAnnotations(scan)));
> }
> {noformat}
> {noformat}
> [root@ctr-e138-1518143905142-358323-01-10 hbase]# grep "Adding new aggregate row: " hbase-hbase-regionserver-ctr-e138-1518143905142-358323-01-10.log.* | wc -l
> 19082854
> {noformat}
> It was changed recently as part of PHOENIX-4742, so it's better to make it debug-only.
> {noformat}
> -if (logger.isDebugEnabled()) {
> -    logger.debug(LogUtil.addCustomAnnotations("Adding new aggregate row: "
> +if (logger.isInfoEnabled()) {
> +    logger.info(LogUtil.addCustomAnnotations("Adding new aggregate row: "
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4786) Reduce log level to debug when logging new aggregate row key found and added results for scan ordered queries

2018-06-15 Thread Rajeshbabu Chintaguntla (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514445#comment-16514445
 ] 

Rajeshbabu Chintaguntla commented on PHOENIX-4786:
--

Removed logging keyvalues in the attached patch. Will commit it.

> Reduce log level to debug when logging new aggregate row key found and added 
> results for scan ordered queries
> -
>
> Key: PHOENIX-4786
> URL: https://issues.apache.org/jira/browse/PHOENIX-4786
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.0.0, 4.15.0
>
> Attachments: PHOENIX-4786.patch, PHOENIX-4786_v2.patch
>
>
> Currently we log the key value at INFO level when a new aggregate row is 
> found for scan-ordered queries. This adds a lot of overhead to the queries 
> because sometimes we may write almost all the rows into the log.
> {noformat}
> results.add(keyValue);
> if (logger.isInfoEnabled()) {
>     logger.info(LogUtil.addCustomAnnotations("Adding new aggregate row: "
>             + keyValue
>             + ",for current key "
>             + Bytes.toStringBinary(currentKey.get(), currentKey.getOffset(),
>                 currentKey.getLength()) + ", aggregated values: "
>             + Arrays.asList(rowAggregators), ScanUtil.getCustomAnnotations(scan)));
> }
> {noformat}
> {noformat}
> [root@ctr-e138-1518143905142-358323-01-10 hbase]# grep "Adding new aggregate row: " hbase-hbase-regionserver-ctr-e138-1518143905142-358323-01-10.log.* | wc -l
> 19082854
> {noformat}
> It was changed recently as part of PHOENIX-4742, so it's better to make it debug-only.
> {noformat}
> -if (logger.isDebugEnabled()) {
> -    logger.debug(LogUtil.addCustomAnnotations("Adding new aggregate row: "
> +if (logger.isInfoEnabled()) {
> +    logger.info(LogUtil.addCustomAnnotations("Adding new aggregate row: "
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4786) Reduce log level to debug when logging new aggregate row key found and added results for scan ordered queries

2018-06-15 Thread Geoffrey Jacoby (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514457#comment-16514457
 ] 

Geoffrey Jacoby commented on PHOENIX-4786:
--

+1, thanks [~rajeshbabu]

> Reduce log level to debug when logging new aggregate row key found and added 
> results for scan ordered queries
> -
>
> Key: PHOENIX-4786
> URL: https://issues.apache.org/jira/browse/PHOENIX-4786
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Rajeshbabu Chintaguntla
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.0.0, 4.15.0
>
> Attachments: PHOENIX-4786.patch, PHOENIX-4786_v2.patch
>
>
> Currently we are logging key value when the new aggregate row found for scan 
> ordered queries which is info log. This is going to add lot of overhead to 
> the queries because sometimes we may write almost all the rows into log.
> {noformat}
> results.add(keyValue);
> if (logger.isInfoEnabled()) {
> logger.info(LogUtil.addCustomAnnotations("Adding new 
> aggregate row: "
> + keyValue
> + ",for current key "
> + Bytes.toStringBinary(currentKey.get(), 
> currentKey.getOffset(),
> currentKey.getLength()) + ", aggregated 
> values: "
> + Arrays.asList(rowAggregators), 
> ScanUtil.getCustomAnnotations(scan)));
> }
> {noformat}
> {noformat}
> [root@ctr-e138-1518143905142-358323-01-10 hbase]# grep "Adding new 
> aggregate row: " 
> hbase-hbase-regionserver-ctr-e138-1518143905142-358323-01-10.log.* | wc -l
> 19082854
> {noformat}
> It's changed recently as part of PHOENIX-4742 so better to make it debug only.
> {noformat}
> -if (logger.isDebugEnabled()) {
> -logger.debug(LogUtil.addCustomAnnotations("Adding 
> new aggregate row: "
> +if (logger.isInfoEnabled()) {
> +logger.info(LogUtil.addCustomAnnotations("Adding new 
> aggregate row: "
> {noformat}
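For readers outside the codebase, the cost being guarded against is per-row message construction, not just log volume. A self-contained sketch of the same guard pattern using plain java.util.logging (FINE playing the role of DEBUG; this is illustrative, not Phoenix code):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardedLogging {
    private static final Logger LOG = Logger.getLogger("phoenix.scan.demo");

    // Counts how many times the expensive message string is actually built.
    static int buildCount = 0;

    static String expensiveMessage() {
        buildCount++;
        return "Adding new aggregate row: ...";
    }

    // Guarding at FINE (debug) means the message is never even constructed
    // when the logger runs at the default INFO level, so per-row logging
    // costs only a level check on hot scan paths.
    public static void logRow() {
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine(expensiveMessage());
        }
    }
}
```

With the logger at its default INFO level, the message string is never built, which is exactly why the patch demotes the statement from info to debug.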





[jira] [Resolved] (PHOENIX-4786) Reduce log level to debug when logging new aggregate row key found and added results for scan ordered queries

2018-06-15 Thread Rajeshbabu Chintaguntla (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajeshbabu Chintaguntla resolved PHOENIX-4786.
--
Resolution: Fixed

Thanks [~gjacoby] for review. Committed the patch to 4.x and 5.x branches.



[jira] [Commented] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Vincent Poon (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514483#comment-16514483
 ] 

Vincent Poon commented on PHOENIX-4785:
---

[~an...@apache.org] good point, this is a tricky case to handle.  I think if 
we simply disabled the index whenever it is already in PENDING_DISABLE, we 
would end up with a disabled index fairly frequently.

The reason we allow a user to use an index in PENDING_DISABLE is that 
otherwise *any* client index write failure would disable the index, which is 
too aggressive and would disable the index too often.  On the server side, 
after PHOENIX-4130 we only try the index write once, to avoid tying up the 
handler.  Before PHOENIX-4130, we retried the index write a number of times, 
and only then would the index get disabled.  The PENDING_DISABLE state mimics 
that behavior by providing a grace period in which the client can retry before 
the index is disabled.

I think what we can try is to pass the index_disabletimestamp back to the 
client.  Then, when the client retries and all of its retries fail, it marks 
the index DISABLE with that timestamp.  The index_disabletimestamp should 
always be the minimum (i.e. the time of the first index write failure), so 
this behavior should be safe.

Example:

T0.  Client A attempts write, index write fails.  Disabletimestamp of T0.  
Client A retries

T1.  Client B attempts write, index write fails.  Disabletimestamp is still T0 
(min).  Client B retries.

T2.  Client A succeeds.  Index is marked Active, clearing disabletimestamp.

T3.  Client B exhausts retries, marks index disabled with disabletimestamp of 
T0.  Rebuilder will rebuild as of T0.

What do you think?
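The timeline above can be modeled with a small sketch; the class and method names here are hypothetical, not Phoenix APIs:

```java
// Hypothetical model of the min-disable-timestamp proposal; not Phoenix code.
public class IndexStateTracker {
    static final long NONE = 0L;           // 0 means "no pending failure"
    private long disableTimestamp = NONE;
    private boolean active = true;

    // On an index write failure, keep the MINIMUM timestamp so a rebuild
    // replays from the earliest failed write.
    public synchronized void onWriteFailure(long ts) {
        if (disableTimestamp == NONE || ts < disableTimestamp) {
            disableTimestamp = ts;
        }
    }

    // A client whose retry succeeded marks the index ACTIVE and clears the
    // timestamp (T2 in the example).
    public synchronized void onRetrySuccess() {
        disableTimestamp = NONE;
        active = true;
    }

    // A client that exhausts its retries disables the index as of its own
    // first-failure timestamp (T3 in the example), so the rebuilder covers
    // every write that failed.
    public synchronized void onRetriesExhausted(long firstFailureTs) {
        active = false;
        disableTimestamp = firstFailureTs;
    }

    public synchronized boolean isActive() { return active; }
    public synchronized long disableTimestamp() { return disableTimestamp; }
}
```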

> Unable to write to table if index is made active during retry
> -
>
> Key: PHOENIX-4785
> URL: https://issues.apache.org/jira/browse/PHOENIX-4785
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Romil Choksi
>Assignee: Vincent Poon
>Priority: Blocker
> Fix For: 5.0.0, 4.14.1
>
> Attachments: PHOENIX-4785.v1.master.patch, PHOENIX-4785_test.patch
>
>
> After PHOENIX-4130, we are unable to write to a table if an index is made 
> ACTIVE during the retry, because the client timestamp is not cleared when 
> the table state changes from PENDING_DISABLE to ACTIVE, even if our policy 
> is not to block writes on the data table when an index write fails.





[jira] [Commented] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Ankit Singhal (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514516#comment-16514516
 ] 

Ankit Singhal commented on PHOENIX-4785:


bq. T3.  Client B exhausts retries, marks index disabled with disabletimestamp 
of T0.  Rebuilder will rebuild as of T0.
But there is still a problem: if client B goes away without marking the index 
disabled, the index will stay in the ACTIVE state indefinitely.





[jira] [Commented] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Vincent Poon (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514536#comment-16514536
 ] 

Vincent Poon commented on PHOENIX-4785:
---

[~an...@apache.org] Yes, that's a shortcoming - I don't have a good idea for 
getting around it.  We are trying to strike a balance between not being overly 
aggressive in disabling the index and still keeping the index as consistent as 
possible, and it's difficult to have both.  I think the case where two clients 
write concurrently AND one succeeds while the other fails AND the failing 
client goes away is a case we have to compromise on for the time being.  Open 
to suggestions on this.



[jira] [Comment Edited] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Ankit Singhal (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514516#comment-16514516
 ] 

Ankit Singhal edited comment on PHOENIX-4785 at 6/15/18 11:52 PM:
--

bq. T3.  Client B exhausts retries, marks index disabled with disabletimestamp 
of T0.  Rebuilder will rebuild as of T0.
But still there is a problem, if client B goes away without making index 
disabled, index will stay in ACTIVE state all the time.

How about keeping a count of PENDING_DISABLE? which we will be incremented 
atomically on every PENDING_DISABLE call and decrement it on ACTIVE call , if 
count=0 make it ACTIVE otherwise keep it PENDING_DISABLE. 
And use the same threshold for last PENDING_DISABLE call to make 
PENDING_DISABLE to DISABLE. WDYT?



was (Author: an...@apache.org):
bq. T3.  Client B exhausts retries, marks index disabled with disabletimestamp 
of T0.  Rebuilder will rebuild as of T0.
But still there is a problem, if client B goes away without making index 
disabled, index will stay in ACTIVE state all the time.





[jira] [Comment Edited] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Ankit Singhal (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514516#comment-16514516
 ] 

Ankit Singhal edited comment on PHOENIX-4785 at 6/15/18 11:54 PM:
--

bq. T3.  Client B exhausts retries, marks index disabled with disabletimestamp 
of T0.  Rebuilder will rebuild as of T0.
But there is still a problem: if client B goes away without marking the index 
disabled, the index will stay in the ACTIVE state indefinitely.

How about keeping a count of PENDING_DISABLE clients?  We would increment it 
atomically on every PENDING_DISABLE call and decrement it on every ACTIVE 
call; if the count reaches 0, we make the index ACTIVE, otherwise we keep it 
PENDING_DISABLE.  And if the client goes away, we use the same threshold since 
the last PENDING_DISABLE call to move the index from PENDING_DISABLE to 
DISABLE.  WDYT?
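A minimal model of this counter proposal (the enum values mirror the Phoenix index states, but the class itself is illustrative, not Phoenix code):

```java
// Hypothetical sketch of the PENDING_DISABLE reference-count idea.
public class PendingDisableCounter {
    public enum State { ACTIVE, PENDING_DISABLE, DISABLE }

    private State state = State.ACTIVE;
    private int pendingClients = 0;

    // Every client that hits an index write failure registers itself once.
    public synchronized State onClientFailure() {
        pendingClients++;
        state = State.PENDING_DISABLE;
        return state;
    }

    // A client whose retry succeeded deregisters; the index only goes back
    // to ACTIVE once no client is still retrying.
    public synchronized State onClientSuccess() {
        if (pendingClients > 0) pendingClients--;
        if (pendingClients == 0 && state == State.PENDING_DISABLE) {
            state = State.ACTIVE;
        }
        return state;
    }

    // Safety net for clients that vanish: once the threshold since the last
    // PENDING_DISABLE call is exceeded, hard-disable the index.
    public synchronized State onThresholdExceeded() {
        if (state == State.PENDING_DISABLE) state = State.DISABLE;
        return state;
    }
}
```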



was (Author: an...@apache.org):
bq. T3.  Client B exhausts retries, marks index disabled with disabletimestamp 
of T0.  Rebuilder will rebuild as of T0.
But still there is a problem, if client B goes away without making index 
disabled, index will stay in ACTIVE state all the time.

How about keeping a count of PENDING_DISABLE? which we will be incremented 
atomically on every PENDING_DISABLE call and decrement it on ACTIVE call , if 
count=0 make it ACTIVE otherwise keep it PENDING_DISABLE. 
And use the same threshold for last PENDING_DISABLE call to make 
PENDING_DISABLE to DISABLE. WDYT?




[jira] [Commented] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Ankit Singhal (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514559#comment-16514559
 ] 

Ankit Singhal commented on PHOENIX-4785:


bq. Yes, that's a shortcoming - I don't have any good idea to get around that.
[~vincentpoon], I think our comments were crossed, Please see my last comment.

bq. We are trying to strike a balance between not being overly aggressive in 
disabling the index, and still have the index be as consistent as possible.  
It's difficult to have both.  I think the case of two clients both concurrently 
writing AND one succeeds and the other fails AND one goes away , is a case we 
have to compromise on for the time being. 
The problem with this is that it makes the index inconsistent silently.  
Consider any distributed job (Spark, Storm, MR) writing data into Phoenix 
while an admin makes some change to the HBase table, disabling and enabling it 
for a second; there is always a possibility that we run into this race 
condition.

If we can't fix it, then we should either revert PHOENIX-4130 or make it 
configurable (disabled by default) so that the user can make an informed 
decision.



[jira] [Commented] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Vincent Poon (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514570#comment-16514570
 ] 

Vincent Poon commented on PHOENIX-4785:
---

>How about keeping a count of PENDING_DISABLE? which we will be incrementing 
>atomically on every PENDING_DISABLE call and decrementing it on ACTIVE call , 
>if count=0 make it ACTIVE otherwise keep it PENDING_DISABLE. 

You would need a global counter, and to be able to uniquely identify each 
client request.  So if

Client A writes, fails, PENDING_DISABLE = 1

Client B writes, fails, PENDING_DISABLE should be 2

Client A writes, fails, PENDING_DISABLE should still be 2

I suppose we could set it on the client side, by writing to System.CATALOG 
every time we initially get back an IndexWriteException ?



[jira] [Comment Edited] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Vincent Poon (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514570#comment-16514570
 ] 

Vincent Poon edited comment on PHOENIX-4785 at 6/16/18 12:27 AM:
-

>How about keeping a count of PENDING_DISABLE? which we will be incrementing 
>atomically on every PENDING_DISABLE call and decrementing it on ACTIVE call , 
>if count=0 make it ACTIVE otherwise keep it PENDING_DISABLE. 

You would need a global counter, and to be able to uniquely identify each 
client request.  So if

Client A writes, fails, PENDING_DISABLE = 1

Client B writes, fails, PENDING_DISABLE should be 2

Client A writes, fails, PENDING_DISABLE should still be 2

I suppose we could set it on the client side, by writing to System.CATALOG 
every time we initially get back an IndexWriteException ?

The problem with doing it on the client side is that the server will set 
PENDING_DISABLE, but the client could go away before updating the counter, so 
I'm not sure how feasible this is.


was (Author: vincentpoon):
>How about keeping a count of PENDING_DISABLE? which we will be incrementing 
>atomically on every PENDING_DISABLE call and decrementing it on ACTIVE call , 
>if count=0 make it ACTIVE otherwise keep it PENDING_DISABLE. 

You would need a global counter, and to be able to uniquely identify each 
client request.  So if

Client A writes, fails, PENDING_DISABLE = 1

Client B writes, fails, PENDING_DISABLE should be 2

Client A writes, fails, PENDING_DISABLE should still be 2

I suppose we could set it on the client side, by writing to System.CATALOG 
every time we initially get back an IndexWriteException ?



[jira] [Commented] (PHOENIX-4786) Reduce log level to debug when logging new aggregate row key found and added results for scan ordered queries

2018-06-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514622#comment-16514622
 ] 

Hudson commented on PHOENIX-4786:
-

ABORTED: Integrated in Jenkins build Phoenix-4.x-HBase-1.3 #156 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-1.3/156/])
PHOENIX-4786 Reduce log level to debug when logging new aggregate row 
(rajeshbabu: rev a0ef6613dfde647ac9b680744b4628dd2423c33f)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java




[jira] [Commented] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Ankit Singhal (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514639#comment-16514639
 ] 

Ankit Singhal commented on PHOENIX-4785:


{quote}I suppose we could set it on the client side, by writing to 
System.CATALOG every time we initially get back an IndexWriteException ?
{quote}
Yes, a client-side increment would be good.
{quote}The problem with doing it on the client side is, the server will set 
PENDING_DISABLE, but the client could go away before updating the counter. So 
not sure how feasible this is.
{quote}
This is fine, I think, because irrespective of the count we can DISABLE the 
index if it stays in the PENDING_DISABLE state for longer than 
phoenix.index.pending.disable.threshold.

Also, we should not let the index be usable for queries (writes are ok) while 
it is in the PENDING_DISABLE state, to avoid inconsistency issues, even if it 
is only for a short time.
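The threshold rule sketched above, as a tiny illustrative helper (the property name comes from the discussion; the method itself is hypothetical):

```java
// Hypothetical helper for the phoenix.index.pending.disable.threshold check.
public class PendingDisablePolicy {
    // An index that has sat in PENDING_DISABLE longer than the threshold
    // (e.g. because the failing client went away) is hard-disabled.
    public static boolean shouldHardDisable(long pendingSinceMs, long nowMs,
                                            long thresholdMs) {
        return nowMs - pendingSinceMs > thresholdMs;
    }
}
```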



[jira] [Commented] (PHOENIX-4785) Unable to write to table if index is made active during retry

2018-06-15 Thread Vincent Poon (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514643#comment-16514643
 ] 

Vincent Poon commented on PHOENIX-4785:
---

>And , another thing we should not let the index usable for queries(writes are 
>ok) when it is in PENDING_DISABLE state to avoid inconsistency issues even if 
>it is for a small time.

The point of having PENDING_DISABLE is to allow queries to still use the 
index; otherwise we could just disable the index on the server side.  We need 
a period of time in which the client can retry before we stop using the 
index.  We have not ack'd a successful write back to the client yet, so from 
that perspective the index is still consistent.  Even before PHOENIX-4130, 
there was a period of inconsistency while index write failures were retried 
on the server side.



[jira] [Commented] (PHOENIX-1567) Publish Phoenix-Client & Phoenix-Server jars into Maven Repo

2018-06-15 Thread Ankit Singhal (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-1567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514661#comment-16514661
 ] 

Ankit Singhal commented on PHOENIX-1567:


[~pboado], probably you are affected by 
https://issues.apache.org/jira/browse/PHOENIX-4781.

I was not able to try mvn deploy myself, as it fails with an unauthorized 
access error while transferring artifacts to the remote repository (though 
it's good that I have not accidentally pushed anything to the remote).  But it 
looks like gpg:sign might be looking for a jar named as per the pom, while the 
actual name of the jar is different.

> Publish Phoenix-Client & Phoenix-Server jars into Maven Repo
> 
>
> Key: PHOENIX-1567
> URL: https://issues.apache.org/jira/browse/PHOENIX-1567
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.2.0
>Reporter: Jeffrey Zhong
>Assignee: Ankit Singhal
>Priority: Major
> Attachments: PHOENIX-1567.patch
>
>
> Phoenix doesn't publish the Phoenix Client & Server jars into a Maven 
> repository. This makes things quite hard for downstream projects and 
> applications that use Maven to resolve dependencies.
> I tried to modify the pom.xml under phoenix-assembly while it shows the 
> following. 
> {noformat}
> [INFO] Installing 
> /Users/jzhong/work/phoenix_apache/checkins/phoenix/phoenix-assembly/target/phoenix-4.3.0-SNAPSHOT-client.jar
>  
> to 
> /Users/jzhong/.m2/repository/org/apache/phoenix/phoenix-assembly/4.3.0-SNAPSHOT/phoenix-assembly-4.3.0-SNAPSHOT-client.jar
> {noformat}
> Basically the jar published to maven repo will become  
> phoenix-assembly-4.3.0-SNAPSHOT-client.jar or 
> phoenix-assembly-4.3.0-SNAPSHOT-server.jar
> The artifact id "phoenix-assembly" has to be the prefix of the names of jars.
> Therefore, the possible solutions are:
> 1) rename current client & server jar to phoenix-assembly-client/server.jar 
> to match the jars published to maven repo.
> 2) rename phoenix-assembly to something more meaningful and rename our client 
> & server jars accordingly
> 3) split phoenix-assembly and move the corresponding artifacts into 
> phoenix-client & phoenix-server folders. Phoenix-assembly will only create 
> tar ball files.
> [~giacomotaylor], [~apurtell] or other maven experts: Any suggestion on this? 
> Thanks.





[jira] [Commented] (PHOENIX-4731) Make running transactional unit tests for a given provider optional

2018-06-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514662#comment-16514662
 ] 

Hudson commented on PHOENIX-4731:
-

ABORTED: Integrated in Jenkins build Phoenix-omid2 #22 (See 
[https://builds.apache.org/job/Phoenix-omid2/22/])
PHOENIX-4731 Make running transactional unit tests for a given provider 
(jamestaylor: rev 5af5980ab08049d74b4a8732f8b402c4c741)
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/TransactionalViewIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/GlobalMutableTxIndexIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/IndexToolIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/tx/FlappingTransactionIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalImmutableTxIndexIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/NonColumnEncodedImmutableTxStatsCollectorIT.java
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/transaction/TransactionFactory.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/txn/MutableRollbackIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/LocalMutableTxIndexIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/BaseViewIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ColumnEncodedImmutableTxStatsCollectorIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/SysTableNamespaceMappedStatsCollectorIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/ColumnEncodedMutableTxStatsCollectorIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/rpc/UpdateCacheIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/txn/TxWriteFailureIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/tx/ParameterizedTransactionIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/txn/RollbackIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/execute/PartialCommitIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/tx/TransactionIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/GlobalImmutableTxIndexIT.java
* (edit) phoenix-core/src/test/java/org/apache/phoenix/util/TestUtil.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/MutableIndexFailureIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/index/ImmutableIndexIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/tx/TxCheckpointIT.java
* (edit) 
phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableWithViewsIT.java
* (edit) phoenix-core/src/it/java/org/apache/phoenix/end2end/AlterTableIT.java


> Make running transactional unit tests for a given provider optional
> ---
>
> Key: PHOENIX-4731
> URL: https://issues.apache.org/jira/browse/PHOENIX-4731
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: James Taylor
>Priority: Major
>
> Different users may not be relying on transactions, or may only be relying on 
> a single transaction provider. By default, we can run transactional tests 
> across all providers, but we should have a way of disabling the running of a 
> given provider.





[jira] [Commented] (PHOENIX-4786) Reduce log level to debug when logging new aggregate row key found and added results for scan ordered queries

2018-06-15 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16514670#comment-16514670
 ] 

Hudson commented on PHOENIX-4786:
-

ABORTED: Integrated in Jenkins build Phoenix-4.x-HBase-0.98 #1917 (See 
[https://builds.apache.org/job/Phoenix-4.x-HBase-0.98/1917/])
PHOENIX-4786 Reduce log level to debug when logging new aggregate row 
(rajeshbabu: rev 175fe3fae0577fdc769c8ffbada9a3c2e2d6fb91)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/GroupedAggregateRegionObserver.java

