Re: Multiple rows with same key

2019-12-18 Thread William Shen
Also, quick question, is your table salted, and have you done anything that
might have altered the salt buckets?

On Wed, Dec 18, 2019 at 7:50 AM la...@apache.org  wrote:

>  You mean you see multiple rows with the same primary key? I have not seen
> this - or heard about this before.
>
> Could you post your schema?
> -- Lars
>
> On Tuesday, December 17, 2019, 7:42:13 PM GMT+1, Francesco Malerba <
> malerba.francesco...@gmail.com> wrote:
>
>  Hi all,
> we are having some trouble with Phoenix 4.14 on top of HBase 1.2, because
> some selects are returning duplicate rows with the same primary key.
> In our application, we make several upserts of rows with the same key,
> since we have to keep an Oracle table and a Phoenix table synchronized.
>
> We started to notice this problem after upgrading Phoenix from version 4.7
> to version 4.14 and HBase from version 1.1 to version 1.2.
>
> I suspect this issue could somehow be related to different versions of the
> same HBase row being returned as different rows.
> In order to look into this, my idea was to perform some scans directly on
> HBase, but I noticed that some bytes are appended after the first field of
> the PK, and I'm not able to produce the scan equivalent to my select.
>
> Finally, my questions are:
> Is there a way to use the Phoenix library to produce the equivalent bytes
> of my query to use in the HBase shell?
>
> Could this issue be related to the Phoenix upgrade, since the data were
> written using Phoenix 4.7 and we are now reading them from a Phoenix 4.14
> server?
>
> Any hint will be appreciated.
> Thanks
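
For the question about reproducing the scan in the HBase shell, a minimal
sketch (assuming a Phoenix 4.14 client on the classpath; the JDBC URL, table,
and query below are placeholders) of compiling a query with the Phoenix
library and printing the scan bounds and salt-bucket count:

    import java.sql.DriverManager;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.phoenix.compile.QueryPlan;
    import org.apache.phoenix.jdbc.PhoenixConnection;
    import org.apache.phoenix.jdbc.PhoenixStatement;
    import org.apache.phoenix.schema.PTable;
    import org.apache.phoenix.util.PhoenixRuntime;

    public class ScanBounds {
        public static void main(String[] args) throws Exception {
            PhoenixConnection conn = DriverManager
                    .getConnection("jdbc:phoenix:zk-host:2181")
                    .unwrap(PhoenixConnection.class);
            // Compile (but do not execute) the problematic SELECT.
            PhoenixStatement stmt =
                    conn.createStatement().unwrap(PhoenixStatement.class);
            QueryPlan plan = stmt.optimizeQuery(
                    "SELECT * FROM MY_TABLE WHERE PK1 = 'x'");
            Scan scan = plan.getContext().getScan();
            // Paste these into the HBase shell's STARTROW/STOPROW. Note that
            // variable-length PK columns are separated by a zero byte, which
            // would explain bytes appearing after the first PK field.
            System.out.println("start: " + Bytes.toStringBinary(scan.getStartRow()));
            System.out.println("stop:  " + Bytes.toStringBinary(scan.getStopRow()));
            // A non-null bucket number means the table is salted, i.e. the
            // row key carries a leading salt byte.
            PTable table = PhoenixRuntime.getTable(conn, "MY_TABLE");
            System.out.println("salt buckets: " + table.getBucketNum());
        }
    }

The compiled plan may also attach filters and column trackers to the scan, so
the start/stop rows alone only approximate the SELECT.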


Re: [VOTE] Accept Tephra and Omid podlings as Phoenix sub-projects

2019-10-30 Thread William Shen
+1!

On Wed, Oct 30, 2019 at 8:27 AM Josh Elser  wrote:

> Hi,
>
> As was previously discussed[1][2], there is a motion to "adopt" the
> Tephra and Omid podlings as sub-projects to Apache Phoenix. A
> sub-project is a distinct software project from some top-level project
> (TLP) but operates under the guidance of a TLP (e.g. the TLP's PMC).
>
> Per the Incubator guidelines[3], we need to have a formal vote to adopt
> them. While we still need details from these two podlings, I believe we
> can have a VOTE now to keep things moving.
>
> Actions:
> * Phoenix will incorporate Omid and Tephra as sub-projects and they will
> continue to function under the Apache Phoenix PMC guidance.
> * Any current podling member may request to be added as a Phoenix
> committer. Podling members would be expected to demonstrate the normal
> level of commitment to grow from a committer to a PMC member.
>
> Stating what I see as an implicit decision (but to alleviate any
> possible confusion): all new community members will be expected to
> function in the way that Phoenix does today (e.g.
> review-then-commit). Future Omid and Tephra development would happen in
> the same way that Phoenix development happens today.
>
> Please vote:
>
> +1: to accept Omid and Tephra as Phoenix sub-projects and to allow any
> PPMC to become Phoenix committers.
>
> -1/-0/+0: No and why..
>
> Here is my +1 (binding)
>
> This vote will be open for at least 72hrs (2019/11/02 1600 UTC).
>
>
> [1]
>
> https://lists.apache.org/thread.html/ec00615cbbb4225885e3776f1e8fd071600a9c50f35769f453b8a779@%3Cdev.phoenix.apache.org%3E
> [2]
>
> https://lists.apache.org/thread.html/692a030a27067c20b9228602af502199cd4d80eb0aa8ed6461ebe1ee@%3Cgeneral.incubator.apache.org%3E
> [3]
>
> https://incubator.apache.org/guides/graduation.html#graduating_to_a_subproject
>


Re: [DISCUSS] Board report due 2019/08/14

2019-08-13 Thread William Shen
A couple things that might be noteworthy:
- Thanks to Lars, we now have a fully passing integration test suite:
https://builds.apache.org/job/Phoenix-4.x-HBase-1.5/ (since June, IIRC)
- Interesting discussion titled "is Apache phoenix reliable enough?" on the
user list, where people shared their experience of using Phoenix.

On Tue, Aug 13, 2019 at 8:18 AM Josh Elser  wrote:

> *bump*
>
> Shaping up to be a really thin board report.
>
> On 2019/08/06 18:00:19, Josh Elser  wrote:
> > Hiya folks,
> >
> > It's time again for our quarterly report to the ASF board.
> >
> > It's immensely helpful for you all to give me what _you_ see as the
> > highlights of the project. What happened in the past three months that
> > gives someone on the sidelines insight into our little corner of the FOSS
> > world? Anything from simple to detailed is helpful.
> >
> > Thanks in advance.
> >
> > - Josh
> >
>


Re: [VOTE] Release of Apache Phoenix 4.14.3 RC1

2019-08-07 Thread William Shen
Thanks Artem for bringing up HBASE-22728 to help Phoenix stay ahead of the
game :)

On Wed, Aug 7, 2019 at 3:41 PM Artem Ervits  wrote:

> Didn't mean to insinuate that we should block 4.14.3; just wanted to
> understand where the community is focusing. Thank you all for chiming in.
>
> On Wed, Aug 7, 2019, 6:30 PM Geoffrey Jacoby  wrote:
>
> > I agree with William that there's no need to block 4.14.3's 1.3 and
> > 1.4-based releases on HBase 1.5. Soon-but-later, when HBase 1.5 lands I
> > think we should do a followup 4.14.3 (or .4, if other critical JIRAs are
> > necessary) release based on 1.5.
> >
> > Then, based on the branch EOL discussions the HBase list is having right
> > now, and the status of 4.15, we can decide which branches to support
> going
> > forward.
> >
> > Geoffrey
> >
> > On Wed, Aug 7, 2019 at 3:00 PM William Shen 
> > wrote:
> >
> > > If HBase does not patch 1.3 or 1.4 for the Jackson CVE, the
> > recommendation
> > > for concerned users would be to use 1.5. I think that sounds like we
> > would
> > > want to get a 1.5 branch for 4.14, unless we think 4.15 is almost
> ready?
> > >
> > > Either way, I am not sure if this would need to block 4.14.3, since it
> > will
> > > take some time for this to play out?
> > >
> > > On Wed, Aug 7, 2019 at 11:10 AM Artem Ervits 
> > > wrote:
> > >
> > > > As described in HBASE-22728, are we planning to get a 1.5 branch for
> > 4.14
> > > > or are we targeting 4.15 and 5.x for HBase 1.5?
> > > >
> > > > On Tue, Aug 6, 2019, 5:01 PM swaroopa kadam <
> > swaroopa.kada...@gmail.com>
> > > > wrote:
> > > >
> > > > > non-binding -1
> > > > >
> > > > > Uncovered a blocker Jira PHOENIX-5348.
> > > > > I will be back with RC2 soon.
> > > > >
> > > > > Thank you.
> > > > >
> > > > > On Tue, Aug 6, 2019 at 12:58 PM swaroopa kadam <
> > > > swaroopa.kada...@gmail.com
> > > > > >
> > > > > wrote:
> > > > >
> > > > > > Hello Everyone,
> > > > > >
> > > > > > This is a call for a vote on Apache Phoenix 4.14.3 RC1. This is a
> > > patch
> > > > > > release of Phoenix 4.14 and is compatible with Apache HBase 1.3
> and
> > > > > > 1.4. The release includes both a source-only release and a
> > > convenience
> > > > > > binary
> > > > > > release for each supported HBase version.
> > > > > >
> > > > > > This release includes critical bug fixes and improvements for
> > > secondary
> > > > > > indexes -- making them self-consistent.
> > > > > >
> > > > > > The source tarball, including signatures, digests, etc can be
> found
> > > at:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.3-HBase-1.3-rc1/src/
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.3-HBase-1.4-rc1/src/
> > > > > >
> > > > > > The binary artifacts can be found at:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.3-HBase-1.3-rc1/bin/
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.3-HBase-1.4-rc1/bin/
> > > > > >
> > > > > > For a complete list of changes, see:
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315120&version=12345601
> > > > > >
> > > > > > Artifacts are signed with my "CODE SIGNING KEY": 48B7807D95F127B5
> > > > > >
> > > > > > KEYS file available here:
> > > > > > https://dist.apache.org/repos/dist/dev/phoenix/KEYS
> > > > > >
> > > > > > The tag to be voted upon:
> > > > > >
> > https://github.com/apache/phoenix/releases/tag/4.14.3-HBase-1.3-rc1
> > > > > >
> > https://github.com/apache/phoenix/releases/tag/4.14.3-HBase-1.4-rc1
> > > > > >
> > > > > > The vote will be open for at least 72 hours. Please vote:
> > > > > >
> > > > > > [ ] +1 approve
> > > > > > [ ] +0 no opinion
> > > > > > [ ] -1 disapprove (and the reason why)
> > > > > >
> > > > > > Thanks,
> > > > > > The Apache Phoenix Team
> > > > > >
> > > > > > --
> > > > > >
> > > > > >
> > > > > > Swaroopa Kadam
> > > > > > [image: https://]about.me/swaroopa_kadam
> > > > > > <
> > > > >
> > > >
> > >
> >
> https://about.me/swaroopa_kadam?promo=email_sig_source=product_medium=email_sig_campaign=gmail_api
> > > > > >
> > > > > >
> > > > >
> > > > >
> > > > > --
> > > > >
> > > > >
> > > > > Swaroopa Kadam
> > > > > [image: https://]about.me/swaroopa_kadam
> > > > > <
> > > > >
> > > >
> > >
> >
> https://about.me/swaroopa_kadam?promo=email_sig_source=product_medium=email_sig_campaign=gmail_api
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: [VOTE] Release of Apache Phoenix 4.14.3 RC1

2019-08-07 Thread William Shen
If HBase does not patch 1.3 or 1.4 for the Jackson CVE, the recommendation
for concerned users would be to use 1.5. I think that sounds like we would
want to get a 1.5 branch for 4.14, unless we think 4.15 is almost ready?

Either way, I am not sure if this would need to block 4.14.3, since it will
take some time for this to play out?

On Wed, Aug 7, 2019 at 11:10 AM Artem Ervits  wrote:

> As described in HBASE-22728, are we planning to get a 1.5 branch for 4.14
> or are we targeting 4.15 and 5.x for HBase 1.5?
>
> On Tue, Aug 6, 2019, 5:01 PM swaroopa kadam 
> wrote:
>
> > non-binding  -1
> >
> > Uncovered a blocker Jira PHOENIX-5348.
> > I will be back with RC2 soon.
> >
> > Thank you.
> >
> > On Tue, Aug 6, 2019 at 12:58 PM swaroopa kadam <
> swaroopa.kada...@gmail.com
> > >
> > wrote:
> >
> > > Hello Everyone,
> > >
> > > This is a call for a vote on Apache Phoenix 4.14.3 RC1. This is a patch
> > > release of Phoenix 4.14 and is compatible with Apache HBase 1.3 and
> > > 1.4. The release includes both a source-only release and a convenience
> > > binary
> > > release for each supported HBase version.
> > >
> > > This release includes critical bug fixes and improvements for secondary
> > > indexes -- making them self-consistent.
> > >
> > > The source tarball, including signatures, digests, etc can be found at:
> > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.3-HBase-1.3-rc1/src/
> > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.3-HBase-1.4-rc1/src/
> > >
> > > The binary artifacts can be found at:
> > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.3-HBase-1.3-rc1/bin/
> > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.3-HBase-1.4-rc1/bin/
> > >
> > > For a complete list of changes, see:
> > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315120&version=12345601
> > >
> > > Artifacts are signed with my "CODE SIGNING KEY": 48B7807D95F127B5
> > >
> > > KEYS file available here:
> > > https://dist.apache.org/repos/dist/dev/phoenix/KEYS
> > >
> > > The tag to be voted upon:
> > > https://github.com/apache/phoenix/releases/tag/4.14.3-HBase-1.3-rc1
> > > https://github.com/apache/phoenix/releases/tag/4.14.3-HBase-1.4-rc1
> > >
> > > The vote will be open for at least 72 hours. Please vote:
> > >
> > > [ ] +1 approve
> > > [ ] +0 no opinion
> > > [ ] -1 disapprove (and the reason why)
> > >
> > > Thanks,
> > > The Apache Phoenix Team
> > >
> > > --
> > >
> > >
> > > Swaroopa Kadam
> > > [image: https://]about.me/swaroopa_kadam
> > > <
> >
> https://about.me/swaroopa_kadam?promo=email_sig_source=product_medium=email_sig_campaign=gmail_api
> > >
> > >
> >
> >
> > --
> >
> >
> > Swaroopa Kadam
> > [image: https://]about.me/swaroopa_kadam
> > <
> >
> https://about.me/swaroopa_kadam?promo=email_sig_source=product_medium=email_sig_campaign=gmail_api
> > >
> >
>


[jira] [Updated] (PHOENIX-5275) Remove accidental imports from curator-client-2.12.0

2019-07-29 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen updated PHOENIX-5275:
--
Attachment: PHOENIX-5275.4.14-HBase-1.3.v1.patch

> Remove accidental imports from curator-client-2.12.0
> 
>
> Key: PHOENIX-5275
> URL: https://issues.apache.org/jira/browse/PHOENIX-5275
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jacob Isaac
>    Assignee: William Shen
>Priority: Minor
> Fix For: 4.15.0, 5.1.0, 4.14.3
>
> Attachments: PHOENIX-5275.4.14-HBase-1.3.v1.patch, 
> PHOENIX-5275.4.x-HBase-1.3.v1.patch, PHOENIX-5275.master.v1.patch, 
> PHOENIX-5275.master.v2.patch
>
>
> The following imports 
> import org.apache.curator.shaded.com.google.common.*
> were accidentally introduced in
> phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
> phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
> phoenix-core/src/test/java/org/apache/phoenix/compile/WhereOptimizerTest.java
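
For illustration, the shape of the cleanup (the concrete Guava classes are
not named in this notification, so Lists below is a hypothetical example):

    // Accidentally introduced: resolves against Curator's shaded Guava copy.
    // import org.apache.curator.shaded.com.google.common.collect.Lists;

    // Intended: the Guava dependency Phoenix already declares directly.
    import com.google.common.collect.Lists;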





[jira] [Created] (PHOENIX-5408) Typo in ConnectionQueryServicesImpl Warning Logging

2019-07-23 Thread William Shen (JIRA)
William Shen created PHOENIX-5408:
-

 Summary: Typo in ConnectionQueryServicesImpl Warning Logging
 Key: PHOENIX-5408
 URL: https://issues.apache.org/jira/browse/PHOENIX-5408
 Project: Phoenix
  Issue Type: Bug
Reporter: William Shen
Assignee: William Shen


"query" was misspelled to "qeuery" in ConnectionQueryServicesImpl warning 
logging:
https://github.com/apache/phoenix/blob/62387ee3c55f8be1947161bc9d501b1867cc24f1/phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java#L443






[jira] [Updated] (PHOENIX-5275) Remove accidental imports from curator-client-2.12.0

2019-07-16 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen updated PHOENIX-5275:
--
Attachment: PHOENIX-5275.4.x-HBase-1.3.v1.patch

> Remove accidental imports from curator-client-2.12.0
> 
>
> Key: PHOENIX-5275
> URL: https://issues.apache.org/jira/browse/PHOENIX-5275
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jacob Isaac
>    Assignee: William Shen
>Priority: Minor
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5275.4.x-HBase-1.3.v1.patch, 
> PHOENIX-5275.master.v1.patch, PHOENIX-5275.master.v2.patch
>
>
> The following imports 
> import org.apache.curator.shaded.com.google.common.*
> were accidentally introduced in
> phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
> phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
> phoenix-core/src/test/java/org/apache/phoenix/compile/WhereOptimizerTest.java





Re: A successful jenkins test run, at last

2019-06-10 Thread William Shen
Awesome news. Thanks Lars!
Not sure if I'm looking in the wrong place, but I don't think I've found a
green build on Jenkins yet. Was this successful run local-only for now?
https://builds.apache.org/job/PreCommit-PHOENIX-Build/

On Mon, Jun 10, 2019 at 8:54 AM la...@apache.org  wrote:

> I finally managed to get a single successful test run. Yeah! :)
>
> The fact that this is worth a note to the @dev list points to a larger
> problem, though... Our tests have been failing for months (or years? I
> don't remember the last successful run.)
>
> I'll continue to stabilize the tests (for 4.15.x and 5.1.x). I'll also
> keep an eye on the test runtimes.
>
> More importantly, I will also start to veto and revert _any_ change that
> breaks a test; other committers, please do the same. It is the task of
> every contributor and committer to ensure that the test suite passes.
>
> Only that way can we hope to have more frequent releases.
>
> -- Lars
>
> And here's the proof:
>
> [image: Inline image]


Re: [DISCUSS] Maintaining the Site in Git Instead of SVN

2019-06-05 Thread William Shen
Vincent, thanks for clearing up the discussion :)

Do folks have a preference on which of the following we should take? I'm
leaning toward 2).
1) move docs from SVN to a dedicated git repository
2) move docs from SVN to the main phoenix git repository, and publish site
from master branch
3) move docs from SVN to the main phoenix git repository, and publish site
from __ branch

- Will

On Mon, Jun 3, 2019 at 10:54 AM Vincent Poon  wrote:

> I think the confusion here is stemming from two different things:
> 1)  Moving docs from SVN to git (potentially its own repo)
> 2) Moving docs into the same apache/phoenix repo as the source code.
>
> I think we should do #1, and not #2, which avoids the concerns Thomas and
> James have.
>
> On Mon, Jun 3, 2019 at 10:46 AM James Taylor 
> wrote:
>
> > The problem with committing docs with code is that the release happens
> much
> > later. We’ve always tried to get the doc changes pushed out before the
> > release is cut.
> >
> > On Mon, Jun 3, 2019 at 9:30 AM William Shen 
> > wrote:
> >
> > > Thomas, that makes sense now. Thanks for explaining; I didn't catch
> > > that the first time. Would it make sense to you guys that we publish
> > > the website from master, or should we publish from, say, the latest
> > > of 4.x?
> > >
> > > On Mon, Jun 3, 2019 at 9:11 AM Thomas D'Silva
> > >  wrote:
> > >
> > > > William, I was referring to your comment about ensuring doc changes
> > > > are checked in with code changes.
> > > > I assume you meant that the doc change would go into the same
> > > > pull request as the code change?
> > > > But I guess since we currently mostly commit patches to all branches
> > this
> > > > should be fine. We could have the website
> > > > module in one of the branches.
> > > >
> > > > On Sun, Jun 2, 2019 at 10:53 PM William Shen <
> > wills...@marinsoftware.com
> > > >
> > > > wrote:
> > > >
> > > > > I think we do need to come up with a strategy on how to maintain
> > > website
> > > > > documentation given that we have several versions that may
> > potentially
> > > > > conflict in documentation at times. Thomas, is this your main
> > concern?
> > > > >
> > > > >
> > > > > Josh - Would love to help drive it, though I'm not sure if I have
> > > > > all the right access to do so.
> > > > > Seems like we would need to:
> > > > > - commit the svn site directory into git master (I can create a
> patch
> > > but
> > > > > would need help committing this)
> > > > > - file an infra ticket to migrate the website, and enable
> git-pubsub
> > > > > (though I'm totally speaking outside of my knowledge here...)
> > > > >
> > > > > On Sun, Jun 2, 2019 at 11:42 AM Josh Elser 
> > > wrote:
> > > > >
> > > > > > Yeah, not sure I get your concern, Thomas. We only have one
> > website.
> > > > > >
> > > > > > From the ASF Infra side, svn-pubsub (what deploys our code on SVN
> > > > > > check-in) works the same as git-pubsub. It should just be a
> request
> > > to
> > > > > > Infra to migrate the website from SVN to Git and then enable
> > > > > > git-pubsub.
> > > > > >
> > > > > > No concerns in doing this from me. Even better if you'd like to
> > drive
> > > > > > it, William ;)
> > > > > >
> > > > > > On Fri, May 31, 2019 at 2:24 PM William Shen <
> > > > wills...@marinsoftware.com
> > > > > >
> > > > > > wrote:
> > > > > > >
> > > > > > > Thomas,
> > > > > > >
> > > > > > > Which release line do we currently base our documentation on?
> Do
> > > you
> > > > > > think
> > > > > > > it makes sense to bring the site source into master, and always
> > > > update
> > > > > > the
> > > > > > > site from master?
> > > > > > >
> > > > > > > - Will
> > > > > > >
> > > > > > > On Thu, May 30, 2019 at 8:46 PM Thomas D'Silva
> > > > > > >  wrote:
> > > > > > >
> > > > > > > > Cur

Re: [DISCUSS] Maintaining the Site in Git Instead of SVN

2019-06-03 Thread William Shen
Thomas, that makes sense now. Thanks for explaining; I didn't catch that the
first time. Would it make sense to you guys that we publish the website
from master, or should we publish from, say, the latest of 4.x?

On Mon, Jun 3, 2019 at 9:11 AM Thomas D'Silva
 wrote:

> William, I was referring to your comment about ensuring doc changes are
> checked in with code changes.
> I assume you meant that the doc change would go into the same
> pull request as the code change?
> But I guess since we currently mostly commit patches to all branches this
> should be fine. We could have the website
> module in one of the branches.
>
> On Sun, Jun 2, 2019 at 10:53 PM William Shen 
> wrote:
>
> > I think we do need to come up with a strategy on how to maintain website
> > documentation given that we have several versions that may potentially
> > conflict in documentation at times. Thomas, is this your main concern?
> >
> >
> > Josh - Would love to help drive it, though I'm not sure if I have all the
> > right access to do so.
> > Seems like we would need to:
> > - commit the svn site directory into git master (I can create a patch but
> > would need help committing this)
> > - file an infra ticket to migrate the website, and enable git-pubsub
> > (though I'm totally speaking outside of my knowledge here...)
> >
> > On Sun, Jun 2, 2019 at 11:42 AM Josh Elser  wrote:
> >
> > > Yeah, not sure I get your concern, Thomas. We only have one website.
> > >
> > > From the ASF Infra side, svn-pubsub (what deploys our code on SVN
> > > check-in) works the same as git-pubsub. It should just be a request to
> > > Infra to migrate the website from SVN to Git and then enable
> > > git-pubsub.
> > >
> > > No concerns in doing this from me. Even better if you'd like to drive
> > > it, William ;)
> > >
> > > On Fri, May 31, 2019 at 2:24 PM William Shen <
> wills...@marinsoftware.com
> > >
> > > wrote:
> > > >
> > > > Thomas,
> > > >
> > > > Which release line do we currently base our documentation on? Do you
> > > think
> > > > it makes sense to bring the site source into master, and always
> update
> > > the
> > > > site from master?
> > > >
> > > > - Will
> > > >
> > > > On Thu, May 30, 2019 at 8:46 PM Thomas D'Silva
> > > >  wrote:
> > > >
> > > > > Currently this would not be easy to do since we have multiple
> > > branches. If
> > > > > we decide to
> > > > > implement Lars' proposal to have a single branch and a module per
> > > supported
> > > > > HBase version
> > > > > then we could have a module for the website as well.
> > > > >
> > > > > On Thu, May 30, 2019 at 7:03 PM swaroopa kadam <
> > > swaroopa.kada...@gmail.com
> > > > > >
> > > > > wrote:
> > > > >
> > > > > > Huge +1!
> > > > > >
> > > > > > On Thu, May 30, 2019 at 4:38 PM William Shen <
> > > wills...@marinsoftware.com
> > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Hi all,
> > > > > > >
> > > > > > > Currently, the Phoenix site is maintained in and built from SVN
> > > > > > > <https://svn.apache.org/repos/asf/phoenix/site>. Not sure what
> > > level
> > > > > of
> > > > > > > work it would require, but does it make sense to move the
> source
> > > from
> > > > > svn
> > > > > > > to git, so contribution to the website can follow the same
> > JIRA/git
> > > > > > > workflow as the rest of the project? It could also make sure
> > > changes to
> > > > > > > Phoenix code are checked in with corresponding documentation
> > > changes
> > > > > when
> > > > > > > needed.
> > > > > > >
> > > > > > > - Will
> > > > > > >
> > > > > > --
> > > > > >
> > > > > >
> > > > > > Swaroopa Kadam
> > > > > > [image: https://]about.me/swaroopa_kadam
> > > > > > <
> > > > > >
> > > > >
> > >
> >
> https://about.me/swaroopa_kadam?promo=email_sig_source=product_medium=email_sig_campaign=gmail_api
> > > > > > >
> > > > > >
> > > > >
> > >
> >
>


Re: [DISCUSS] Maintaining the Site in Git Instead of SVN

2019-06-02 Thread William Shen
I think we do need to come up with a strategy for maintaining the website
documentation, given that we have several versions whose documentation may
conflict at times. Thomas, is this your main concern?


Josh - Would love to help drive it, though I'm not sure if I have all the
right access to do so.
Seems like we would need to:
- commit the svn site directory into git master (I can create a patch but
would need help committing this)
- file an infra ticket to migrate the website, and enable git-pubsub
(though I'm totally speaking outside of my knowledge here...)

On Sun, Jun 2, 2019 at 11:42 AM Josh Elser  wrote:

> Yeah, not sure I get your concern, Thomas. We only have one website.
>
> From the ASF Infra side, svn-pubsub (what deploys our code on SVN
> check-in) works the same as git-pubsub. It should just be a request to
> Infra to migrate the website from SVN to Git and then enable
> git-pubsub.
>
> No concerns in doing this from me. Even better if you'd like to drive
> it, William ;)
>
> On Fri, May 31, 2019 at 2:24 PM William Shen 
> wrote:
> >
> > Thomas,
> >
> > Which release line do we currently base our documentation on? Do you
> think
> > it makes sense to bring the site source into master, and always update
> the
> > site from master?
> >
> > - Will
> >
> > On Thu, May 30, 2019 at 8:46 PM Thomas D'Silva
> >  wrote:
> >
> > > Currently this would not be easy to do since we have multiple
> branches. If
> > > we decide to
> > > implement Lars' proposal to have a single branch and a module per
> supported
> > > HBase version
> > > then we could have a module for the website as well.
> > >
> > > On Thu, May 30, 2019 at 7:03 PM swaroopa kadam <
> swaroopa.kada...@gmail.com
> > > >
> > > wrote:
> > >
> > > > Huge +1!
> > > >
> > > > On Thu, May 30, 2019 at 4:38 PM William Shen <
> wills...@marinsoftware.com
> > > >
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > Currently, the Phoenix site is maintained in and built from SVN
> > > > > <https://svn.apache.org/repos/asf/phoenix/site>. Not sure what
> level
> > > of
> > > > > work it would require, but does it make sense to move the source
> from
> > > svn
> > > > > to git, so contribution to the website can follow the same JIRA/git
> > > > > workflow as the rest of the project? It could also make sure
> changes to
> > > > > Phoenix code are checked in with corresponding documentation
> changes
> > > when
> > > > > needed.
> > > > >
> > > > > - Will
> > > > >
> > > > --
> > > >
> > > >
> > > > Swaroopa Kadam
> > > > [image: https://]about.me/swaroopa_kadam
> > > > <
> > > >
> > >
> https://about.me/swaroopa_kadam?promo=email_sig_source=product_medium=email_sig_campaign=gmail_api
> > > > >
> > > >
> > >
>


[jira] [Updated] (PHOENIX-5275) Remove accidental imports from curator-client-2.12.0

2019-05-31 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen updated PHOENIX-5275:
--
Attachment: PHOENIX-5275.master.v2.patch

> Remove accidental imports from curator-client-2.12.0
> 
>
> Key: PHOENIX-5275
> URL: https://issues.apache.org/jira/browse/PHOENIX-5275
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Jacob Isaac
>    Assignee: William Shen
>Priority: Minor
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5275.master.v1.patch, 
> PHOENIX-5275.master.v2.patch
>
>
> The following imports 
> import org.apache.curator.shaded.com.google.common.*
> were accidentally introduced in
> phoenix-core/src/test/java/org/apache/phoenix/query/QueryServicesTestImpl.java
> phoenix-core/src/it/java/org/apache/phoenix/end2end/UpgradeIT.java
> phoenix-core/src/test/java/org/apache/phoenix/compile/WhereOptimizerTest.java





Re: [DISCUSS] Maintaining the Site in Git Instead of SVN

2019-05-31 Thread William Shen
Thomas,

Which release line do we currently base our documentation on? Do you think
it makes sense to bring the site source into master, and always update the
site from master?

- Will

On Thu, May 30, 2019 at 8:46 PM Thomas D'Silva
 wrote:

> Currently this would not be easy to do since we have multiple branches. If
> we decide to
> implement Lars' proposal to have a single branch and a module per supported
> HBase version
> then we could have a module for the website as well.
>
> On Thu, May 30, 2019 at 7:03 PM swaroopa kadam  >
> wrote:
>
> > Huge +1!
> >
> > On Thu, May 30, 2019 at 4:38 PM William Shen  >
> > wrote:
> >
> > > Hi all,
> > >
> > > Currently, the Phoenix site is maintained in and built from SVN
> > > <https://svn.apache.org/repos/asf/phoenix/site>. Not sure what level
> of
> > > work it would require, but does it make sense to move the source from
> svn
> > > to git, so contribution to the website can follow the same JIRA/git
> > > workflow as the rest of the project? It could also make sure changes to
> > > Phoenix code are checked in with corresponding documentation changes
> when
> > > needed.
> > >
> > > - Will
> > >
> > --
> >
> >
> > Swaroopa Kadam
> > [image: https://]about.me/swaroopa_kadam
> > <
> >
> https://about.me/swaroopa_kadam?promo=email_sig_source=product_medium=email_sig_campaign=gmail_api
> > >
> >
>


[DISCUSS] Maintaining the Site in Git Instead of SVN

2019-05-30 Thread William Shen
Hi all,

Currently, the Phoenix site is maintained in and built from SVN
<https://svn.apache.org/repos/asf/phoenix/site>. Not sure what level of
work it would require, but does it make sense to move the source from SVN
to git, so contributions to the website can follow the same JIRA/git
workflow as the rest of the project? It could also help ensure changes to
Phoenix code are checked in with corresponding documentation changes when
needed.

- Will


[jira] [Assigned] (PHOENIX-5306) Misleading statement in document

2019-05-30 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen reassigned PHOENIX-5306:
-

  Assignee: William Shen
Attachment: PHOENIX-5306.patch

That makes sense to me [~elserj]. I've created a patch accordingly. 
[~krishnam], would you like to review the change?

> Misleading statement in document
> 
>
> Key: PHOENIX-5306
> URL: https://issues.apache.org/jira/browse/PHOENIX-5306
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Krishna Maheshwari
>    Assignee: William Shen
>Priority: Minor
> Attachments: PHOENIX-5306.patch
>
>
> [https://svn.apache.org/repos/asf/phoenix/site/source/src/site/markdown/views.md]
>   has the following misleading statement, as HBase scaling is not limited by 
> the number of tables but rather by the overall number of regions.
> "The standard SQL view syntax (with some limitations) is now supported by 
> Phoenix to enable multiple virtual tables to all share the same underlying 
> physical HBase table. This is especially important in HBase, as you cannot 
> realistically expect to have more than perhaps up to a hundred physical 
> tables and continue to get reasonable performance from HBase."
> This should be revised to state:
> "The standard SQL view syntax (with some limitations) is now supported by 
> Phoenix to enable multiple virtual tables to all share the same underlying 
> physical HBase table."





Re: PHOENIX-4863: Setup Travis-CI & CodeCoverage

2019-05-28 Thread William Shen
+1 It would be awesome to be able to do this.
Any concerns if we choose to run long IT as part of this setup?

On Tue, May 28, 2019 at 1:00 PM Pedro Boado  wrote:

> Which ITs would you suggest running? The test suite (including long ITs)
> takes ~2h.
>
> On Tue, 28 May 2019, 20:40 Thomas D'Silva,  .invalid>
> wrote:
>
> > +1 I think it's a great idea. This would make it easier for new
> contributors
> > to run tests
> > and also make it easier for committers to verify a patch doesn't break
> > functionality.
> >
> > On Tue, May 28, 2019 at 12:34 PM Priyank Porwal  >
> > wrote:
> >
> > > Hi,
> > >
> > > What do you guys think about this work to set up Travis-CI and
> > CodeCoverage
> > > for Phoenix? The objective would be to run unit and integration tests
> on
> > > each PR, show code-coverage reports and perhaps also do checkstyle
> checks
> > > (after an initial scrubbing effort). This would help get rid of the manual
> > > patch uploads that we currently need, plus bring visibility into code health.
> > > Thoughts?
> > >
> > > https://issues.apache.org/jira/browse/PHOENIX-4863
> > >
> > > Thanks,
> > > Priyank
> > >
> >
>


Re: The patch command could not apply the patch

2019-05-24 Thread William Shen
Thank you Thomas!

On Fri, May 24, 2019 at 2:15 PM Thomas D'Silva
 wrote:

> The list of branches that precommit patches can be applied to is found in
> the BRANCH_NAMES variable in dev/test-patch.properties on the master branch.
>
> BRANCH_NAMES="4.x-HBase-1.2 4.x-HBase-1.3 4.x-HBase-1.4 4.14-HBase-1.4
> master"
>
> It's trying to apply the patch against the master branch (by default) and
> failing. I will whitelist the CDH branches.
>
> On Fri, May 24, 2019 at 11:48 AM William Shen 
> wrote:
>
> > Just saw this again at
> >
> >
> https://issues.apache.org/jira/browse/PHOENIX-5295?focusedCommentId=16847823&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16847823
> >
> > Is this a known issue?
> >
> > On Mon, May 20, 2019 at 4:18 PM William Shen  >
> > wrote:
> >
> > > Hi all,
> > >
> > > On https://issues.apache.org/jira/browse/PHOENIX-5286, I encountered
> the
> > > Hadoop QA feedback of:
> > >
> > > -1 patch. The patch command could not apply the patch.
> > >
> > > Console output:
> > > https://builds.apache.org/job/PreCommit-PHOENIX-Build/2587//console
> > >
> > > Not totally sure what I am doing wrong here. I had followed the
> > > instructions and naming convention on
> > > http://phoenix.apache.org/contributing.html to generate the patch.
> > >
> > > Any help would be appreciated!
> > >
> > > - Will
> > >
> > >
> > >
> >
>


Re: The patch command could not apply the patch

2019-05-24 Thread William Shen
Just saw this again at
https://issues.apache.org/jira/browse/PHOENIX-5295?focusedCommentId=16847823&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16847823

Is this a known issue?

On Mon, May 20, 2019 at 4:18 PM William Shen 
wrote:

> Hi all,
>
> On https://issues.apache.org/jira/browse/PHOENIX-5286, I encountered the
> Hadoop QA feedback of:
>
> -1 patch. The patch command could not apply the patch.
>
> Console output:
> https://builds.apache.org/job/PreCommit-PHOENIX-Build/2587//console
>
> Not totally sure what I am doing wrong here. I had followed the instructions
> and naming convention on http://phoenix.apache.org/contributing.html to
> generate the patch.
>
> Any help would be appreciated!
>
> - Will
>
>
>


The patch command could not apply the patch

2019-05-20 Thread William Shen
Hi all,

On https://issues.apache.org/jira/browse/PHOENIX-5286, I encountered the
Hadoop QA feedback of:

-1 patch. The patch command could not apply the patch.

Console output:
https://builds.apache.org/job/PreCommit-PHOENIX-Build/2587//console

Not totally sure what I am doing wrong here. I had followed the instructions
and naming convention on http://phoenix.apache.org/contributing.html to
generate the patch.

Any help would be appreciated!

- Will


Re: [VOTE] Release of Apache Phoenix 4.14.2 RC1

2019-05-20 Thread William Shen
+1 Thanks for driving this Thomas.

On Mon, May 20, 2019 at 7:17 AM cheng...@apache.org 
wrote:

> +1, verified in my environment.
>
> At 2019-05-20 09:16:37, "Jaanai Zhang"  wrote:
> >+1
> >
> >
> >   Jaanai Zhang
> >   Best regards!
> >
> >
> >
> >Thomas D'Silva  wrote on Sat, May 18, 2019 at 3:38 PM:
> >
> >> Hello Everyone,
> >>
> >> This is a call for a vote on Apache Phoenix 4.14.2 RC1. This is a patch
> >> release of Phoenix 4.14 and is compatible with Apache HBase 1.3 and 1.4.
> >> The release includes both a source-only release and a convenience binary
> >> release for each supported HBase version.
> >>
> >> This release has feature parity with supported HBase versions and
> includes
> >> several critical bug fixes.
> >>
> >> The source tarball, including signatures, digests, etc can be found at:
> >>
> >>
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.2-HBase-1.3-rc1/src/
> >>
> >>
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.2-HBase-1.4-rc1/src/
> >>
> >> The binary artifacts can be found at:
> >>
> >>
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.2-HBase-1.3-rc1/bin/
> >>
> >>
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.2-HBase-1.4-rc1/bin/
> >>
> >> For a complete list of changes, see:
> >>
> >>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315120&version=12344379
> >>
> >> Artifacts are signed with my "CODE SIGNING KEY": DFD86C02
> >>
> >> KEYS file available here:
> >> https://dist.apache.org/repos/dist/dev/phoenix/KEYS
> >>
> >> The hash and tag to be voted upon:
> >>
> >>
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=e3505d91e46de1a1756a145d396f27a3c70e927f
> >>
> >>
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=4b717825fb284c1f9fbdeee3d0e5391f5baf13bb
> >>
> >>
> >> Vote will be open for at least 72 hours. Please vote:
> >>
> >> [ ] +1 approve
> >> [ ] +0 no opinion
> >> [ ] -1 disapprove (and reason why)
> >>
> >> Thanks,
> >> The Apache Phoenix Team
> >>
>


[jira] [Assigned] (PHOENIX-5286) IndexScrutinyToolIT Ignored in 4.x-CDH

2019-05-17 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen reassigned PHOENIX-5286:
-

Assignee: William Shen

> IndexScrutinyToolIT Ignored in 4.x-CDH
> --
>
> Key: PHOENIX-5286
> URL: https://issues.apache.org/jira/browse/PHOENIX-5286
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh
>    Reporter: William Shen
>Assignee: William Shen
>Priority: Major
>
> The entire {{IndexScrutinyToolIT}} is marked with {{@Ignore}} in 4.x-CDH, 
> but based on James' [comment| 
> https://issues.apache.org/jira/browse/PHOENIX-4372?focusedCommentId=16258577&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16258577],
>  it seems like we should've only marked {{testOutputInvalidRowsToFile}} as 
> {{@Ignore}}.
> PHOENIX-4388 is open to fix testOutputInvalidRowsToFile, but maybe we should 
> re-enable the other test cases in {{IndexScrutinyToolIT}} to make sure they 
> continue to pass with the releases.
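
A sketch of the intended change (JUnit 4 annotations; test method names other
than testOutputInvalidRowsToFile are hypothetical):

    import org.junit.Ignore;
    import org.junit.Test;

    // Before: the @Ignore on the class skipped the whole suite.
    // @Ignore
    // public class IndexScrutinyToolIT { ... }

    // After: only the known-broken case stays skipped, pending PHOENIX-4388.
    public class IndexScrutinyToolIT {

        @Ignore("PHOENIX-4388")
        @Test
        public void testOutputInvalidRowsToFile() { /* ... */ }

        @Test
        public void testScrutinyOnValidRows() { /* re-enabled */ }
    }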





[jira] [Created] (PHOENIX-5286) IndexScrutinyToolIT Ignored in 4.x-CDH

2019-05-17 Thread William Shen (JIRA)
William Shen created PHOENIX-5286:
-

 Summary: IndexScrutinyToolIT Ignored in 4.x-CDH
 Key: PHOENIX-5286
 URL: https://issues.apache.org/jira/browse/PHOENIX-5286
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.13.2-cdh
Reporter: William Shen


The entire {{IndexScrutinyToolIT}} is marked with {{@Ignore}} in 4.x-CDH, but 
based on James' [comment| 
https://issues.apache.org/jira/browse/PHOENIX-4372?focusedCommentId=16258577&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16258577],
 it seems like we should've only marked {{testOutputInvalidRowsToFile}} as 
{{@Ignore}}.

PHOENIX-4388 is open to fix testOutputInvalidRowsToFile, but maybe we should 
re-enable the other test cases in {{IndexScrutinyToolIT}} to make sure they 
continue to pass with the releases.





[jira] [Updated] (PHOENIX-4373) Local index variable length key can have trailing nulls while upserting

2019-05-16 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen updated PHOENIX-4373:
--
Fix Version/s: 4.13.2-cdh

> Local index variable length key can have trailing nulls while upserting
> ---
>
> Key: PHOENIX-4373
> URL: https://issues.apache.org/jira/browse/PHOENIX-4373
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Vincent Poon
>Assignee: Vincent Poon
>Priority: Major
> Fix For: 5.0.0-alpha, 4.14.0, 4.13.2-cdh
>
> Attachments: PHOENIX-4373.v1.master.patch
>
>
> In UpsertCompiler#setValues(), if it's a local index, the key is prefixed 
> with the regionPrefix. During that process, ptr.get() is called to get the 
> base key, and the code assumes the entire array should be used. However, if 
> it's a variable-length key, we could have trailing nulls, since the base 
> key's ptr array size is just an estimate.
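
A sketch of the general pattern being described (not the actual PHOENIX-4373
patch): with an ImmutableBytesWritable, only the offset/length window of the
backing array is valid.

    import java.util.Arrays;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;

    class KeyCopyExample {
        // Buggy pattern: treats the pointer's backing array as exactly the
        // key, so an oversized array leaves trailing null bytes in the key.
        static byte[] buggyKey(ImmutableBytesWritable ptr) {
            return ptr.get();
        }

        // Correct pattern: honor offset and length, copying only the valid
        // range of the backing array.
        static byte[] correctKey(ImmutableBytesWritable ptr) {
            return Arrays.copyOfRange(ptr.get(), ptr.getOffset(),
                    ptr.getOffset() + ptr.getLength());
        }
    }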





[jira] [Resolved] (PHOENIX-4754) Fix flapping of DerivedTableIT after PHOENIX-4728

2019-05-16 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen resolved PHOENIX-4754.
---
Resolution: Won't Fix

Closing as Won't Fix, since 4.x-HBase-1.1 is now EOL per 
http://mail-archives.apache.org/mod_mbox/phoenix-dev/201807.mbox/%3CCAHBoBrUALp0f3aSA%2BrFJ7c6mjEhgooVZyqpkRdvO9mRXE70jbw%40mail.gmail.com%3E


> Fix flapping of DerivedTableIT after PHOENIX-4728
> -
>
> Key: PHOENIX-4754
> URL: https://issues.apache.org/jira/browse/PHOENIX-4754
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: James Taylor
>Assignee: Xavier Jodoin
>Priority: Minor
>
> Looks like PHOENIX-4728 introduced some flappiness in DerivedTableIT in HBase 
> 1.1. Please investigate, [~xjodoin]. See 
> [https://builds.apache.org/job/Phoenix-4.x-HBase-1.1/757/] and subsequent 
> test runs.





[jira] [Closed] (PHOENIX-3848) Fix MutableIndexFailureIT for 1.1 and 0.98 branches

2019-05-16 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen closed PHOENIX-3848.
-

> Fix MutableIndexFailureIT for 1.1 and 0.98 branches
> ---
>
> Key: PHOENIX-3848
> URL: https://issues.apache.org/jira/browse/PHOENIX-3848
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>
> It's currently tagged as @Ignore due to row lock failures (these don't occur 
> for HBase 1.2 and 1.3)





[jira] [Resolved] (PHOENIX-3848) Fix MutableIndexFailureIT for 1.1 and 0.98 branches

2019-05-16 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen resolved PHOENIX-3848.
---
Resolution: Won't Fix

Closing as Won't Fix, since 0.98 and 1.1 are now EOL per 
http://mail-archives.apache.org/mod_mbox/phoenix-dev/201807.mbox/%3CCAHBoBrUALp0f3aSA%2BrFJ7c6mjEhgooVZyqpkRdvO9mRXE70jbw%40mail.gmail.com%3E


> Fix MutableIndexFailureIT for 1.1 and 0.98 branches
> ---
>
> Key: PHOENIX-3848
> URL: https://issues.apache.org/jira/browse/PHOENIX-3848
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Priority: Major
>
> It's currently tagged as @Ignore due to row lock failures (these don't occur 
> for HBase 1.2 and 1.3)





[jira] [Resolved] (PHOENIX-3864) PhoenixConsumerIT is timing out frequently in 4.x-HBase-0.98

2019-05-16 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen resolved PHOENIX-3864.
---
Resolution: Won't Fix

Closing as Won't Fix, since 0.98 is now EOL per 
http://mail-archives.apache.org/mod_mbox/phoenix-dev/201807.mbox/%3CCAHBoBrUALp0f3aSA%2BrFJ7c6mjEhgooVZyqpkRdvO9mRXE70jbw%40mail.gmail.com%3E


> PhoenixConsumerIT is timing out frequently in 4.x-HBase-0.98
> 
>
> Key: PHOENIX-3864
> URL: https://issues.apache.org/jira/browse/PHOENIX-3864
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Kalyan
>Priority: Major
> Attachments: PHOENIX-3864.patch
>
>
> PhoenixConsumerIT is timing out quite frequently in the 4.x-HBase-0.98 branch 
> causing our builds to fail. I've added an Ignore tag for now, but please 
> investigate. Not sure if you have any quick ideas, [~jmahonin].





[jira] [Closed] (PHOENIX-4754) Fix flapping of DerivedTableIT after PHOENIX-4728

2019-05-16 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen closed PHOENIX-4754.
-

> Fix flapping of DerivedTableIT after PHOENIX-4728
> -
>
> Key: PHOENIX-4754
> URL: https://issues.apache.org/jira/browse/PHOENIX-4754
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: James Taylor
>Assignee: Xavier Jodoin
>Priority: Minor
>
> Looks like PHOENIX-4728 introduced some flappiness in DerivedTableIT in HBase 
> 1.1. Please investigate, [~xjodoin]. See 
> [https://builds.apache.org/job/Phoenix-4.x-HBase-1.1/757/] and subsequent 
> test runs.





[jira] [Closed] (PHOENIX-3864) PhoenixConsumerIT is timing out frequently in 4.x-HBase-0.98

2019-05-16 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen closed PHOENIX-3864.
-

> PhoenixConsumerIT is timing out frequently in 4.x-HBase-0.98
> 
>
> Key: PHOENIX-3864
> URL: https://issues.apache.org/jira/browse/PHOENIX-3864
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: Kalyan
>Priority: Major
> Attachments: PHOENIX-3864.patch
>
>
> PhoenixConsumerIT is timing out quite frequently in the 4.x-HBase-0.98 branch 
> causing our builds to fail. I've added an Ignore tag for now, but please 
> investigate. Not sure if you have any quick ideas, [~jmahonin].





[jira] [Resolved] (PHOENIX-3450) Fix hanging tests in IndexExtendedIT for 4.x-HBase-0.98

2019-05-16 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen resolved PHOENIX-3450.
---
Resolution: Won't Fix

Closing as Won't Fix, since 0.98 is now EOL per 
http://mail-archives.apache.org/mod_mbox/phoenix-dev/201807.mbox/%3CCAHBoBrUALp0f3aSA%2BrFJ7c6mjEhgooVZyqpkRdvO9mRXE70jbw%40mail.gmail.com%3E


> Fix hanging tests in IndexExtendedIT for 4.x-HBase-0.98
> ---
>
> Key: PHOENIX-3450
> URL: https://issues.apache.org/jira/browse/PHOENIX-3450
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Priority: Major
>
> It appears that many of the tests in IndexExtendedIT are hanging when run 
> against local indexes. We should investigate this and get the tests working 
> again to make sure there are no lurking issues here.





[jira] [Closed] (PHOENIX-3450) Fix hanging tests in IndexExtendedIT for 4.x-HBase-0.98

2019-05-16 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen closed PHOENIX-3450.
-

> Fix hanging tests in IndexExtendedIT for 4.x-HBase-0.98
> ---
>
> Key: PHOENIX-3450
> URL: https://issues.apache.org/jira/browse/PHOENIX-3450
> Project: Phoenix
>  Issue Type: Test
>Reporter: James Taylor
>Priority: Major
>
> It appears that many of the tests in IndexExtendedIT are hanging when run 
> against local indexes. We should investigate this and get the tests working 
> again to make sure there are no lurking issues here.





[jira] [Resolved] (PHOENIX-4117) Add hadoop qa files for 4.x-HBase-0.98 branch

2019-05-16 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen resolved PHOENIX-4117.
---
Resolution: Won't Do

Closing as Won't Do, since 0.98 is now EOL per 
http://mail-archives.apache.org/mod_mbox/phoenix-dev/201807.mbox/%3CCAHBoBrUALp0f3aSA%2BrFJ7c6mjEhgooVZyqpkRdvO9mRXE70jbw%40mail.gmail.com%3E

> Add hadoop qa files for 4.x-HBase-0.98 branch
> -
>
> Key: PHOENIX-4117
> URL: https://issues.apache.org/jira/browse/PHOENIX-4117
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Major
> Attachments: PHOENIX-4117.patch
>
>






[jira] [Closed] (PHOENIX-4117) Add hadoop qa files for 4.x-HBase-0.98 branch

2019-05-16 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen closed PHOENIX-4117.
-

> Add hadoop qa files for 4.x-HBase-0.98 branch
> -
>
> Key: PHOENIX-4117
> URL: https://issues.apache.org/jira/browse/PHOENIX-4117
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Major
> Attachments: PHOENIX-4117.patch
>
>






[jira] [Updated] (PHOENIX-4994) Update New Data in HBase to Phoenix Secondary Index Table in Real Time

2019-05-16 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4994?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen updated PHOENIX-4994:
--
Description: 
We are currently writing data to HBase in real time via Flume; is there a 
way to sync newly added HBase data in real time to the Phoenix mapping 
table and the Phoenix secondary index table?

Attempted translation:
bq. We're currently writing data to HBase in real time via flume, is there a 
way to have the new data updated to Phoenix secondary index table in real time.

  was: We are currently writing data to HBase in real time via Flume; is 
there a way to sync newly added HBase data in real time to the Phoenix 
mapping table and the Phoenix secondary index table?

Summary: Update New Data in HBase to Phoenix Secondary Index Table in 
Real Time  (was: Sync New HBase Data to the Phoenix Secondary Index Table)

> Update New Data in HBase to Phoenix Secondary Index Table in Real Time
> --
>
> Key: PHOENIX-4994
> URL: https://issues.apache.org/jira/browse/PHOENIX-4994
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0
> Environment: hbase:1.2.1-cdh5.14.0
> phoenix: 4.14.0
>Reporter: 申俊伯
>Priority: Major
>  Labels: patch
> Fix For: 4.14.0
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> We are currently writing data to HBase in real time via Flume; is there a 
> way to sync newly added HBase data in real time to the Phoenix mapping 
> table and the Phoenix secondary index table?
> Attempted translation:
> bq. We're currently writing data to HBase in real time via flume, is there a 
> way to have the new data updated to Phoenix secondary index table in real 
> time.
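
Worth noting for this request: Phoenix secondary indexes are only maintained
for writes that go through Phoenix, so rows written straight to HBase (e.g.
by Flume) bypass index maintenance. A minimal sketch of routing writes
through the Phoenix JDBC driver instead (the URL, table, and columns are
hypothetical):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class PhoenixUpsertExample {
        public static void main(String[] args) throws Exception {
            // Writes through the Phoenix driver keep secondary indexes in sync.
            try (Connection conn =
                    DriverManager.getConnection("jdbc:phoenix:zk-host:2181")) {
                PreparedStatement ps = conn.prepareStatement(
                        "UPSERT INTO EVENTS (ID, PAYLOAD) VALUES (?, ?)");
                ps.setString(1, "event-1");
                ps.setString(2, "hello");
                ps.executeUpdate();
                conn.commit(); // Phoenix batches mutations until commit
            }
        }
    }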





Re: [VOTE] Release of Apache Phoenix 4.14.2 RC0

2019-05-07 Thread William Shen
Not sure if this would be the right thread to attach this discussion to: do
we plan to also release a 4.14.x patch for the CDH branches?

On Thu, May 2, 2019 at 12:12 PM Thomas D'Silva
 wrote:

> -1
>
> PHOENIX-5101 and PHOENIX-5266 are blockers.
>
> On Tue, Apr 23, 2019 at 5:24 PM William Shen 
> wrote:
>
> > The exception seems to be unique to our use case where the server uses
> > swagger and has a conflict on guava with the version of guava needed in
> the
> > HBase client. I was able to get the code running by removing swagger.
> >
> > However, using 4.14.2-HBase-1.2, I am still running into the NPE in
> > https://issues.apache.org/jira/browse/PHOENIX-5101
> > I had left a comment
> > <
> >
> https://issues.apache.org/jira/browse/PHOENIX-5101?focusedCommentId=16812747&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16812747
> > >
> > on the ticket previously because the fix might not work for HBase 1.2 and
> > 1.3
> >
> > Stacktrace:
> >
> > Caused by: java.lang.NullPointerException: null
> >at
> >
> org.apache.phoenix.iterate.ScanningResultIterator.getScanMetrics(ScanningResultIterator.java:100)
> > ~[phoenix-core-4.14.2-HBase-1.2.jar:4.14.2-HBase-1.2]
> >at
> >
> org.apache.phoenix.iterate.ScanningResultIterator.close(ScanningResultIterator.java:80)
> > ~[phoenix-core-4.14.2-HBase-1.2.jar:4.14.2-HBase-1.2]
> >at
> >
> org.apache.phoenix.iterate.TableResultIterator.close(TableResultIterator.java:144)
> > ~[phoenix-core-4.14.2-HBase-1.2.jar:4.14.2-HBase-1.2]
> >at
> >
> org.apache.phoenix.iterate.LookAheadResultIterator$1.close(LookAheadResultIterator.java:42)
> > ~[phoenix-core-4.14.2-HBase-1.2.jar:4.14.2-HBase-1.2]
> >at
> >
> org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:1442)
> > ~[phoenix-core-4.14.2-HBase-1.2.jar:4.14.2-HBase-1.2]
> >at
> >
> org.apache.phoenix.iterate.RoundRobinResultIterator.close(RoundRobinResultIterator.java:125)
> > ~[phoenix-core-4.14.2-HBase-1.2.jar:4.14.2-HBase-1.2]
> >    ... 64 common frames omitted
> >
> >
> > On Mon, Apr 22, 2019 at 5:11 PM William Shen  >
> > wrote:
> >
> > > Seems to be due to https://issues.apache.org/jira/browse/HBASE-14963
> > which
> > > is not in HBase-1.2
> > >
> > > On Mon, Apr 22, 2019 at 4:38 PM William Shen <
> wills...@marinsoftware.com
> > >
> > > wrote:
> > >
> > >> the bottom of the stacktrace:
> > >> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException:
> > >> java.lang.IllegalAccessError: tried to access method
> > >> com.google.common.base.Stopwatch.<init>()V from class
> > >> org.apache.hadoop.hbase.zookeeper.MetaTableLocator
> > >> at
> > >>
> >
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:239)
> > >> at
> > >>
> >
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
> > >> at
> > >>
> > org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
> > >> at
> > >>
> >
> org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
> > >> at
> > >>
> >
> org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
> > >> at
> > >>
> >
> org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
> > >> at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:797)
> > >> at
> > >>
> >
> org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
> > >> at
> > >>
> >
> org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
> > >> at
> > >>
> >
> org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:406)
> > >> at
> > >>
> >
> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1097)
> > >> ... 30 more
> > >> Caused by: java.lang.IllegalAccessError: tried to access method
> > >> com.google.common.base.Stopwatch.<init>()V from class
> > >> org.apache.hadoop.hbase.zookeeper.MetaTableLocator
> > >> at
> > >>
> >
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:604)
> > >> at

Re: [VOTE] Release of Apache Phoenix 4.14.2 RC0

2019-04-23 Thread William Shen
The exception seems to be unique to our use case, where the server uses
swagger and has a conflict between its guava version and the version of
guava needed by the HBase client. I was able to get the code running by
removing swagger.

However, using 4.14.2-HBase-1.2, I am still running into the NPE in
https://issues.apache.org/jira/browse/PHOENIX-5101
I had left a comment
<https://issues.apache.org/jira/browse/PHOENIX-5101?focusedCommentId=16812747&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16812747>
on the ticket previously because the fix might not work for HBase 1.2 and
1.3

Stacktrace:

Caused by: java.lang.NullPointerException: null
   at 
org.apache.phoenix.iterate.ScanningResultIterator.getScanMetrics(ScanningResultIterator.java:100)
~[phoenix-core-4.14.2-HBase-1.2.jar:4.14.2-HBase-1.2]
   at 
org.apache.phoenix.iterate.ScanningResultIterator.close(ScanningResultIterator.java:80)
~[phoenix-core-4.14.2-HBase-1.2.jar:4.14.2-HBase-1.2]
   at 
org.apache.phoenix.iterate.TableResultIterator.close(TableResultIterator.java:144)
~[phoenix-core-4.14.2-HBase-1.2.jar:4.14.2-HBase-1.2]
   at 
org.apache.phoenix.iterate.LookAheadResultIterator$1.close(LookAheadResultIterator.java:42)
~[phoenix-core-4.14.2-HBase-1.2.jar:4.14.2-HBase-1.2]
   at 
org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:1442)
~[phoenix-core-4.14.2-HBase-1.2.jar:4.14.2-HBase-1.2]
   at 
org.apache.phoenix.iterate.RoundRobinResultIterator.close(RoundRobinResultIterator.java:125)
~[phoenix-core-4.14.2-HBase-1.2.jar:4.14.2-HBase-1.2]
   ... 64 common frames omitted


On Mon, Apr 22, 2019 at 5:11 PM William Shen 
wrote:

> Seems to be due to https://issues.apache.org/jira/browse/HBASE-14963 which
> is not in HBase-1.2
>
> On Mon, Apr 22, 2019 at 4:38 PM William Shen 
> wrote:
>
>> the bottom of the stacktrace:
>> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException:
>> java.lang.IllegalAccessError: tried to access method
>> com.google.common.base.Stopwatch.<init>()V from class
>> org.apache.hadoop.hbase.zookeeper.MetaTableLocator
>> at
>> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:239)
>> at
>> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
>> at
>> org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
>> at
>> org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
>> at
>> org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
>> at
>> org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
>> at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:797)
>> at
>> org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
>> at
>> org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
>> at
>> org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:406)
>> at
>> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1097)
>> ... 30 more
>> Caused by: java.lang.IllegalAccessError: tried to access method
>> com.google.common.base.Stopwatch.<init>()V from class
>> org.apache.hadoop.hbase.zookeeper.MetaTableLocator
>> at
>> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:604)
>> at
>> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:588)
>> at
>> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:561)
>> at
>> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
>> at
>> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1211)
>> at
>> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1178)
>> at
>> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:305)
>> at
>> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
>> at
>> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
>> at
>> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
>> ... 39 more
>>
>> On Mon, Apr 22, 2019 at 4:26 PM William Shen 
>> wrote:
>>
>>> Not sure if it's a local issue unique to my setup, but when I set up my
>>> Java client to use 4.14.2, I encounter t

Re: [VOTE] Release of Apache Phoenix 4.14.2 RC0

2019-04-22 Thread William Shen
Seems to be due to https://issues.apache.org/jira/browse/HBASE-14963 which
is not in HBase-1.2

On Mon, Apr 22, 2019 at 4:38 PM William Shen 
wrote:

> the bottom of the stacktrace:
> Caused by: org.apache.hadoop.hbase.DoNotRetryIOException:
> java.lang.IllegalAccessError: tried to access method
> com.google.common.base.Stopwatch.<init>()V from class
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator
> at
> org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:239)
> at
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
> at
> org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
> at
> org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
> at
> org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
> at
> org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
> at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:797)
> at
> org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
> at
> org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
> at
> org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:406)
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1097)
> ... 30 more
> Caused by: java.lang.IllegalAccessError: tried to access method
> com.google.common.base.Stopwatch.<init>()V from class
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator
> at
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:604)
> at
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:588)
> at
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:561)
> at
> org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
> at
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1211)
> at
> org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1178)
> at
> org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:305)
> at
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
> at
> org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
> at
> org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
> ... 39 more
>
> On Mon, Apr 22, 2019 at 4:26 PM William Shen 
> wrote:
>
>> Not sure if it's a local issue unique to my setup, but when I set up my
>> Java client to use 4.14.2, I encounter the following upon making a JDBC
>> connection. Anyone else?
>> Exception in thread "main"
>> org.apache.phoenix.exception.PhoenixIOException:
>> java.lang.IllegalAccessError: tried to access method
>> com.google.common.base.Stopwatch.<init>()V from class
>> org.apache.hadoop.hbase.zookeeper.MetaTableLocator
>> at
>> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:144)
>> at
>> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1197)
>> at
>> org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1491)
>> at
>> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2731)
>> at
>> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1115)
>> at
>> org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
>> at
>> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
>> at
>> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
>> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>> at
>> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
>> at
>> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
>> at
>> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1806)
>> at
>> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2536)
>> at
>> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2499)
>> at
>> org.apache.phoenix.util.PhoenixContextExecutor

Re: [VOTE] Release of Apache Phoenix 4.14.2 RC0

2019-04-22 Thread William Shen
the bottom of the stacktrace:
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException:
java.lang.IllegalAccessError: tried to access method
com.google.common.base.Stopwatch.<init>()V from class
org.apache.hadoop.hbase.zookeeper.MetaTableLocator
at
org.apache.hadoop.hbase.client.RpcRetryingCaller.translateException(RpcRetryingCaller.java:239)
at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:212)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:327)
at
org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:302)
at
org.apache.hadoop.hbase.client.ClientScanner.initializeScannerInConstruction(ClientScanner.java:167)
at
org.apache.hadoop.hbase.client.ClientScanner.<init>(ClientScanner.java:162)
at org.apache.hadoop.hbase.client.HTable.getScanner(HTable.java:797)
at
org.apache.hadoop.hbase.MetaTableAccessor.fullScan(MetaTableAccessor.java:602)
at
org.apache.hadoop.hbase.MetaTableAccessor.tableExists(MetaTableAccessor.java:366)
at
org.apache.hadoop.hbase.client.HBaseAdmin.tableExists(HBaseAdmin.java:406)
at
org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1097)
... 30 more
Caused by: java.lang.IllegalAccessError: tried to access method
com.google.common.base.Stopwatch.<init>()V from class
org.apache.hadoop.hbase.zookeeper.MetaTableLocator
at
org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:604)
at
org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:588)
at
org.apache.hadoop.hbase.zookeeper.MetaTableLocator.blockUntilAvailable(MetaTableLocator.java:561)
at
org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:61)
at
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateMeta(ConnectionManager.java:1211)
at
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1178)
at
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:305)
at
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:156)
at
org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at
org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
... 39 more

On Mon, Apr 22, 2019 at 4:26 PM William Shen 
wrote:

> Not sure if it's a local issue unique to my setup, but when I set up my
> Java client to use 4.14.2, I encounter the following upon making a JDBC
> connection. Anyone else?
> Exception in thread "main"
> org.apache.phoenix.exception.PhoenixIOException:
> java.lang.IllegalAccessError: tried to access method
> com.google.common.base.Stopwatch.<init>()V from class
> org.apache.hadoop.hbase.zookeeper.MetaTableLocator
> at
> org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:144)
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1197)
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1491)
> at
> org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2731)
> at
> org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1115)
> at
> org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
> at
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
> at
> org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
> at
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
> at
> org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
> at
> org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1806)
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2536)
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2499)
> at
> org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
> at
> org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2499)
> at
> org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
> at
> org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:147)
> at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
> at java.sql.DriverManager.getConnection(DriverManager.java:664)
> at java.sql.Dri

Re: [VOTE] Release of Apache Phoenix 4.14.2 RC0

2019-04-22 Thread William Shen
Not sure if it's a local issue unique to my setup, but when I set up my
Java client to use 4.14.2, I encounter the following upon making a JDBC
connection. Anyone else?
Exception in thread "main" org.apache.phoenix.exception.PhoenixIOException:
java.lang.IllegalAccessError: tried to access method
com.google.common.base.Stopwatch.<init>()V from class
org.apache.hadoop.hbase.zookeeper.MetaTableLocator
at
org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:144)
at
org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1197)
at
org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1491)
at
org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2731)
at
org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1115)
at
org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:192)
at
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
at
org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:390)
at
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
at
org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1806)
at
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2536)
at
org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2499)
at
org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at
org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2499)
at
org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at
org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:147)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:208)
at
com.marin.data.store.connection.ConnectionProvider.getConnection(ConnectionProvider.java:104)
at
com.marin.data.store.connection.ConnectionProvider.testConnection(ConnectionProvider.java:123)
at
com.marin.data.store.connection.ConnectionProvider.<init>(ConnectionProvider.java:52)
at
com.marin.data.store.connection.ConnectionProvider.createInstance(ConnectionProvider.java:66)

On Sun, Apr 21, 2019 at 3:00 PM Thomas D'Silva  wrote:

> Hello Everyone,
>
> This is a call for a vote on Apache Phoenix 4.14.2 RC0. This is a patch
> release of Phoenix 4.14 and is compatible with Apache HBase 1.2, 1.3 and
> 1.4.
> The release includes both a source-only release and a convenience binary
> release for each supported HBase version.
>
> This release has feature parity with supported HBase versions and includes
> critical bug fixes for secondary indexes.
>
> The source tarball, including signatures, digests, etc can be found at:
>
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.2-HBase-1.2-rc0/src/
>
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.2-HBase-1.3-rc0/src/
>
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.2-HBase-1.4-rc0/src/
>
> The binary artifacts can be found at:
>
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.2-HBase-1.2-rc0/bin/
>
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.2-HBase-1.3-rc0/bin/
>
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.14.2-HBase-1.4-rc0/bin/
>
> For a complete list of changes, see:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315120&version=12344379
>
> Artifacts are signed with my "CODE SIGNING KEY": DFD86C02
>
> KEYS file available here:
> https://dist.apache.org/repos/dist/dev/phoenix/KEYS
>
> The hash and tag to be voted upon:
>
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=2a61e639ebc4f373cd9dc3b17e628fd2e3f14c4e
>
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=b9842b6a8f1b94ca148e2f657a5d16da6cb43a41
>
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=6e2e1bed79961a31d9d01db7e53dc481b7ab521c
>
>
> Vote will be open for at least 72 hours. Please vote:
>
> [ ] +1 approve
> [ ] +0 no opinion
> [ ] -1 disapprove (and reason why)
>
> Thanks,
> The Apache Phoenix Team
>


[jira] [Assigned] (PHOENIX-5254) Broken Link to Installation Section of the Download Page

2019-04-22 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen reassigned PHOENIX-5254:
-

Assignee: Xinyi Yan

> Broken Link to Installation Section of the Download Page
> 
>
> Key: PHOENIX-5254
> URL: https://issues.apache.org/jira/browse/PHOENIX-5254
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: William Shen
>Assignee: Xinyi Yan
>Priority: Minor
>
> In https://phoenix.apache.org/Phoenix-in-15-minutes-or-less.html, we 
> reference https://phoenix.apache.org/download.html#Installation for 
> installation instructions. However, that section does not exist on the 
> download page. The installation instructions are actually at 
> https://phoenix.apache.org/installation.html





[jira] [Created] (PHOENIX-5254) Broken Link to Installation Section of the Download Page

2019-04-22 Thread William Shen (JIRA)
William Shen created PHOENIX-5254:
-

 Summary: Broken Link to Installation Section of the Download Page
 Key: PHOENIX-5254
 URL: https://issues.apache.org/jira/browse/PHOENIX-5254
 Project: Phoenix
  Issue Type: Bug
Reporter: William Shen


In https://phoenix.apache.org/Phoenix-in-15-minutes-or-less.html, we reference 
https://phoenix.apache.org/download.html#Installation for installation 
instructions. However, that section does not exist on the download page. The 
installation instructions are actually at 
https://phoenix.apache.org/installation.html





Re: [DISCUSS] Making a 4.14.2 release for bug fixes

2019-04-16 Thread William Shen
Bump. Would anyone else like to chime in regarding what should or should not
be included as part of 4.14.2?

Right now we have, based on the filter:
Issue Type | Issue Key | Summary | Status | Resolution
Bug | PHOENIX-5243 | PhoenixResultSet#next() closes the result set if scanner returns null | Patch Available |
Bug | PHOENIX-5207 | Create index if not exists fails incorrectly if table has 'maxIndexesPerTable' indexes already | Resolved | Fixed
Bug | PHOENIX-5184 | HBase and Phoenix connection leaks in Indexing code path, OrphanViewTool and PhoenixConfigurationUtil | Resolved | Fixed
Bug | PHOENIX-5169 | Query logger is still initialized for each query when the log level is off | Resolved | Fixed
Improvement | PHOENIX-5131 | Make spilling to disk for order/group by configurable | Resolved | Fixed
Bug | PHOENIX-5126 | RegionScanner leak leading to store files not getting cleared | Resolved | Fixed
Bug | PHOENIX-5123 | Avoid using MappedByteBuffers for server side GROUP BY | Resolved | Fixed
Bug | PHOENIX-5122 | PHOENIX-4322 breaks client backward compatibility | Resolved | Fixed
Bug | PHOENIX-5101 | ScanningResultIterator getScanMetrics throws NPE | Resolved | Fixed
Bug | PHOENIX-5084 | Changes from Transactional Tables are not visible to query in different client | Resolved | Fixed
Bug | PHOENIX-5073 | Invalid PIndexState during rolling upgrade from 4.13 to 4.14 | Resolved | Fixed
Improvement | PHOENIX-5069 | Use asynchronous refresh to provide non-blocking Phoenix Stats Client Cache | Resolved | Fixed
Improvement | PHOENIX-5026 | Add client setting to disable server side mutations | Resolved | Fixed
Improvement | PHOENIX-4900 | Modify MAX_MUTATION_SIZE_EXCEEDED and MAX_MUTATION_SIZE_BYTES_EXCEEDED exception message to recommend turning autocommit on for deletes | Resolved | Fixed

On Sat, Apr 13, 2019 at 5:59 PM William Shen 
wrote:

> Also, there're a handful of tickets already marked with Fix Version 4.14.2:
>
> https://issues.apache.org/jira/browse/PHOENIX-5207?jql=project%20%3D%20PHOENIX%20AND%20fixVersion%20%3D%204.14.2
>
> On Fri, Apr 12, 2019 at 3:09 PM Chinmay Kulkarni <
> chinmayskulka...@gmail.com> wrote:
>
>> +1 to the idea of a 4.14.2 release. Here are some JIRAs to backport that
>> come to mind:
>>
>> https://issues.apache.org/jira/browse/PHOENIX-5184
>> https://issues.apache.org/jira/browse/PHOENIX-5169
>> https://issues.apache.org/jira/browse/PHOENIX-5131
>>
>> Thanks,
>> Chinmay
>>
>>
>> On Fri, Apr 12, 2019 at 1:33 PM William Shen 
>> wrote:
>>
>> > Hi,
>> >
>> > Since there're still some blockers for 4.15, what does the community
>> feel
>> > about back-porting some of the bug fixes for a 4.14.2 release?
>> >
>> > One issue I have in mind is the NPE fix (
>> > https://issues.apache.org/jira/browse/PHOENIX-5101). Is there anything
>> > else
>> > that would be of interest?
>> >
>> > Thanks
>> >
>> > - Will
>> >
>>
>>
>> --
>> Chinmay Kulkarni
>> M.S. Computer Science,
>> University of Illinois at Urbana-Champaign.
>> B. Tech Computer Engineering,
>> College of Engineering, Pune.
>>
>


[jira] [Updated] (PHOENIX-5184) HBase and Phoenix connection leaks in Indexing code path, OrphanViewTool and PhoenixConfigurationUtil

2019-04-16 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen updated PHOENIX-5184:
--
Fix Version/s: 4.14.2

> HBase and Phoenix connection leaks in Indexing code path, OrphanViewTool and 
> PhoenixConfigurationUtil
> -
>
> Key: PHOENIX-5184
> URL: https://issues.apache.org/jira/browse/PHOENIX-5184
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5184-4.x-HBase-1.3-v1.patch, 
> PHOENIX-5184-4.x-HBase-1.3.patch, PHOENIX-5184-v1.patch, PHOENIX-5184.patch
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> I was debugging a connection leak issue and ran into a few areas where there 
> are connection leaks. I decided to take a broader look overall and see if 
> there were other places where we leak connections and found some candidates. 
> This is by no means an exhaustive search for connection leaks.





[jira] [Updated] (PHOENIX-5101) ScanningResultIterator getScanMetrics throws NPE

2019-04-16 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen updated PHOENIX-5101:
--
Fix Version/s: 4.14.2

> ScanningResultIterator getScanMetrics throws NPE
> 
>
> Key: PHOENIX-5101
> URL: https://issues.apache.org/jira/browse/PHOENIX-5101
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
>Reporter: Reid Chan
>Assignee: Karan Mehta
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5101.414-HBase-1.4.001.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> {code}
> java.lang.NullPointerException
>   at 
> org.apache.phoenix.iterate.ScanningResultIterator.getScanMetrics(ScanningResultIterator.java:92)
>   at 
> org.apache.phoenix.iterate.ScanningResultIterator.close(ScanningResultIterator.java:79)
>   at 
> org.apache.phoenix.iterate.TableResultIterator.close(TableResultIterator.java:144)
>   at 
> org.apache.phoenix.iterate.LookAheadResultIterator$1.close(LookAheadResultIterator.java:42)
>   at 
> org.apache.phoenix.iterate.BaseResultIterators.close(BaseResultIterators.java:1439)
>   at 
> org.apache.phoenix.iterate.MergeSortResultIterator.close(MergeSortResultIterator.java:44)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.close(PhoenixResultSet.java:176)
>   at 
> org.apache.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:807)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.frame(JdbcResultSet.java:148)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.create(JdbcResultSet.java:101)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcResultSet.create(JdbcResultSet.java:81)
>   at 
> org.apache.calcite.avatica.jdbc.JdbcMeta.prepareAndExecute(JdbcMeta.java:759)
>   at 
> org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:206)
>   at 
> org.apache.calcite.avatica.remote.Service$PrepareAndExecuteRequest.accept(Service.java:927)
>   at 
> org.apache.calcite.avatica.remote.Service$PrepareAndExecuteRequest.accept(Service.java:879)
>   at 
> org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
>   at 
> org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:123)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler$2.call(AvaticaProtobufHandler.java:121)
>   at 
> org.apache.phoenix.queryserver.server.QueryServer$PhoenixDoAsCallback$1.run(QueryServer.java:500)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>   at 
> org.apache.phoenix.queryserver.server.QueryServer$PhoenixDoAsCallback.doAsRemoteUser(QueryServer.java:497)
>   at 
> org.apache.calcite.avatica.server.HttpServer$Builder$1.doAsRemoteUser(HttpServer.java:884)
>   at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:120)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:542)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:311)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.apache.phoenix.shaded.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {code}





[jira] [Updated] (PHOENIX-5131) Make spilling to disk for order/group by configurable

2019-04-16 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen updated PHOENIX-5131:
--
Fix Version/s: 4.14.2

> Make spilling to disk for order/group by configurable
> -
>
> Key: PHOENIX-5131
> URL: https://issues.apache.org/jira/browse/PHOENIX-5131
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Abhishek Singh Chouhan
>Assignee: Abhishek Singh Chouhan
>Priority: Major
> Fix For: 4.15.0, 5.1.0, 4.14.2
>
> Attachments: PHOENIX-5131-4.x-HBase-1.2.patch, 
> PHOENIX-5131-4.x-HBase-1.3.patch, PHOENIX-5131-4.x-HBase-1.4.patch, 
> PHOENIX-5131-master-v2.patch, PHOENIX-5131-master-v2.patch, 
> PHOENIX-5131-master-v3.patch, PHOENIX-5131-master-v4.patch, 
> PHOENIX-5131-master.patch, PHOENIX-5131-master.patch
>
>
> We've observed that large queries, doing order/group by leading to issues on 
> the regionserver (crashes/long gc pauses/file handler exhaustion etc.). We 
> should make spilling to disk configurable and in case its disabled, fail the 
> query once it hits the spilling limit on any of the region servers. Also make 
> spooling threshold server-side property only to prevent clients from 
> controlling memory allocation on the rs side.





Re: [DISCUSS] Making a 4.14.2 release for bug fixes

2019-04-13 Thread William Shen
Also, there're a handful of tickets already marked with Fix Version 4.14.2:
https://issues.apache.org/jira/browse/PHOENIX-5207?jql=project%20%3D%20PHOENIX%20AND%20fixVersion%20%3D%204.14.2

On Fri, Apr 12, 2019 at 3:09 PM Chinmay Kulkarni 
wrote:

> +1 to the idea of a 4.14.2 release. Here are some JIRAs to backport that
> come to mind:
>
> https://issues.apache.org/jira/browse/PHOENIX-5184
> https://issues.apache.org/jira/browse/PHOENIX-5169
> https://issues.apache.org/jira/browse/PHOENIX-5131
>
> Thanks,
> Chinmay
>
>
> On Fri, Apr 12, 2019 at 1:33 PM William Shen 
> wrote:
>
> > Hi,
> >
> > Since there're still some blockers for 4.15, what does the community feel
> > about back-porting some of the bug fixes for a 4.14.2 release?
> >
> > One issue I have in mind is the NPE fix (
> > https://issues.apache.org/jira/browse/PHOENIX-5101). Is there anything
> > else
> > that would be of interest?
> >
> > Thanks
> >
> > - Will
> >
>
>
> --
> Chinmay Kulkarni
> M.S. Computer Science,
> University of Illinois at Urbana-Champaign.
> B. Tech Computer Engineering,
> College of Engineering, Pune.
>


[DISCUSS] Making a 4.14.2 release for bug fixes

2019-04-12 Thread William Shen
Hi,

Since there're still some blockers for 4.15, what does the community feel
about back-porting some of the bug fixes for a 4.14.2 release?

One issue I have in mind is the NPE fix (
https://issues.apache.org/jira/browse/PHOENIX-5101). Is there anything else
that would be of interest?

Thanks

- Will


Re: Plan to release 4.15.0?

2019-04-12 Thread William Shen
Thanks Thomas. I will take a look and follow up if there are any other
questions!

On Thu, Apr 11, 2019 at 5:31 PM Thomas D'Silva
 wrote:

> Do you mean instructions on how to cut an RC for 4.14.2? We have
> instructions here:
> https://phoenix.apache.org/release.html
> I will forward you the email thread about how we did the 4.14.1 release.
>
> On Thu, Apr 11, 2019 at 5:09 PM William Shen 
> wrote:
>
> > Thomas,
> > I would love to help, but need someone to show me how :)
> >
> > On Thu, Apr 11, 2019 at 4:43 PM Thomas D'Silva
> >  wrote:
> >
> > > I have been busy with $dayjob so didn't get time to work on those
> > blockers,
> > > so that we can get the 4.15 release out.
> > > Another option is to do a 4.14.2, which includes other bug fixes as
> well.
> > > Will are you interested in being RM for 4.14.2 ?
> > >
> > > On Thu, Apr 11, 2019 at 4:17 PM William Shen <
> wills...@marinsoftware.com
> > >
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > Looking to pick up the fix for PHOENIX-5101
> > > > <https://issues.apache.org/jira/browse/PHOENIX-5101> from 4.15.0,
> and
> > > > wondering what's the expectation on when we think 4.15.0 will be
> > > considered
> > > > for release. Took a quick look, and seems like current blockers are:
> > > > PHOENIX-5103 <https://issues.apache.org/jira/browse/PHOENIX-5103>,
> > > > PHOENIX-5104 <https://issues.apache.org/jira/browse/PHOENIX-5104>,
> and
> > > > PHOENIX-5057 <https://issues.apache.org/jira/browse/PHOENIX-5057>.
> > > >
> > > > Thanks
> > > >
> > >
> >
>


Re: Plan to release 4.15.0?

2019-04-11 Thread William Shen
Thomas,
I would love to help, but need someone to show me how :)

On Thu, Apr 11, 2019 at 4:43 PM Thomas D'Silva
 wrote:

> I have been busy with $dayjob so didn't get time to work on those blockers,
> so that we can get the 4.15 release out.
> Another option is to do a 4.14.2, which includes other bug fixes as well.
> Will are you interested in being RM for 4.14.2 ?
>
> On Thu, Apr 11, 2019 at 4:17 PM William Shen 
> wrote:
>
> > Hi,
> >
> > Looking to pick up the fix for PHOENIX-5101
> > <https://issues.apache.org/jira/browse/PHOENIX-5101> from 4.15.0, and
> > wondering what's the expectation on when we think 4.15.0 will be
> considered
> > for release. Took a quick look, and seems like current blockers are:
> > PHOENIX-5103 <https://issues.apache.org/jira/browse/PHOENIX-5103>,
> > PHOENIX-5104 <https://issues.apache.org/jira/browse/PHOENIX-5104>, and
> > PHOENIX-5057 <https://issues.apache.org/jira/browse/PHOENIX-5057>.
> >
> > Thanks
> >
>


Plan to release 4.15.0?

2019-04-11 Thread William Shen
Hi,

Looking to pick up the fix for PHOENIX-5101 from 4.15.0, and wondering what
the expectation is on when we think 4.15.0 will be considered for release.
Took a quick look, and it seems like the current blockers are: PHOENIX-5103,
PHOENIX-5104, and PHOENIX-5057.

Thanks


[jira] [Updated] (PHOENIX-5166) DELETE didn't work on phoenix

2019-04-11 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen updated PHOENIX-5166:
--
Fix Version/s: 4.15.0

Restoring the fix version, as I was informed that a future fix version 
indicates a target.

> DELETE didn't work on phoenix
> -
>
> Key: PHOENIX-5166
> URL: https://issues.apache.org/jira/browse/PHOENIX-5166
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
> Environment: Apache HBase 1.4
> Phoenix 4.14.1
>Reporter: Yuan Yifan
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: image-2019-02-26-13-56-44-201.png, 
> image-2019-02-26-14-34-50-580.png, image-2019-02-26-15-35-53-229.png, 
> image-2019-02-26-15-36-08-649.png, image-2019-02-26-15-36-47-364.png
>
>
> I executed a SQL statement that deletes some data in table T_LOCATION, and the
> data decreased.
> But there are still 256 rows of data that I cannot delete, however many times
> I run DELETE.
> !image-2019-02-26-13-56-44-201.png!
>  
> I'm preparing to fix it, but I don't know where in the code I should debug. Is
> there any document describing the structure/design of Phoenix?





Re: Unresolved JIRA issue with Fix Version

2019-04-11 Thread William Shen
Thanks for the clarification. I will restore the fix version on the JIRA
ticket that I removed it from.

On Thu, Apr 11, 2019 at 12:04 PM Thomas D'Silva
 wrote:

> The future release version is used to indicate a planned target.
>
> On Wed, Apr 10, 2019 at 4:37 PM William Shen 
> wrote:
>
> > Hi,
> >
> > Quick question about issue tracking for Apache Phoenix. There're a
> handful
> > of tickets currently in unresolved states (Open, In Progress, Patch
> > Available), and yet have the Fix Version specified as 4.15.0. Are these
> > incorrectly marked with a fix version, or do we use future release in the
> > Fix Version to indicate a planned target?
> >
> >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20PHOENIX%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20fixVersion%20%3D%204.15.0
> >
> > Thanks
> >
>


[jira] [Created] (PHOENIX-5238) Pass NO_CACHE Hint with PhoenixRDD

2019-04-11 Thread William Shen (JIRA)
William Shen created PHOENIX-5238:
-

 Summary: Pass NO_CACHE Hint with PhoenixRDD
 Key: PHOENIX-5238
 URL: https://issues.apache.org/jira/browse/PHOENIX-5238
 Project: Phoenix
  Issue Type: New Feature
Reporter: William Shen


As a Spark developer, I want to query with NO_CACHE hint using PhoenixRDD, so I 
can prevent large one-time scans from affecting the block cache.





Unresolved JIRA issue with Fix Version

2019-04-10 Thread William Shen
Hi,

Quick question about issue tracking for Apache Phoenix. There're a handful
of tickets currently in unresolved states (Open, In Progress, Patch
Available), and yet have the Fix Version specified as 4.15.0. Are these
incorrectly marked with a fix version, or do we use future release in the
Fix Version to indicate a planned target?

https://issues.apache.org/jira/issues/?jql=project%20%3D%20PHOENIX%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20fixVersion%20%3D%204.15.0

Thanks


[jira] [Updated] (PHOENIX-5166) DELETE didn't work on phoenix

2019-04-10 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen updated PHOENIX-5166:
--
Fix Version/s: (was: 4.15.0)

I am removing the fix version from this JIRA, as I cannot find an associated 
patch.

[~TsingJyujing] were you able to create a reproducible case for this issue you 
encountered?

> DELETE didn't work on phoenix
> -
>
> Key: PHOENIX-5166
> URL: https://issues.apache.org/jira/browse/PHOENIX-5166
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.1
> Environment: Apache HBase 1.4
> Phoenix 4.14.1
>Reporter: Yuan Yifan
>Priority: Major
> Attachments: image-2019-02-26-13-56-44-201.png, 
> image-2019-02-26-14-34-50-580.png, image-2019-02-26-15-35-53-229.png, 
> image-2019-02-26-15-36-08-649.png, image-2019-02-26-15-36-47-364.png
>
>
> I executed a SQL statement that deletes some data in table T_LOCATION, and the
> data decreased.
> But there are still 256 rows of data that I cannot delete, however many times
> I run DELETE.
> !image-2019-02-26-13-56-44-201.png!
>  
> I'm preparing to fix it, but I don't know where in the code I should debug. Is
> there any document describing the structure/design of Phoenix?





PartialIndexRebuilderIT Flaky?

2019-03-08 Thread William Shen
Hi,

I saw that we've tried in the past to address the flakiness of
PartialIndexRebuilderIT (PHOENIX-4239), but I'm wondering if this is still an
issue. I am on phoenix-4.13-HBase-1.2, and 9 times out of 10 this test fails
by running out of time. Does anyone know of any tricks to make the build less
flaky when running locally?

Thanks,

[INFO] ---
[INFO]  T E S T S
[INFO] ---
[INFO] Running org.apache.phoenix.end2end.index.PartialIndexRebuilderIT
[ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed:
511.708 s <<< FAILURE! - in
org.apache.phoenix.end2end.index.PartialIndexRebuilderIT
[ERROR]
testConcurrentUpsertsWithRebuild(org.apache.phoenix.end2end.index.PartialIndexRebuilderIT)
Time elapsed: 511.708 s  <<< FAILURE!
java.lang.AssertionError: Ran out of time
at
org.apache.phoenix.end2end.index.PartialIndexRebuilderIT.testConcurrentUpsertsWithRebuild(PartialIndexRebuilderIT.java:220)


[jira] [Updated] (PHOENIX-5122) PHOENIX-4322 breaks client backward compatibility

2019-03-04 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen updated PHOENIX-5122:
--
Description: 
Scenario :

*4.13 client -> 4.14.1 server*
{noformat}
Connected to: Phoenix (version 4.13)
Driver: PhoenixEmbeddedDriver (version 4.13)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true 
to skip)...
135/135 (100%) Done
Done
sqlline version 1.1.9
0: jdbc:phoenix:localhost> 
0: jdbc:phoenix:localhost> 
0: jdbc:phoenix:localhost> CREATE table P_T02 (oid VARCHAR NOT NULL, code 
VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
No rows affected (1.31 seconds)
0: jdbc:phoenix:localhost> 
0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0001', 
'v0001');
1 row affected (0.033 seconds)
0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0002', 
'v0002');
1 row affected (0.004 seconds)
0: jdbc:phoenix:localhost> 
0: jdbc:phoenix:localhost> select * from P_T02 where (oid, code) IN 
(('0001', 'v0001'), ('0002', 'v0002'));
+--+--+
| OID | CODE |
+--+--+
+--+--+
No rows selected (0.033 seconds)
0: jdbc:phoenix:localhost> select * from P_T02 ;
+--+--+
| OID | CODE |
+--+--+
| 0002 | v0002 |
| 0001 | v0001 |
+--+--+
2 rows selected (0.016 seconds)
0: jdbc:phoenix:localhost>
 {noformat}


*4.14.1 client -> 4.14.1 server* 
{noformat}
Connected to: Phoenix (version 4.14)
Driver: PhoenixEmbeddedDriver (version 4.14)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true 
to skip)...
133/133 (100%) Done
Done
sqlline version 1.1.9
0: jdbc:phoenix:localhost> 
0: jdbc:phoenix:localhost> CREATE table P_T01 (oid VARCHAR NOT NULL, code 
VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
No rows affected (1.273 seconds)
0: jdbc:phoenix:localhost> 
0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0001', 
'v0001');
1 row affected (0.056 seconds)
0: jdbc:phoenix:localhost> upsert into P_T01 (oid, code) values ('0002', 
'v0002');
1 row affected (0.004 seconds)
0: jdbc:phoenix:localhost> 
0: jdbc:phoenix:localhost> select * from P_T01 where (oid, code) IN 
(('0001', 'v0001'), ('0002', 'v0002'));
+--+--+
| OID | CODE |
+--+--+
| 0002 | v0002 |
| 0001 | v0001 |
+--+--+
2 rows selected (0.051 seconds)
0: jdbc:phoenix:localhost> select * from P_T01 ;
+--+--+
| OID | CODE |
+--+--+
| 0002 | v0002 |
| 0001 | v0001 |
+--+--+
2 rows selected (0.017 seconds)
0: jdbc:phoenix:localhost>
{noformat}


  was:
Scenario :

*4.13 client -> 4.14.1 server*

Connected to: Phoenix (version 4.13)
Driver: PhoenixEmbeddedDriver (version 4.13)
Autocommit status: true
Transaction isolation: TRANSACTION_READ_COMMITTED
Building list of tables and columns for tab-completion (set fastconnect to true 
to skip)...
135/135 (100%) Done
Done
sqlline version 1.1.9
0: jdbc:phoenix:localhost> 
0: jdbc:phoenix:localhost> 
0: jdbc:phoenix:localhost> CREATE table P_T02 (oid VARCHAR NOT NULL, code 
VARCHAR NOT NULL constraint pk primary key (oid DESC, code DESC));
No rows affected (1.31 seconds)
0: jdbc:phoenix:localhost> 
0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0001', 
'v0001');
1 row affected (0.033 seconds)
0: jdbc:phoenix:localhost> upsert into P_T02 (oid, code) values ('0002', 
'v0002');
1 row affected (0.004 seconds)
0: jdbc:phoenix:localhost> 
0: jdbc:phoenix:localhost> select * from P_T02 where (oid, code) IN 
(('0001', 'v0001'), ('0002', 'v0002'));
+--+--+
| OID | CODE |
+--+--+
+---

Re: CDH 6.X road map

2019-02-21 Thread William Shen
Thank you Pedro for supporting the CDH branches... Looking forward to your
patch.

On Thu, Feb 21, 2019 at 5:50 PM Pedro Boado  wrote:

> Sorry for not being on top of this for a long time. I have ongoing work in
> my laptop for this task. Give me a couple of days so I can review non
> working bits and push it.
>
> On Fri, 22 Feb 2019, 00:45 William Shen, 
> wrote:
>
> > Mahdi,
> > I did a little bit of mailinglist/JIRA-diving, and found the following
> > thread, which may be helpful to get you started in addition to the two
> JIRA
> > tickets/patches for previous work (
> > https://issues.apache.org/jira/browse/PHOENIX-4372 and
> > https://issues.apache.org/jira/browse/PHOENIX-4956):
> >
> > -- Forwarded message -
> > From: Pedro Boado 
> > Date: Thu, Sep 13, 2018 at 11:17 PM
> > Subject: Re: Phoenix 5.0 + Cloudera CDH 6.0 Integration
> > To: 
> >
> >
> > Hi Curtis,
> >
> > As far as I am aware nobody is working on it.
> >
> > The approach I followed is, starting from master:
> > - Change parent pom to use cloudera dependencies instead of Apache's for
> > compilation
> > - Make it compile (relatively easy for 4.x)
> > - Make it pass tests (not that easy)
> > - Once it passed, wrote the parcel module.
> >
> > There aren't a lot of code differences between the cdh and base branches,
> > so I'd try to follow the same approach, basing it on the changes in the
> > current differences.
> >
> > I really think this effort is needed to try to bring both branches again
> as
> > close to each other as possible. Maybe some code differences are no
> longer
> > needed because CDH is already closer to the Apache version of Hadoop 3.0
> > and Hbase 2.0 than it was when I did the migration.
> >
> > Hope it helps,
> > Pedro.
> >
> > On 14 Sep 2018 04:57, "Curtis Howard" 
> > wrote:
> >
> > Hi all,
> >
> > Is there anyone working towards Phoenix 5.0 / Cloudera (CDH) 6.0
> > integration at this point?  I could not find any related JIRA for this
> > after a quick search, and wanted to check here first before adding one.
> >
> > If I were to attempt this myself, is there a suggested approach?  I can
> see
> > from previous 4.x-cdh5.* branches supporting these releases that the
> > changes for PHOENIX-4372 (
> > https://issues.apache.org/jira/browse/PHOENIX-4372
> > )
> > move the builds to CDH dependencies - for example:
> >
> >
> https://github.com/apache/phoenix/commit/024f0f22a5929da6f095dc0025b8e899e2f0c47b
> >
> > Would following the pattern of that commit (or attempting a cherry-pick)
> > onto the the v5.0.0-HBase-2.0 tagged release (
> > https://github.com/apache/phoenix/tree/v5.0.0-HBase-2.0) be a reasonable
> > starting point?
> >
> > Thanks in advance
> >
> > Curtis
> >
> > On Wed, Feb 20, 2019 at 1:28 PM William Shen  >
> > wrote:
> >
> > > Same here. Would love to help with adding more CDH versions, if someone
> > > can help guide us on how to best go about it based on the current cdh
> > > branches.
> > >
> > > On Tue, Feb 12, 2019 at 5:10 PM Mahdi Salarkia
>  > >
> > > wrote:
> > >
> > >> Hi
> > >> Is there a plan to release a Hbase-2.0-CDH (Cloudera 6.X) compatible
> > >> version of Phoenix anytime soon?
> > >> I can see Phoenix currently supports older versions of CDH (5.11, ...)
> > but
> > >> doesn't seem to be much work being done for the version 6.
> > >> P.S : I'll be happy to help given instructions to help build the CDH
> 6.X
> > >> version
> > >>
> > >> Thanks
> > >> Mehdi
> > >>
> > >
> >
>


Re: CDH 6.X road map

2019-02-21 Thread William Shen
Mahdi,
I did a little bit of mailinglist/JIRA-diving, and found the following
thread, which may be helpful to get you started in addition to the two JIRA
tickets/patches for previous work (
https://issues.apache.org/jira/browse/PHOENIX-4372 and
https://issues.apache.org/jira/browse/PHOENIX-4956):

-- Forwarded message -
From: Pedro Boado 
Date: Thu, Sep 13, 2018 at 11:17 PM
Subject: Re: Phoenix 5.0 + Cloudera CDH 6.0 Integration
To: 


Hi Curtis,

As far as I am aware nobody is working on it.

The approach I followed is, starting from master:
- Change parent pom to use cloudera dependencies instead of Apache's for
compilation
- Make it compile (relatively easy for 4.x)
- Make it pass tests (not that easy)
- Once it passed, wrote the parcel module.

There aren't a lot of code differences between the cdh and base branches, so
I'd try to follow the same approach, basing it on the changes in the current
differences.

I really think this effort is needed to try to bring both branches again as
close to each other as possible. Maybe some code differences are no longer
needed because CDH is already closer to the Apache version of Hadoop 3.0
and Hbase 2.0 than it was when I did the migration.

Hope it helps,
Pedro.

On 14 Sep 2018 04:57, "Curtis Howard"  wrote:

Hi all,

Is there anyone working towards Phoenix 5.0 / Cloudera (CDH) 6.0
integration at this point?  I could not find any related JIRA for this
after a quick search, and wanted to check here first before adding one.

If I were to attempt this myself, is there a suggested approach?  I can see
from previous 4.x-cdh5.* branches supporting these releases that the
changes for PHOENIX-4372 (https://issues.apache.org/jira/browse/PHOENIX-4372
)
move the builds to CDH dependencies - for example:
https://github.com/apache/phoenix/commit/024f0f22a5929da6f095dc0025b8e899e2f0c47b

Would following the pattern of that commit (or attempting a cherry-pick)
onto the the v5.0.0-HBase-2.0 tagged release (
https://github.com/apache/phoenix/tree/v5.0.0-HBase-2.0) be a reasonable
starting point?

Thanks in advance

Curtis

On Wed, Feb 20, 2019 at 1:28 PM William Shen 
wrote:

> Same here. Would love to help with adding more CDH versions, if someone
> can help guide us on how to best go about it based on the current cdh
> branches.
>
> On Tue, Feb 12, 2019 at 5:10 PM Mahdi Salarkia 
> wrote:
>
>> Hi
>> Is there a plan to release a Hbase-2.0-CDH (Cloudera 6.X) compatible
>> version of Phoenix anytime soon?
>> I can see Phoenix currently supports older versions of CDH (5.11, ...) but
>> doesn't seem to be much work being done for the version 6.
>> P.S : I'll be happy to help given instructions to help build the CDH 6.X
>> version
>>
>> Thanks
>> Mehdi
>>
>


Re: How Phoenix JDBC connection get hbase configuration

2019-02-20 Thread William Shen
Whatever is in super.getConf() should get overridden by hbase-site.xml,
because addHbaseResources will layer hbase-site.xml on last. The question is
which one got picked up... (maybe there is another one on the classpath; is
that possible?)

On Wed, Feb 20, 2019 at 4:10 PM Xiaoxiao Wang 
wrote:

> I'm trying it out on the MapReduce application; I made it work on my toy
> application
>
> On Wed, Feb 20, 2019 at 4:09 PM William Shen 
> wrote:
>
> > A bit of a long shot, but do you happen to have another hbase-site.xml
> > bundled in your jar accidentally that might be overriding what is on the
> > classpath?
> >
> > On Wed, Feb 20, 2019 at 3:58 PM Xiaoxiao Wang  >
> > wrote:
> >
> > > A bit more information, I feel the classpath didn't get passed in
> > correctly
> > > by doing
> > >
> > > conf = HBaseConfiguration.addHbaseResources(super.getConf());
> > >
> > > and this conf also didn't pick up the expected properties
> > >
> > >
> > > On Wed, Feb 20, 2019 at 3:56 PM Xiaoxiao Wang 
> > wrote:
> > >
> > > > Pedro
> > > >
> > > > thanks for your info, yes, I have tried both
> > > > HADOOP_CLASSPATH=/etc/hbase/conf/hbase-site.xml and
> > > > HADOOP_CLASSPATH=/etc/hbase/conf/ (without file), and yes checked
> > > > hadoop-env.sh as well to make sure it did
> > > > HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/others
> > > >
> > > > And also for your second question, it is indeed a map reduce job, and
> > it
> > > > is trying to query phoenix from map function! (and we make sure all
> the
> > > > nodes have hbase-site.xml installed properly )
> > > >
> > > > thanks
> > > >
> > > > On Wed, Feb 20, 2019 at 3:53 PM Pedro Boado 
> > > wrote:
> > > >
> > > >> Your classpath variable should be pointing to the folder containing
> > your
> > > >> hbase-site.xml and not directly to the file.
> > > >>
> > > >> But certain distributions tend to override that envvar inside
> > > >> hadoop-env.sh
> > > >> or hadoop.sh .
> > > >>
> > > >> Out of curiosity, have you written a map-reduce application and are
> > you
> > > >> querying phoenix from map functions?
> > > >>
> > > >> On Wed, 20 Feb 2019, 23:34 Xiaoxiao Wang,
>  > >
> > > >> wrote:
> > > >>
> > > >> > HI Pedro
> > > >> >
> > > >> > thanks for your help, I think we know that we need to set the
> > > classpath
> > > >> to
> > > >> > the hadoop program, and what we tried was
> > > >> > HADOOP_CLASSPATH=/etc/hbase/conf/hbase-site.xml hadoop jar
> $test_jar
> > > >> but it
> > > >> > didn't work
> > > >> > So we are wondering if anything we did wrong?
> > > >> >
> > > >> > On Wed, Feb 20, 2019 at 3:24 PM Pedro Boado 
> > > wrote:
> > > >> >
> > > >> > > Hi,
> > > >> > >
> > > >> > > How many concurrent client connections are we talking about? You
> > > >> might be
> > > >> > > opening more connections than the RS can handle ( under these
> > > >> > circumstances
> > > >> > > most of the client threads would end exhausting their retry
> count
> > )
> > > .
> > > >> I
> > > >> > > would bet that you've get a bottleneck in the RS keeping
> > > >> SYSTEM.CATALOG
> > > >> > > table (this was an issue in 4.7 ) as every new connection would
> be
> > > >> > querying
> > > >> > > this table first.
> > > >> > >
> > > >> > > Try to update to our cloudera-compatible parcels instead of
> using
> > > >> clabs -
> > > >> > > which are discontinued by Cloudera and not supported by the
> Apache
> > > >> > Phoenix
> > > >> > > project - .
> > > >> > >
> > > >> > > Once updated to phoenix 4.14 you should be able to use
> > > >> > > UPDATE_CACHE_FREQUENCY
> > > >> > > property in order to reduce pressure on system tables.
> > > >> > >
> > > >> > > Adding an hbase-site.xml with the

Re: How Phoenix JDBC connection get hbase configuration

2019-02-20 Thread William Shen
A bit of a long shot, but do you happen to have another hbase-site.xml
bundled in your jar accidentally that might be overriding what is on the
classpath?

On Wed, Feb 20, 2019 at 3:58 PM Xiaoxiao Wang 
wrote:

> A bit more information, I feel the classpath didn't get passed in correctly
> by doing
>
> conf = HBaseConfiguration.addHbaseResources(super.getConf());
>
> and this conf also didn't pick up the expected properties
>
>
> On Wed, Feb 20, 2019 at 3:56 PM Xiaoxiao Wang  wrote:
>
> > Pedro
> >
> > thanks for your info, yes, I have tried both
> > HADOOP_CLASSPATH=/etc/hbase/conf/hbase-site.xml and
> > HADOOP_CLASSPATH=/etc/hbase/conf/ (without file), and yes checked
> > hadoop-env.sh as well to make sure it did
> > HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/others
> >
> > And also for your second question, it is indeed a map reduce job, and it
> > is trying to query phoenix from map function! (and we make sure all the
> > nodes have hbase-site.xml installed properly )
> >
> > thanks
> >
> > On Wed, Feb 20, 2019 at 3:53 PM Pedro Boado 
> wrote:
> >
> >> Your classpath variable should be pointing to the folder containing your
> >> hbase-site.xml and not directly to the file.
> >>
> >> But certain distributions tend to override that envvar inside
> >> hadoop-env.sh
> >> or hadoop.sh .
> >>
> >> Out of curiosity, have you written a map-reduce application and are you
> >> querying phoenix from map functions?
> >>
> >> On Wed, 20 Feb 2019, 23:34 Xiaoxiao Wang, 
> >> wrote:
> >>
> >> > HI Pedro
> >> >
> >> > thanks for your help, I think we know that we need to set the
> classpath
> >> to
> >> > the hadoop program, and what we tried was
> >> > HADOOP_CLASSPATH=/etc/hbase/conf/hbase-site.xml hadoop jar $test_jar
> >> but it
> >> > didn't work
> >> > So we are wondering if anything we did wrong?
> >> >
> >> > On Wed, Feb 20, 2019 at 3:24 PM Pedro Boado 
> wrote:
> >> >
> >> > > Hi,
> >> > >
> >> > > How many concurrent client connections are we talking about? You
> >> might be
> >> > > opening more connections than the RS can handle ( under these
> >> > circumstances
> >> > > most of the client threads would end exhausting their retry count )
> .
> >> I
> >> > > would bet that you've get a bottleneck in the RS keeping
> >> SYSTEM.CATALOG
> >> > > table (this was an issue in 4.7 ) as every new connection would be
> >> > querying
> >> > > this table first.
> >> > >
> >> > > Try to update to our cloudera-compatible parcels instead of using
> >> clabs -
> >> > > which are discontinued by Cloudera and not supported by the Apache
> >> > Phoenix
> >> > > project - .
> >> > >
> >> > > Once updated to phoenix 4.14 you should be able to use
> >> > > UPDATE_CACHE_FREQUENCY
> >> > > property in order to reduce pressure on system tables.
> >> > >
> >> > > Adding an hbase-site.xml with the required properties to the client
> >> > > application classpath should just work.
> >> > >
> >> > > I hope it helps.
> >> > >
> >> > > On Wed, 20 Feb 2019, 22:50 Xiaoxiao Wang,
>  >> >
> >> > > wrote:
> >> > >
> >> > > > Hi, who may help
> >> > > >
> >> > > > We are running a Hadoop application that needs to use phoenix JDBC
> >> > > > connection from the workers.
> >> > > > The connection works, but when too many connection established at
> >> the
> >> > > same
> >> > > > time, it throws RPC timeouts
> >> > > >
> >> > > > Error: java.io.IOException:
> >> > > > org.apache.phoenix.exception.PhoenixIOException: Failed after
> >> > > > attempts=36, exceptions: Wed Feb 20 20:02:43 UTC 2019, null,
> >> > > > java.net.SocketTimeoutException: callTimeout=60000,
> >> > > > callDuration=60506. ...
> >> > > >
> >> > > > So we figured we should probably set a higher hbase.rpc.timeout
> >> > > > value, but then we ran into this issue:
> >> > > >
> >> > > > A little bit background on how we run the application
> >> > > >
> >> > > > Here is how we get PhoenixConnection from java program
> >> > > > DriverManager.getConnection("jdbc:phoenix:host", props)
> >> > > > And we trigger the program by using
> >> > > > hadoop jar $test_jar
> >> > > >
> >> > > >
> >> > > > We have tried multiple approaches to load hbase/phoenix
> >> > > > configuration, but none of them is respected by PhoenixConnection;
> >> > > > here are the methods we tried:
> >> > > > * Pass hbase_conf_dir through HADOOP_CLASSPATH, so run the hadoop
> >> > > > application like HADOOP_CLASSPATH=/etc/hbase/conf/ hadoop jar
> >> > > > $test_jar. However, PhoenixConnection doesn't respect the parameters
> >> > > > * Tried passing -Dhbase.rpc.timeout=1800, which is picked up by the
> >> > > > hbase conf object, but not by PhoenixConnection
> >> > > > * Explicitly set those parameters and pass them to the
> >> > > > PhoenixConnection:
> >> > > > props.setProperty("hbase.rpc.timeout", "1800");
> >> > > > props.setProperty("phoenix.query.timeoutMs", "1800");
> >> > > > Also didn't get respected by PhoenixConnection
> >> > > > * also tried what is 

Re: How Phoenix JDBC connection get hbase configuration

2019-02-20 Thread William Shen
Hi Xiaoxiao,

Have you tried including hbase-site.xml in your conf on classpath?
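
If it helps narrow things down, here is a minimal sketch for checking the
value the driver actually resolved (the URL is a placeholder; unwrap()
relies on the standard JDBC Wrapper interface):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;
import org.apache.phoenix.jdbc.PhoenixConnection;

public class TimeoutCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("hbase.rpc.timeout", "600000");        // ms
        props.setProperty("phoenix.query.timeoutMs", "600000");  // ms
        try (Connection conn =
                 DriverManager.getConnection("jdbc:phoenix:host", props)) {
            PhoenixConnection pconn = conn.unwrap(PhoenixConnection.class);
            // the value the connection's query services actually resolved
            System.out.println(pconn.getQueryServices().getProps()
                    .get("hbase.rpc.timeout"));
        }
    }
}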

Will

On Wed, Feb 20, 2019 at 2:50 PM Xiaoxiao Wang 
wrote:

> Hi, who may help
>
> We are running a Hadoop application that needs to use a Phoenix JDBC
> connection from the workers.
> The connection works, but when too many connections are established at the
> same time, it throws RPC timeouts
>
> Error: java.io.IOException:
> org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36,
> exceptions: Wed Feb 20 20:02:43 UTC 2019, null, 
> java.net.SocketTimeoutException:
> callTimeout=60000, callDuration=60506. ...
>
> So we figured we should probably set a higher hbase.rpc.timeout
> value, but then we ran into this issue:
>
> A little bit of background on how we run the application:
>
> Here is how we get PhoenixConnection from java program
> DriverManager.getConnection("jdbc:phoenix:host", props)
> And we trigger the program by using
> hadoop jar $test_jar
>
>
> We have tried multiple approaches to load hbase/phoenix configuration, but
> none of them is respected by PhoenixConnection; here are the methods we
> tried:
> * Pass hbase_conf_dir through HADOOP_CLASSPATH, so run the hadoop
> application like HADOOP_CLASSPATH=/etc/hbase/conf/ hadoop jar $test_jar .
> However, PhoenixConnection doesn’t respect the parameters
> * Tried passing -Dhbase.rpc.timeout=1800, which is picked up by the hbase
> conf object, but not by PhoenixConnection
> * Explicitly set those parameters and pass them to the PhoenixConnection
> props.setProperty("hbase.rpc.timeout", "1800");
> props.setProperty("phoenix.query.timeoutMs", "1800");
> Also didn’t get respected by PhoenixConnection
> * also tried what is suggested by phoenix here
> https://phoenix.apache.org/#connStr , using :longRunning together with
> those properties; it still didn't seem to work
>
>
> Besides all those approaches we tried, I have explicitly printed the
> parameters we care about from the connection,
> connection.getQueryServices().getProps()
> The default values I got are 60000 for hbase.rpc.timeout and 600k (600000)
> for phoenix.query.timeoutMs, so I tried to run a query that would run
> longer than 10 mins. Ideally it should time out; however, it ran over 20
> mins and didn't time out. So I'm wondering how PhoenixConnection respects
> those properties?
>
>
> So with some of your help, we'd like to know if there's anything wrong
> with our approaches. And we'd like to get rid of those
> SocketTimeoutExceptions.
> We are using phoenix-core version 4.7.0-clabs-phoenix1.3.0, and our
> phoenix-client version is phoenix-4.7.0-clabs-phoenix1.3.0.23 (we have
> tried phoenix-4.14.0-HBase-1.3 as well, which didn't work either).
>
>
> Thanks for your time
>
>
>
>
>


Re: CDH 6.X road map

2019-02-20 Thread William Shen
Same here. Would love to help with adding more CDH versions, if someone can
help guide us on how to best go about it based on the current cdh branches.

On Tue, Feb 12, 2019 at 5:10 PM Mahdi Salarkia 
wrote:

> Hi
> Is there a plan to release an HBase-2.0-CDH (Cloudera 6.X) compatible
> version of Phoenix anytime soon?
> I can see Phoenix currently supports older versions of CDH (5.11, ...) but
> there doesn't seem to be much work being done for version 6.
> P.S.: I'll be happy to help build the CDH 6.X version, given instructions.
>
> Thanks
> Mehdi
>


Re: Integration Testing Requirements

2019-02-20 Thread William Shen
Funny. The specific IT passed locally for me. Seems more like a Jenkins
setup issue for us to troubleshoot... Thanks for your help, Ankit.

On Wed, Feb 20, 2019 at 12:07 PM William Shen 
wrote:

> Thanks Ankit.
>
> What is the difference between these hive tests
> (e.g., org.apache.phoenix.hive.HiveTezIT) and other tests that also use a
> mini cluster
> (org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT), which
> are passing? Specifically, what do the hive tests need from HADOOP_HOME?
>
> On Tue, Feb 19, 2019 at 11:39 AM Ankit Singhal 
> wrote:
>
>> you can check if HADOOP_HOME and JAVA_HOME are properly set in the
>> environment.
>>
>> On Tue, Feb 19, 2019 at 11:23 AM William Shen > >
>> wrote:
>>
>> > Hi everyone,
>> >
>> > I'm trying to set up the Jenkins job at work to build Phoenix and run
>> the
>> > integration tests. However, I repeatedly encounter issues with the hive
>> > module when I run mvn verify. Do the hive integration tests require any
>> > special setup to pass? The other modules passed integration
>> > testing successfully.
>> >
>> > Attaching below is a sample failure trace.
>> >
>> > Thanks!
>> >
>> > - Will
>> >
>> > [ERROR] Tests run: 6, Failures: 6, Errors: 0, Skipped: 0, Time
>> > elapsed: 48.302 s <<< FAILURE! - in
>> > org.apache.phoenix.hive.HiveTezIT[ERROR]
>> > simpleColumnMapTest(org.apache.phoenix.hive.HiveTezIT)  Time elapsed:
>> > 6.727 s  <<< FAILURE!junit.framework.AssertionFailedError:
>> > Unexpected exception java.lang.RuntimeException:
>> > org.apache.tez.dag.api.SessionNotRunning: TezSession has already
>> > shutdown. Application application_1550371508120_0001 failed 2 times
>> > due to AM Container for appattempt_1550371508120_0001_02 exited
>> > with  exitCode: 127
>> > For more detailed output, check application tracking
>> > page:
>> > http://40a4dd0e8959:38843/cluster/app/application_1550371508120_0001
>> > Then, click on links to logs of each attempt.
>> > Diagnostics: Exception from container-launch.
>> > Container id: container_1550371508120_0001_02_01
>> > Exit code: 127
>> > Stack trace: ExitCodeException exitCode=127:
>> > at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
>> > at org.apache.hadoop.util.Shell.run(Shell.java:456)
>> > at
>> >
>> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
>> > at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
>> > at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
>> > at
>> >
>> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
>> > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> > at
>> >
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> > at
>> >
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> > at java.lang.Thread.run(Thread.java:748)
>> >
>> >
>> > Container exited with a non-zero exit code 127
>> > Failing this attempt. Failing the application.
>> > at
>> >
>> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:535)
>> > at
>> > org.apache.phoenix.hive.HiveTestUtil.cliInit(HiveTestUtil.java:637)
>> > at
>> > org.apache.phoenix.hive.HiveTestUtil.cliInit(HiveTestUtil.java:590)
>> > at
>> >
>> org.apache.phoenix.hive.BaseHivePhoenixStoreIT.runTest(BaseHivePhoenixStoreIT.java:117)
>> > at
>> >
>> org.apache.phoenix.hive.HivePhoenixStoreIT.simpleColumnMapTest(HivePhoenixStoreIT.java:103)
>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> > at
>> >
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> > at
>> >
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> > at java.lang.reflect.Method.invoke(Method.java:498)
>> > at
>> >
>> org.junit.runners.model.FrameworkMethod$

Re: Integration Testing Requirements

2019-02-20 Thread William Shen
Thanks Ankit.

What is the difference between these hive tests
(e.g., org.apache.phoenix.hive.HiveTezIT) and other tests that also use a
mini cluster
(org.apache.phoenix.end2end.MigrateSystemTablesToSystemNamespaceIT), which
are passing? Specifically, what do the hive tests need from HADOOP_HOME?
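
(For what it is worth, exit code 127 from the launched container
conventionally means "command not found", which fits a missing JAVA_HOME or
HADOOP_HOME in the environment the forked processes inherit. A trivial
sanity check to drop into the test JVM, as a sketch:)

// print what the test JVM will pass on to forked mini-cluster processes
for (String v : new String[] {"JAVA_HOME", "HADOOP_HOME"}) {
    System.out.println(v + "=" + System.getenv(v));
}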

On Tue, Feb 19, 2019 at 11:39 AM Ankit Singhal 
wrote:

> you can check if HADOOP_HOME and JAVA_HOME are properly set in the
> environment.
>
> On Tue, Feb 19, 2019 at 11:23 AM William Shen 
> wrote:
>
> > Hi everyone,
> >
> > I'm trying to set up the Jenkins job at work to build Phoenix and run the
> > integration tests. However, I repeatedly encounter issues with the hive
> > module when I run mvn verify. Do the hive integration tests require any
> > special setup to pass? The other modules passed integration
> > testing successfully.
> >
> > Attaching below is a sample failure trace.
> >
> > Thanks!
> >
> > - Will
> >
> > [ERROR] Tests run: 6, Failures: 6, Errors: 0, Skipped: 0, Time
> > elapsed: 48.302 s <<< FAILURE! - in
> > org.apache.phoenix.hive.HiveTezIT[ERROR]
> > simpleColumnMapTest(org.apache.phoenix.hive.HiveTezIT)  Time elapsed:
> > 6.727 s  <<< FAILURE!junit.framework.AssertionFailedError:
> > Unexpected exception java.lang.RuntimeException:
> > org.apache.tez.dag.api.SessionNotRunning: TezSession has already
> > shutdown. Application application_1550371508120_0001 failed 2 times
> > due to AM Container for appattempt_1550371508120_0001_02 exited
> > with  exitCode: 127
> > For more detailed output, check application tracking
> > page: http://40a4dd0e8959:38843/cluster/app/application_1550371508120_0001
> > Then, click on links to logs of each attempt.
> > Diagnostics: Exception from container-launch.
> > Container id: container_1550371508120_0001_02_01
> > Exit code: 127
> > Stack trace: ExitCodeException exitCode=127:
> > at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
> > at org.apache.hadoop.util.Shell.run(Shell.java:456)
> > at
> > org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
> > at
> >
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
> > at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
> > at
> >
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
> > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> > at
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> > at java.lang.Thread.run(Thread.java:748)
> >
> >
> > Container exited with a non-zero exit code 127
> > Failing this attempt. Failing the application.
> > at
> >
> org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:535)
> > at
> > org.apache.phoenix.hive.HiveTestUtil.cliInit(HiveTestUtil.java:637)
> > at
> > org.apache.phoenix.hive.HiveTestUtil.cliInit(HiveTestUtil.java:590)
> > at
> >
> org.apache.phoenix.hive.BaseHivePhoenixStoreIT.runTest(BaseHivePhoenixStoreIT.java:117)
> > at
> >
> org.apache.phoenix.hive.HivePhoenixStoreIT.simpleColumnMapTest(HivePhoenixStoreIT.java:103)
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > at
> >
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> > at
> >
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> > at java.lang.reflect.Method.invoke(Method.java:498)
> > at
> >
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> > at
> >
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> > at
> >
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> > at
> >
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> > at
> >
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> > at
> >
> org.junit.

Integration Testing Requirements

2019-02-19 Thread William Shen
Hi everyone,

I'm trying to set up the Jenkins job at work to build Phoenix and run the
integration tests. However, I repeatedly encounter issues with the hive
module when I run mvn verify. Do the hive integration tests require any
special setup to pass? The other modules passed integration
testing successfully.

Attaching below is a sample failure trace.

Thanks!

- Will

[ERROR] Tests run: 6, Failures: 6, Errors: 0, Skipped: 0, Time
elapsed: 48.302 s <<< FAILURE! - in
org.apache.phoenix.hive.HiveTezIT[ERROR]
simpleColumnMapTest(org.apache.phoenix.hive.HiveTezIT)  Time elapsed:
6.727 s  <<< FAILURE!junit.framework.AssertionFailedError:
Unexpected exception java.lang.RuntimeException:
org.apache.tez.dag.api.SessionNotRunning: TezSession has already
shutdown. Application application_1550371508120_0001 failed 2 times
due to AM Container for appattempt_1550371508120_0001_02 exited
with  exitCode: 127
For more detailed output, check application tracking
page: http://40a4dd0e8959:38843/cluster/app/application_1550371508120_0001
Then,
click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1550371508120_0001_02_01
Exit code: 127
Stack trace: ExitCodeException exitCode=127:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
at org.apache.hadoop.util.Shell.run(Shell.java:456)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
at 
org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at 
org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)


Container exited with a non-zero exit code 127
Failing this attempt. Failing the application.
at 
org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:535)
at org.apache.phoenix.hive.HiveTestUtil.cliInit(HiveTestUtil.java:637)
at org.apache.phoenix.hive.HiveTestUtil.cliInit(HiveTestUtil.java:590)
at 
org.apache.phoenix.hive.BaseHivePhoenixStoreIT.runTest(BaseHivePhoenixStoreIT.java:117)
at 
org.apache.phoenix.hive.HivePhoenixStoreIT.simpleColumnMapTest(HivePhoenixStoreIT.java:103)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
at 

[jira] [Updated] (PHOENIX-5067) Support for secure Phoenix cluster in Pherf

2018-12-14 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen updated PHOENIX-5067:
--
Description: Currently Phoenix performance and functional testing tool 
{{Pherf}} doesn't have options to pass in Kerberos principal and Keytab to 
connect to a secure (Kerberized) Phoenix cluster. This prevents running the 
tool against a Kerberized clusters.  (was: Currently Phoenix performance and 
functional testing tool {{Phref}} doesn't have options to pass in Kerberos 
principal and Keytab to connect to a secure (Kerberized) Phoenix cluster. This 
prevents running the tool against a Kerberized clusters.)

> Support for secure Phoenix cluster in Pherf
> ---
>
> Key: PHOENIX-5067
> URL: https://issues.apache.org/jira/browse/PHOENIX-5067
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
> Attachments: PHOENIX-5067-4.x-HBase-1.1
>
>
> Currently Phoenix performance and functional testing tool {{Pherf}} doesn't 
> have options to pass in Kerberos principal and Keytab to connect to a secure 
> (Kerberized) Phoenix cluster. This prevents running the tool against
> Kerberized clusters.
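
(For reference, a sketch of the secure connection Pherf would need to build
once such options exist, using the
jdbc:phoenix:<quorum>:<port>:<znode>:<principal>:<keytab> URL form; the
quorum, principal, and keytab below are hypothetical placeholders.)
{code:java}
// hypothetical values throughout; only the URL shape matters here
String url = "jdbc:phoenix:zk1,zk2,zk3:2181:/hbase-secure"
        + ":pherf@EXAMPLE.COM"                    // Kerberos principal
        + ":/etc/security/keytabs/pherf.keytab";  // keytab file
try (java.sql.Connection conn = java.sql.DriverManager.getConnection(url)) {
    System.out.println("connected");
}
{code}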



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5067) Support for secure Phoenix cluster in Pherf

2018-12-14 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen updated PHOENIX-5067:
--
Summary: Support for secure Phoenix cluster in Pherf  (was: Support for 
secure Phoenix cluster in Phref)

> Support for secure Phoenix cluster in Pherf
> ---
>
> Key: PHOENIX-5067
> URL: https://issues.apache.org/jira/browse/PHOENIX-5067
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Biju Nair
>Assignee: Biju Nair
>Priority: Minor
> Attachments: PHOENIX-5067-4.x-HBase-1.1
>
>
> Currently Phoenix performance and functional testing tool {{Phref}} doesn't 
> have options to pass in Kerberos principal and Keytab to connect to a secure 
> (Kerberized) Phoenix cluster. This prevents running the tool against
> Kerberized clusters.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: System.Catalog Table

2018-11-29 Thread William Shen
We've also run into this problem in Phoenix 4.13
Here are steps to reproduce:

1) create original table in phoenix

CREATE TABLE IF NOT EXISTS "test"."TRACKING_VALUES" (
  "cstId"   BIGINT NOT NULL,
  "cltId"   BIGINT NOT NULL,
  "trkblTp" VARCHAR NOT NULL,
  "trkblId" BIGINT NOT NULL,
  "id"  BIGINT NOT NULL,
  "vl"  VARCHAR,
  "dstTp"   VARCHAR,
  "crdAt"   TIMESTAMP,
  "crdBy"   BIGINT,
  "updAt"   TIMESTAMP,
  "updBy"   BIGINT,
  "stts"VARCHAR,
  "lgcyId"  VARCHAR,
  CONSTRAINT "tracking_values_pk" PRIMARY KEY ("cstId", "cltId",
"trkblTp", "trkblId", "id")
)SALT_BUCKETS=10, DEFAULT_COLUMN_FAMILY='TV';

# Respective system.catalog table
0: jdbc:phoenix:labs-kumki-namenode-lv-101,la> select TENANT_ID,
TABLE_SCHEM, TABLE_NAME, COLUMN_NAME, COLUMN_FAMILY from
SYSTEM.CATALOG where table_schem = 'test' and table_name =
'TRACKING_VALUES';
+------------+--------------+------------------+--------------+----------------+
| TENANT_ID  | TABLE_SCHEM  | TABLE_NAME       | COLUMN_NAME  | COLUMN_FAMILY  |
+------------+--------------+------------------+--------------+----------------+
|            | test         | TRACKING_VALUES  |              |                |
|            | test         | TRACKING_VALUES  |              | TV             |
|            | test         | TRACKING_VALUES  | cltId        |                |
|            | test         | TRACKING_VALUES  | crdAt        | TV             |
|            | test         | TRACKING_VALUES  | crdBy        | TV             |
|            | test         | TRACKING_VALUES  | cstId        |                |
|            | test         | TRACKING_VALUES  | dstTp        | TV             |
|            | test         | TRACKING_VALUES  | id           |                |
|            | test         | TRACKING_VALUES  | lgcyId       | TV             |
|            | test         | TRACKING_VALUES  | stts         | TV             |
|            | test         | TRACKING_VALUES  | trkblId      |                |
|            | test         | TRACKING_VALUES  | trkblTp      |                |
|            | test         | TRACKING_VALUES  | updAt        | TV             |
|            | test         | TRACKING_VALUES  | updBy        | TV             |
|            | test         | TRACKING_VALUES  | vl           | TV             |
+------------+--------------+------------------+--------------+----------------+
15 rows selected (0.079 seconds)

2) Populate data into the table

0: jdbc:phoenix:labs-kumki-namenode-lv-101,la> select * from
"test".TRACKING_VALUES;
++--++-++--+--+--++--++--+--+
| cstId  |  cltId   |  trkblTp   |   trkblId   |
id |  vl  |dstTp |  crdAt   |
crdBy  |  updAt   | updBy  | stts |lgcyId
  |
++--++-++--+--+--++--++--+--+
| -42| 4291717  | KeywordInstance| 9224773823  |
81606793   | tlm0YryK | SEARCH   | 2014-04-22 15:24:21.000  |
null   | 2014-04-22 15:24:21.000  | null   |  |
38783873798  |
| -42| 4291717  | PublisherCreative  | 1971851927  |
81450003   | sRE2Ds8hZ| SEARCH   | 2014-04-10 20:12:21.000  |
null   | 2014-04-10 20:12:21.000  | null   |  |
38615971930  |
| 100| 100  | randomValue| 100 |
157124916  | randomValue  | randomValue  | 2018-08-16 05:34:42.000  |
100| 2018-08-16 05:34:42.000  | 100| randomValue  |
randomValue  |
| 7242   | 62235| KEYWORD| 74564665|
105322310  | Qwerty123|  | 2017-08-10 18:35:48.000  |
34447  |  | null   | ACTIVE   |
  |
| 7242   | 64555| CREATIVE   | 144115188096252274  |
157126212  | 20180903 |  | 2018-09-04 06:36:16.000  |
34447  |  | null   | ACTIVE   |
  |
++--++-++--+--+--++--++--+--+
5 rows selected (0.26 seconds)


3) Hbase snapshot

hbase(main):001:0> snapshot 'test.TRACKING_VALUES', 'test-TRACKING_VALUES-SNAP'
0 row(s) in 0.7690 seconds

4) cloned Hbase snapshot

hbase(main):002:0> clone_snapshot 'test-TRACKING_VALUES-SNAP',
'testNew.TRACKING_VALUES'
0 row(s) in 0.6080 seconds

5) Created table in phoenix

0: jdbc:phoenix:labs-kumki-namenode-lv-101,la> CREATE TABLE IF NOT
EXISTS 

Access Client Side Metrics for PhoenixRDD usage

2018-09-13 Thread William Shen
Hi all,

I see that LLAM-1819 implemented a client metric collection mechanism in
the PhoenixRecordReader class, which I believe is used by PhoenixInputFormat
and in turn by PhoenixRDD, but I am having trouble locating an example of
how to access the metrics; https://phoenix.apache.org/metrics.html
is limited to an example with the Java client.

I understand that PHOENIX-4701 sends the metrics to SYSTEM.LOG
asynchronously in 4.14, but I am wondering whether there is a way to access
the metrics in 4.13?
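
The request-level pattern on the metrics page looks like the following
sketch; whether these PhoenixRuntime accessors behave the same in 4.13 is
exactly what I am unsure about (given an open Statement stmt):

import java.sql.ResultSet;
import org.apache.phoenix.util.PhoenixRuntime;

ResultSet rs = stmt.executeQuery("SELECT * FROM MY_TABLE"); // placeholder
while (rs.next()) { }  // metrics are complete once the result set is drained
System.out.println(PhoenixRuntime.getRequestReadMetrics(rs));
PhoenixRuntime.resetMetrics(rs);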

Thanks in advance!

- Will


[jira] [Updated] (PHOENIX-4428) Trace details can't be shown in the Phoenix Tracing Web Application

2018-09-10 Thread William Shen (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

William Shen updated PHOENIX-4428:
--
Comment: was deleted

(was: Saw the fix here: 
[https://github.com/apache/phoenix/commit/b9ed1398d91ed140315f52014b4aee9783f6d965]

Any interest to merge the fix in?)

> Trace details can't be shown in the Phoenix Tracing Web Application
> ---
>
> Key: PHOENIX-4428
> URL: https://issues.apache.org/jira/browse/PHOENIX-4428
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0
>Reporter: Alex Chistyakov
>Priority: Major
> Attachments: Screen Shot 2017-12-01 at 18.34.07.png, Screen Shot 
> 2017-12-01 at 18.34.12.png
>
>
> A timeline or a dependency tree can't be shown for a trace.
> {code}
> 298752 [qtp440434003-46] WARN org.eclipse.jetty.servlet.ServletHandler  - 
> /trace/
> java.lang.RuntimeException: The passed parentId/traceId is not a number.
> at 
> org.apache.phoenix.tracingwebapp.http.TraceServlet.searchTrace(TraceServlet.java:117)
> at 
> org.apache.phoenix.tracingwebapp.http.TraceServlet.doGet(TraceServlet.java:67)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
> at 
> org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:648)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:559)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:365)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:485)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:926)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:988)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:635)
> at 
> org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
> at 
> org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
> at 
> org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:627)
> at 
> org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:51)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
> at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.NumberFormatException: null
> at java.lang.Long.parseLong(Long.java:552)
> at java.lang.Long.parseLong(Long.java:631)
> at 
> org.apache.phoenix.tracingwebapp.http.TraceServlet.searchTrace(TraceServlet.java:114)
> ... 26 more
> {code}
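
(A hypothetical guard for the failure above, not necessarily what the linked
commit does: parse the parameter defensively instead of letting
Long.parseLong(null) escape as a RuntimeException.)
{code:java}
// hypothetical: the parameter name and fallback value are illustrative only
String id = request.getParameter("traceid");  // may be null or non-numeric
if (id == null || !id.matches("-?\\d+")) {
    id = "0";  // fall back instead of surfacing a NumberFormatException
}
long traceId = Long.parseLong(id);
{code}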



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4876) Delete returns incorrect number of rows affected in some cases

2018-08-28 Thread William Shen (JIRA)
William Shen created PHOENIX-4876:
-

 Summary: Delete returns incorrect number of rows affected in some 
cases
 Key: PHOENIX-4876
 URL: https://issues.apache.org/jira/browse/PHOENIX-4876
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.13.0
Reporter: William Shen


Running Phoenix 4.13, we are seeing a delete of a non-existent row return 
"1 row affected" instead of "No rows affected".

Here is a simplified reproducible case:
{code:java}
> CREATE TABLE IF NOT EXISTS TEST (A BIGINT PRIMARY KEY, B BIGINT);
No rows affected (2.524 seconds)

> DELETE FROM TEST WHERE A = 0;
1 row affected (0.107 seconds)

> DELETE FROM TEST WHERE B = 0;
No rows affected (0.007 seconds)

> DELETE FROM TEST WHERE A = 0 AND B = 0;
No rows affected (0.007 seconds)

> DELETE FROM TEST WHERE A = 0;
1 row affected (0.007 seconds)

> SELECT * FROM TEST;
+----+----+
| A  | B  |
+----+----+
+----+----+
No rows selected (0.023 seconds)

> SELECT COUNT(*) FROM TEST;
+-----------+
| COUNT(1)  |
+-----------+
| 0         |
+-----------+
1 row selected (0.014 seconds){code}
Expected: 
{code:java}
> DELETE FROM TEST WHERE A = 0;
No rows affected{code}
Actual:
{code:java}
> DELETE FROM TEST WHERE A = 0;
1 row affected{code}
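
(The same miscount is reproducible through plain JDBC; a minimal sketch
against the TEST table above, with a placeholder connection URL:)
{code:java}
try (java.sql.Connection conn =
         java.sql.DriverManager.getConnection("jdbc:phoenix:host");
     java.sql.Statement stmt = conn.createStatement()) {
    // A is the full primary key and no row with A = 0 exists,
    // yet the reported update count is 1
    int rows = stmt.executeUpdate("DELETE FROM TEST WHERE A = 0");
    System.out.println(rows + " row(s) affected");
}
{code}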



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4551) Possible ColumnAlreadyExistsException is thrown from delete when autocommit off

2018-06-29 Thread William Shen (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528148#comment-16528148
 ] 

William Shen edited comment on PHOENIX-4551 at 6/29/18 7:56 PM:


[~rajeshbabu],

We are running into a similar issue (but not related to auto commit, it happens 
with and without autocommit). Here is a reproducible case in 4.13:
{code:java}
CREATE TABLE IF NOT EXISTS test.t (a INTEGER PRIMARY KEY,b UNSIGNED_INT,c 
BIGINT);
CREATE INDEX IF NOT EXISTS "i1" ON test.t (c) INCLUDE (b);
CREATE INDEX IF NOT EXISTS "i2" ON test.t (c) INCLUDE (a);
delete from test.t where b > 25;{code}
produces
{code:java}
Error: ERROR 514 (42892): A duplicate column name was detected in the object 
definition or ALTER TABLE/VIEW statement. columnName=TEST.TEST.T.C 
(state=42892,code=514)

org.apache.phoenix.schema.ColumnAlreadyExistsException: ERROR 514 (42892): A 
duplicate column name was detected in the object definition or ALTER TABLE/VIEW 
statement. columnName=TEST.TEST.T.C

at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:529)

at org.apache.phoenix.schema.PTableImpl.(PTableImpl.java:421)

at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:305)

at org.apache.phoenix.compile.DeleteCompiler.compile(DeleteCompiler.java:730)

at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:771)

at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:759)

at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:387)

at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)

at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:376)

at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:364)

at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1738)

at sqlline.Commands.execute(Commands.java:822)

at sqlline.Commands.sql(Commands.java:732)

at sqlline.SqlLine.dispatch(SqlLine.java:813)

at sqlline.SqlLine.begin(SqlLine.java:686)

at sqlline.SqlLine.start(SqlLine.java:398)

at sqlline.SqlLine.main(SqlLine.java:291){code}
 

And with just one of the indices instead of both, the issue goes away:
{code:java}
drop index "i2" on test.t;
No rows affected (5.579 seconds)
delete from test.t where b > 25;
No rows affected (0.011 seconds){code}
 

Do you think this is the same issue resolved in 4.14, or should I file a 
separate bug against 4.13?


was (Author: willshen):
[~rajeshbabu],

We are running into a similar issue (but not related to auto commit, it happens 
with and without autocommit). Here is a reproducible case in 4.13:
{code:java}
CREATE TABLE IF NOT EXISTS test.t (a INTEGER PRIMARY KEY,b UNSIGNED_INT,c 
BIGINT);
CREATE INDEX IF NOT EXISTS "i1" ON test.t (c) INCLUDE (b);
CREATE INDEX IF NOT EXISTS "i2" ON test.t (c) INCLUDE (a);
delete from test.t where b > 25;{code}
produces
{code:java}
Error: ERROR 514 (42892): A duplicate column name was detected in the object 
definition or ALTER TABLE/VIEW statement. columnName=TEST.TEST.T.C 
(state=42892,code=514)

org.apache.phoenix.schema.ColumnAlreadyExistsException: ERROR 514 (42892): A 
duplicate column name was detected in the object definition or ALTER TABLE/VIEW 
statement. columnName=TEST.TEST.T.C

at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:529)

at org.apache.phoenix.schema.PTableImpl.(PTableImpl.java:421)

at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:305)

at org.apache.phoenix.compile.DeleteCompiler.compile(DeleteCompiler.java:730)

at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:771)

at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:759)

at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:387)

at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)

at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:376)

at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:364)

at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1738)

at sqlline.Commands.execute(Commands.java:822)

at sqlline.Commands.sql(Commands.java:732)

at sqlline.SqlLine.dispatch(SqlLine.java:813)

at sqlline.SqlLine.begin(SqlLine.java:686)

at sqlline.SqlLine.start(SqlLine.java:398)

at sqlline.SqlLine.main(SqlLine.java:291){code}
 

And with just one of the indices instead of both, the issue goes away:
{code:java}
0: jdbc:phoenix:labs-boba-namenode-lv-101,lab> drop index "i2" on test.t;
No row

[jira] [Commented] (PHOENIX-4551) Possible ColumnAlreadyExistsException is thrown from delete when autocommit off

2018-06-29 Thread William Shen (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16528148#comment-16528148
 ] 

William Shen commented on PHOENIX-4551:
---

[~rajeshbabu],

We are running into a similar issue (but not related to auto commit, it happens 
with and without autocommit). Here is a reproducible case in 4.13:
{code:java}
CREATE TABLE IF NOT EXISTS test.t (a INTEGER PRIMARY KEY,b UNSIGNED_INT,c 
BIGINT);
CREATE INDEX IF NOT EXISTS "i1" ON test.t (c) INCLUDE (b);
CREATE INDEX IF NOT EXISTS "i2" ON test.t (c) INCLUDE (a);
delete from test.t where b > 25;{code}
produces
{code:java}
Error: ERROR 514 (42892): A duplicate column name was detected in the object 
definition or ALTER TABLE/VIEW statement. columnName=TEST.TEST.T.C 
(state=42892,code=514)

org.apache.phoenix.schema.ColumnAlreadyExistsException: ERROR 514 (42892): A 
duplicate column name was detected in the object definition or ALTER TABLE/VIEW 
statement. columnName=TEST.TEST.T.C

at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:529)

at org.apache.phoenix.schema.PTableImpl.(PTableImpl.java:421)

at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:305)

at org.apache.phoenix.compile.DeleteCompiler.compile(DeleteCompiler.java:730)

at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:771)

at 
org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:759)

at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:387)

at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:377)

at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)

at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:376)

at 
org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:364)

at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1738)

at sqlline.Commands.execute(Commands.java:822)

at sqlline.Commands.sql(Commands.java:732)

at sqlline.SqlLine.dispatch(SqlLine.java:813)

at sqlline.SqlLine.begin(SqlLine.java:686)

at sqlline.SqlLine.start(SqlLine.java:398)

at sqlline.SqlLine.main(SqlLine.java:291){code}
 

And with just one of the indices instead of both, the issue goes away:
{code:java}
0: jdbc:phoenix:labs-boba-namenode-lv-101,lab> drop index "i2" on test.t;
No rows affected (5.579 seconds)
0: jdbc:phoenix:labs-boba-namenode-lv-101,lab> delete from test.t where b > 25;
No rows affected (0.011 seconds){code}
 

Do you think this is the same issue resolved in 4.14, or should I file a 
separate bug against 4.13?

> Possible ColumnAlreadyExistsException is thrown from delete when autocommit 
> off
> ---
>
> Key: PHOENIX-4551
> URL: https://issues.apache.org/jira/browse/PHOENIX-4551
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Romil Choksi
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 5.0.0-alpha, 4.14.0
>
> Attachments: PHOENIX-4551.patch, PHOENIX-4551_v2.patch, 
> PHOENIX-4551_v3.patch
>
>
> Here are the simple steps to reproduce it.
> {noformat}
> 0: jdbc:phoenix:localhost> CREATE TABLE IF NOT EXISTS A (a INTEGER PRIMARY 
> KEY,b UNSIGNED_INT,c BIGINT);
> No rows affected (2.3 seconds)
> 0: jdbc:phoenix:localhost> CREATE INDEX idx_global ON A (c);
> No rows affected (7.282 seconds)
> 0: jdbc:phoenix:localhost> CREATE LOCAL INDEX idx_local ON A (c);
> No rows affected (11.322 seconds)
> 0: jdbc:phoenix:localhost> !autocommit off
> *Autocommit status: false*
> 0: jdbc:phoenix:localhost> delete from A where a > 5;
> *Error: ERROR 514 (42892): A duplicate column name was detected in the object 
> definition or ALTER TABLE/VIEW statement. columnName=A.C 
> (state=42892,code=514)*
> org.apache.phoenix.schema.ColumnAlreadyExistsException: ERROR 514 (42892): A 
> duplicate column name was detected in the object definition or ALTER 
> TABLE/VIEW statement. columnName=A.C
>  at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:529)
>  at org.apache.phoenix.schema.PTableImpl.(PTableImpl.java:421)
>  at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:305)
>  at org.apache.phoenix.compile.DeleteCompiler.compile(DeleteCompiler.java:730)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:771)
>  at 
> org.apache.phoenix.jdbc.PhoenixStatement$ExecutableDeleteStatement.compilePlan(PhoenixStatement.java:759)
>  at org.apache.phoenix.jdbc.Pho

Re: Almost ready for HBaseCon+PhoenixCon 2018 SanJose CFP

2018-06-17 Thread William Shen
Hi Josh,
Looking forward to HBaseCon/PhoenixCon tomorrow. Just a quick question
about the agenda: the timelines posted on the sites seem to differ slightly
(trying to plan ahead :) ), and I am assuming the HBaseCon/PhoenixCon site
is the source of truth?

https://dataworkssummit.com/san-jose-2018/agenda-hbase/ has keynotes
starting at 9:30AM
https://hbase.apache.org/hbasecon-2018/#about has keynotes starting at 10 AM

On Tue, Mar 20, 2018 at 7:51 PM Josh Elser  wrote:

> Hi all,
>
> I've published a new website for the upcoming event in June in
> California at [1][2] for the HBase and Phoenix websites, respectively. 1
> & 2 are identical.
>
> I've not yet updated any links on either website to link to the new
> page. I'd appreciate if folks can give their feedback on anything
> outwardly wrong, incorrect, etc. If folks are happy, then I'll work on
> linking from the main websites, and coordinating an official
> announcement via mail lists, social media, etc.
>
> The website is generated from [3]. If you really want to be my
> best-friend, let me know about the above things which are wrong via
> pull-request ;)
>
> - Josh
>
> [1] https://hbase.apache.org/hbasecon-phoenixcon-2018/
> [2] https://phoenix.apache.org/hbasecon-phoenixcon-2018/
> [3] https://github.com/joshelser/hbasecon-jekyll
>


[jira] [Commented] (PHOENIX-3431) Generate DDL from an existing Phoenix Schema

2018-05-09 Thread William Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16469480#comment-16469480
 ] 

William Shen commented on PHOENIX-3431:
---

Great, thanks for the pointers [~jamestaylor]. We will try to find some time to 
bring the code up to date with 4.x and 5.x for the contribution. Is there an easy way 
for me to figure out DDL-related changes that have gone into 4.x and 5.x? Is 
DatabaseMetaData the best place to keep track of such changes? Thank you.
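
(As a concrete starting point, a sketch of reading column metadata through
the driver's DatabaseMetaData, which is one place DDL-relevant changes would
surface; the schema and table names are placeholders:)
{code:java}
try (java.sql.Connection conn =
         java.sql.DriverManager.getConnection("jdbc:phoenix:host")) {
    java.sql.DatabaseMetaData md = conn.getMetaData();
    // column metadata backed by SYSTEM.CATALOG
    try (java.sql.ResultSet cols =
             md.getColumns(null, "MYSCHEMA", "MYTABLE", null)) {
        while (cols.next()) {
            System.out.println(cols.getString("COLUMN_NAME") + " "
                    + cols.getString("TYPE_NAME"));
        }
    }
}
{code}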

> Generate DDL from an existing Phoenix Schema
> 
>
> Key: PHOENIX-3431
> URL: https://issues.apache.org/jira/browse/PHOENIX-3431
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.6.0, 4.8.0
>Reporter: Kumar Palaniappan
>Assignee: Kumar Palaniappan
>Priority: Minor
> Fix For: 4.8.1
>
>
> A tool to generate DDLs for Phoenix tables and indices from an existing 
> schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3431) Generate DDL from an existing Phoenix Schema

2018-05-08 Thread William Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468113#comment-16468113
 ] 

William Shen commented on PHOENIX-3431:
---

Thanks [~jamestaylor] for the speedy reply.

The tool was created for internal use, so it does not come with automated 
tests, and it is currently on Phoenix 4.10. Can you guide us regarding how to 
bring this into the project? Specifically,
 * Do we need to add tests to meet a coverage level?
 * Do we need to bring this up to date with a given version of Phoenix?
 * Do we add this as a new module to the main project?

Thank you!

> Generate DDL from an existing Phoenix Schema
> 
>
> Key: PHOENIX-3431
> URL: https://issues.apache.org/jira/browse/PHOENIX-3431
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.6.0, 4.8.0
>Reporter: Kumar Palaniappan
>Assignee: Kumar Palaniappan
>Priority: Minor
> Fix For: 4.8.1
>
>
> A tool to generate DDLs for Phoenix tables and indices from an existing 
> schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3431) Generate DDL from an existing Phoenix Schema

2018-05-08 Thread William Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16468103#comment-16468103
 ] 

William Shen commented on PHOENIX-3431:
---

[~jamestaylor], we have created a standalone tool to create DDL for 
table/indices from an existing schema. Would this be something that the 
community would be interested in taking in? Or would the project be interested 
only in integrating the functionality into something like a SHOW CREATE TABLE?

The interface to our tool looks like:
- Once built using {{mvn clean install}}, this tool can be run from the command 
line using {{java -jar path/to/ddl-generator.jar}}.  Run with the -h or --help 
flags for usage instructions.
- The DDL Generator takes the following command-line arguments
{quote} --connection  The JDBC connection string the generator should use to 
connect to Phoenix.  
 --schema  The schema to generate DDLs for.  
 --tables  An optional comma-delimited list of tables to generate DDLs for. 
 
 --indices An optional comma-delimited list of base tables to generate 
index DDLs for.  
 --newSchema   Optionally, the schema name to use in the generated DDLs instead 
of --schema.  
 --indicesOnly If this flag is present, generate DDLs for indices only.  
 --tablesOnly  If this flag is present, generate DDLs for tables only.  
 -h, --help    Print the usage instructions.{quote}
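
(A hypothetical end-to-end invocation mirroring the flags above, driven from
Java; the jar path, connection URL, and schema are placeholders:)
{code:java}
// launch the generator exactly as the CLI usage above describes
ProcessBuilder pb = new ProcessBuilder(
        "java", "-jar", "path/to/ddl-generator.jar",
        "--connection", "jdbc:phoenix:host",
        "--schema", "MYSCHEMA",
        "--tablesOnly");
pb.inheritIO();  // stream the generated DDL to this console
int exit = pb.start().waitFor();
System.out.println("generator exited with " + exit);
{code}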

> Generate DDL from an existing Phoenix Schema
> 
>
> Key: PHOENIX-3431
> URL: https://issues.apache.org/jira/browse/PHOENIX-3431
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.6.0, 4.8.0
>Reporter: Kumar Palaniappan
>Assignee: Kumar Palaniappan
>Priority: Minor
> Fix For: 4.8.1
>
>
> A tool to generate DDLs for Phoenix tables and indices from an existing 
> schema.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-3112) Partial row scan not handled correctly

2017-11-07 Thread William Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16242794#comment-16242794
 ] 

William Shen commented on PHOENIX-3112:
---

Thank you for the clarification [~sergey.soldatov].

> Partial row scan not handled correctly
> --
>
> Key: PHOENIX-3112
> URL: https://issues.apache.org/jira/browse/PHOENIX-3112
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Pierre Lacave
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3112-1.patch, PHOENIX-3112-ssa-v4.patch, 
> PHOENIX-3112-ssa-v5.patch, PHOENIX-3112-v6.patch, PHOENIX-3112_v3.patch, 
> PHOENIX-3112_wip2.patch
>
>
> When doing a select on a relatively large table (a few thousand rows), some 
> rows come back partially missing.
> When narrowing the filter to return those specific rows, the values appear 
> as expected.
> {noformat}
> CREATE TABLE IF NOT EXISTS TEST (
> BUCKET VARCHAR,
> TIMESTAMP_DATE TIMESTAMP,
> TIMESTAMP UNSIGNED_LONG NOT NULL,
> SRC VARCHAR,
> DST VARCHAR,
> ID VARCHAR,
> ION VARCHAR,
> IC BOOLEAN NOT NULL,
> MI UNSIGNED_LONG,
> AV UNSIGNED_LONG,
> MA UNSIGNED_LONG,
> CNT UNSIGNED_LONG,
> DUMMY VARCHAR
> CONSTRAINT pk PRIMARY KEY (BUCKET, TIMESTAMP DESC, SRC, DST, ID, ION, IC)
> );{noformat}
> using a python script to generate a CSV with 5000 rows
> {noformat}
> for i in xrange(5000):
> print "5SEC,2016-07-21 
> 07:25:35.{i},146908593500{i},,AAA,,,false,{i}1181000,1788000{i},2497001{i},{i},a{i}".format(i=i)
> {noformat}
> bulk inserting the csv in the table
> {noformat}
> phoenix/bin/psql.py localhost -t TEST large.csv
> {noformat}
> here we can see one row that contains no TIMESTAMP_DATE and null values in MI 
> and MA
> {noformat}
> 0: jdbc:phoenix:localhost:2181> select * from TEST 
> 
> +-+--+---+---+--+---+---++--+--+--+---++
> | BUCKET  |  TIMESTAMP_DATE  | TIMESTAMP |SRC| DST  | 
>  ID   |ION|   IC   |  MI  |  AV  |  MA  |  
> CNT  |   DUMMY
> |
> +-+--+---+---+--+---+---++--+--+--+---++
> | 5SEC| 2016-07-21 07:25:35.100  | 1469085935001000  |   | AAA  | 
>   |   | false  | 10001181000  | 17880001000  | 24970011000  | 
> 1000  | 
> a1000  |
> | 5SEC| 2016-07-21 07:25:35.999  | 146908593500999   |   | AAA  | 
>   |   | false  | 9991181000   | 1788000999   | 2497001999   | 999 
>   | a999  
>  |
> | 5SEC| 2016-07-21 07:25:35.998  | 146908593500998   |   | AAA  | 
>   |   | false  | 9981181000   | 1788000998   | 2497001998   | 998 
>   | a998  
>  |
> | 5SEC|  | 146908593500997   |   | AAA  | 
>   |   | false  | null | 1788000997   | null | 997 
>   |   
>  |
> | 5SEC| 2016-07-21 07:25:35.996  | 146908593500996   |   | AAA  | 
>   |   | false  | 9961181000   | 1788000996   | 2497001996   | 996 
>   | a996  
>  |
> | 5SEC| 2016-07-21 07:25:35.995  | 146908593500995   |   | AAA  | 
>   |   | false  | 9951181000   | 1788000995   | 2497001995   | 995 
>   | a995  
>  |
> | 5SEC| 2016-07-21 07:25:35.994  | 146908593500994   |   | AAA  | 
>   |   | false  | 9941181000   | 1788000994   | 2497001994   | 994 
>

[jira] [Commented] (PHOENIX-3112) Partial row scan not handled correctly

2017-11-06 Thread William Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/PHOENIX-3112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16241202#comment-16241202
 ] 

William Shen commented on PHOENIX-3112:
---

[~sergey.soldatov],
I see your comment:
bq. Committed to master and 4.x-HBase-1.x branches.

Is this change also patched in 4.10-HBase-1.2? I noticed the fix version is 
marked for 4.12.0 only.

Thanks!

> Partial row scan not handled correctly
> --
>
> Key: PHOENIX-3112
> URL: https://issues.apache.org/jira/browse/PHOENIX-3112
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.7.0
>Reporter: Pierre Lacave
>Assignee: Sergey Soldatov
>Priority: Blocker
> Fix For: 4.12.0
>
> Attachments: PHOENIX-3112-1.patch, PHOENIX-3112-ssa-v4.patch, 
> PHOENIX-3112-ssa-v5.patch, PHOENIX-3112-v6.patch, PHOENIX-3112_v3.patch, 
> PHOENIX-3112_wip2.patch
>
>
> When doing a select on a relatively large table (a few thousand rows), some 
> rows come back partially missing.
> When narrowing the filter to return those specific rows, the values appear 
> as expected.
> {noformat}
> CREATE TABLE IF NOT EXISTS TEST (
> BUCKET VARCHAR,
> TIMESTAMP_DATE TIMESTAMP,
> TIMESTAMP UNSIGNED_LONG NOT NULL,
> SRC VARCHAR,
> DST VARCHAR,
> ID VARCHAR,
> ION VARCHAR,
> IC BOOLEAN NOT NULL,
> MI UNSIGNED_LONG,
> AV UNSIGNED_LONG,
> MA UNSIGNED_LONG,
> CNT UNSIGNED_LONG,
> DUMMY VARCHAR
> CONSTRAINT pk PRIMARY KEY (BUCKET, TIMESTAMP DESC, SRC, DST, ID, ION, IC)
> );{noformat}
> using a python script to generate a CSV with 5000 rows
> {noformat}
> for i in xrange(5000):
> print "5SEC,2016-07-21 
> 07:25:35.{i},146908593500{i},,AAA,,,false,{i}1181000,1788000{i},2497001{i},{i},a{i}".format(i=i)
> {noformat}
> bulk inserting the csv in the table
> {noformat}
> phoenix/bin/psql.py localhost -t TEST large.csv
> {noformat}
> here we can see one row that contains no TIMESTAMP_DATE and null values in MI 
> and MA
> {noformat}
> 0: jdbc:phoenix:localhost:2181> select * from TEST 
> 
> +-+--+---+---+--+---+---++--+--+--+---++
> | BUCKET  |  TIMESTAMP_DATE  | TIMESTAMP |SRC| DST  | 
>  ID   |ION|   IC   |  MI  |  AV  |  MA  |  
> CNT  |   DUMMY
> |
> +-+--+---+---+--+---+---++--+--+--+---++
> | 5SEC| 2016-07-21 07:25:35.100  | 1469085935001000  |   | AAA  | 
>   |   | false  | 10001181000  | 17880001000  | 24970011000  | 
> 1000  | 
> a1000  |
> | 5SEC| 2016-07-21 07:25:35.999  | 146908593500999   |   | AAA  | 
>   |   | false  | 9991181000   | 1788000999   | 2497001999   | 999 
>   | a999  
>  |
> | 5SEC| 2016-07-21 07:25:35.998  | 146908593500998   |   | AAA  | 
>   |   | false  | 9981181000   | 1788000998   | 2497001998   | 998 
>   | a998  
>  |
> | 5SEC|  | 146908593500997   |   | AAA  | 
>   |   | false  | null | 1788000997   | null | 997 
>   |   
>  |
> | 5SEC| 2016-07-21 07:25:35.996  | 146908593500996   |   | AAA  | 
>   |   | false  | 9961181000   | 1788000996   | 2497001996   | 996 
>   | a996  
>  |
> | 5SEC| 2016-07-21 07:25:35.995  | 146908593500995   |   | AAA  | 
>   |   | false  | 9951181000   | 1788000995   | 2497001995   | 995 
>   | a995  
>  |
> | 5SEC| 2016-07-21 07:25:35.994  | 146908593500994   |   | AAA  | 
>   |   | false  | 9941181000   | 1788000994   | 24