Re: Committers please look at the Phoenix tests and fix your failures

2020-01-14 Thread James Taylor
How about we require the tests to pass as a prerequisite for commit?

On Tue, Jan 14, 2020 at 3:16 PM la...@apache.org  wrote:

>  And I cannot stress enough how important this is for the project. As an
> example: We had the tests fail for just a few days, during that time we
> have had check-ins that broke other tests; now it's quite hard to figure out
> which recent change broke the other tests.
> We need the test suite *always* passing. It's impossible to maintain a
> stable code base the size of Phoenix otherwise.
> -- Lars
> On Tuesday, January 14, 2020, 10:04:12 AM PST, la...@apache.org <
> la...@apache.org> wrote:
>
>   I spent a lot of time making QA better. It can be better, but it's
> stable enough. There are now very few excuses. "Test failure seems
> unrelated" is not an excuse anymore. (4.x-HBase-1.3 has some issue where
> HBase can't seem to start a cluster reliably... but all others are pretty
> stable.)
> After chatting with Andrew Purtell, one thing I was going to offer is to
> simply revert any change that breaks a test. Period. I'd volunteer some of
> my time (hey, isn't that what a Chief Architect in a Fortune 100 company
> should do?!)
> With their changes reverted, people will presumably start to care. :) If I
> hear no objections, I'll start doing that in a while.
> Cheers.
> -- Lars
> On Monday, January 13, 2020, 06:23:01 PM PST, Josh Elser <
> els...@apache.org> wrote:
>
>  How do we keep getting into this mess: unreliable QA, people ignoring
> QA, or something else?
>
> On 1/12/20 9:24 PM, la...@apache.org wrote:
> > ... Not much else to say here...
> > The tests have been failing again for a while... I will NOT fix them
> again this time! Sorry folks.
> >
> > -- Lars
> >
> >
>


Re: [VOTE] Release of Apache Phoenix 4.15.0 RC4

2019-12-19 Thread James Taylor
+1

Nice work everyone.

On Thu, Dec 19, 2019 at 10:55 AM Thomas D'Silva  wrote:

> +1 (Binding)
>
> On Wed, Dec 18, 2019 at 12:22 PM Chinmay Kulkarni <
> chinmayskulka...@gmail.com> wrote:
>
> > +1 (Binding)
> >
> > - Built from source (4.15.0-HBase-1.3) : OK
> > - mvn verify : OK
> > - Did some basic DDL, upserts, deletes, querying, and everything looked
> > fine : OK
> > - Did some upgrade testing and created tables, indexes, views, etc. with
> an
> > old client, upgraded the client to 4.15 and did some more DDL and
> queries,
> > all looks good: OK
> > - Verified checksums : OK
> > - Verified signatures : OK
> > - mvn clean apache-rat:check : OK
> >
> > On Wed, Dec 18, 2019 at 7:47 AM la...@apache.org 
> wrote:
> >
> > >  +1 (binding)
> > > Again... This time I did the following:
> > > - Ran through upgrade and various test scripts using phoenix_sandbox.py
> > > (with PHOENIX-5617 applied). This started with 4.14, created some
> tables,
> > > views, and indexes, then with a 4.15 client, added/removed views,
> > columns,
> > > and indexes, then with 4.14 accessed existing tables, views, and
> indexes,
> > > created more tables, views, and indexes, added/dropped columns, etc. Then
> > > with the 4.15 client again accessed those same tables, etc.
> > >
> > > - With an actual install, inserted millions of rows, updated, queried,
> > > with views and indexes.
> > > - All good.
> > > I did see a strange exception in the server logs - see PHOENIX-5639,
> but
> > > this only happened in the sandbox and not with an actual install. So all
> > > fine.
> > > -- Lars
> > >
> > > On Tuesday, December 17, 2019, 6:33:07 AM GMT+1, Chinmay Kulkarni <
> > > chinmayskulka...@gmail.com> wrote:
> > >
> > >  Hello Everyone,
> > >
> > > This is a call for a vote on Apache Phoenix 4.15.0 RC4. This is the
> next
> > > minor release of Phoenix 4, compatible with Apache HBase 1.3, 1.4 &
> > > 1.5. The release includes both a source-only release and a convenience
> > > binary release for each supported HBase version.
> > >
> > > This release has feature parity with supported HBase versions and
> > includes
> > > the following improvements:
> > > - Support for multi-region SYSTEM.CATALOG
> > > - Omid integration with Phoenix
> > > - Orphan view tool
> > > - Separation of the Phoenix-Connectors and Phoenix-Queryserver projects
> > > - 150+ bug fixes
> > > and much more
> > >
> > > The source tar ball, including signatures, digests, etc can be found
> at:
> > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.15.0-HBase-1.3-rc4/src/
> > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.15.0-HBase-1.4-rc4/src/
> > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.15.0-HBase-1.5-rc4/src/
> > >
> > > The binary artifacts can be found at:
> > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.15.0-HBase-1.3-rc4/bin/
> > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.15.0-HBase-1.4-rc4/bin/
> > >
> > >
> >
> https://dist.apache.org/repos/dist/dev/phoenix/apache-phoenix-4.15.0-HBase-1.5-rc4/bin/
> > >
> > > For a complete list of changes, see:
> > >
> > >
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12343162&projectId=12315120
> > >
> > > Artifacts are signed with my "CODE SIGNING KEY":
> > > 7C5FC713DE4C59D7
> > >
> > > KEYS file available here:
> > > https://dist.apache.org/repos/dist/dev/phoenix/KEYS
> > >
> > > The hash and tag to be voted upon:
> > >
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=65ee44ada9390ef46f6b2853e65067aa9ca1d598
> > >
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.15.0-HBase-1.3-rc4
> > >
> > >
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=f11afb2d609cbbfeaaee1cf981514be4c50a4bd0
> > >
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.15.0-HBase-1.4-rc4
> > >
> > >
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=a2adf5e572c5a4bcccee7f8ac43bad6b84293ec6
> > >
> > >
> >
> https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;h=refs/tags/v4.15.0-HBase-1.5-rc4
> > >
> > > Vote will be open for at least 72 hours. Please vote:
> > >
> > > [ ] +1 approve
> > > [ ] +0 no opinion
> > > [ ] -1 disapprove (and reason why)
> > >
> > > Thanks,
> > > The Apache Phoenix Team
> > >
> > > --
> > > Chinmay Kulkarni
> > >
> >
> >
> >
> > --
> > Chinmay Kulkarni
> >
>
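
A note for readers on the "Verified checksums" and "Verified signatures" steps
above: the commands are typically along these lines (file names illustrative):

    # import the release KEYS file, then check the detached signature
    gpg --import KEYS
    gpg --verify apache-phoenix-4.15.0-HBase-1.3-src.tar.gz.asc \
        apache-phoenix-4.15.0-HBase-1.3-src.tar.gz
    # recompute the SHA-512 digest and compare it with the published one
    sha512sum apache-phoenix-4.15.0-HBase-1.3-src.tar.gz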


Re: Nested Record

2019-11-10 Thread James Taylor
Hi Charles,
Phoenix doesn’t support nested data. I suggest you take a look at other
open source projects such as Apache Drill or Presto.
Thanks,
James

On Sat, Nov 9, 2019 at 8:16 AM Charles Wiese  wrote:

> I was very excited about the use case of creating Patient Records with 1:M
> records nested (such as Diagnosis, Procedures etc).  We have done this in
> MongoDB and could use a SQL driver to get the nested “rows”.   I understand
> that the nested “rows” are actually columns with “column_name:id”
> for each “row value”.   Seems a bit odd compared to a document / JSON
> layout, but okay.  However, how does one use a SQL driver for reporting and
> “unnesting” this data?
>
> Is that even possible … I want to “select count(nested_procedures),
> patient_type from patient group by patient_type)”….. with the Mongo driver
> it is a virtual table like "select count(patient_procedures
> \.nested_procedures), patient_type from patient, patient_procedures group
> by patient_type)”
>
> Is this possible?
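
A note on the question above: since Phoenix has no nested rows, the
conventional relational shape is a separate child table plus a join and GROUP
BY, along these lines (table and column names hypothetical):

    SELECT p.patient_type, COUNT(pp.procedure_code)
    FROM patient p
    JOIN patient_procedure pp ON pp.patient_id = p.patient_id
    GROUP BY p.patient_type;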


Re: [VOTE] Accept Tephra and Omid podlings as Phoenix sub-projects

2019-10-30 Thread James Taylor
+1

On Wed, Oct 30, 2019 at 12:32 PM Geoffrey Jacoby  wrote:

> +1
>
> On Wed, Oct 30, 2019 at 1:25 PM Ankit Singhal 
> wrote:
>
> > +1
> >
> > On Wed, Oct 30, 2019 at 10:44 AM Andrew Purtell 
> > wrote:
> >
> > > +0 (binding)
> > >
> > > I support this, but it isn't my time I'd be committing to maintaining
> the
> > > new repos with a +1...
> > >
> > > On Wed, Oct 30, 2019 at 8:27 AM Josh Elser  wrote:
> > >
> > > > Hi,
> > > >
> > > > As was previously discussed[1][2], there is a motion to "adopt" the
> > > > Tephra and Omid podlings as sub-projects to Apache Phoenix. A
> > > > sub-project is a distinct software project from some top-level
> project
> > > > (TLP) but operates under the guidance of a TLP (e.g. the TLP's PMC).
> > > >
> > > > Per the Incubator guidelines[3], we need to have a formal vote to
> adopt
> > > > them. While we still need details from these two podlings, I believe
> we
> > > > can have a VOTE now to keep things moving.
> > > >
> > > > Actions:
> > > > * Phoenix will incorporate Omid and Tephra as sub-projects and they
> > will
> > > > continue to function under the Apache Phoenix PMC guidance.
> > > > * Any current podling member may request to be added as a Phoenix
> > > > committer. Podling members would be expected to demonstrate the
> normal
> > > > level of commitment to grow from a committer to a PMC member.
> > > >
> > > > Stating what I see as an implicit decision (but to alleviate any
> > > > possible confusion): all new community members will be expected to
> > > > function in the way that Phoenix currently does today (e.g
> > > > review-then-commit). Future Omid and Tephra development would happen
> in
> > > > the same way that Phoenix development happens today.
> > > >
> > > > Please vote:
> > > >
> > > > +1: to accept Omid and Tephra as Phoenix sub-projects and to allow
> any
> > > > PPMC to become Phoenix committers.
> > > >
> > > > -1/-0/+0: No and why..
> > > >
> > > > Here is my +1 (binding)
> > > >
> > > > This vote will be open for at least 72hrs (2019/11/02 1600 UTC).
> > > >
> > > >
> > > > [1]
> > > >
> > > >
> > >
> >
> https://lists.apache.org/thread.html/ec00615cbbb4225885e3776f1e8fd071600a9c50f35769f453b8a779@%3Cdev.phoenix.apache.org%3E
> > > > [2]
> > > >
> > > >
> > >
> >
> https://lists.apache.org/thread.html/692a030a27067c20b9228602af502199cd4d80eb0aa8ed6461ebe1ee@%3Cgeneral.incubator.apache.org%3E
> > > > [3]
> > > >
> > > >
> > >
> >
> https://incubator.apache.org/guides/graduation.html#graduating_to_a_subproject
> > > >
> > >
> > >
> > > --
> > > Best regards,
> > > Andrew
> > >
> > > Words like orphans lost among the crosstalk, meaning torn from truth's
> > > decrepit hands
> > >- A23, Crosstalk
> > >
> >
>


Re: [DISCUSS] Omid and Tephra podlings

2019-10-25 Thread James Taylor
+1 to pulling these projects into Phoenix. We’ve already done some work in
those projects and so have some familiarity with them. We also have some
overlap in the committers with Omid. At a minimum we’d need to create new
compat modules when necessary for future HBase releases. The alternative
would be to rip out the transaction support, which would be a mistake IMHO.

On Fri, Oct 25, 2019 at 8:08 AM Josh Elser  wrote:

> I share your same hesitation. I'm worried about us adopting code that no
> one intends to actually maintain.
>
> My immediate concern is whether adoption of these codebases would impact
> the PMC's ability to develop on Phoenix -- I think there's a path
> forward to avoid that. Is that sufficient to say we "should" adopt them?
> I dunno.
>
> On 10/24/19 5:37 PM, Andrew Purtell wrote:
> > Tephra and Omid are interesting in that they serve a similar function - a
> > transaction oracle - but scale differently per different choices in
> design,
> > so one is more appropriate for some kinds of transactional workloads vs
> the
> > other. It's akin to secondary indexing, there is not an index type that
> > fits all use cases. If you are going to consider one, you should consider
> > both. That said my guess is you will find eventually one is the 'winner'
> > per second order measures like contributions or user issues. This should
> be
> > fine. As separate repositories they can move at their own speed and only
> > consume bandwidth as usage and uptake actually demands.
> >
> > For what it's worth I'd vote as PMC +0 on accepting these code bases. '+'
> > because Phoenix transactional indexes are a promising feature, and could
> be
> > compelling, and they need one of these transaction oracles. '0' because
> it
> > would be unfair to commit someone else's time. I'm not around here
> much...
> > but may be around more going forward.
> >
> > On Thu, Oct 24, 2019 at 10:14 AM Josh Elser  wrote:
> >
> >> Hiya folks,
> >>
> >> There's a discussion[1] on general@incubator about the Omid and Tephra
> >> podlings, their decrease in volume (commits, discussion, activity), and
> >> what to do about them. If you'd like to contribute to that discussion,
> >> please watch on general@incubator and the dev lists for those podlings.
> >>
> >> One idea that seems to have resonated was that the Phoenix PMC could
> >> "adopt" the codebases for Omid and Tephra.
> >>
> >> While this is by no means a "done decision", I thought it would be
> >> good for us to think about this, decide if it's something we think we
> >> want to entertain, and how we would technically do this.
> >>
> >> As far as a PMC goes, we are allowed to have multiple projects under one
> >> PMC. We could move the tephra and omid repositories under the control of
> >> our PMC, and manage them just like we do phoenix, phoenix-connectors,
> >> phoenix-queryserver, etc.
> >>
> >> Thankfully, with the work of the transaction abstraction layer, we
> >> shouldn't be in any position where Phoenix development would get "stuck"
> >> by work that needed to be done in Omid or Tephra.
> >>
> >> What do folks think? Is this a good idea? Do we have enough interest to
> >> keep the codebases healthy?
> >>
> >> - Josh
> >>
> >> [1]
> >>
> >>
> https://lists.apache.org/thread.html/692a030a27067c20b9228602af502199cd4d80eb0aa8ed6461ebe1ee@%3Cgeneral.incubator.apache.org%3E
> >>
> >
> >
>


Re: [Question] Apache Calcite integration

2019-08-23 Thread James Taylor
To add to what Josh mentioned, the biggest hurdle was getting
Phoenix+Calcite to function *exactly* the same as current Phoenix. Without
this, it would be difficult to get users to migrate and it was clear we
didn't have the bandwidth to maintain two different code bases. If Phoenix
was being started from scratch, I'd definitely advocate that it be built
on top of Calcite.


On Thu, Aug 22, 2019 at 8:18 AM Josh Elser  wrote:

> No, the effort has effectively stalled.
>
> It was a significant undertaking to get to where the Calcite integration
> was left, but, more importantly, required significantly more efforts to
> complete it than were available.
>
> I would assume that most of what exists today would still be
> generally applicable, but it would require effort to rebase.
>
> On 8/20/19 6:44 AM, Павлухин Иван wrote:
> > Hi Phoenix developers,
> >
> > It would be really great if you could shed some light on the current state
> > of Calcite integration in Phoenix.
> >
> > Currently we are considering Calcite for a new SQL engine in Apache
> > Ignite, and I feel that the Phoenix experience might be extremely
> > relevant. I found that a great effort was put into Calcite integration
> > by Phoenix developers; I found a JIRA issue [1] and a related git
> > branch. But as I understand it, it was not integrated into the master branch.
> > So, the questions:
> > 1. Is there active development of Calcite integration? If not,
> > why was it stopped?
> > 2. Are there any blockers for integrating Calcite? Or any significant
> > downsides? (At first glance, Calcite looks like the best library
> > for implementing SQL in a system written in Java.)
> >
> > Thank you in advance!
> >
> > [1] https://issues.apache.org/jira/browse/PHOENIX-1488
> >
>


Re: [DISCUSS] Maintaining the Site in Git Instead of SVN

2019-06-03 Thread James Taylor
The problem with committing docs with code is that the release happens much
later. We’ve always tried to get the doc changes pushed out before the
release is cut.

On Mon, Jun 3, 2019 at 9:30 AM William Shen 
wrote:

> Thomas, that makes sense now. Thanks for explaining; I didn't catch that the
> first time. Would it make sense to you guys that we publish the website
> from master, or should we publish from, say, the latest 4.x?
>
> On Mon, Jun 3, 2019 at 9:11 AM Thomas D'Silva
>  wrote:
>
> > William, I was referring to your comment about ensuring doc changes are
> > checked in with code changes.
> > I assume you meant that the doc change would go into the same
> > pull request as the code change?
> > But I guess since we currently mostly commit patches to all branches this
> > should be fine. We could have the website
> > module in one of the branches.
> >
> > On Sun, Jun 2, 2019 at 10:53 PM William Shen  >
> > wrote:
> >
> > > I think we do need to come up with a strategy on how to maintain
> website
> > > documentation given that we have several versions that may potentially
> > > conflict in documentation at times. Thomas, is this your main concern?
> > >
> > >
> > > Josh - Would love to help drive it, though I’m not sure if i have all
> the
> > > right access to do so.
> > > Seems like we would need to:
> > > - commit the svn site directory into git master (I can create a patch
> but
> > > would need help committing this)
> > > - file an infra ticket to migrate the website, and enable git-pubsub
> > > (though I’m totally speaking outside of my knowledge here...)
> > >
> > > On Sun, Jun 2, 2019 at 11:42 AM Josh Elser 
> wrote:
> > >
> > > > Yeah, not sure I get your concern, Thomas. We only have one website.
> > > >
> > > > From the ASF Infra side, svn-pubsub (what deploys our code on SVN
> > > > check-in) works the same as git-pubsub. It should just be a request
> to
> > > > Infra to migrate the website from SVN to Git and then enable
> > > > git-pubsub.
> > > >
> > > > No concerns in doing this from me. Even better if you'd like to drive
> > > > it, William ;)
> > > >
> > > > On Fri, May 31, 2019 at 2:24 PM William Shen <
> > wills...@marinsoftware.com
> > > >
> > > > wrote:
> > > > >
> > > > > Thomas,
> > > > >
> > > > > Which release line do we currently base our documentation on? Do
> you
> > > > think
> > > > > it makes sense to bring the site source into master, and always
> > update
> > > > the
> > > > > site from master?
> > > > >
> > > > > - Will
> > > > >
> > > > > On Thu, May 30, 2019 at 8:46 PM Thomas D'Silva
> > > > >  wrote:
> > > > >
> > > > > > Currently this would not be easy to do since we have multiple
> > > > branches. If
> > > > > > we decide to
> > > > > > implement Lars' proposal to have a single branch and a module per
> > > > supported
> > > > > > HBase version
> > > > > > then we could have a module for the website as well.
> > > > > >
> > > > > > On Thu, May 30, 2019 at 7:03 PM swaroopa kadam <
> > > > swaroopa.kada...@gmail.com
> > > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Huge +1!
> > > > > > >
> > > > > > > On Thu, May 30, 2019 at 4:38 PM William Shen <
> > > > wills...@marinsoftware.com
> > > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Hi all,
> > > > > > > >
> > > > > > > > Currently, the Phoenix site is maintained in and built from
> SVN.
> > > > > > > > Not sure
> what
> > > > level
> > > > > > of
> > > > > > > > work it would require, but does it make sense to move the
> > source
> > > > from
> > > > > > svn
> > > > > > > > to git, so contribution to the website can follow the same
> > > JIRA/git
> > > > > > > > workflow as the rest of the project? It could also make sure
> > > > changes to
> > > > > > > > Phoenix code are checked in with corresponding documentation
> > > > changes
> > > > > > when
> > > > > > > > needed.
> > > > > > > >
> > > > > > > > - Will
> > > > > > > >
> > > > > > > --
> > > > > > >
> > > > > > >
> > > > > > > Swaroopa Kadam
> > > > > > > [image: https://]about.me/swaroopa_kadam
> > > > > > > <
> > > > > > >
> > > > > >
> > > >
> > >
> >
> https://about.me/swaroopa_kadam?promo=email_sig&utm_source=product&utm_medium=email_sig&utm_campaign=gmail_api
> > > > > > > >
> > > > > > >
> > > > > >
> > > >
> > >
> >
>


Re: Handling of HBase version specifics.

2019-05-19 Thread James Taylor
I think it’s a good idea, but even before this you have to ask when a
Jenkins build last actually succeeded. How about solving that first?

On Sun, May 19, 2019 at 1:34 PM Nick Dimiduk  wrote:

> I think this is generally a good idea for managing multiple target
> runtimes.
>
> One question I have though: is it really necessary that we support so many
> release branches and so many compile targets? What about the versions of
> Hadoop underneath each of those versions of HBase? Are we committed to
> running against ever version of HDFS that each of those HBase releases
> support? The test load will be massive and the suite itself must be
> rock-solid in order to run across so many permutations. I fear that’s an
> unreasonable burden for an open source community.
>
> Thanks,
> Nick
>
> On Sat, May 18, 2019 at 11:36 AM Thomas D'Silva
>  wrote:
>
> > +1, I think this is a good idea. This would make it easier to contribute
> > and commit since you would only have to create a single patch.
> > The tests would take longer to run (1.3, 1.4, 1.5 and 2.x). We should
> make
> > sure our precommit build will run the tests for all the modules.
> >
> >
> >
> > On Fri, May 17, 2019 at 11:23 AM la...@apache.org 
> > wrote:
> >
> > > Hi all,
> > > Historically we have a branch for each version of HBase we want to
> > > support. As a result we have many branches, committing is a hassle, and
> > > it is easy to miss a change across branches.
> > > Instead we could have a maven module per version of HBase we want to
> > > support and move the version-dependent code there. Take a look at what
> > > Tephra is doing: https://github.com/apache/incubator-tephra
> > > They have a compat module for each supported version of HBase, and
> > > version-dependent code is "simply" copied into those modules. There's
> > > still duplicate code, but at least there's one branch to maintain.
> > > It's somewhat of a bigger project now.
> > > Thoughts?
> > > -- Lars
> > >
> >
>
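
A minimal sketch of how the proposed single-branch layout might look in the
parent pom, following the Tephra convention Lars mentions (module names
hypothetical):

    <!-- parent pom.xml: one compat module per supported HBase version -->
    <modules>
      <module>phoenix-core</module>
      <module>phoenix-hbase-compat-1.3</module>
      <module>phoenix-hbase-compat-1.4</module>
      <module>phoenix-hbase-compat-1.5</module>
    </modules>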


Re: How to map phoenix field to hbase timestamp of column family?

2019-02-21 Thread James Taylor
See ROW_TIMESTAMP on phoenix.apache.org

On Wed, Feb 20, 2019 at 5:53 PM 吴少  wrote:

> In some scenarios, a general Phoenix field needs to be mapped to the
> timestamp of the HBase column family. If so, I can read and write HBase's
> timestamp. Can anyone help me?
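
A minimal sketch of the ROW_TIMESTAMP feature James points to (table and
column names hypothetical): declaring a primary key column as ROW_TIMESTAMP
maps it onto the native HBase cell timestamp.

    CREATE TABLE events (
        created_date DATE NOT NULL,
        event_id BIGINT NOT NULL
        CONSTRAINT pk PRIMARY KEY (created_date ROW_TIMESTAMP, event_id)
    );
    -- values written to created_date are stored as the HBase cell timestamp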


Re: UNNEST function

2019-01-08 Thread James Taylor
The support for UNNEST was in the calcite branch and took advantage of the
compilation support there. We unfortunately don't have the same support in
the mainline branch. We'd welcome a contribution in this area, but it'd be
a fair amount of work.
Thanks,
James

On Tue, Jan 8, 2019 at 11:07 AM Thomas D'Silva 
wrote:

> Looks like UNNEST array support was added in (
> https://issues.apache.org/jira/browse/PHOENIX-953).
> I'm not sure what remaining work PHOENIX-4311 requires. There is a
> test UnnestArrayIT that was disabled,
> perhaps as a first step you can try running the test and see the current
> state of things.
>
> On Tue, Jan 8, 2019 at 10:06 AM Josh Elser  wrote:
>
> > Hi,
> >
> > Looking at PHOENIX-4311, I think any contribution you choose to make
> > would be appreciated. I don't have a good understanding of the
> > difficulties you would have in trying to implement it.
> >
> > In general, if there is no assignee and there isn't discussion on the
> > Jira issue, you may feel free to request it be assigned to yourself to
> > work on.
> >
> > - Josh
> >
> > On 1/8/19 9:38 AM, tricolor wrote:
> > >   Hi,
> > >
> > > I am currently working on a project where the function UNNEST would be
> > > useful to rotate an array in order to perform an aggregation. According
> > to
> > > the Jira ticket 4311 there are pending tests to surface this feature.
> The
> > > ticket was untouched since October 2017. My questions are the
> following :
> > >
> > > 1. Is it a feature that will eventually be implemented?
> > >
> > > 2. Is there any blocker making its implementation impossible at the
> > moment?
> > >
> > > 3. If there is no blocker what are the missing components to implement
> > it?
> > >
> > > 4. Is anyone going to implement it eventually? If someone takes it what
> > > would be an expected time of delivery? If no one implements it, would
> my
> > > contribution be appreciated?
> > >
> > >
> > > Best regards,
> > >
> >
>
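
For context, the general shape of the query being discussed, in SQL-standard
form (illustrative only; per the thread, UNNEST is not available on the
mainline branch):

    SELECT t.elem, COUNT(*)
    FROM my_table, UNNEST(my_array_column) AS t(elem)
    GROUP BY t.elem;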


[jira] [Updated] (PHOENIX-4889) Document Omid transaction support

2018-12-20 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4889:
--
Issue Type: Bug  (was: Sub-task)
Parent: (was: PHOENIX-3623)

> Document Omid transaction support
> -
>
> Key: PHOENIX-4889
> URL: https://issues.apache.org/jira/browse/PHOENIX-4889
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: James Taylor
>Priority: Major
>
> We'll need to update our transaction documentation to reflect what's required 
> to use Omid. See [http://phoenix.apache.org/transactions.html] with the 
> source in SVN at transaction.md (see 
> [http://phoenix.apache.org/building_website.html]).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4889) Document Omid transaction support

2018-12-20 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4889:
-

Assignee: Ohad Shacham

> Document Omid transaction support
> -
>
> Key: PHOENIX-4889
> URL: https://issues.apache.org/jira/browse/PHOENIX-4889
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: James Taylor
>Assignee: Ohad Shacham
>Priority: Major
>
> We'll need to update our transaction documentation to reflect what's required 
> to use Omid. See [http://phoenix.apache.org/transactions.html] with the 
> source in SVN at transaction.md (see 
> [http://phoenix.apache.org/building_website.html]).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-5004) Fix org.jboss.netty.channel.ChannelException: Failed to bind to: flapper

2018-12-17 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-5004:
-

Assignee: James Taylor  (was: Yonatan Gottesman)

> Fix org.jboss.netty.channel.ChannelException: Failed to bind to: flapper
> 
>
> Key: PHOENIX-5004
> URL: https://issues.apache.org/jira/browse/PHOENIX-5004
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
>
> Recent Jenkins runs on omid2 are flapping with the following exception:
> {code:java}
> com.google.common.util.concurrent.UncheckedExecutionException: 
> org.jboss.netty.channel.ChannelException: Failed to bind to: 
> 0.0.0.0/0.0.0.0:41016
>   at 
> org.apache.phoenix.end2end.IndexToolIT.testSecondaryIndex(IndexToolIT.java:150)
> {code}
> See [https://builds.apache.org/job/Phoenix-omid2/141/]
> Do you think we might be running into OOM errors given that the TSO needs so 
> much memory? Maybe we need to increase our heap size when the tests are 
> run? Other ideas?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
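
If the OOM theory above holds, one knob is the surefire argLine, which sets JVM 
flags for the forked test JVMs; a sketch (the heap size is arbitrary):
{code:xml}
<!-- pom.xml: give each forked test JVM a larger heap -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <argLine>-Xmx2g</argLine>
  </configuration>
</plugin>
{code}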


[jira] [Updated] (PHOENIX-5004) Fix org.jboss.netty.channel.ChannelException: Failed to bind to: flapper

2018-12-17 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-5004:
--
Fix Version/s: 4.15.0

> Fix org.jboss.netty.channel.ChannelException: Failed to bind to: flapper
> 
>
> Key: PHOENIX-5004
> URL: https://issues.apache.org/jira/browse/PHOENIX-5004
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
>
> Recent Jenkins runs on omid2 are flapping with the following exception:
> {code:java}
> com.google.common.util.concurrent.UncheckedExecutionException: 
> org.jboss.netty.channel.ChannelException: Failed to bind to: 
> 0.0.0.0/0.0.0.0:41016
>   at 
> org.apache.phoenix.end2end.IndexToolIT.testSecondaryIndex(IndexToolIT.java:150)
> {code}
> See [https://builds.apache.org/job/Phoenix-omid2/141/]
> Do you think we might be running into OOM errors given that the TSO needs so 
> much memory? Maybe we need to increase our heap size when the tests are 
> run? Other ideas?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5004) Fix org.jboss.netty.channel.ChannelException: Failed to bind to: flapper

2018-12-17 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-5004.
---
Resolution: Fixed

> Fix org.jboss.netty.channel.ChannelException: Failed to bind to: flapper
> 
>
> Key: PHOENIX-5004
> URL: https://issues.apache.org/jira/browse/PHOENIX-5004
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
>
> Recent Jenkins runs on omid2 are flapping with the following exception:
> {code:java}
> com.google.common.util.concurrent.UncheckedExecutionException: 
> org.jboss.netty.channel.ChannelException: Failed to bind to: 
> 0.0.0.0/0.0.0.0:41016
>   at 
> org.apache.phoenix.end2end.IndexToolIT.testSecondaryIndex(IndexToolIT.java:150)
> {code}
> See [https://builds.apache.org/job/Phoenix-omid2/141/]
> Do you think we might be running into OOM errors given that the TSO needs so 
> much memory? Maybe we need to increase our heap size when the tests are 
> run? Other ideas?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-3623) Integrate Omid with Phoenix

2018-12-17 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-3623:
-

Assignee: Ohad Shacham  (was: James Taylor)

> Integrate Omid with Phoenix
> ---
>
> Key: PHOENIX-3623
> URL: https://issues.apache.org/jira/browse/PHOENIX-3623
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Ohad Shacham
>Assignee: Ohad Shacham
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: 4.x-HBase-1.2.patch, 4.x-HBase-1.3.patch, 
> 4.x-HBase-1.4.patch, master.patch
>
>
> The purpose of this Jira is to propose a work plan for connecting Omid to 
> Phoenix.
> Each of the following tasks will be handled in a separate sub-Jira. Subtasks 
> 4.* are related to augmenting Omid to support features required by Phoenix 
> and therefore, their corresponding Jiras will appear under Omid and not under 
> Phoenix. 
> Each task is completed by a commit.
> Task 1: Adding a transaction abstraction layer (TAL) - Currently Tephra calls 
> are integrated inside Phoenix code. Therefore, in order to support both Omid 
> and Tephra, we need to add another abstraction layer that later on will be 
> connected to both Tephra and Omid. The first task is to define such an 
> interface.
> Task 2: Implement TAL functionality for Tephra. 
> Task 3: Refactor Phoenix to use TAL instead of direct calls to Tephra.
> Task 4: Implement Omid required features for Phoenix:
> Task 4.1: Add checkpoints to Omid. A checkpoint is a point in a transaction 
> where every write that occurs after the checkpoint is not visible to the 
> transaction. Explanations for this feature can be seen in [TEPHRA-96].
> Task 4.2: Add an option to mark a key as non-conflicting. The motivation is 
> to reduce the size of the write set needed by the transaction manager upon 
> commit as well as reduce the conflict detection work.
> Task 4.3: Add support for transactions that never abort. Such transactions 
> will only make other inflight transactions abort and will abort only in case 
> of a transaction manager failure. 
> These transactions are needed for ‘create index’ and the scenario was 
> discussed in [TEPHRA-157] and [PHOENIX-2478]. Augmenting Omid with this kind 
> of transactions was also discussed in [OMID-56].
> Task 4.4: Add support for returning multiple versions in a scan. The use case 
> is described in [TEPHRA-134].
> Task 4.5: Change Omid's timestamp mechanism to return real time based 
> timestamp, while keeping monotonicity.
> Task 5: Implement TAL functionality for Omid.
> Task 6: Implement performance tests and tune Omid for Phoenix use. This task 
> requires understanding of common usage scenarios in Phoenix as well as 
> defining the tradeoff between throughput and latency. 
> Could you please review the proposed work plan?
> Also, could you please let me know whether I missed any augmentation needed 
> for Omid in order to support Phoenix operations?
> I opened a jira [OMID-82] that encapsulates all Omid related development for 
> Phoenix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
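
To illustrate Task 1: a transaction abstraction layer boils down to an 
interface that a Tephra-backed and an Omid-backed implementation would each 
provide, with Phoenix code calling only the interface. A minimal hypothetical 
sketch (the real TAL in Phoenix is considerably larger):
{code:java}
import java.sql.SQLException;

// hypothetical TAL sketch: one implementation delegates to Tephra,
// another to Omid
public interface TransactionContext {
    void begin() throws SQLException;
    void commit() throws SQLException;
    void abort() throws SQLException;
    // supports Task 4.1: writes made after a checkpoint are not
    // visible to the transaction's own subsequent reads
    void checkpoint() throws SQLException;
}
{code}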


[jira] [Assigned] (PHOENIX-3623) Integrate Omid with Phoenix

2018-12-17 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-3623:
-

Assignee: James Taylor  (was: Ohad Shacham)

> Integrate Omid with Phoenix
> ---
>
> Key: PHOENIX-3623
> URL: https://issues.apache.org/jira/browse/PHOENIX-3623
> Project: Phoenix
>  Issue Type: New Feature
>Reporter: Ohad Shacham
>    Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: 4.x-HBase-1.2.patch, 4.x-HBase-1.3.patch, 
> 4.x-HBase-1.4.patch, master.patch
>
>
> The purpose of this Jira is to propose a work plan for connecting Omid to 
> Phoenix.
> Each of the following tasks will be handled in a separate sub-Jira. Subtasks 
> 4.* are related to augmenting Omid to support features required by Phoenix 
> and therefore, their corresponding Jiras will appear under Omid and not under 
> Phoenix. 
> Each task is completed by a commit.
> Task 1: Adding a transaction abstraction layer (TAL) - Currently Tephra calls 
> are integrated inside Phoenix code. Therefore, in order to support both Omid 
> and Tephra, we need to add another abstraction layer that later on will be 
> connected to both Tephra and Omid. The first task is to define such an 
> interface.
> Task 2: Implement TAL functionality for Tephra. 
> Task 3: Refactor Phoenix to use TAL instead of direct calls to Tephra.
> Task 4: Implement Omid required features for Phoenix:
> Task 4.1: Add checkpoints to Omid. A checkpoint is a point in a transaction 
> where every write that occurs after the checkpoint is not visible to the 
> transaction. Explanations for this feature can be seen in [TEPHRA-96].
> Task 4.2: Add an option to mark a key as non-conflicting. The motivation is 
> to reduce the size of the write set needed by the transaction manager upon 
> commit as well as reduce the conflict detection work.
> Task 4.3: Add support for transactions that never abort. Such transactions 
> will only make other inflight transactions abort and will abort only in case 
> of a transaction manager failure. 
> These transactions are needed for ‘create index’ and the scenario was 
> discussed in [TEPHRA-157] and [PHOENIX-2478]. Augmenting Omid with this kind 
> of transactions was also discussed in [OMID-56].
> Task 4.4: Add support for returning multiple versions in a scan. The use case 
> is described in [TEPHRA-134].
> Task 4.5: Change Omid's timestamp mechanism to return real time based 
> timestamp, while keeping monotonicity.
> Task 5: Implement TAL functionality for Omid.
> Task 6: Implement performance tests and tune Omid for Phoenix use. This task 
> requires understanding of common usage scenarios in Phoenix as well as 
> defining the tradeoff between throughput and latency. 
> Could you please review the proposed work plan?
> Also, could you please let me know whether I missed any augmentation needed 
> for Omid in order to support Phoenix operations?
> I opened a jira [OMID-82] that encapsulates all Omid related development for 
> Phoenix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [ANNOUNCE] New Phoenix committer: Jaanai Zhang

2018-12-12 Thread James Taylor
Congrats, Jaanai. Well deserved. Welcome to the team.

On Tue, Dec 11, 2018 at 11:20 AM Andrew Purtell  wrote:

> Congratulations Jaanai!
>
> On Mon, Dec 10, 2018 at 2:05 PM Thomas D'Silva  wrote:
>
> > On behalf of the Apache Phoenix PMC, I am pleased to announce that Jaanai
> > Zhang has accepted our invitation to become a committer. He has found and
> > fixed several bugs [1]. He is also very active on the mailing list
> helping
> > out users and providing feedback on new features.
> >
> > We are looking forward to more great work.
> >
> > Thank you,
> > Thomas
> >
> > [1] *
> >
> https://issues.apache.org/jira/browse/PHOENIX-4974?jql=project%20%3D%20PHOENIX%20AND%20assignee%3Djaanai%20%20
> > <
> >
> https://issues.apache.org/jira/browse/PHOENIX-4974?jql=project%20%3D%20PHOENIX%20AND%20assignee%3Djaanai%20%20
> > >*
> >
>
>
> --
> Best regards,
> Andrew
>
> Words like orphans lost among the crosstalk, meaning torn from truth's
> decrepit hands
>- A23, Crosstalk
>


Re: [ANNOUNCE] New Phoenix committer: Chinmay Kulkarni

2018-12-12 Thread James Taylor
Congrats and welcome to the team, Chinmay. Keep up your excellent diligence!

On Tue, Dec 11, 2018 at 11:20 AM Andrew Purtell  wrote:

> Congratulations and welcome, Chinmay!
>
> On Mon, Dec 10, 2018 at 1:45 PM Thomas D'Silva  wrote:
>
> > On behalf of the Apache Phoenix PMC, I am pleased to announce that
> Chinmay
> > Kulkarni has accepted our invitation to become a committer. Chinmay has
> > contributed several metadata management improvements. He has also worked
> on
> > improving Phoenix code quality and fixed many bugs.
> >
> > Please welcome him to the Apache Phoenix team.
> >
> > Thank you,
> > Thomas
> >
> > [1]
> >
> >
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20PHOENIX%20AND%20assignee%3Dckulkarni%20AND%20status%3DResolved
> >
>
>
> --
> Best regards,
> Andrew
>
> Words like orphans lost among the crosstalk, meaning torn from truth's
> decrepit hands
>- A23, Crosstalk
>


Re: [ANNOUNCE] New Phoenix PMC member: Karan Mehta

2018-12-12 Thread James Taylor
Congrats, Karan! Keep up the great work!

On Wed, Dec 12, 2018 at 10:26 AM Vincent Poon 
wrote:

> Congrats Karan!
>
> On Wed, Dec 12, 2018 at 9:17 AM Andrew Purtell 
> wrote:
>
> > Congratulations and welcome, Karan.
> >
> > On Tue, Dec 11, 2018 at 8:38 PM Thomas D'Silva 
> wrote:
> >
> > > On behalf of the Apache Phoenix PMC, I'm pleased to announce that Karan
> > > Mehta has accepted our invitation to join the PMC. He has been working
> on
> > > enhancing the phoenix query server.
> > >
> > > Please join me in congratulating Karan.
> > >
> > > Thanks,
> > > Thomas
> > >
> >
> >
> > --
> > Best regards,
> > Andrew
> >
> > Words like orphans lost among the crosstalk, meaning torn from truth's
> > decrepit hands
> >- A23, Crosstalk
> >
>


[jira] [Resolved] (PHOENIX-4319) Zookeeper connection should be closed immediately

2018-12-08 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4319.
---
Resolution: Duplicate

Duplicate of PHOENIX-4489.

> Zookeeper connection should be closed immediately
> -
>
> Key: PHOENIX-4319
> URL: https://issues.apache.org/jira/browse/PHOENIX-4319
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.10.0
> Environment: phoenix4.10 hbase1.2.0
>Reporter: Jepson
>Priority: Major
>  Labels: patch
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> *Code:*
> {code:java}
> val zkUrl = "192.168.100.40,192.168.100.41,192.168.100.42:2181:/hbase"
> val configuration = new Configuration()
> configuration.set("hbase.zookeeper.quorum",zkUrl)
> val spark = SparkSession
>   .builder()
>   .appName("SparkPhoenixTest1")
>   .master("local[2]")
>   .getOrCreate()
>   for( a <- 1 to 100){
>   val wms_doDF = spark.sqlContext.phoenixTableAsDataFrame(
> "DW.wms_do",
> Array("WAREHOUSE_NO", "DO_NO"),
> predicate = Some(
>   """
> |MOD_TIME >= TO_DATE('begin_day', 'yyyy-MM-dd')
> |and MOD_TIME < TO_DATE('end_day', 'yyyy-MM-dd')
>   """.stripMargin.replaceAll("begin_day", 
> "2017-10-01").replaceAll("end_day", "2017-10-25")),
> conf = configuration
>   )
>   wms_doDF.show(100)
> }
> {code}
> *Description:*
> The connection to zookeeper is not getting closed, which causes the maximum 
> number of client connections to be reached from a host (we have 
> maxClientCnxns as 500 in the zookeeper config).
> *Zookeeper connections:*
> [https://github.com/Hackeruncle/Images/blob/master/zookeeper%20connections.png]
> *Reference:*
> [https://community.hortonworks.com/questions/116832/hbase-zookeeper-connections-not-getting-closed.html]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Drop support for java 1.7 on the 1.2, 1.3 and 1.4 branches

2018-12-03 Thread James Taylor
+1. Good idea, Thomas.

On Mon, Dec 3, 2018 at 2:57 PM Thomas D'Silva 
wrote:

> I believe we will be maintaining the 4.x branches which support HBase 1.2,
> 1.3 and 1.4 for a while.
> Should we think about pulling out the connectors and queryserver into their
> own repo similar to
> what HBase did (see https://issues.apache.org/jira/browse/HBASE-20934).
> They could then have
> their own release schedule and Java support.
>
>
> On Sat, Dec 1, 2018 at 6:53 AM Pedro Boado  wrote:
>
> > Well, I don't count on a lot more 4.x releases - maybe I'm wrong-headed.
> > For master branch and cdh6 we'd be looking at spark 2.x
> >
> > Part of the success of a project is about version stability. Not a lot of
> > corporate projects can afford to keep upgrading to the latest versions -
> think
> > about it: you're in production, with a few thousand lines of code running
> > spark 1.6 ... And for upgrading to 4.14 you need to review all of this
> > spark code - and maybe recompile to scala 2.11, btw. It doesn't make
> sense.
> > Until now 4.x was pretty stable and in my opinion it should've never been
> > migrated to spark 2 and java 8. Minor versions should maintain a certain
> > stability in terms of dependencies.
> >
> > All these changes should've come with phoenix 5. But you're right, it
> needs
> > a sensible solution as 4.14.1 is already out and compiled with java8.
> >
> >
> >
> > On Fri, 30 Nov 2018, 23:28 Thomas D'Silva  >
> > > Spark 1.6 is really old and doesn't support the newer Datasource v2 api
> > > that we have been looking at integrating with.
> > > As Alex points out, you might end up having to revert a lot more
> > > commits in the future.
> > > Seems like the queryserver and phoenix-spark modules on the cdh branch
> > > would end up diverging a lot from the standard open source branch.
> > >
> > >
> > > On Fri, Nov 30, 2018 at 2:23 PM Alex Araujo 
> > wrote:
> > >
> > > > > Only a downgrade to spark 1.6 (
> > > > changes are only needed in a few IT, basically going back from
> Datasets
> > > to
> > > > Dataframes)  and going back to Avatica 1.10 ( involving reverting
> > > > PHOENIX-4755, PHOENIX-4750 and PHOENIX-4805 ).
> > > >
> > > > We're talking about the 4.x branches, right? Doesn't seem prudent to
> do
> > > it
> > > > there as down-streamers may already be relying on the newer versions.
> > > >
> > > > On Fri, Nov 30, 2018 at 4:18 PM Pedro Boado 
> > > wrote:
> > > >
> > > > > Thinking about a typical server installation in a corporate
> environment
> > > I'd
> > > > > keep everything compatible with the same JVM version.
> > > > >
> > > > > I've gone down the route for the cdh branch. Full JDK 7
> compatibility
> > > > > doesn't require changes in phoenix-core. Only a downgrade to spark
> > 1.6
> > > (
> > > > > changes are only needed in a few IT, basically going back from
> > Datasets
> > > > to
> > > > > Dataframes)  and going back to Avatica 1.10 ( involving reverting
> > > > > PHOENIX-4755, PHOENIX-4750 and PHOENIX-4805 ).
> > > > >
> > > > >
> > > > > On Fri, 30 Nov 2018, 18:57 Thomas D'Silva  > > > >  wrote:
> > > > >
> > > > > > We could allow individual submodules like the queryserver, or
> > > > > phoenix-spark
> > > > > > to be built with their own compiler configuration (1.8+).
> > > > > > This would allow these modules to use Java 1.8 features. I think
> > this
> > > > > would
> > > > > > be a good compromise given that they depend on
> > > > > > features that are provided by versions of spark and avatica that
> no
> > > > > longer
> > > > > > support Java 1.7.
> > > > > > We can still ensure phoenix-core supports Java 1.7. You would
> have
> > to
> > > > > skip
> > > > > > building modules that require Java 1.8, WDYT?
> > > > > >
> > > > > > On Thu, Nov 29, 2018 at 6:11 PM Jaanai Zhang <
> > cloud.pos...@gmail.com
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > I'd vote to keep using java7 on 4.x branches. If we upgrade to
> > > > > > > java8, it will impact users who want to upgrade to the latest 4.x
> > > > > > > branches: they must consider using java8 in their running
> > > > > > > environments, and maybe their libraries do not support java8, so
> > > > > > > they would have to give up on upgrading. So I think that dropping
> > > > > > > java7 support is not friendly for some users.
> > > > > > >
> > > > > > >
> > > > > > > 
> > > > > > >Jaanai Zhang
> > > > > > >Best regards!
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Pedro Boado  于2018年11月30日周五 上午6:13写道:
> > > > > > >
> > > > > > > > I'd vote to keep compiling 4.x branches with java7. It makes
> > > > > > > > sense as it's just a new minor release.
> > > > > > > >
> > > > > > > > It's pretty easy to revert back to spark 1.6, and the avatica
> > > > > > > > dependency could also be reverted to the previous version.
> > > > > > > >
> > > > > > > > On 29 Nov 2018 21:41, "Thomas D'Silva" <
> tdsi...@salesf
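
A sketch of Thomas D'Silva's per-module compiler suggestion from the thread
above: a submodule's pom can override the compiler level set by the parent
(module name and versions illustrative):

    <!-- e.g. phoenix-queryserver/pom.xml: build just this module with Java 8 -->
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.8</source>
        <target>1.8</target>
      </configuration>
    </plugin>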

[jira] [Updated] (PHOENIX-5022) For 1.x branches, phoenix-queryserver encounters build failure

2018-11-26 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-5022:
--
Priority: Blocker  (was: Major)

> For 1.x branches, phoenix-queryserver encounters build failure
> --
>
> Key: PHOENIX-5022
> URL: https://issues.apache.org/jira/browse/PHOENIX-5022
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: James Taylor
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: 4.15.0
>
>
> It's likely been masked by frequent failures/flappers in phoenix-core, but 
> we're (maybe) finally able to run all the tests successfully. Maybe a Java 8 
> dependency snuck in again?
>  
> From this build: [https://builds.apache.org/job/Phoenix-omid2/150/console]
>  
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.20:test (default-test) on 
> project phoenix-queryserver: Execution default-test of goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.20:test failed: 
> java.lang.UnsupportedClassVersionError: 
> org/apache/calcite/avatica/server/HttpServer$Builder : Unsupported 
> major.minor version 52.0 -> [Help 1]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
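
"Unsupported major.minor version 52.0" means a class compiled for Java 8 (class 
file version 52) was loaded on a Java 7 JVM (which tops out at version 51). One 
way to confirm which artifact shipped the Java 8 class (jar path hypothetical):
{code}
# prints "major version: 52" for a Java 8-compiled class
javap -verbose -cp avatica-server.jar \
    'org.apache.calcite.avatica.server.HttpServer$Builder' | grep 'major version'
{code}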


[jira] [Updated] (PHOENIX-5022) For 1.x branches, phoenix-queryserver encounters build failure

2018-11-26 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-5022:
--
Fix Version/s: 4.15.0

> For 1.x branches, phoenix-queryserver encounters build failure
> --
>
> Key: PHOENIX-5022
> URL: https://issues.apache.org/jira/browse/PHOENIX-5022
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: James Taylor
>Assignee: Josh Elser
>Priority: Major
> Fix For: 4.15.0
>
>
> It's likely been masked by frequent failures/flappers in phoenix-core, but 
> we're (maybe) finally able to run all the tests successfully. Maybe a Java 8 
> dependency snuck in again?
>  
> From this build: [https://builds.apache.org/job/Phoenix-omid2/150/console]
>  
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.20:test (default-test) on 
> project phoenix-queryserver: Execution default-test of goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.20:test failed: 
> java.lang.UnsupportedClassVersionError: 
> org/apache/calcite/avatica/server/HttpServer$Builder : Unsupported 
> major.minor version 52.0 -> [Help 1]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5028) Delay acquisition of port and increase Tephra test discovery timeouts

2018-11-17 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5028?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-5028.
---
   Resolution: Fixed
 Assignee: James Taylor
Fix Version/s: 4.15.0

> Delay acquisition of port and increase Tephra test discovery timeouts
> -
>
> Key: PHOENIX-5028
> URL: https://issues.apache.org/jira/browse/PHOENIX-5028
> Project: Phoenix
>  Issue Type: Test
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5029) Increase parallelism of tests to decrease test time

2018-11-17 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-5029.
---
   Resolution: Fixed
 Assignee: James Taylor
Fix Version/s: 4.15.0

> Increase parallelism of tests to decrease test time
> ---
>
> Key: PHOENIX-5029
> URL: https://issues.apache.org/jira/browse/PHOENIX-5029
> Project: Phoenix
>  Issue Type: Test
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5029) Increase parallelism of tests to decrease test time

2018-11-17 Thread James Taylor (JIRA)
James Taylor created PHOENIX-5029:
-

 Summary: Increase parallelism of tests to decrease test time
 Key: PHOENIX-5029
 URL: https://issues.apache.org/jira/browse/PHOENIX-5029
 Project: Phoenix
  Issue Type: Test
Reporter: James Taylor






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5028) Delay acquisition of port and increase Tephra test discovery timeouts

2018-11-17 Thread James Taylor (JIRA)
James Taylor created PHOENIX-5028:
-

 Summary: Delay acquisition of port and increase Tephra test 
discovery timeouts
 Key: PHOENIX-5028
 URL: https://issues.apache.org/jira/browse/PHOENIX-5028
 Project: Phoenix
  Issue Type: Test
Reporter: James Taylor






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5022) For 1.x branches, phoenix-queryserver encounters build failure

2018-11-15 Thread James Taylor (JIRA)
James Taylor created PHOENIX-5022:
-

 Summary: For 1.x branches, phoenix-queryserver encounters build 
failure
 Key: PHOENIX-5022
 URL: https://issues.apache.org/jira/browse/PHOENIX-5022
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Josh Elser


It's likely been masked by frequent failures/flappers in phoenix-core, but 
we're (maybe) finally able to run all the tests successfully. Maybe a Java 8 
dependency snuck in again?
 
From this build: [https://builds.apache.org/job/Phoenix-omid2/150/console]
 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.20:test (default-test) on 
project phoenix-queryserver: Execution default-test of goal 
org.apache.maven.plugins:maven-surefire-plugin:2.20:test failed: 
java.lang.UnsupportedClassVersionError: 
org/apache/calcite/avatica/server/HttpServer$Builder : Unsupported major.minor 
version 52.0 -> [Help 1]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4012) Disable distributed upsert select when table has global mutable secondary indexes

2018-11-11 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4012.
---
   Resolution: Fixed
 Assignee: Samarth Jain
Fix Version/s: 4.14.0

This has already been fixed.

> Disable distributed upsert select when table has global mutable secondary 
> indexes
> -
>
> Key: PHOENIX-4012
> URL: https://issues.apache.org/jira/browse/PHOENIX-4012
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Samarth Jain
>Priority: Major
> Fix For: 4.14.0
>
>
> It can be enabled back on when PHOENIX-3995 is fixed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (PHOENIX-4764) Cleanup metadata of child views for a base table that has been dropped

2018-11-10 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reopened PHOENIX-4764:
---

The commits for this are causing {{sqlline.py}} to not start with the following exception:
{code:java}
org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.TaskRegionObserver cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1822)
at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1683)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1585)
at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:469)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58549)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2349)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:188)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:168)

at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:144)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1266)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1575)
at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2896)
at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:1140)
at org.apache.phoenix.compile.CreateTableCompiler$1.execute(CreateTableCompiler.java:193)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:408)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:391)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:389)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:378)
at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1806)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.createOtherSystemTables(ConnectionQueryServicesImpl.java:3051)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.access$900(ConnectionQueryServicesImpl.java:279)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2912)
at org.apache.phoenix.query.ConnectionQueryServicesImpl$12.call(ConnectionQueryServicesImpl.java:2833)
at org.apache.phoenix.util.PhoenixContextExecutor.call(PhoenixContextExecutor.java:76)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.init(ConnectionQueryServicesImpl.java:2833)
at org.apache.phoenix.jdbc.PhoenixDriver.getConnectionQueryServices(PhoenixDriver.java:255)
at org.apache.phoenix.jdbc.PhoenixEmbeddedDriver.createConnection(PhoenixEmbeddedDriver.java:150)
at org.apache.phoenix.jdbc.PhoenixDriver.connect(PhoenixDriver.java:221)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:157)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:203)
at sqlline.Commands.connect(Commands.java:1064)
at sqlline.Commands.connect(Commands.java:996)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:38)
at sqlline.SqlLine.dispatch(SqlLine.java:809)
at sqlline.SqlLine.initArgs(SqlLine.java:588)
at sqlline.SqlLine.begin(SqlLine.java:661)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
Caused by: org.apache.hadoop.hbase.DoNotRetryIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.TaskRegionObserver cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1822)
at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1683)
at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1585)
at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:469)
at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:58549)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2349)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:123)
at
{code}

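For anyone who hits this before the server-side jar is updated: the sanity check named in the message can be bypassed from {{hbase-site.xml}} on the HBase master. This is only the workaround the error message itself suggests, not a fix; the underlying problem is that the server classpath lacks the new {{TaskRegionObserver}} coprocessor class, so deploying the matching phoenix-server jar is the real remedy.
{code:xml}
<!-- hbase-site.xml (HBase master): disables table descriptor sanity
     checks. Workaround only; prefer deploying the matching
     phoenix-server jar so the coprocessor class can be loaded. -->
<property>
  <name>hbase.table.sanity.checks</name>
  <value>false</value>
</property>
{code}
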
[jira] [Updated] (PHOENIX-4764) Cleanup metadata of child views for a base table that has been dropped

2018-11-10 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4764:
--
Priority: Blocker  (was: Major)

> Cleanup metadata of child views for a base table that has been dropped
> --
>
> Key: PHOENIX-4764
> URL: https://issues.apache.org/jira/browse/PHOENIX-4764
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Assignee: Kadir OZDEMIR
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0
>
> Attachments: 
> 0001-PHOENIX-4764-Revised-fix-based-on-the-review-comment.patch, 
> PHOENIX-4764.4.x-HBase-1.4.0001.patch, PHOENIX-4764.4.x-HBase-1.4.0002.patch, 
> PHOENIX-4764.4.x-HBase-1.4.0003.patch, PHOENIX-4764.4.x-HBase-1.4.0004.patch, 
> PHOENIX-4764.4.x-HBase-1.4.0005.patch, PHOENIX-4764.master.0001.patch, 
> PHOENIX-4764.master.0002.patch, PHOENIX-4764.master.0003.patch, 
> PHOENIX-4764.master.0003.patch, PHOENIX-4764.master.0004.patch, 
> PHOENIX-4764.master.0005.patch, PHOENIX-4764.master.0006.patch
>
>
> When we drop a base table, we no longer drop all the child view metadata. 
> Clean up the child view metadata during compaction. 
> If we try to recreate a base table that was previously dropped but whose 
> child view metadata wasn't cleaned up, throw an exception. Add a test for 
> this. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5013) Increase timeout for Tephra discovery service

2018-11-10 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-5013.
---
Resolution: Fixed

> Increase timeout for Tephra discovery service
> -
>
> Key: PHOENIX-5013
> URL: https://issues.apache.org/jira/browse/PHOENIX-5013
> Project: Phoenix
>  Issue Type: Test
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
>
> Sometimes Tephra tests flap because the discovery service cannot be found 
> within the default 10 seconds. We should increase the timeout to prevent this 
> flapping.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5012) Don't derive IndexToolIT from ParallelStatsEnabled

2018-11-10 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-5012.
---
Resolution: Fixed

> Don't derive IndexToolIT from ParallelStatsEnabled
> --
>
> Key: PHOENIX-5012
> URL: https://issues.apache.org/jira/browse/PHOENIX-5012
> Project: Phoenix
>  Issue Type: Test
>    Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
>
> We shouldn't derive tests from ParallelStatsEnabled that need their own mini 
> cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5013) Increase timeout for Tephra discovery service

2018-11-10 Thread James Taylor (JIRA)
James Taylor created PHOENIX-5013:
-

 Summary: Increase timeout for Tephra discovery service
 Key: PHOENIX-5013
 URL: https://issues.apache.org/jira/browse/PHOENIX-5013
 Project: Phoenix
  Issue Type: Test
Reporter: James Taylor
Assignee: James Taylor
 Fix For: 4.15.0


Sometimes Tephra tests flap because the discovery service cannot be found 
within the default 10 seconds. We should increase the timeout to prevent this 
flapping.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5012) Don't derive IndexToolIT from ParallelStatsEnabled

2018-11-10 Thread James Taylor (JIRA)
James Taylor created PHOENIX-5012:
-

 Summary: Don't derive IndexToolIT from ParallelStatsEnabled
 Key: PHOENIX-5012
 URL: https://issues.apache.org/jira/browse/PHOENIX-5012
 Project: Phoenix
  Issue Type: Test
Reporter: James Taylor
Assignee: James Taylor
 Fix For: 4.15.0


We shouldn't derive tests from ParallelStatsEnabled that need their own mini 
cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] Release of Apache Phoenix 4.14.1 RC0

2018-11-07 Thread James Taylor
+1

Ran various manual sanity tests around loading/querying/indexing data

On Tue, Nov 6, 2018 at 3:53 PM Thomas D'Silva 
wrote:

> +1
>
> Ran some manual tests with indexes, sequences.
> Regarding Artem's question: in PhoenixDatabaseMetaData.getDriverVersion() we
> only return the majorVersion.minorVersion;
> perhaps you can file a JIRA to also include the patch version.
>
> On Fri, Nov 2, 2018 at 11:47 AM, Josh Elser  wrote:
>
> > +1 (binding)
> >
> > Some observations:
> > * Some cruft (0.98, 1.2, and 1.3 seem to have this), not a blocker,
> > but good to ensure is cleaned up in the future:
> > ./bin/phoenix_utils.pyc
> >
> > Everything else looks good to me.
> >
> > - Josh
> >
> > On Tue, Oct 30, 2018 at 4:47 PM Vincent Poon 
> > wrote:
> > >
> > > Hello Everyone,
> > >
> > > This is a call for a vote on Apache Phoenix 4.14.1 RC0. This is a patch
> > > release of Phoenix 4.14 and is compatible with Apache HBase 0.98, 1.1,
> > 1.2,
> > > 1.3, 1.4. The release includes both a
> > > source-only release and a convenience binary release for each supported
> > > HBase version.
> > >
> > > This release has feature parity with supported HBase versions and
> > includes
> > > critical bug fixes for secondary indexes.
> > >
> > > The source tarball, including signatures, digests, etc can be found at:
> > > https://dist.apache.org/repos/dist/dev/phoenix/apache-
> > phoenix-4.14.1-HBase-0.98-rc0/src/
> > > https://dist.apache.org/repos/dist/dev/phoenix/apache-
> > phoenix-4.14.1-HBase-1.1-rc0/src/
> > > https://dist.apache.org/repos/dist/dev/phoenix/apache-
> > phoenix-4.14.1-HBase-1.2-rc0/src/
> > > https://dist.apache.org/repos/dist/dev/phoenix/apache-
> > phoenix-4.14.1-HBase-1.3-rc0/src/
> > > https://dist.apache.org/repos/dist/dev/phoenix/apache-
> > phoenix-4.14.1-HBase-1.4-rc0/src/
> > >
> > > The binary artifacts can be found at:
> > > https://dist.apache.org/repos/dist/dev/phoenix/apache-
> > phoenix-4.14.1-HBase-0.98-rc0/bin/
> > > https://dist.apache.org/repos/dist/dev/phoenix/apache-
> > phoenix-4.14.1-HBase-1.1-rc0/bin/
> > > https://dist.apache.org/repos/dist/dev/phoenix/apache-
> > phoenix-4.14.1-HBase-1.2-rc0/bin/
> > > https://dist.apache.org/repos/dist/dev/phoenix/apache-
> > phoenix-4.14.1-HBase-1.3-rc0/bin/
> > > https://dist.apache.org/repos/dist/dev/phoenix/apache-
> > phoenix-4.14.1-HBase-1.4-rc0/bin/
> > >
> > > For a complete list of changes, see:
> > > https://issues.apache.org/jira/secure/ReleaseNote.jspa?
> > projectId=12315120&version=12343452
> > >
> > > Release artifacts are signed with the following key:
> > > https://people.apache.org/keys/committer/vincentpoon.asc
> > > https://dist.apache.org/repos/dist/release/phoenix/KEYS
> > >
> > > The hash and tag to be voted upon:
> > > https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=
> > b27613ea409f81cd2d5a51e4ab14b1ca4d81f478
> > > https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;
> > h=refs/tags/v4.14.1-HBase-0.98-rc0
> > > https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=
> > ca3597d842d00fbb4cda5adb84fcb792ab06f3a2
> > > https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;
> > h=refs/tags/v4.14.1-HBase-1.1-rc0
> > > https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=
> > 537ad56e0683fedf252aef49dd82ca2851151385
> > > https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;
> > h=refs/tags/v4.14.1-HBase-1.2-rc0
> > > https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=
> > 1c06ebc1c54a736b3c390c58ca949450bf16e6a7
> > > https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;
> > h=refs/tags/v4.14.1-HBase-1.3-rc0
> > > https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=commit;h=
> > c5ca547923bedf28a2d1b96362adb7e5e40dbd0c
> > > https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=tag;
> > h=refs/tags/v4.14.1-HBase-1.4-rc0
> > >
> > >
> > > Vote will be open for at least 72 hours. Please vote:
> > >
> > > [ ] +1 approve
> > > [ ] +0 no opinion
> > > [ ] -1 disapprove (and reason why)
> > >
> > > Thanks,
> > > The Apache Phoenix Team
> >
>


[jira] [Created] (PHOENIX-5004) Fix org.jboss.netty.channel.ChannelException: Failed to bind to: flapper

2018-11-06 Thread James Taylor (JIRA)
James Taylor created PHOENIX-5004:
-

 Summary: Fix org.jboss.netty.channel.ChannelException: Failed to 
bind to: flapper
 Key: PHOENIX-5004
 URL: https://issues.apache.org/jira/browse/PHOENIX-5004
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor
Assignee: Yonatan Gottesman


Recent Jenkins runs on omid2 are flapping with the following exception:
{code:java}
com.google.common.util.concurrent.UncheckedExecutionException: org.jboss.netty.channel.ChannelException: Failed to bind to: 0.0.0.0/0.0.0.0:41016
at org.apache.phoenix.end2end.IndexToolIT.testSecondaryIndex(IndexToolIT.java:150)
{code}
See [https://builds.apache.org/job/Phoenix-omid2/141/]

Do you think we might be running into OOM errors given that the TSO needs so 
much memory? Maybe we need to increase our heap size when the tests are run? 
Other ideas?
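
One way to test the heap theory is to raise the heap given to the forked integration-test JVMs. A sketch only; the value is illustrative and not taken from this issue:
{code:xml}
<!-- pom.xml: larger heap for JVMs forked by the failsafe plugin -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <argLine>-Xmx3g</argLine>
  </configuration>
</plugin>
{code}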



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4782) Set transaction on OmidTransactionTable or throw if created called before transaction started

2018-11-06 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4782:
--
Issue Type: Bug  (was: Sub-task)
Parent: (was: PHOENIX-3623)

> Set transaction on OmidTransactionTable or throw if created called before 
> transaction started
> -
>
> Key: PHOENIX-4782
> URL: https://issues.apache.org/jira/browse/PHOENIX-4782
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: James Taylor
>Assignee: Ohad Shacham
>Priority: Minor
>
> When PhoenixTransactionContext.getTransactionalTable() or 
> getTransactionalTableWriter() are called, but the transaction has not yet 
> been started, we should either:
>  * throw an exception or
>  * set the transaction on all OmidTransactionTable instances that have been 
> opened before
> See FIXME in FlappingTransactionIT.testExternalTxContext() for an example
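
A sketch of the first option; apart from {{OmidTransactionTable}}, the member names here are hypothetical and not taken from the TAL source:
{code:java}
// Fail fast instead of handing out a table that is not bound
// to a started transaction.
public Table getTransactionalTable(Table hTable) {
    if (tx == null) {
        throw new IllegalStateException(
            "Transaction must be started before an OmidTransactionTable is created");
    }
    return new OmidTransactionTable(tx, hTable); // assumed constructor shape
}
{code}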



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4888) Create script that starts Omid transaction manager

2018-11-06 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4888.
---
Resolution: Fixed

Thanks, [~yonigo]. I've committed your patch to the omid2 branch.

Our website is in SVN and uses markdown in general. The file you'd want to 
change is {{transaction.md}}. Directions are here: 
http://phoenix.apache.org/building_website.html.

> Create script that starts Omid transaction manager
> --
>
> Key: PHOENIX-4888
> URL: https://issues.apache.org/jira/browse/PHOENIX-4888
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: James Taylor
>Assignee: Yonatan Gottesman
>Priority: Major
> Attachments: PHOENIX-4888.patch, PHOENIX-4888_v2.patch, 
> omid4888.patch, omid4888_v2.patch
>
>
> To support local testing of Omid support, we need to create a simple script 
> (similar to bin/tephra) that starts the transaction manager.
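
A minimal sketch of such a launcher, by analogy with {{bin/tephra}}; the main class, jar layout, and configuration mechanism are assumptions, not taken from the committed patch:
{code}
#!/usr/bin/env bash
# bin/omid-server (sketch): start the Omid transaction manager (TSO)
# for local testing. Assumes Omid jars under lib/ and that TSO
# configuration is picked up from an Omid config file on the classpath.
OMID_HOME="${OMID_HOME:-$(cd "$(dirname "$0")/.." && pwd)}"
CLASSPATH="$OMID_HOME/lib/*"
exec java -cp "$CLASSPATH" org.apache.omid.tso.TSOServer "$@"
{code}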



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4986) Support low latency version of Omid

2018-10-19 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4986.
---
   Resolution: Fixed
Fix Version/s: 4.15.0

> Support low latency version of Omid
> ---
>
> Key: PHOENIX-4986
> URL: https://issues.apache.org/jira/browse/PHOENIX-4986
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: James Taylor
>Assignee: Yonatan Gottesman
>Priority: Major
> Fix For: 4.15.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4987) Update Omid version to latest on master

2018-10-19 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4987.
---
   Resolution: Fixed
Fix Version/s: 4.15.0

> Update Omid version to latest on master
> ---
>
> Key: PHOENIX-4987
> URL: https://issues.apache.org/jira/browse/PHOENIX-4987
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4987) Update Omid version to latest on master

2018-10-19 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4987:
-

 Summary: Update Omid version to latest on master
 Key: PHOENIX-4987
 URL: https://issues.apache.org/jira/browse/PHOENIX-4987
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor
Assignee: James Taylor






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4986) Support low latency version of Omid

2018-10-19 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4986:
-

Assignee: Yonatan Gottesman

> Support low latency version of Omid
> ---
>
> Key: PHOENIX-4986
> URL: https://issues.apache.org/jira/browse/PHOENIX-4986
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: James Taylor
>Assignee: Yonatan Gottesman
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4986) Support low latency version of Omid

2018-10-19 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4986:
-

Assignee: (was: James Taylor)

> Support low latency version of Omid
> ---
>
> Key: PHOENIX-4986
> URL: https://issues.apache.org/jira/browse/PHOENIX-4986
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: James Taylor
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4986) Support low latency version of Omid

2018-10-19 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4986:
-

 Summary: Support low latency version of Omid
 Key: PHOENIX-4986
 URL: https://issues.apache.org/jira/browse/PHOENIX-4986
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor
Assignee: James Taylor






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4975) Fix failing unit tests for Omid due to shadow cells and no local indexes

2018-10-16 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4975.
---
   Resolution: Fixed
Fix Version/s: 4.15.0

> Fix failing unit tests for Omid due to shadow cells and no local indexes
> 
>
> Key: PHOENIX-4975
> URL: https://issues.apache.org/jira/browse/PHOENIX-4975
> Project: Phoenix
>  Issue Type: Test
>Affects Versions: 4.15.0
>    Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4975.patch, PHOENIX-4975_v2.patch
>
>
> Fix unit tests failing in OMID due to:
>  * Not supporting local indexes on OMID transactional tables
>  * Taking into account shadow cells for guideposts width.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4975) Fix failing unit tests for Omid due to shadow cells and no local indexes

2018-10-16 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4975:
--
Attachment: PHOENIX-4975_v2.patch

> Fix failing unit tests for Omid due to shadow cells and no local indexes
> 
>
> Key: PHOENIX-4975
> URL: https://issues.apache.org/jira/browse/PHOENIX-4975
> Project: Phoenix
>  Issue Type: Test
>Affects Versions: 4.15.0
>    Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4975.patch, PHOENIX-4975_v2.patch
>
>
> Fix unit tests failing in OMID due to:
>  * Not supporting local indexes on OMID transactional tables
>  * Taking into account shadow cells for guideposts width.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4975) Fix failing unit tests for Omid due to shadow cells and no local indexes

2018-10-16 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4975:
--
Attachment: PHOENIX-4975.patch

> Fix failing unit tests for Omid due to shadow cells and no local indexes
> 
>
> Key: PHOENIX-4975
> URL: https://issues.apache.org/jira/browse/PHOENIX-4975
> Project: Phoenix
>  Issue Type: Test
>Affects Versions: 4.15.0
>    Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Attachments: PHOENIX-4975.patch
>
>
> Fix unit tests failing in OMID due to:
>  * Not supporting local indexes on OMID transactional tables
>  * Taking into account shadow cells for guideposts width.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4975) Fix failing unit tests for Omid due to shadow cells and no local indexes

2018-10-16 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4975:
--
Issue Type: Test  (was: Bug)

> Fix failing unit tests for Omid due to shadow cells and no local indexes
> 
>
> Key: PHOENIX-4975
> URL: https://issues.apache.org/jira/browse/PHOENIX-4975
> Project: Phoenix
>  Issue Type: Test
>Affects Versions: 4.15.0
>    Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
>
> Fix unit tests failing in OMID due to:
>  * Not supporting local indexes on OMID transactional tables
>  * Taking into account shadow cells for guideposts width.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4975) Fix failing unit tests for Omid due to shadow cells and no local indexes

2018-10-16 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4975:
-

 Summary: Fix failing unit tests for Omid due to shadow cells and 
no local indexes
 Key: PHOENIX-4975
 URL: https://issues.apache.org/jira/browse/PHOENIX-4975
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.15.0
Reporter: James Taylor
Assignee: James Taylor


Fix unit tests failing in OMID due to:
 * Not supporting local indexes on OMID transactional tables
 * Taking into account shadow cells for guideposts width.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4874) psql doesn't support date/time with values smaller than milliseconds

2018-10-15 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4874:
--
Priority: Blocker  (was: Major)

> psql doesn't support date/time with values smaller than milliseconds
> 
>
> Key: PHOENIX-4874
> URL: https://issues.apache.org/jira/browse/PHOENIX-4874
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Rajeshbabu Chintaguntla
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4874.patch, PHOENIX-4874_v2.patch
>
>
> [https://phoenix.apache.org/tuning.html] lacks entries for 
> phoenix.query.timeFormat, phoenix.query.timestampFormat which are used by 
> psql to parse out TIME and TIMESTAMP data types.
> Add them.
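
A hedged sketch of how those properties would be supplied on the client; the property names come from this description, while the format patterns are illustrative and assume the parser accepts sub-millisecond patterns:
{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TimestampFormatExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Used when parsing TIME and TIMESTAMP text (patterns illustrative).
        props.setProperty("phoenix.query.timeFormat", "HH:mm:ss.SSS");
        props.setProperty("phoenix.query.timestampFormat",
                "yyyy-MM-dd HH:mm:ss.SSSSSSSSS");
        try (Connection conn =
                DriverManager.getConnection("jdbc:phoenix:localhost", props)) {
            // loads through this connection parse input with the formats above
        }
    }
}
{code}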



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (PHOENIX-4874) psql doesn't support date/time with values smaller than milliseconds

2018-10-15 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reopened PHOENIX-4874:
---

> psql doesn't support date/time with values smaller than milliseconds
> 
>
> Key: PHOENIX-4874
> URL: https://issues.apache.org/jira/browse/PHOENIX-4874
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Rajeshbabu Chintaguntla
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4874.patch, PHOENIX-4874_v2.patch
>
>
> [https://phoenix.apache.org/tuning.html] lacks entries for 
> phoenix.query.timeFormat, phoenix.query.timestampFormat which are used by 
> psql to parse out TIME and TIMESTAMP data types.
> Add them.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Java 1.8 dependencies in 4.x?

2018-10-05 Thread James Taylor
I've reopened PHOENIX-4825 where this came in and marked it as a blocker.

On Fri, Oct 5, 2018 at 10:21 AM Josh Elser  wrote:

> Agreed in theory. I'm surprised the maven-compiler-plugin doesn't fail
> on this in the same manner (maybe that's not something it can do..)
>
> On 10/5/18 1:45 AM, James Taylor wrote:
> > I'm not able to compile 4.x-HBase-1.3 in Eclipse because the pom
> specifies
> > Java 1.7 but there are Java 1.8 dependencies with java.util.Base64.
> >
> > Anyone else seeing this? IMHO, we should stick with 1.7 for 4.x if that's
> > what HBase supports.
> >
> > Thanks,
> > James
> >
>
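
For what it's worth, one conventional way to make the build itself catch JDK 8 API usage under a 1.7 target is an API signature check, since javac with -source 1.7 on a JDK 8 bootclasspath will still resolve java.util.Base64. A sketch, not something this thread settled on (plugin version illustrative):
{code:xml}
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>animal-sniffer-maven-plugin</artifactId>
  <version>1.16</version>
  <configuration>
    <signature>
      <!-- fail the build on any API not present in Java 1.7 -->
      <groupId>org.codehaus.mojo.signature</groupId>
      <artifactId>java17</artifactId>
      <version>1.0</version>
    </signature>
  </configuration>
  <executions>
    <execution>
      <goals><goal>check</goal></goals>
    </execution>
  </executions>
</plugin>
{code}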


[jira] [Reopened] (PHOENIX-4825) Replace usage of HBase Base64 implementation with java.util.Base64

2018-10-05 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reopened PHOENIX-4825:
---

> Replace usage of HBase Base64 implementation with java.util.Base64
> --
>
> Key: PHOENIX-4825
> URL: https://issues.apache.org/jira/browse/PHOENIX-4825
> Project: Phoenix
>  Issue Type: Task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4825.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4825) Replace usage of HBase Base64 implementation with java.util.Base64

2018-10-05 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4825:
--
Priority: Blocker  (was: Major)

> Replace usage of HBase Base64 implementation with java.util.Base64
> --
>
> Key: PHOENIX-4825
> URL: https://issues.apache.org/jira/browse/PHOENIX-4825
> Project: Phoenix
>  Issue Type: Task
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4825.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Java 1.8 dependencies in 4.x?

2018-10-04 Thread James Taylor
I'm not able to compile 4.x-HBase-1.3 in Eclipse because the pom specifies
Java 1.7 but there are Java 1.8 dependencies with java.util.Base64.

Anyone else seeing this? IMHO, we should stick with 1.7 for 4.x if that's
what HBase supports.

Thanks,
James


[jira] [Updated] (PHOENIX-4731) Make running transactional unit tests for a given provider optional

2018-10-04 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4731:
--
Attachment: PHOENIX-4731_v3-4.x-HBase-1.3.patch

> Make running transactional unit tests for a given provider optional
> ---
>
> Key: PHOENIX-4731
> URL: https://issues.apache.org/jira/browse/PHOENIX-4731
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4731-4.x-HBase-1.3.patch, 
> PHOENIX-4731_v2-4.x-HBase-1.3.patch, PHOENIX-4731_v3-4.x-HBase-1.3.patch
>
>
> Different users may not be relying on transactions, or may only be relying on 
> a single transaction provider. By default, we can run transactional tests 
> across all providers, but we should have a way of disabling the running of a 
> given provider.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4944) Ensure timeouts are configured low for RPCs to commit table

2018-10-04 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4944:
--
Issue Type: Bug  (was: Sub-task)
Parent: (was: PHOENIX-3623)

> Ensure timeouts are configured low for RPCs to commit table
> ---
>
> Key: PHOENIX-4944
> URL: https://issues.apache.org/jira/browse/PHOENIX-4944
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4731) Make running transactional unit tests for a given provider optional

2018-10-04 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4731:
--
Attachment: PHOENIX-4731_v2-4.x-HBase-1.3.patch

> Make running transactional unit tests for a given provider optional
> ---
>
> Key: PHOENIX-4731
> URL: https://issues.apache.org/jira/browse/PHOENIX-4731
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4731-4.x-HBase-1.3.patch, 
> PHOENIX-4731_v2-4.x-HBase-1.3.patch
>
>
> Different users may not be relying on transactions, or may only be relying on 
> a single transaction provider. By default, we can run transactional tests 
> across all providers, but we should have a way of disabling the running of a 
> given provider.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4731) Make running transactional unit tests for a given provider optional

2018-10-04 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4731:
--
Attachment: (was: PHOENIX-4731_v2-4.x-HBase-1.3.patch)

> Make running transactional unit tests for a given provider optional
> ---
>
> Key: PHOENIX-4731
> URL: https://issues.apache.org/jira/browse/PHOENIX-4731
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4731-4.x-HBase-1.3.patch, 
> PHOENIX-4731_v2-4.x-HBase-1.3.patch
>
>
> Different users may not be relying on transactions, or may only be relying on 
> a single transaction provider. By default, we can run transactional tests 
> across all providers, but we should have a way of disabling the running of a 
> given provider.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4731) Make running transactional unit tests for a given provider optional

2018-10-03 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4731:
--
Attachment: PHOENIX-4731_v2-4.x-HBase-1.3.patch

> Make running transactional unit tests for a given provider optional
> ---
>
> Key: PHOENIX-4731
> URL: https://issues.apache.org/jira/browse/PHOENIX-4731
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4731-4.x-HBase-1.3.patch, 
> PHOENIX-4731_v2-4.x-HBase-1.3.patch
>
>
> Different users may not be relying on transactions, or may only be relying on 
> a single transaction provider. By default, we can run transactional tests 
> across all providers, but we should have a way of disabling the running of a 
> given provider.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4943) Write Omid shadow cells when building global index synchronous

2018-10-03 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4943.
---
Resolution: Fixed

> Write Omid shadow cells when building global index synchronous
> --
>
> Key: PHOENIX-4943
> URL: https://issues.apache.org/jira/browse/PHOENIX-4943
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4944) Ensure timeouts are configured low for RPCs to commit table

2018-10-02 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4944:
-

 Summary: Ensure timeouts are configured low for RPCs to commit 
table
 Key: PHOENIX-4944
 URL: https://issues.apache.org/jira/browse/PHOENIX-4944
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor
Assignee: James Taylor






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4943) Write Omid shadow cells when building global index synchronous

2018-10-02 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4943:
-

 Summary: Write Omid shadow cells when building global index 
synchronous
 Key: PHOENIX-4943
 URL: https://issues.apache.org/jira/browse/PHOENIX-4943
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor
Assignee: James Taylor






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4731) Make running transactional unit tests for a given provider optional

2018-10-02 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4731:
--
Attachment: PHOENIX-4731-4.x-HBase-1.3.patch

> Make running transactional unit tests for a given provider optional
> ---
>
> Key: PHOENIX-4731
> URL: https://issues.apache.org/jira/browse/PHOENIX-4731
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4731-4.x-HBase-1.3.patch
>
>
> Different users may not be relying on transactions, or may only be relying on 
> a single transaction provider. By default, we can run transactional tests 
> across all providers, but we should have a way of disabling the running of a 
> given provider.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4942) Move MetaDataEndpointImplTest to integration test

2018-10-02 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4942:
-

 Summary: Move MetaDataEndpointImplTest to integration test
 Key: PHOENIX-4942
 URL: https://issues.apache.org/jira/browse/PHOENIX-4942
 Project: Phoenix
  Issue Type: Bug
Reporter: James Taylor
Assignee: Thomas D'Silva


Any test that spins up a mini cluster must be an integration test, not a unit 
test. Looks like MetaDataEndpointImplTest snuck in under src/test instead of src/it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4937) Pass whether or not a table is conflict free to TTable constructor

2018-10-02 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4937.
---
Resolution: Fixed

> Pass whether or not a table is conflict free to TTable constructor
> --
>
> Key: PHOENIX-4937
> URL: https://issues.apache.org/jira/browse/PHOENIX-4937
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
>
> We can prevent conflict detection logic from happening when we don't need it 
> by using the new TTable constructor in OmidTransactionalTable. This will help 
> performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4890) Prevent local indexes from being created on Omid transactional tables

2018-10-02 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4890?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4890.
---
   Resolution: Fixed
Fix Version/s: 4.15.0

> Prevent local indexes from being created on Omid transactional tables
> -
>
> Key: PHOENIX-4890
> URL: https://issues.apache.org/jira/browse/PHOENIX-4890
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
>
> Until PHOENIX-4760 is fixed, we should prevent local indexes from being 
> created on Omid transactional tables. We can do this easily through our 
> existing PhoenixTransactionProvider.isUnsupported(Feature) method.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4827) Modify TAL to use Table instead of HTableInterface

2018-10-01 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4827.
---
Resolution: Fixed

> Modify TAL to use Table instead of HTableInterface
> --
>
> Key: PHOENIX-4827
> URL: https://issues.apache.org/jira/browse/PHOENIX-4827
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
>
> Once both Tephra and Omid both use Table instead of HTableInterface, the TAL 
> methods should be updated as well. Support for this in Tephra was included in 
> the recent 0.15.0 release and for Omid in OMID-107.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4937) Pass whether or not a table is conflict free to TTable constructor

2018-09-29 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4937:
-

 Summary: Pass whether or not a table is conflict free to TTable 
constructor
 Key: PHOENIX-4937
 URL: https://issues.apache.org/jira/browse/PHOENIX-4937
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor


We can prevent conflict detection logic from happening when we don't need it by 
using the new TTable constructor in OmidTransactionalTable. This will help 
performance.
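
At the call site this might look like the sketch below; the two-argument {{TTable}} constructor is assumed from this description, and the conflict-free criterion shown is purely illustrative:
{code:java}
// Skip conflict detection (and the client-side write-set bookkeeping
// it requires) for tables declared conflict-free.
boolean conflictFree = isImmutableRows; // illustrative criterion
TTable tTable = new TTable(hTable, conflictFree); // assumed constructor
{code}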



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4937) Pass whether or not a table is conflict free to TTable constructor

2018-09-29 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4937:
-

Assignee: James Taylor

> Pass whether or not a table is conflict free to TTable constructor
> --
>
> Key: PHOENIX-4937
> URL: https://issues.apache.org/jira/browse/PHOENIX-4937
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
>
> We can prevent conflict detection logic from happening when we don't need it 
> by using the new TTable constructor in OmidTransactionalTable. This will help 
> performance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4920) Make Omid the default transaction provider

2018-09-24 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4920?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4920.
---
   Resolution: Fixed
Fix Version/s: 4.15.0

Fixed in omid2 branch

> Make Omid the default transaction provider
> --
>
> Key: PHOENIX-4920
> URL: https://issues.apache.org/jira/browse/PHOENIX-4920
> Project: Phoenix
>  Issue Type: Sub-task
>Affects Versions: 4.15.0
>    Reporter: James Taylor
>Assignee: James Taylor
>Priority: Major
> Fix For: 4.15.0
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4920) Make Omid the default transaction provider

2018-09-24 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4920:
-

 Summary: Make Omid the default transaction provider
 Key: PHOENIX-4920
 URL: https://issues.apache.org/jira/browse/PHOENIX-4920
 Project: Phoenix
  Issue Type: Sub-task
Affects Versions: 4.15.0
Reporter: James Taylor
Assignee: James Taylor






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4890) Prevent local indexes from being created on Omid transactional tables

2018-09-04 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4890:
-

 Summary: Prevent local indexes from being created on Omid 
transactional tables
 Key: PHOENIX-4890
 URL: https://issues.apache.org/jira/browse/PHOENIX-4890
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor
Assignee: James Taylor


Until PHOENIX-4760 is fixed, we should prevent local indexes from being created 
on Omid transactional tables. We can do this easily through our existing 
PhoenixTransactionProvider.isUnsupported(Feature) method.
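
A sketch of the check at index-creation time; only {{isUnsupported(Feature)}} is named by this issue, and the {{Feature}} constant and exception message are illustrative:
{code:java}
if (indexType == IndexType.LOCAL && table.isTransactional()
        && provider.isUnsupported(Feature.ALLOW_LOCAL_INDEX)) { // hypothetical constant
    throw new SQLException(
        "Local indexes are not supported by transaction provider " + provider);
}
{code}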



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4827) Modify TAL to use Table instead of HTableInterface

2018-09-04 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4827:
--
Description: Once both Tephra and Omid both use Table instead of 
HTableInterface, the TAL methods should be updated as well. Support for this in 
Tephra was included in the recent 0.15.0 release and for Omid in OMID-107.  
(was: Once Omid supports Table instead of HTableInterface (OMID-107), the TAL 
methods should be updated as well.)

> Modify TAL to use Table instead of HTableInterface
> --
>
> Key: PHOENIX-4827
> URL: https://issues.apache.org/jira/browse/PHOENIX-4827
> Project: Phoenix
>  Issue Type: Improvement
>        Reporter: James Taylor
>Priority: Major
>
> Once both Tephra and Omid both use Table instead of HTableInterface, the TAL 
> methods should be updated as well. Support for this in Tephra was included in 
> the recent 0.15.0 release and for Omid in OMID-107.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4827) Modify TAL to use Table instead of HTableInterface

2018-09-04 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4827:
--
Issue Type: Sub-task  (was: Improvement)
Parent: PHOENIX-3623

> Modify TAL to use Table instead of HTableInterface
> --
>
> Key: PHOENIX-4827
> URL: https://issues.apache.org/jira/browse/PHOENIX-4827
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: James Taylor
>Priority: Major
>
> Once both Tephra and Omid both use Table instead of HTableInterface, the TAL 
> methods should be updated as well. Support for this in Tephra was included in 
> the recent 0.15.0 release and for Omid in OMID-107.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4827) Modify TAL to use Table instead of HTableInterface

2018-09-04 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reassigned PHOENIX-4827:
-

Assignee: James Taylor

> Modify TAL to use Table instead of HTableInterface
> --
>
> Key: PHOENIX-4827
> URL: https://issues.apache.org/jira/browse/PHOENIX-4827
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Major
>
> Once both Tephra and Omid both use Table instead of HTableInterface, the TAL 
> methods should be updated as well. Support for this in Tephra was included in 
> the recent 0.15.0 release and for Omid in OMID-107.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4889) Document Omid transaction support

2018-09-04 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4889:
-

 Summary: Document Omid transaction support
 Key: PHOENIX-4889
 URL: https://issues.apache.org/jira/browse/PHOENIX-4889
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor


We'll need to update our transaction documentation to reflect what's required 
to use Omid. See [http://phoenix.apache.org/transactions.html] with the source 
in SVN at transaction.md (see 
http://phoenix.apache.org/building_website.html).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4888) Create script that starts Omid transaction manager

2018-09-04 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4888:
-

 Summary: Create script that starts Omid transaction manager
 Key: PHOENIX-4888
 URL: https://issues.apache.org/jira/browse/PHOENIX-4888
 Project: Phoenix
  Issue Type: Sub-task
Reporter: James Taylor


To support local testing of Omid support, we need to create a simple script 
(similar to bin/tephra) that starts the transaction manager.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] Suggestions for Phoenix from HBaseCon Asia notes

2018-08-28 Thread James Taylor
Glad to hear this was discussed at HBaseCon. The most common request I've
seen is to be able to write Phoenix-compatible data from other,
non-Phoenix services/projects, mainly because row-by-row updates (even when
batched) can be a bottleneck. This is not feasible by using low level
constructs because of all the features provided by Phoenix: secondary
indexes, composite row keys, encoded columns, storage formats, salting,
ascending/descending row keys, array support, etc. The most feasible way to
accomplish writes outside of Phoenix is to use UPSERT VALUES followed by
PhoenixRuntime#getUncommittedDataIterator to get the Cells that would be
committed (followed by rolling back the uncommitted data). This maintains
Phoenix's abstraction and minimizes any overhead (the cost of parsing is
negligible). You can control the frequency of how often the schema is
pulled over from the server through the UPDATE_CACHE_FREQUENCY declaration.
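
A minimal sketch of that write path; the table name and columns are made up, and the iterator's element type has varied across Phoenix releases, so treat the generics as illustrative:
{code:java}
// needs: java.sql.*, java.util.*, org.apache.hadoop.hbase.KeyValue,
//        org.apache.hadoop.hbase.util.Pair, org.apache.phoenix.util.PhoenixRuntime
try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
    conn.setAutoCommit(false); // keep the mutations client-side
    try (PreparedStatement ps = conn.prepareStatement(
            "UPSERT INTO MY_TABLE (ID, V) VALUES (?, ?)")) {
        ps.setInt(1, 1);
        ps.setString(2, "a");
        ps.executeUpdate();
    }
    // Pairs of (physical table name, cells a commit would write), with
    // index maintenance, salting, column encoding, etc. already applied.
    Iterator<Pair<byte[], List<KeyValue>>> it =
            PhoenixRuntime.getUncommittedDataIterator(conn);
    while (it.hasNext()) {
        Pair<byte[], List<KeyValue>> entry = it.next();
        // hand entry.getSecond() to the external writer for entry.getFirst()
    }
    conn.rollback(); // never actually commit through Phoenix
}
{code}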

I haven't seen much demand for bypassing Phoenix JDBC on the read side. If
you don't want to use Phoenix to query, what's the point in using it?

As far as Calcite/Phoenix, it'd be great to see this work picked up. I
don't think this solves the API problem, though. A good home for this
adapter would be Apache Drill IMHO. They're up to a new enough version of
Calcite (and off of their fork) so that this would be feasible and would
provide immediate benefits on the query side.

Thanks,
James

On Tue, Aug 28, 2018 at 1:38 PM Andrew Purtell  wrote:

> On Mon, Aug 27, 2018 at 11:03 AM Josh Elser  wrote:
>
> > 2. Can Phoenix be the de-facto schema for SQL on HBase?
> >
> > We've long asserted "if you have to ask how Phoenix serializes data, you
> > shouldn't be doing it" (a nod that you have to write lots of code). What if
> > we turn that on its head? Could we extract our PDataType serialization,
> > composite row-key, column encoding, etc into a minimal API that folks
> > with their own itches can use?
> >
> > With the growing integrations into Phoenix, we could embrace them by
> > providing an API to make what they're doing easier. In the same vein, we
> > cement ourselves as a cornerstone of doing it "correctly"
> >
>
> There have been discussions where I work where it seems this would be a
> great idea. If data types, row key constructors, and other key and data
> serialization concerns were a public API, these could be used by connectors
> to Spark or other systems to generate and consume Phoenix compatible data.
> It improves the integration story all around.
>
> Another thought for refactoring I've heard is exposing an API for
> generating query plans without needing the SQL parser. A public API  for
> programmatically building query plans could used by connectors to Spark or
> other systems when pushing down parts of a parallelized or federated query
> to Phoenix data sources, avoiding unnecessary hacking SQL language
> generation, string mangling, or (re)parsing overheads. This kind of
> describes Calcite's raison d'être. If Phoenix is not embedding Calcite as
> query planner, as it does not currently, it is independently useful to have
> a public API for programmatic query plan construction given the current
> implementation regardless. If Phoenix were to embed Calcite as query
> planner, you'd probably get a ton of re-use among internal and external
> users of the Calcite APIs. I'd think whatever option you might choose would
> be informed by the suitability (or not) of embedding Calcite as Phoenix's
> query planner, and how soon that might be expected to be feature complete.
> For what it's worth. Again this extends possibilities for integration.
>
>
> > 3. Better recommendations to users to not attempt certain queries.
> >
> > We definitively know that there are certain types of queries that
> > Phoenix cannot support well (compared to optimal Phoenix use-cases).
> > Users very commonly fall into such pitfalls on their own and this leaves
> > a bad taste in their mouth (thinking that the product "stinks").
> >
> > Can we do a better job of telling the user when and why it happened?
> > What would such a user-interaction model look like? Can we supplement
> > the "why" with instructions of what to do differently (even if in the
> > abstract)?
> >
> > 4. Phoenix-Calcite
> >
> > This was mentioned as a "nice to have". From what I understand, there
> > was nothing explicitly wrong with the implementation or approach, just
> > that it was a massive undertaking to continue with little immediate
> > gain. Would this be a boon for us to try to continue in some form? Are
> > there steps we can take that would help push us along the right path?
> >
> > Anyways, I'd love to hear everyone's thoughts. While the concerns were
> > raised at HBaseCon Asia, the suggestions that accompany them here are
> > largely mine ;). Feel free to break them out into their own threads if
> > you think that would be better (or say that you disagree with me --
> > that's cool too)!
> >
> > - Josh
> >
>
>

Re: [DISCUSS] EXPLAIN'ing what we do well (was Re: [DISCUSS] Suggestions for Phoenix from HBaseCon Asia notes)

2018-08-28 Thread James Taylor
Thomas' idea is a good one. From the EXPLAIN plan ResultSet, you can
directly get an estimate of the number of bytes that will be scanned. Take
a look at this [1] documentation. We need to implement PHOENIX-4735 too (so
that things are setup well out-of-the-box). We could have a kind of
guardrail config property that would define the max allowed bytes allowed
to be read and fail a query that goes over this limit. That would cover 80%
of the issues IMHO. Other guardrail config properties could cover other
corner cases.

[1] http://phoenix.apache.org/explainplan.html
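
A sketch of such a guardrail on the client; the EST_BYTES_READ column is the one described at the link above, and the table and threshold are made up:
{code:java}
// needs: java.sql.*
long maxBytes = 1_000_000_000L; // illustrative limit
try (Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery(
         "EXPLAIN SELECT * FROM MY_TABLE WHERE V = 'x'")) {
    if (rs.next()) {
        long estBytes = rs.getLong("EST_BYTES_READ"); // null when stats are absent
        if (!rs.wasNull() && estBytes > maxBytes) {
            throw new SQLException(
                "Rejected: estimated scan of " + estBytes + " bytes exceeds limit");
        }
    }
}
{code}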

On Mon, Aug 27, 2018 at 3:01 PM Josh Elser  wrote:

> On 8/27/18 5:03 PM, Thomas D'Silva wrote:
> >> 3. Better recommendations to users to not attempt certain queries.
> >>
> >> We definitively know that there are certain types of queries that
> Phoenix
> >> cannot support well (compared to optimal Phoenix use-cases). Users very
> >> commonly fall into such pitfalls on their own and this leaves a bad
> taste
> >> in their mouth (thinking that the product "stinks").
> >>
> >> Can we do a better job of telling the user when and why it happened?
> What
> >> would such a user-interaction model look like? Can we supplement the
> "why"
> >> with instructions of what to do differently (even if in the abstract)?
> >>
> > Providing relevant feedback before/after a query is run in general is
> very
> > hard to do. If stats are enabled we have an estimate of how many
> rows/bytes
> > will be scanned.
> > We could have an optional feature that prevent users from running queries
> > if the rows/bytes scanned are above a certain threshold. We should also
> > enhance our explain
> > plan documentationhttp://phoenix.apache.org/explainplan.html  with
> example
> > of queries so users know what kinds of queries Phoenix handles well.
>
> Breaking this out..
>
> Totally agree -- this is by no means "easy". I struggle very often
> trying to express just _why_ a query that someone is running in Phoenix
> doesn't run as well as they think it should.
>
> Centralizing on the EXPLAIN plan is good. Making sure it's
> consumable/thorough is probably the lowest hanging fruit. If we can give
> concrete examples to the kinds of explain plans a user might see, I
> think that might get use from users/admins.
>
> Throwing a random idea out there: with stats and the query plan, can we
> give a thumbs-up/thumbs-down? If we can, is that useful?
>


Re: Null array elements with joins

2018-08-13 Thread James Taylor
I commented on the JIRA you filed here: PHOENIX-4791. Best to keep
discussion there.
Thanks,
James

On Mon, Aug 13, 2018 at 11:08 AM, Gerald Sangudi 
wrote:

> Hello all,
>
> Any suggestions or pointers on the issue below?
>
> Projecting array elements works when not using joins, and does not work
> when we use hash joins. Is there an issue with the ProjectionCompiler for
> joins? I have not been able to isolate the specific cause, and would
> appreciate any pointers or suggestions.
>
> Thanks,
> Gerald
>
> On Tue, Jun 19, 2018 at 10:02 AM, Tulasi Paradarami <
> tulasi.krishn...@gmail.com> wrote:
>
>> Hi,
>>
>> I'm running few tests against Phoenix array and running into this bug
>> where array elements return null values when a join is involved. Is this a
>> known issue/limitation of arrays?
>>
>> create table array_test_1 (id integer not null primary key, arr
>> tinyint[5]);
>> upsert into array_test_1 values (1001, array[0, 0, 0, 0, 0]);
>> upsert into array_test_1 values (1002, array[0, 0, 0, 0, 1]);
>> upsert into array_test_1 values (1003, array[0, 0, 0, 1, 1]);
>> upsert into array_test_1 values (1004, array[0, 0, 1, 1, 1]);
>> upsert into array_test_1 values (1005, array[1, 1, 1, 1, 1]);
>>
>> create table test_table_1 (id integer not null primary key, val varchar);
>> upsert into test_table_1 values (1001, 'abc');
>> upsert into test_table_1 values (1002, 'def');
>> upsert into test_table_1 values (1003, 'ghi');
>>
>> 0: jdbc:phoenix:localhost> select t1.id, t2.val, t1.arr[1], t1.arr[2],
>> t1.arr[3] from array_test_1 as t1 join test_table_1 as t2 on t1.id =
>> t2.id;
>> +--------+---------+------------------------+------------------------+------------------------+
>> | T1.ID  | T2.VAL  | ARRAY_ELEM(T1.ARR, 1)  | ARRAY_ELEM(T1.ARR, 2)  | ARRAY_ELEM(T1.ARR, 3)  |
>> +--------+---------+------------------------+------------------------+------------------------+
>> | 1001   | abc     | null                   | null                   | null                   |
>> | 1002   | def     | null                   | null                   | null                   |
>> | 1003   | ghi     | null                   | null                   | null                   |
>> +--------+---------+------------------------+------------------------+------------------------+
>> 3 rows selected (0.056 seconds)
>>
>> However, directly selecting array elements from the array returns data
>> correctly.
>> 0: jdbc:phoenix:localhost> select t1.id, t1.arr[1], t1.arr[2], t1.arr[3]
>> from array_test_1 as t1;
>> +-------+---------------------+---------------------+---------------------+
>> |  ID   | ARRAY_ELEM(ARR, 1)  | ARRAY_ELEM(ARR, 2)  | ARRAY_ELEM(ARR, 3)  |
>> +-------+---------------------+---------------------+---------------------+
>> | 1001  | 0                   | 0                   | 0                   |
>> | 1002  | 0                   | 0                   | 0                   |
>> | 1003  | 0                   | 0                   | 0                   |
>> | 1004  | 0                   | 0                   | 1                   |
>> | 1005  | 1                   | 1                   | 1                   |
>> +-------+---------------------+---------------------+---------------------+
>> 5 rows selected (0.044 seconds)
>>
>>
>>
>


[jira] [Resolved] (PHOENIX-4484) Write directly to HBase when creating an index for transactional table

2018-08-09 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4484.
---
Resolution: Not A Problem

Closing as "Not a Problem" based on [~ohads]'s last comment.

> Write directly to HBase when creating an index for transactional table
> --
>
> Key: PHOENIX-4484
> URL: https://issues.apache.org/jira/browse/PHOENIX-4484
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ohad Shacham
>Assignee: Ohad Shacham
>Priority: Major
>
> Today, when creating an index table for a non-empty data table, the writes 
> are performed using the transaction API, which both consumes client-side 
> memory, for storing the write set, and performs conflict analysis upon 
> commit. This is redundant and can be replaced by a direct write to HBase. For 
> this reason, a new function should be added to the transaction abstraction 
> layer that writes directly to HBase in the Tephra case and adds shadow cells 
> with the fence id in the Omid case. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-3860) Implement TAL functionality for Omid

2018-08-09 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-3860.
---
Resolution: Fixed

The TAL work looks to be complete (PHOENIX-4773 was the last of it), so closing 
this.

> Implement TAL functionality for Omid
> 
>
> Key: PHOENIX-3860
> URL: https://issues.apache.org/jira/browse/PHOENIX-3860
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Ohad Shacham
>Assignee: Ohad Shacham
>Priority: Major
>
> Implement TAL functionality for Omid in order to be able to use Omid as 
> Phoenix's transaction processing engine. 
> To support the integration jira [OMID-82] was opened that encapsulates all 
> Omid related development for Phoenix.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4783) Fix Timestamp not allowed in transactional user operations error

2018-08-09 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4783.
---
Resolution: Duplicate

Duplicate of OMID-106

> Fix Timestamp not allowed in transactional user operations error
> 
>
> Key: PHOENIX-4783
> URL: https://issues.apache.org/jira/browse/PHOENIX-4783
> Project: Phoenix
>  Issue Type: Sub-task
>        Reporter: James Taylor
>Assignee: Ohad Shacham
>Priority: Major
>
> MutableRollbackIT.testCheckpointAndRollback() is failing with an error of 
> "Timestamp not allowed in transactional user operations" from Omid. We should 
> fix this in Omid or workaround it in Phoenix.
> {code:java}
> [ERROR] testCheckpointAndRollback[MutableRollbackIT_localIndex=false](org.apache.phoenix.end2end.index.txn.MutableRollbackIT) Time elapsed: 8.822 s <<< ERROR!
> org.apache.phoenix.execute.CommitException: java.lang.IllegalArgumentException: Timestamp not allowed in transactional user operations
> at org.apache.phoenix.end2end.index.txn.MutableRollbackIT.testCheckpointAndRollback(MutableRollbackIT.java:475)
> Caused by: java.lang.IllegalArgumentException: Timestamp not allowed in transactional user operations
> at org.apache.phoenix.end2end.index.txn.MutableRollbackIT.testCheckpointAndRollback(MutableRollbackIT.java:475)
>  {code}
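
Illustrative only: the failure mode Omid guards against is a client supplying 
an explicit cell timestamp on a transactional table (row key, family, and 
values below are hypothetical):

{code:java}
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public final class ExplicitTimestampExample {
    public static Put offendingPut() {
        // Omid expects the transaction manager to assign cell timestamps; a
        // user operation that sets one explicitly is rejected with
        // "Timestamp not allowed in transactional user operations".
        Put put = new Put(Bytes.toBytes("row1"));
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"),
                12345L /* explicit timestamp */, Bytes.toBytes("v"));
        return put;
    }
}
{code}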



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-4827) Modify TAL to use Table instead of HTableInterface

2018-07-30 Thread James Taylor (JIRA)
James Taylor created PHOENIX-4827:
-

 Summary: Modify TAL to use Table instead of HTableInterface
 Key: PHOENIX-4827
 URL: https://issues.apache.org/jira/browse/PHOENIX-4827
 Project: Phoenix
  Issue Type: Improvement
Reporter: James Taylor


Once Omid supports Table instead of HTableInterface (OMID-107), the TAL methods 
should be updated as well.
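
Illustrative only: the shape of the change implied here (the TAL method name 
below is hypothetical):

{code:java}
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Table;

// Hypothetical TAL method, before and after the proposed change:
interface TalBefore {
    void writeShadowCells(HTableInterface table); // deprecated HBase type
}

interface TalAfter {
    void writeShadowCells(Table table); // modern HBase client type
}
{code}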



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4820) Optimize OrderBy for ClientAggregatePlan

2018-07-29 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4820:
--
Fix Version/s: 4.15.0

> Optimize OrderBy for ClientAggregatePlan
> 
>
> Key: PHOENIX-4820
> URL: https://issues.apache.org/jira/browse/PHOENIX-4820
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.0
>Reporter: chenglei
>Assignee: chenglei
>Priority: Major
> Fix For: 4.15.0
>
> Attachments: PHOENIX-4820_v1.patch
>
>
> Given a table
> {code}
>   create table test ( 
>pk1 varchar not null , 
>pk2 varchar not null, 
>pk3 varchar not null,
> v1 varchar, 
> v2 varchar, 
> CONSTRAINT TEST_PK PRIMARY KEY ( 
>   pk1,
>   pk2,
>   pk3 ))
> {code}
> for following sql :
> {code}
>  select a.ak3 
>  from (select substr(pk1,1,1) ak1,substr(pk2,1,1) ak2,substr(pk3,1,1) 
> ak3,substr(v1,1,1) av1,substr(v2,1,1) av2 from test order by pk2,pk3 limit 
> 10) a  group by a.ak3,a.av1 order by a.ak3,a.av1
> {code}
> Intuitively, the above OrderBy statement {{order by a.ak3,a.av1}} should be 
> compiled out because it matches the group by statement, but in fact it is not.
> The problem is caused by {{QueryCompiler.compileSingleQuery}} and 
> {{QueryCompiler.compileSingleFlatQuery}}. In the 
> {{QueryCompiler.compileSingleQuery}} method, because the inner query has an 
> order by, the local variable {{isInRowKeyOrder}} is false at line 520:
> {code}
> 519context.setCurrentTable(tableRef);
> 520boolean isInRowKeyOrder = innerPlan.getGroupBy() == 
> GroupBy.EMPTY_GROUP_BY && innerPlan.getOrderBy() == OrderBy.EMPTY_ORDER_BY;
> {code}
> In {{QueryCompiler.compileSingleFlatQuery}}, when the 
> {{OrderByCompiler.compile}} method is invoked, the last parameter 
> {{isInRowKeyOrder}} is therefore false:
> {code}
> 562OrderBy orderBy = OrderByCompiler.compile(context, select, 
> groupBy, limit, offset, projector,
> 563groupBy == GroupBy.EMPTY_GROUP_BY ? 
> innerPlanTupleProjector : null, isInRowKeyOrder);
> {code}
> So at the following line 156 of {{OrderByCompiler.compile}}, even though 
> {{tracker.isOrderPreserving}} is true, the OrderBy statement cannot be 
> compiled out:
> {code}
> 156   if (isInRowKeyOrder && tracker.isOrderPreserving()) {
> {code}  
> In my opinion, when there is a GroupBy, we do not need to consider 
> {{isInRowKeyOrder}} when we call the {{OrderByCompiler.compile}} method at 
> the following line 563 of {{QueryCompiler.compileSingleFlatQuery}}, just as 
> is already done for the preceding {{tupleProjector}} parameter.
> {code}
>  562OrderBy orderBy = OrderByCompiler.compile(context, select, 
> groupBy, limit, offset, projector,
>  563groupBy == GroupBy.EMPTY_GROUP_BY ? 
> innerPlanTupleProjector : null, isInRowKeyOrder);
> {code}
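
A quick way to verify whether the ORDER BY has been compiled out (illustrative; 
the exact plan text varies by version) is to EXPLAIN the query and look for a 
client-side sort step:

{code}
-- Using the table and query from the description above:
EXPLAIN
SELECT a.ak3
FROM (SELECT SUBSTR(pk1,1,1) ak1, SUBSTR(pk2,1,1) ak2, SUBSTR(pk3,1,1) ak3,
             SUBSTR(v1,1,1) av1, SUBSTR(v2,1,1) av2
      FROM test ORDER BY pk2, pk3 LIMIT 10) a
GROUP BY a.ak3, a.av1
ORDER BY a.ak3, a.av1;
-- If the optimization applies, the plan should not contain a
-- "CLIENT SORTED BY" step for the outer ORDER BY.
{code}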



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4819) Add Archive Page to PhoenixCon Website

2018-07-24 Thread James Taylor (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16554580#comment-16554580
 ] 

James Taylor commented on PHOENIX-4819:
---

Very nice, [~elserj]! It would be great to add links to each PhoenixCon from 
this general resources/presentations page too: 
http://phoenix.apache.org/resources.html

> Add Archive Page to PhoenixCon Website
> --
>
> Key: PHOENIX-4819
> URL: https://issues.apache.org/jira/browse/PHOENIX-4819
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Clay B.
>Assignee: Clay B.
>Priority: Minor
> Attachments: 0001-PhoenixCon-Archive.patch
>
>
> I have put together a PhoenixCon archives page. Could it be posted to the 
> website?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4797) file not found or file exist exception when creating a global index with the -snapshot option

2018-07-19 Thread James Taylor (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16549346#comment-16549346
 ] 

James Taylor commented on PHOENIX-4797:
---

[~karanmehta93] - please commit this to all branches as we’ll do a 4.14.1 
release and this will be a good one to have in that.

> file not found or file exist exception when creating a global index with the 
> -snapshot option
> -
>
> Key: PHOENIX-4797
> URL: https://issues.apache.org/jira/browse/PHOENIX-4797
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.2-cdh5.11.2
>Reporter: sailingYang
>Priority: Major
>
> When using IndexTool with the -snapshot option, if the MapReduce job creates 
> multiple mappers, the job fails with HDFS "file not found" or "file already 
> exists" exceptions, because every mapper uses the same snapshot restore 
> working directory.
> {code:java}
> Error: java.io.IOException: java.util.concurrent.ExecutionException: 
> java.io.IOException: The specified region already exists on disk: 
> hdfs://m12v1.mlamp.cn:8020/tmp/index-snapshot-dir/restore-dir/e738c85b-2394-43fc-b9de-b8280bc329ca/data/default/SCOPA.CETUS_EVENT_ZY_SCOPA_31_0516_TRAIN_EVENT/2ab2c1d73d2e31bb5a5e2b394da564f8
> at 
> org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:186)
> at 
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.cloneHdfsRegions(RestoreSnapshotHelper.java:578)
> at 
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.restoreHdfsRegions(RestoreSnapshotHelper.java:249)
> at 
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.restoreHdfsRegions(RestoreSnapshotHelper.java:171)
> at 
> org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper.copySnapshotForScanner(RestoreSnapshotHelper.java:814)
> at 
> org.apache.phoenix.iterate.TableSnapshotResultIterator.init(TableSnapshotResultIterator.java:77)
> at 
> org.apache.phoenix.iterate.TableSnapshotResultIterator.(TableSnapshotResultIterator.java:73)
> at 
> org.apache.phoenix.mapreduce.PhoenixRecordReader.initialize(PhoenixRecordReader.java:126)
> at 
> org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.initialize(MapTask.java:548)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:786)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Caused by: java.util.concurrent.ExecutionException: java.io.IOException: The 
> specified region already exists on disk: 
> hdfs://m12v1.mlamp.cn:8020/tmp/index-snapshot-dir/restore-dir/e738c85b-2394-43fc-b9de-b8280bc329ca/data/default/SCOPA.CETUS_EVENT_ZY_SCOPA_31_0516_TRAIN_EVENT/2ab2c1d73d2e31bb5a5e2b394da564f8
> at java.util.concurrent.FutureTask.report(FutureTask.java:122)
> at java.util.concurrent.FutureTask.get(FutureTask.java:188)
> at 
> org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegions(ModifyRegionUtils.java:180)
> ... 15 more
> Caused by: java.io.IOException: The specified region already exists on disk: 
> hdfs://m12v1.mlamp.cn:8020/tmp/index-snapshot-dir/restore-dir/e738c85b-2394-43fc-b9de-b8280bc329ca/data/default/SCOPA.CETUS_EVENT_ZY_SCOPA_31_0516_TRAIN_EVENT/2ab2c1d73d2e31bb5a5e2b394da564f8
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.createRegionOnFileSystem(HRegionFileSystem.java:877)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.createHRegion(HRegion.java:6252)
> at 
> org.apache.hadoop.hbase.util.ModifyRegionUtils.createRegion(ModifyRegionUtils.java:205)
> at 
> org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:173)
> at 
> org.apache.hadoop.hbase.util.ModifyRegionUtils$1.call(ModifyRegionUtils.java:170)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:745)
> 2018-06-28 15:01:55 70909 [main] INFO org.apache.hadoop.mapreduce.Job - Task 
> Id : attempt_1530004808977_0011_m_01_0, Status : FAILED
> Error: java.io.IO
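
A minimal sketch of the implied fix, giving each mapper its own restore 
directory (the path and wiring here are illustrative, not the actual Phoenix 
patch):

{code:java}
import java.io.IOException;
import java.util.UUID;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.snapshot.RestoreSnapshotHelper;

public final class PerMapperRestoreDir {
    public static Path restoreSnapshot(Configuration conf, FileSystem fs,
            Path rootDir, String snapshotName) throws IOException {
        // A unique per-attempt directory prevents two mappers from restoring
        // the snapshot into the same path and colliding on region creation.
        Path restoreDir = new Path("/tmp/index-snapshot-dir/restore-dir",
                UUID.randomUUID().toString());
        RestoreSnapshotHelper.copySnapshotForScanner(
                conf, fs, rootDir, restoreDir, snapshotName);
        return restoreDir;
    }
}
{code}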

[jira] [Commented] (PHOENIX-4816) Selecting dynamic columns throws InvalidQualifierBytesException

2018-07-17 Thread James Taylor (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16547376#comment-16547376
 ] 

James Taylor commented on PHOENIX-4816:
---

See https://blogs.apache.org/phoenix/entry/column-mapping-and-immutable-data

> Selecting dynamic columns throws InvalidQualifierBytesException
> 
>
> Key: PHOENIX-4816
> URL: https://issues.apache.org/jira/browse/PHOENIX-4816
> Project: Phoenix
>  Issue Type: Bug
>Reporter: cherish peng
>Priority: Blocker
>  Labels: dynamic-schema
> Fix For: 4.10.1
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4816) Selecting dynamic columns throws InvalidQualifierBytesException

2018-07-17 Thread James Taylor (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16547361#comment-16547361
 ] 

James Taylor commented on PHOENIX-4816:
---

You’d have to do this instead:
{code}
CREATE IMMUTABLE TABLE ... (...) IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN, 
COLUMN_ENCODED_BYTES=NONE
{code}

But you’d still be better off using views instead of dynamic columns, as you 
could then use the default immutable storage format, which will give you 3-5x 
better read and write performance.
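
For concreteness, a hypothetical table using those options (table and column 
names are illustrative only):

{code}
CREATE IMMUTABLE TABLE metrics (
    host VARCHAR NOT NULL PRIMARY KEY,
    cpu DOUBLE,
    mem DOUBLE)
    IMMUTABLE_STORAGE_SCHEME=ONE_CELL_PER_COLUMN,
    COLUMN_ENCODED_BYTES=NONE;
{code}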

> Selecting dynamic columns throws InvalidQualifierBytesException
> 
>
> Key: PHOENIX-4816
> URL: https://issues.apache.org/jira/browse/PHOENIX-4816
> Project: Phoenix
>  Issue Type: Bug
>Reporter: cherish peng
>Priority: Blocker
>  Labels: dynamic-schema
> Fix For: 4.10.1
>
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (PHOENIX-4816) Selecting dynamic columns throws InvalidQualifierBytesException

2018-07-17 Thread James Taylor (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16547070#comment-16547070
 ] 

James Taylor edited comment on PHOENIX-4816 at 7/17/18 8:32 PM:


By default, Phoenix will use 2 bytes to encode the column qualifier for each 
column you have in your table (see 
[http://phoenix.apache.org/columnencoding.html]). Phoenix can then optimize the 
lookup of the HBase Cell by using the column qualifier as an index into the 
List of Cells during filtering by HBase. In theory, when you use dynamic 
columns, this optimization should be turned off, since you're essentially 
supplying column qualifier names directly when you use this feature. Looks like 
there's a bug here, though.

A workaround would be to turn off column encoding for your table (add 
COLUMN_ENCODED_BYTES=NONE to your DDL statement). Another couple of things to 
consider for your use case:
 * Use our VIEW mechanism instead of dynamic columns so that Phoenix tracks the 
columns for each view (you can create these views on-the-fly: they are very low 
overhead); see the sketch after this list.
 * Declare your base table as IMMUTABLE (i.e. CREATE IMMUTABLE TABLE ...) if 
possible (i.e. rows are never modified in-place, but only inserted as new rows) 
as you'll have numerous performance benefits from doing that.
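
A hedged sketch of the VIEW alternative (all names here are illustrative):

{code}
-- Instead of supplying dynamic columns at query time, declare them once in a
-- view over the base table (view, table, and column names are hypothetical):
CREATE VIEW event_view (extra_col VARCHAR, another_col BIGINT)
    AS SELECT * FROM base_table WHERE event_type = 'CLICK';

SELECT extra_col, another_col FROM event_view;
{code}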


was (Author: jamestaylor):
By default, Phoenix will use 2 bytes to encode the column qualifier for each 
column you have in your table (see 
http://phoenix.apache.org/columnencoding.html). Phoenix can then optimize the 
lookup of the HBase Cell by using the column qualifier as an index into the 
List of Cells during filtering by HBase. In theory, when you use dynamic 
columns, this optimization should be turned off, since you're essentially 
supplying column qualifier names directly when you use this feature. Looks like 
there's a bug here, though.

A workaround would be to turn off column encoding for your table (add 
COLUMN_ENCODED_BYTES=NONE to your DDL statement). Another couple of things to 
consider for your use case:
 * Use our VIEW mechanism instead of dynamic columns so that Phoenix tracks the 
columns for each view (you can create these views on-the-fly: they are very low 
overhead).
 * Declare your base table as IMMUTABLE (i.e. CREATE IMMUTABLE TABLE ...) as 
you'll have numerous performance benefits from doing that.

> Selecting dynamic columns throws InvalidQualifierBytesException
> 
>
> Key: PHOENIX-4816
> URL: https://issues.apache.org/jira/browse/PHOENIX-4816
> Project: Phoenix
>  Issue Type: Bug
>Reporter: cherish peng
>Priority: Blocker
>  Labels: dynamic-schema
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4816) Selecting dynamic columns throws InvalidQualifierBytesException

2018-07-17 Thread James Taylor (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16547070#comment-16547070
 ] 

James Taylor commented on PHOENIX-4816:
---

By default, Phoenix will use 2 bytes to encode the column qualifier for each 
column you have in your table (see 
http://phoenix.apache.org/columnencoding.html). Phoenix can then optimize the 
lookup of the HBase Cell by using the column qualifier as an index into the 
List of Cells during filtering by HBase. In theory, when you use dynamic 
columns, this optimization should be turned off, since you're essentially 
supplying column qualifier names directly when you use this feature. Looks like 
there's a bug here, though.

A workaround would be to turn off column encoding for your table (add 
COLUMN_ENCODED_BYTES=NONE to your DDL statement). Another couple of things to 
consider for your use case:
 * Use our VIEW mechanism instead of dynamic columns so that Phoenix tracks the 
columns for each view (you can create these views on-the-fly: they are very low 
overhead).
 * Declare your base table as IMMUTABLE (i.e. CREATE IMMUTABLE TABLE ...) as 
you'll have numerous performance benefits from doing that.

> Selecting dynamic columns throws InvalidQualifierBytesException
> 
>
> Key: PHOENIX-4816
> URL: https://issues.apache.org/jira/browse/PHOENIX-4816
> Project: Phoenix
>  Issue Type: Bug
>Reporter: cherish peng
>Priority: Blocker
>  Labels: dynamic-schema
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-4790) Simplify check for client side delete

2018-07-12 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor resolved PHOENIX-4790.
---
Resolution: Won't Fix

Turns out the check can't be simplified, so I've reverted the related commits. 
Sorry for the noise, [~tdsilva].

> Simplify check for client side delete
> -
>
> Key: PHOENIX-4790
> URL: https://issues.apache.org/jira/browse/PHOENIX-4790
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Minor
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4790.patch, PHOENIX-4790_addendum1.patch
>
>
> We don't need to check every query plan for the existence of a where clause 
> to determine if we can do a client-side delete. Instead, we can simply check 
> if any non PK columns are projected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Reopened] (PHOENIX-4790) Simplify check for client side delete

2018-07-12 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor reopened PHOENIX-4790:
---

> Simplify check for client side delete
> -
>
> Key: PHOENIX-4790
> URL: https://issues.apache.org/jira/browse/PHOENIX-4790
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Minor
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4790.patch, PHOENIX-4790_addendum1.patch
>
>
> We don't need to check every query plan for the existence of a where clause 
> to determine if we can do a client-side delete. Instead, we can simply check 
> if any non PK columns are projected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: pre-commit Jenkins build isn't running - is this related to master moving to HBase 1.4?

2018-07-12 Thread James Taylor
Awesome, Thomas! Thanks for following up on this.

On Thu, Jul 12, 2018 at 10:01 AM Thomas D'Silva 
wrote:

> I found a similar issue posted here
> https://issues.apache.org/jira/browse/INFRA-16349.
> I was able to get the Precommit build to post to jira by changing the
> jiracli cmd location to /home/jenkins/tools/jiracli/latest/jira.sh
> If you submit a patch, the Precommit build should post the results to the
> JIRA now.
>
>
> On Thu, Mar 8, 2018 at 4:09 PM, Thomas D'Silva 
> wrote:
>
> > I think the exception is because we are using JDK7 according to this
> issue
> > https://issues.apache.org/jira/browse/HIVE-18461.
> > I tried switching the precommit build to use JDK8, but it then fails with a
> > different error:
> >
> > This message is automatically generated.' --issue PHOENIX-4576
> > Unable to log in to server:
> https://issues.apache.org/jira/rpc/soap/jirasoapservice-v2 with user:
> hadoopqa.
> >  Cause: (404)404
> > + /home/jenkins/tools/jiracli/latest/jira -s
> https://issues.apache.org/jira -a logout -u hadoopqa -p 4hadoopqa
> > Unable to log in to server:
> https://issues.apache.org/jira/rpc/soap/jirasoapservice-v2 with user:
> hadoopqa.
> >  Cause: (404)404
> > + cleanupAndExit 130
> >
> >
> >
> > On Fri, Mar 2, 2018 at 12:37 PM, Thomas D'Silva 
> > wrote:
> >
> >> The precommit build is running but not posting the results to the jira.
> >> It looks like hadoopqa is unable to login.
> >> Does anyone know how to fix this?
> >>
> >> export USER=hudson
> >>
> >> + USER=hudson
> >>
> >> + /home/jenkins/tools/jiracli/latest/jira -s
> >> https://issues.apache.org/jira -a addcomment -u hadoopqa -p 4hadoopqa
> >> --comment '{color:red}-1 overall{color}.  Here are the results of
> testing
> >> the latest attachment
> >>
> >>   http://issues.apache.org/jira/secure/attachment/12912665/PHO
> >> ENIX-4633-v2.patch
> >>
> >>   against master branch at commit 1a226ed3e6ac4a56658acbac4da0d5
> >> 18af343ee2.
> >>
> >>   ATTACHMENT ID: 12912665
> >>
> >>
> >>
> >> This message is automatically generated.' --issue PHOENIX-4633
> >>
> >> Unable to log in to server: https://issues.apache.org/jira
> >> /rpc/soap/jirasoapservice-v2 with user: hadoopqa.
> >>
> >>  Cause: ; nested exception is:
> >>
> >> javax.net.ssl.SSLException: Received fatal alert: protocol_version
> >>
> >> + /home/jenkins/tools/jiracli/latest/jira -s
> >> https://issues.apache.org/jira -a logout -u hadoopqa -p 4hadoopqa
> >>
> >> Unable to log in to server: https://issues.apache.org/jira
> >> /rpc/soap/jirasoapservice-v2 with user: hadoopqa.
> >>
> >>  Cause: ; nested exception is:
> >>
> >> javax.net.ssl.SSLException: Received fatal alert: protocol_version
> >>
> >> + cleanupAndExit 129
> >>
> >> + local result=129
> >>
> >> + [[ true == \t\r\u\e ]]
> >>
> >> + '[' -e
> /home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/patchprocess
> >> ']'
> >>
> >> + mv
> /home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/patchprocess
> >> /home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build
> >>
> >> mv:
> '/home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/patchprocess'
> >> and
> '/home/jenkins/jenkins-slave/workspace/PreCommit-PHOENIX-Build/patchprocess'
> >> are the same file
> >>
> >> + echo ''
> >>
> >>
> >> On Thu, Jan 18, 2018 at 4:16 PM, James Taylor 
> >> wrote:
> >>
> >>> The pre-commit Jenkins job doesn't appear to be running. Do you think
> >>> this
> >>> might be related to moving master to HBase 1.4? Another potential
> culprit
> >>> might be PHOENIX-4538.
> >>>
> >>> WDYT, Lars?
> >>>
> >>> Thanks,
> >>> James
> >>>
> >>
> >>
> >
>


Re: [DISCUSS] reduce notifications to phoenix-dev list

2018-07-12 Thread James Taylor
+1

On Thu, Jul 12, 2018 at 5:51 AM Josh Elser  wrote:

> Fine by me.
>
> On 7/11/18 9:50 PM, Thomas D'Silva wrote:
> > I think we should reduce the number of notifications that are sent to the
> > phoenix-dev list.
> > The notification scheme we use is located here :
> https://issues.apache.org/
> > jira/plugins/servlet/project-config/PHOENIX/notifications
> >
> > Maybe we don't need to notify the dev list when an issue is updated, or a
> > comment is created/edited/deleted. People watching a particular issue
> would
> > still see all the changes.
> > This would be similar to how other projects like HBase and Calcite
> operate.
> >
> > Thanks,
> > Thomas
> >
>


[jira] [Updated] (PHOENIX-4790) Simplify check for client side delete

2018-07-11 Thread James Taylor (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Taylor updated PHOENIX-4790:
--
Attachment: PHOENIX-4790_addendum1.patch

> Simplify check for client side delete
> -
>
> Key: PHOENIX-4790
> URL: https://issues.apache.org/jira/browse/PHOENIX-4790
> Project: Phoenix
>  Issue Type: Bug
>        Reporter: James Taylor
>    Assignee: James Taylor
>Priority: Minor
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4790.patch, PHOENIX-4790_addendum1.patch
>
>
> We don't need to check every query plan for the existence of a where clause 
> to determine if we can do a client-side delete. Instead, we can simply check 
> if any non PK columns are projected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (PHOENIX-4790) Simplify check for client side delete

2018-07-11 Thread James Taylor (JIRA)


[ 
https://issues.apache.org/jira/browse/PHOENIX-4790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16541149#comment-16541149
 ] 

James Taylor commented on PHOENIX-4790:
---

Sorry about that, [~tdsilva]. I've attached an addendum which I'll commit 
unless I hear objections.

> Simplify check for client side delete
> -
>
> Key: PHOENIX-4790
> URL: https://issues.apache.org/jira/browse/PHOENIX-4790
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: James Taylor
>Priority: Minor
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4790.patch, PHOENIX-4790_addendum1.patch
>
>
> We don't need to check every query plan for the existence of a where clause 
> to determine if we can do a client-side delete. Instead, we can simply check 
> if any non PK columns are projected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

