Re: [DISCUSS] Releasing the next Omid version

2022-04-28 Thread Josh Elser

+1

If we're dropping Phoenix 4.x imminently, that means dropping HBase 1.x 
and we should follow suit in Omid.


A "beta" HBase 3.0 is probably not too, too far away. I would consider 
how "nice" the current shim logic is in Omid (i.e. is it actually 
helpful? nice to work with? effective?), and make the call on that. 
However, HBase 3.0 should not drop any API from HBase 2.x, so we should 
not _have_ to shim anything.


1.1.0 as a release version makes sense to me for the API reason you gave.

On 4/19/22 4:53 AM, Istvan Toth wrote:

Hi!

When Geoffrey proposed releasing Phoenix 5.2.0, I asked for time to release
a new Omid version first, as there are a lot of unreleased fixes in  master.

One of those fixes removes the need to add a lot of explicit excludes for
the Omid HBase-1 artifacts when depending on Omid for HBase 2.

However, the discussion on dropping HBase 1.x support from Phoenix has been
re-opened, and so far there are no objections.

We can either release the Omid master as is (perhaps with some dependency
version bumps), or we could just drop HBase 1.x support, and simplify the
project structure quite a bit for the next version.

In case we drop support for HBase 1, we also need to decide whether to keep
the maven build infrastructure (shims and flatten-maven-plugin) for
supporting different HBase releases for an upcoming HBase 3 release (if the
API changes will require it), or to remove it altogether ?

We'll need to update the dependencies and exclusions in Phoenix either way.

What do you think ?
Can we make an official decision to drop Phoenix 4.x soon, and drop HBase 1
support from Omid for the next release,
or should I just go ahead with the Omid next release process, and worry
about removing the HBase 1.x support from Omid later ?

Also, as we're making incompatible changes to the way Omid is to be
consumed via maven, I think that we should bump the version either to
1.1.0, or 2.0.0. (I prefer 1.1.0, as the API doesn't change.)

Looking forward to your input,

Istvan



Re: [DISCUSS] Drop support for HBase 2.1 and 2.2 in Phoenix 5.2 ?

2022-04-28 Thread Josh Elser
Definitely makes sense to drop 2.1 and 2.2 (which are long gone in 
upstream support).


2.3 isn't mentioned on HBase downloads.html anymore so I think that's 
also good to go, but 2.4 is still very much alive.


On 4/19/22 11:32 AM, Geoffrey Jacoby wrote:

+1 to dropping support for 2.1 and 2.2.

Because of some incompatible 2.0-era changes to coprocessor interfaces, and
a bug around raw filters, we weren't able to support the newer global
indexes at all on 2.1, and even on 2.2 we have an issue where we can't
protect index consistency during major compaction. Getting rid of 2.1 and
2.2 support would let us simplify a lot.

Geoffrey

On Tue, Apr 19, 2022 at 6:28 AM Istvan Toth  wrote:


We can also consider dropping support for 2.4.0.

On Tue, Apr 19, 2022 at 12:21 PM Istvan Toth  wrote:


Hi!

Both HBase 2.1 and 2.2 have been EOL for a little more than a year.

Do we want to keep supporting them in Phoenix 5.2 ?

Keeping them is not a big burden, as the compatibility modules are ready,
but we could simplify the compatibility module interface a bit, and free up
resources in the multibranch test builds.

WDYT ?

Istvan







Re: [DISCUSS] Switching Phoenix to log4j2

2022-04-28 Thread Josh Elser

Agree on your solution proposed, Istvan.

I think a Phoenix 5.2 is the right time to take that on, too.

On 4/26/22 2:21 PM, Andrew Purtell wrote:

Thanks, I understand better.


What I am proposing is keeping phoenix-client-embedded, but dropping the
legacy (non embedded) phoenix-client jar/artifact from 5.2.

+1, for what it's worth. Embedding a logging back end is a bad idea as we
have learned. Only the facade (SLF4J) should be necessary.


On Tue, Apr 26, 2022 at 11:14 AM Istvan Toth 
wrote:


Andrew, what you describe is the phoenix-client-embedded jar, and it is the
(or at least my) preferred way to consume the phoenix thick client.

However, we still build and publish the legacy phoenix-client (non
embedded) JAR, that DOES include the slf4j + logging backend libraries (as
well as sqlline + jline)

What I am proposing is keeping phoenix-client-embedded, but dropping the
legacy (non embedded) phoenix-client jar/artifact from 5.2.

sqlline.py and friends used to use the non-embedded jar, so that they get
logging and sqlline, but I have since modified all scripts to use the
embedded client, and add the logging backend and sqlline from /lib, so
nothing we ship depends on the legacy phoenix-client JAR any longer.

regards
Istvan






[jira] [Updated] (PHOENIX-3654) Client-side PQS discovery for thin client

2022-03-09 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-3654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-3654:

Summary: Client-side PQS discovery for thin client  (was: Load Balancer for 
thin client)

> Client-side PQS discovery for thin client
> -
>
> Key: PHOENIX-3654
> URL: https://issues.apache.org/jira/browse/PHOENIX-3654
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.8.0
> Environment: Linux 3.13.0-107-generic kernel, v4.9.0-HBase-0.98
>Reporter: Rahul Shrivastava
>Assignee: Rahul Shrivastava
>Priority: Major
> Fix For: 4.12.0
>
> Attachments: LoadBalancerDesign.pdf, Loadbalancer.patch
>
>   Original Estimate: 240h
>  Remaining Estimate: 240h
>
> We have been having internal discussions on a load balancer for the thin client for 
> PQS. The general consensus is to have an embedded load balancer in 
> the thin client instead of using an external load balancer such as haproxy. The 
> idea is not to have another layer between the client and PQS. This reduces 
> operational cost for the system, which currently leads to delays in executing 
> projects.
> But this also comes with the challenge of having an embedded load balancer which 
> can maintain sticky sessions and do fair load balancing knowing the load 
> downstream of the PQS servers. In addition, the load balancer needs to know the 
> location of the multiple PQS servers. So the thin client needs to keep track of PQS 
> servers via ZooKeeper (or other means). 
> In the new design, it is proposed that the client (the PQS client) have an 
> embedded load balancer.
> Where will the load balancer sit ?
> The load balancer will be embedded within the app server client.  
> How will the load balancer work ? 
> The load balancer will contact ZooKeeper to get the location of PQS. In this case, 
> PQS needs to register itself with ZK once it comes online. The ZooKeeper location 
> is in hbase-site.xml. The load balancer will maintain a small cache of connections to 
> PQS. When a request comes in, it will check for an open connection from the 
> cache. 
> How will the load balancer know the load on PQS ?
> To start with, it will pick a random open connection to PQS. This means that 
> the load balancer does not know the PQS load. Later, we can augment the code so that 
> the thin client can receive load info from PQS and make intelligent decisions.  
> How will the load balancer maintain sticky sessions ?
> We still need to investigate how to implement sticky sessions; we can 
> look for an open source implementation.
> How will PQS register itself to the service locator ?
> PQS will have the location of ZooKeeper in hbase-site.xml and it will register 
> itself with ZooKeeper. The thin client will find out the PQS location using 
> ZooKeeper.
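
As a rough illustration of the discovery step described above, the sketch below picks a random PQS instance registered in ZooKeeper. The znode path, the "host:port" registration format, and the use of the kazoo library are assumptions made for the example, not details from the attached design.

    import random
    from kazoo.client import KazooClient

    def pick_pqs(zk_quorum, znode="/phoenix/pqs"):
        # PQS instances are assumed to register ephemeral "host:port" children here.
        zk = KazooClient(hosts=zk_quorum)
        zk.start()
        try:
            servers = zk.get_children(znode)  # e.g. ["pqs1.example.com:8765", "pqs2.example.com:8765"]
            if not servers:
                raise RuntimeError("no PQS instances registered in ZooKeeper")
            # Random choice stands in for "fair" balancing until load info is available.
            return "http://{}/".format(random.choice(servers))
        finally:
            zk.stop()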



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [DISCUSS] The future of Tephra

2022-01-04 Thread Josh Elser
Agreed. As the person who did the work of pulling Tephra in from the 
incubator, I think we were already then in the state of "does someone 
actually care about Tephra?".


Without digging into the archives, I think someone was interested, but 
it seems like this never manifested.


+1 to remove Tephra integration from Phoenix.

On 1/3/22 1:38 PM, Viraj Jasani wrote:

+1 (unless any volunteer comes forward to support Tephra going forward)


On Mon, 3 Jan 2022 at 4:34 PM, Istvan Toth  wrote:


Hi!

As recently noticed by Lars, Tephra hasn't been working in Phoenix since
5.1/4.16 due to a bug.

The fact that this went unnoticed for a year, and the fact that generally
there seems to be minimal interest in Tephra suggests that we should
re-visit the decision to maintain Tephra within the Phoenix project.

The last two commits that were not aimed at fighting bit-rot, but were real
fixes were committed in Jun 2019 by Lars. In the last two and a half years,
all we did was try to keep ahead of bit-rot, so that Tephra keeps up with
new HBase and maven releases, and the changes in the CI infra.

Tephra uses an old Guava version, and depends heavily on the retired Apache
Twill project.
This is a major tech debt, and an adoption blocker (CVEs in direct Tephra
dependencies), which is also carried over into the Phoenix dependencies and
shaded artifacts that we should rectify.
PHOENIX-6064  , which
broke Tephra support, itself is a workaround so that we can avoid shipping
Tephra, and its problematic dependencies.

Ripping out Twill, and updating Guava and other dependencies is a
non-trivial amount of work (I estimate 1-4 weeks, depending on familiarity
with Tephra/Twill/Guava).

At the moment, no-one seems to be interested enough in Tephra to bring its
tech debt to acceptable levels, and in fact no-one seems to be using it
with any recent Phoenix release (as it doesn't work in them).

I suggest that you also check out the discussion between Lars and me in
https://issues.apache.org/jira/browse/PHOENIX-6615 for some more details
and background.

Based on the above, I propose retiring Tephra, and removing Tephra support
from Phoenix 5.2 / 4.17, unless someone steps up to solve the above issues
and maintain Tephra.

Note that this would not mean dropping transaction support from Phoenix, as
Omid support is in much better shape, and is actively used.

Please share your thoughts on the issue, if you are using Tephra and/or can
commit to solving the issues above, or if you agree on its removal, or any
other suggestions or objections.

regards
Istvan





Re: Next Tech Talk on Online Data Format Change in Phoenix

2021-10-07 Thread Josh Elser

Hey Kadir,

I think your update might have squashed some of the generated language 
pages. I've just pushed a new update.


Not a problem, just an FYI :)

On 10/7/21 4:09 PM, Kadir Ozdemir wrote:

We had the tech talk on Online Data Format Change in Phoenix today. The
slides and recording for this tech talk are posted at
https://phoenix.apache.org/tech_talks.html.

Thanks,
Kadir

On Wed, Sep 29, 2021 at 11:53 AM Kadir Ozdemir  wrote:


Hi All,

Did you ever need to change the underlying data format of your table which
is already in production and serving your customers? For example, you
wanted to change the primary key of your table, or its column encoding or
even its storage format. And you wanted to do this without interrupting the
services depending on this table. If you did, then you would want to
attend the tech talk by Gokcen Iskender on Oct 07 (next Thursday at 9:00
PST) and see how this will be possible in Phoenix. Hope to see you there.
Please see details at https://phoenix.apache.org/tech_talks.html.

Thanks,
Kadir





Re: [VOTE] Release of phoenixdb 1.1.0 RC0

2021-08-19 Thread Josh Elser

+1 (binding)

* Ran unit tests with Phoenix 5.1 against python 2.7, 3.7, and 3.8 (all 
passed)

* Ran a basic write test in parallel (saw expected levels of performance)
* dev-support/run-source-ratcheck.sh passed, visual inspection of output 
looks good

* xsum/sigs look good

On 8/18/21 12:44 PM, Istvan Toth wrote:

Hello Everyone,

This is a call for a vote on phoenixdb 1.1.0 RC0.

PhoenixDB is a native Python driver for accessing Phoenix via Phoenix Query
Server.
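
For context, basic DB-API usage of phoenixdb looks roughly like the sketch below; the PQS URL and table name are placeholders.

    import phoenixdb

    # Connect to a Phoenix Query Server instance (URL is a placeholder).
    conn = phoenixdb.connect('http://localhost:8765/', autocommit=True)
    cur = conn.cursor()
    cur.execute("SELECT id, name FROM example_table LIMIT 10")
    for row in cur.fetchall():
        print(row)
    conn.close()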

This version contains the following improvements compared to the previous
1.0.1 release

- Fix handling empty Array objects in resultsets
- Add / fix primary key and index metadata access methods

The source release consists of the contents of the python-phoenixdb
directory of the phoenix-queryserver repository.

The source tarball, including signatures, digests, etc can be found at:

https://dist.apache.org/repos/dist/dev/phoenix/python-phoenixdb-1.1.0.rc0/src/

Artifacts are signed with my "CODE SIGNING KEY":
825203A70405BC83AECF5F7D97351C1B794433C7

KEYS file available here:
https://dist.apache.org/repos/dist/dev/phoenix/KEYS


The hash and tag to be voted upon:
https://gitbox.apache.org/repos/asf?p=phoenix-queryserver.git;a=commit;h=48ca4b22b0f2793d54d7ae0ab4e6f8b5751301b4

https://gitbox.apache.org/repos/asf?p=phoenix-queryserver.git;a=commit;h=refs/tags/python-phoenixdb-1.1.0.rc0

The vote will be open for at least 72 hours. Please vote:

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Thanks,
Istvan



[jira] [Created] (PHOENIX-6515) Phoenix uses hbase-testing-util but does not list it as a dependency

2021-07-15 Thread Josh Elser (Jira)
Josh Elser created PHOENIX-6515:
---

 Summary: Phoenix uses hbase-testing-util but does not list it as a 
dependency
 Key: PHOENIX-6515
 URL: https://issues.apache.org/jira/browse/PHOENIX-6515
 Project: Phoenix
  Issue Type: Bug
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 5.2.0


Just saw a build failure of Phoenix at $dayjob due to OMID-211. OMID-211 
removes the hbase-testing-util as a regular-scope dependency and adds it 
(properly) as a test-scope dependency. However, this means that phoenix-core no 
longer has hbase-testing-util on the classpath and will fail to compile.

Easy fix to just list the hbase-testing-util as a test-scope dependency.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-6503) Update Apache Phoenix News on the website

2021-07-05 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-6503.
-
Resolution: Done

Applied! Thanks [~richardantal]

> Update Apache Phoenix News on the website
> -
>
> Key: PHOENIX-6503
> URL: https://issues.apache.org/jira/browse/PHOENIX-6503
> Project: Phoenix
>  Issue Type: Task
>Reporter: Richárd Antal
>Assignee: Richárd Antal
>Priority: Major
> Attachments: PHOENIX-6503.diff
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Unbundling Sqlline and slf4j backend from phoenix-client and phoenix-client embedded

2021-05-26 Thread Josh Elser
I think the idea is that we would include a sqlline jar with the Phoenix 
distribution. Context: we had some grief where a sqlline upgrade caused 
user pain because they were relying on specific output from sqlline.


If we have the sqlline jar _not_ packaged inside phoenix-client, then 
users can easily replace the version of sqlline which makes them happiest.


While I agree with Istvan that #1 is the more "correct" option, I'm 
worried about the impact of folks who rely on the phoenix-client.jar to 
be a "batteries included" fat-jar. Removing sqlline from 
phoenix-client-embedded is great, so I'd lean towards #2.


We can see what adoption of phoenix-client-embedded looks like now that 
we have it in releases. I imagine most folks haven't yet realized that 
it's even an option that's available.


On 5/26/21 1:16 PM, la...@apache.org wrote:

Will sqlline still be part of the Phoenix "distribution"? Or will it become a 
separate package to install?






On Wednesday, May 26, 2021, 1:07:17 AM PDT, Istvan Toth  
wrote:





Hi!

The current purpose of the phoenix-client JAR is twofold:
- It serves as a generic JDBC driver for embedding in applications
- It also contains the sqlline library used by the sqlline.py script, as
well as the slf4j log4j backend.
- (It also contains some Phoenix code and HBase libraries not necessary
for a client, but we're already tracking that in different tickets)

One major pain point is the slf4j backend, which makes phoenix-client
incompatible with applications and libraries that do not use log4j 1.2 as a
backend, and kind of defeats the purpose of using slf4j in the first place.
phoenix-client-embedded solves this problem by removing the slf4j backend
from Phoenix.

In PHOENIX-6378  we aim
to remove sqlline from the phoenix-client JAR, as it further cleans up the
classpath, and avoids locking phoenix to the sqlline version that it was
built with.

In Richard's current patch, we remove sqlline from phoenix-client-embedded,
and use that in the sqlline script.

In our quest for a more useable phoenix-client, we can do two things now:

   1. Remove both the slf4j backend, and sqlline from phoenix-client, and
   also drop phoenix-client-embedded as it would be the same as phoenix-client
   2. Remove sqlline from phoenix-client-embedded, and keep the current
   phoenix-client as backwards compatibility option

I'd prefer the first option, but this is somewhat more disruptive than the
other.

Please share your thoughts. Do you prefer option 1, 2, or something else
entirely ?

Istvan



Re: [VOTE] Release of Apache Phoenix 4.16.1 RC11

2021-05-20 Thread Josh Elser

+1 (binding)

* xsums/sigs are good
* can build from src
* src release looks ok (apache-rat:check)

On 5/15/21 1:53 PM, Viraj Jasani wrote:

Please vote on this Apache phoenix release candidate, phoenix-4.16.1RC11

The VOTE will remain open for at least 72 hours.

[ ] +1 Release this package as Apache phoenix 4.16.1
[ ] -1 Do not release this package because ...

The tag to be voted on is 4.16.1RC11:

   https://github.com/apache/phoenix/tree/4.16.1RC11


Commit ID: a7d212378b80e48be8c17fc8862ddfedd3bd25ee

The release files, including signatures, digests, as well as CHANGES.md
and RELEASENOTES.md included in this RC can be found at:

   https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.16.1RC11/

Maven artifacts are available in a staging repository at:

   https://repository.apache.org/content/repositories/orgapachephoenix-1237/

Artifacts were signed with the 1C8ADFD5 key which can be found in:

   https://dist.apache.org/repos/dist/release/phoenix/KEYS

To learn more about Apache phoenix, please see

   https://phoenix.apache.org/

Thanks,
Your Phoenix Release Manager



[jira] [Updated] (PHOENIX-6473) Add Hadoop JMXServlet as /jmx endpoint

2021-05-20 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-6473:

Fix Version/s: queryserver-6.0.0

> Add Hadoop JMXServlet as /jmx endpoint
> --
>
> Key: PHOENIX-6473
> URL: https://issues.apache.org/jira/browse/PHOENIX-6473
> Project: Phoenix
>  Issue Type: Improvement
>  Components: queryserver
>Reporter: Andor Molnar
>Assignee: Andor Molnar
>Priority: Major
> Fix For: queryserver-6.0.0
>
>
> It would be beneficial to add Hadoop's JMXServlet as /jmx HTTP endpoint to 
> the Phoenix Query Server for better monitoring capabilities.
> JMXServlet creates a read-only HTTP endpoint where JMX beans can be queried 
> in JSON format.
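
For illustration, a Hadoop-style JMX JSON endpoint is typically queried as sketched below; the host, port, and qry filter are placeholders and depend on how the servlet ends up being wired into PQS.

    import requests

    # Hadoop's JMXJsonServlet returns all MBeans as JSON; ?qry= filters by ObjectName pattern.
    resp = requests.get("http://pqs-host:8765/jmx", params={"qry": "java.lang:type=Memory"})
    resp.raise_for_status()
    for bean in resp.json().get("beans", []):
        print(bean["name"])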



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-6473) Add Hadoop JMXServlet as /jmx endpoint

2021-05-20 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned PHOENIX-6473:
---

Assignee: Andor Molnar

> Add Hadoop JMXServlet as /jmx endpoint
> --
>
> Key: PHOENIX-6473
> URL: https://issues.apache.org/jira/browse/PHOENIX-6473
> Project: Phoenix
>  Issue Type: Improvement
>  Components: queryserver
>Reporter: Andor Molnar
>Assignee: Andor Molnar
>Priority: Major
>
> It would be beneficial to add Hadoop's JMXServlet as /jmx HTTP endpoint to 
> the Phoenix Query Server for better monitoring capabilities.
> JMXServlet creates a read-only HTTP endpoint where JMX beans can be queried 
> in JSON format.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] Release of phoenixdb 1.0.1 RC0

2021-05-18 Thread Josh Elser

+1 (binding)

* Xsums/sigs work
* Keys works
* Source release looks fine
* Validated that cookie support fix works

Ran tests against HBase 2.2.7 and Phoenix 5.2.0-SNAPSHOT which all 
passed (except those tests which expect no extra tables present).


On 5/14/21 3:46 PM, Istvan Toth wrote:

Hello Everyone,

This is a call for a vote on phoenixdb 1.0.1 RC0.

PhoenixDB is a native Python driver for accessing Phoenix via Phoenix Query
Server.

This version contains the following improvements compared to the previous
1.0.0 release

- Restore default GSS OID to "SPNEGO" to improve compatibility
- Use HTTP sessions to handle cookies for sticky load balancer support

The source release consists of the contents of the python-phoenixdb
directory of the phoenix-queryserver repository.

The source tarball, including signatures, digests, etc can be found at:

https://dist.apache.org/repos/dist/dev/phoenix/python-phoenixdb-1.0.1.rc0/src/

Artifacts are signed with my "CODE SIGNING KEY":
825203A70405BC83AECF5F7D97351C1B794433C7

KEYS file available here:
https://dist.apache.org/repos/dist/dev/phoenix/KEYS


The hash and tag to be voted upon:
https://gitbox.apache.org/repos/asf?p=phoenix-queryserver.git;a=commit;h=d983b86a541c65938a4344f79b2dbe70c38ea905

https://gitbox.apache.org/repos/asf?p=phoenix-queryserver.git;a=tag;h=refs/tags/python-phoenixdb-1.0.1.rc0

The vote will be open for at least 72 hours. Please vote:

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Thanks,
Istvan



Re: [VOTE] Release of Apache Tephra 0.16.1 RC1

2021-05-11 Thread Josh Elser

+1 (binding)

* Can build from src
* xsums/sigs are good
* KEYS is up-to-date
* Unit tests all passed!

I think ASF guidelines want the commit ID (not just a tag) in a VOTE 
email, but let's just use that for future guidance. The git-tag of 
0.16.1RC1 points to 2d4285adde3c62abe07984ef5bef3f08d34b8f45 which 
exists in the source tree, so that's good.


We should also get https://phoenix.apache.org/tephra updated and start 
using that for Tephra communications. I think we're lucky that infra 
hasn't taken down the old website :)


Thanks Viraj!

On 5/4/21 7:40 AM, Viraj Jasani wrote:

Please vote on this Apache phoenix tephra release candidate,
phoenix-tephra-0.16.1RC1

The VOTE will remain open for at least 72 hours.

[ ] +1 Release this package as Apache phoenix tephra 0.16.1
[ ] -1 Do not release this package because ...

The tag to be voted on is 0.16.1RC1:

   https://github.com/apache/phoenix-tephra/tree/0.16.1RC1

The release files, including signatures, digests, as well as CHANGES.md
and RELEASENOTES.md included in this RC can be found at:

   https://dist.apache.org/repos/dist/dev/phoenix/phoenix-tephra-0.16.1RC1/

Maven artifacts are available in a staging repository at:

   https://repository.apache.org/content/repositories/orgapachephoenix-1233/

Artifacts were signed with the 1C8ADFD5 key which can be found in:

   https://dist.apache.org/repos/dist/release/phoenix/KEYS

To learn more about Apache phoenix tephra, please see

   https://tephra.incubator.apache.org/

Thanks,
Your Phoenix Release Manager



[jira] [Created] (PHOENIX-6463) PQS jar hosting doesn't include pom's

2021-05-10 Thread Josh Elser (Jira)
Josh Elser created PHOENIX-6463:
---

 Summary: PQS jar hosting doesn't include pom's
 Key: PHOENIX-6463
 URL: https://issues.apache.org/jira/browse/PHOENIX-6463
 Project: Phoenix
  Issue Type: Bug
Reporter: Josh Elser
Assignee: Josh Elser


Without the copyPom option on the maven-assembly-plugin, we don't get the pom 
files included which makes building against it pretty useless (Maven will fail 
for a normal-looking project).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] EOL 4.16 and 4.x?

2021-05-10 Thread Josh Elser
No objections over here! Y'all already know the work Istvan and Richard 
have been doing to push on Phoenix 5.1. That continues to be our focus.


On 5/7/21 11:13 AM, Viraj Jasani wrote:

Hi,

Based on HBase community's decision to EOL branch-1 after 1.7.0 release as
per the discussion thread [1], it is inevitable that we will also have to
consider EOL of 4.x release line sometime soon.

As we have discussed in the past, even though Phoenix 4.x should support
Java 7 only (as it supports HBase 1), we are not strictly following this
compatibility. With HBase 2 / Phoenix 5, we no longer have to worry about
this source compatibility. Tephra also continues to support HBase 1 and
hence should follow Java 7 source compatibility rules and yet I see many
Java 8 Optional imports in tephra-hbase-compat-2.x modules. Source
compatibility is just one of the reasons behind HBase community's decision
to EOL branch-1, many other important reasons are discussed over thread [1].
Overall, HBase 2 is already widely adopted and deployed in production and
so should be Phoenix 5 IMHO.

Given that there are no apparent functional differences between 4.16 and 5.1 (and
master) except for maybe a few pending forward-ports (if any), I believe it
is worth considering the ongoing 4.16 patch release as the last one on the 4.x
release line and EOL 4.16 and 4.x.
Thoughts?

1. https://s.apache.org/rs2bk



[jira] [Resolved] (PHOENIX-6459) phoenixdb ignores cookies set by the server

2021-05-06 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-6459.
-
Resolution: Fixed

> phoenixdb ignores cookies set by the server
> ---
>
> Key: PHOENIX-6459
> URL: https://issues.apache.org/jira/browse/PHOENIX-6459
> Project: Phoenix
>  Issue Type: Bug
>  Components: python
>    Reporter: Josh Elser
>    Assignee: Josh Elser
>Priority: Major
> Fix For: python-phoenixdb-1.0.1
>
>
> We saw an issue where phoenixdb was unable to communicate with PQS sitting 
> behind Knox (post KNOX-843). We saw a situation where phoenixdb would try to 
> openConnection() and then call connectionSync(). However, the connectionSync 
> would fail, saying that the connection with the ID we just created doesn't 
> exist.
> Reading KNOX-843 a little more closely, we can see that the implementation of 
> the stickiness is dependent on a Knox cookie which Knox will set. Switching 
> over to using the requests.Session seems to fix the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-6459) phoenixdb ignores cookies set by the server

2021-05-06 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-6459:

Fix Version/s: python-phoenixdb-1.0.1

> phoenixdb ignores cookies set by the server
> ---
>
> Key: PHOENIX-6459
> URL: https://issues.apache.org/jira/browse/PHOENIX-6459
> Project: Phoenix
>  Issue Type: Bug
>  Components: python
>    Reporter: Josh Elser
>    Assignee: Josh Elser
>Priority: Major
> Fix For: python-phoenixdb-1.0.1
>
>
> We saw an issue where phoenixdb was unable to communicate with PQS sitting 
> behind Knox (post KNOX-843). We saw a situation where phoenixdb would try to 
> openConnection() and then call connectionSync(). However, the connectionSync 
> would fail, saying that the connection with the ID we just created doesn't 
> exist.
> Reading KNOX-843 a little more closely, we can see that the implementation of 
> the stickiness is dependent on a Knox cookie which Knox will set. Switching 
> over to using the requests.Session seems to fix the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Releasing python-phoenixdb 1.0.1

2021-05-06 Thread Josh Elser

Yes, that'd be great.

The tl;dr for others: Knox employs a cookie for redirecting clients back 
to the same backend PQS server. However, the Python library didn't pass 
this cookie back to Knox. This results in the second Avatica API call 
going to a different PQS instance which doesn't have the cached state 
(i.e. the JDBC Connection for Phoenix isn't open on that PQS).


On the bright side, I also saw a 15% performance increase in a 
(contrived) test by making this change.
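
The essence of the fix, as a rough sketch: one requests.Session stores the cookie Knox sets on the first response and sends it back on later calls, so they all land on the same PQS. The Knox URL is a placeholder, and the empty payloads stand in for the protobuf-serialized Avatica requests the real driver sends.

    import requests

    session = requests.Session()
    url = "https://knox.example.com:8443/avatica/"  # placeholder Knox gateway URL

    # First Avatica call: Knox sets its stickiness cookie, the Session stores it.
    session.post(url, data=b"")
    print(session.cookies.get_dict())

    # Later calls reuse the Session, so the cookie is sent back and Knox routes
    # them to the same backend PQS that holds the open JDBC connection.
    session.post(url, data=b"")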


On 5/6/21 3:21 AM, Istvan Toth wrote:

Hi!

PHOENIX-6459 is a show-stopper for using phoenixdb when Knox HA is used.

I propose releasing phoenixdb 1.0.1 after Josh's fix lands.

regards
Istvan



[jira] [Created] (PHOENIX-6459) phoenixdb ignores cookies set by the server

2021-04-30 Thread Josh Elser (Jira)
Josh Elser created PHOENIX-6459:
---

 Summary: phoenixdb ignores cookies set by the server
 Key: PHOENIX-6459
 URL: https://issues.apache.org/jira/browse/PHOENIX-6459
 Project: Phoenix
  Issue Type: Bug
  Components: python
Reporter: Josh Elser
Assignee: Josh Elser


We saw an issue where phoenixdb was unable to communicate with PQS sitting 
behind Knox (post KNOX-843). We saw a situation where phoenixdb would try to 
openConnection() and then call connectionSync(). However, the connectionSync 
would fail, saying that the connection with the ID we just created doesn't 
exist.

Reading KNOX-843 a little more closely, we can see that the implementation of 
the stickiness is dependent on a Knox cookie which Knox will set. Switching 
over to using the requests.Session seems to fix the issue.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Separating client and server side code

2021-04-19 Thread Josh Elser

Istvan -- the mailing list stripped your attachment off, I believe :).

IIRC, Istvan's suggestion paves the way to make this (further) 
separation easier. With the changes he's proposing, we could further 
split the common module out into distinct pieces, and reduce what 
phoenix "server" requires.


On 4/18/21 9:13 PM, la...@apache.org wrote:

There is also another angle to look at. A long time ago I wrote this:

"
It seems Phoenix serves 4 distinct purposes:
1. Query parsing and compiling.
2. A type system
3. Query execution
4. Efficient HBase interface

Each of these is useful by itself, but we do not expose these as stable 
interfaces.
We have seen a lot of need to tie HBase into "higher level" service, such as 
Spark (and Presto, etc).
I think we can get a long way if we separate at least #1 (SQL) from the rest 
#2, #3, and #4 (Typed HBase Interface - THI).
Phoenix is used via SQL (#1), other tools such as Presto, Impala, Drill, Spark, 
etc, can interface efficiently with HBase via THI (#2, #3, and #4).
"

I still believe this is an additional useful demarcation for how to group the 
code. And it coincides somewhat with server/client.

Query parsing and the type system are client. Query execution and HBase 
interface are both client and server.

-- Lars

On Wednesday, April 14, 2021, 8:56:08 AM PDT, Istvan Toth  
wrote:





Jacob, Josh and me had a discussion about the topic.

I'm attaching the dependency graph of the proposed modules



On Fri, Apr 9, 2021 at 6:30 AM Istvan Toth  wrote:

The bulk of the changes I'm working on is indeed the separation of the client 
and the server side code.

Separating the MR related classes, and the tools-specific code (main, options 
parsing, etc) makes sense to me, if we don't mind adding another module.

In the first WIP iteration, I'm splitting out everything that depends on more than 
hbase-client into a "server" module.
Once that works I will look at splitting that further into a  real "server" and an 
"MR/tools" module.


My initial estimates about splitting the server side code were way too 
optimistic, we have to touch a lot of code to break circular dependencies 
between the client and server side. The changes are still quite trivial, but 
the patch is going to be huge and scary.


Tests are also going to be a problem, we're probably going to have to move most of them into the 
"server" or a separate "tests" module, as the MiniCluster tests depend on code 
from each module.

The plan in PHOENIX-5483, and Lars's mail sounds good, but I think that it would be more 
about dividing the "client-side" module further.
(BTW I think that making the indexing engine available separately would also be 
a popular feature )



On Fri, Apr 9, 2021 at 5:39 AM Daniel Wong  wrote:

This is another project I am interested in as well as my group at
Salesforce.  We have had some discussions internally on this but I wasn't
aware of this specific Spark issue (We only allow phoenix access via spark
by default).  I think the approaches outlined are a good initial step but
we were also considering a larger breakup of phoenix-core.  I don't
think the desire for the larger step should stop us from doing the initial
ones Istvan and Josh proposed.  I think the high level plan makes sense
but I might prefer a different name than phoenix-tools for the ones we want
to be available to external libraries like phoenix-connectors.  Another
possible alternative is to restructure maybe less invasively by making
phoenix core like your proposed tools and making a phoenix-internal or
similar for the future.
One thing I was wondering was how much effort it would be to split client/server
through phoenix-core...  Lars laid out a good component view of phoenix
whose first step might be PHOENIX-5483, but we could focus on highest
level separation rather than bottom up.  However, even that thread linked
there talks about a client-facing api which we can piggyback on for this use.
Say phoenix-public-api or similar.

On Wed, Apr 7, 2021 at 9:43 AM Jacob Isaac  wrote:


Hi Josh & Istvan

Thanks Istvan for looking into this, I am also interested in solving this
problem,
Let me know how I can help?

Thanks
Jacob

On Wed, Apr 7, 2021 at 9:05 AM Josh Elser  wrote:


Thanks for trying to tackle this sticky problem, Istvan. For the context
of everyone else, the real-life problem Istvan is trying to fix is that
you cannot run a Spark application with both HBase and Phoenix jars on
the classpath.

If I understand this correctly, it's that the HBase API signatures are
different depending on whether we are "client side" or "server side"
(within a RegionServer). Your comment on PHOENIX-6053 shows that
(signatures on Table.java around Protobuf's Service class having shaded
relocation vs. the original com.google.protobuf coordinates).

I think the reason we have the monolithic phoenix-core is that we ha

Re: [DISCUSS] Releasing Tephra 0.16.1 (or 0.17)

2021-04-19 Thread Josh Elser

All points look good to me :)

On 4/19/21 4:31 AM, Istvan Toth wrote:

Hi!

Due to low interest, Tephra support tends to be broken on master (and in
this case, on recent releases).

The current problems with Tephra integration are described in PHOENIX-6442.

However, before we can fix that, we need to have a Tephra release that
supports HBase 2.4.

As the changes since 0.16.0 are minimal, I propose calling it 0.16.1 .

Please use this thread to share your suggestions, objections, or concerns,
or volunteer to the RM for the release.

If there is no objection, and no other volunteer to act as a release
manager, I will start the release process as my time permits (probably
sometime next week).

regards
Istvan



Re: [DISCUSS] Separating client and server side code

2021-04-07 Thread Josh Elser
Thanks for trying to tackle this sticky problem, Istvan. For the context 
of everyone else, the real-life problem Istvan is trying to fix is that 
you cannot run a Spark application with both HBase and Phoenix jars on 
the classpath.


If I understand this correctly, it's that the HBase API signatures are 
different depending on whether we are "client side" or "server side" 
(within a RegionServer). Your comment on PHOENIX-6053 shows that 
(signatures on Table.java around Protobuf's Service class having shaded 
relocation vs. the original com.google.protobuf coordinates).


I think the reason we have the monolithic phoenix-core is that we have 
so much logic which is executed on both the client and server side. For 
example, we may push a filter operation to the server-side or we many 
run it client-side. That's also why we have the "thin" phoenix-server 
Maven module which just re-packages phoenix-core.


Is it possible that we change phoenix-server so that it contains the 
"server-side" code that we don't want to have using the HBase classes 
with thirdparty relocations, rather than introduce another new Maven module?


Looking through your WIP PR too.

On 4/7/21 1:10 AM, Istvan Toth wrote:

Hi!

I've been working on getting Phoenix working with hbase-shaded-client.jar,
and I am finally getting traction.

One of the issues that I encountered is that we are mixing client and
server side code in phoenix-core, and there's a
mutual interdependence between the two.

Fixing this is not hard, as it's mostly about replacing .class.getName() s
with string constants, and moving around some inconveniently placed static
utility methods, and now I have a WIP version where the client side doesn't
depend on server classes.

However, unless we change the project structure, and factor out the classes
that depend on server-side APIs, this will be extremely fragile, as any
change can (and will) re-introduce the circular dependency between the
classes.

To solve this issue I propose the following:

- clean up phoenix-core, so that only classes that depend only on
*hbase-client* (or at worst only on classes that are present in
*hbase-shaded-client*) remain. This should be 90+% of the code
- move all classes (mostly coprocessors and their support code) that use
the server API (*hbase-server* mostly) to a new module, say
phoenix-coprocessors (the phoenix-server module name is taken). This new
module depends on phoenix-core.
- move all classes that directly depend on MapReduce, and their main()
classes to the existing phoenix-tools module (which also depends on core)

The separation would be primarily based on API use, at the first cut I'd be
fine with keeping all logic phoenix-core, and referencing that. We may or
may not want to move logic that is only used in coprocessors or tools, but
doesn't use the respective APIs to the new modules later.

As for the main artifacts:

- *phoenix-server.jar* would include code from all three modules.
- A newly added *phoenix-client-byo-shaded-hbase.jar *would include only
the code from cleaned-up phoenix-core
- Ideally, we'd remove the tools and coprocessor code (and
dependencies) from the standard and embedded clients, and switch
documentation to use *phoenix-server* to run the MR tools, but this is
optional.

I am tracking this work in PHOENIX-6053, which has a (currently working)
WIP patch attached.

I think that this change would fit the pattern established by creating the
phoenix-tools module,
but as this is a major change in project structure (even if the actual Java
changes are trivial),
I'd like to gather your input on this approach (please also speak up if you
agree).

regards
Istvan



Re: Topic suggestions for April Tech Talk

2021-03-22 Thread Josh Elser

Noted! Thanks for the suggestion, Lars :)

On 3/19/21 1:35 PM, la...@apache.org wrote:

In addition to the technical implementation of the PQS an interesting topic for 
the PQS would be how to scale it.
How many PQSs compared to the number of region servers? How to size the 
machine/VM? Etc.
I think just that could take more than 15 minutes. :)

On Thursday, March 18, 2021, 9:18:40 AM PDT, Josh Elser  
wrote:





What I think we have now is..

* 15mins Hue
* 15mins on PQS and Python

I think a full hour might be too much, depends on amount of Q&A.

On 3/17/21 6:24 PM, Kadir Ozdemir wrote:

Josh,

I will be very interested in a discussion on PQS and would like to learn
about the Python library and its integration with Hue.  Do you suggest to
discuss all in one meeting?

Thanks,
Kadir

On Wed, Mar 17, 2021 at 2:26 PM Josh Elser  wrote:


While we try to figure out who will give it, I think I am safe to say we
can offer a discussion on PQS, the Phoenix Python library, and its
integration with Hue [1].

Is this of interest?

[1]
https://gethue.com/

On 3/9/21 1:54 PM, Kadir Ozdemir wrote:

Hi All,

This is a friendly reminder that our next tech talk meeting will be held
at 9AM PST on April 01. If you like to present a topic for the next
meeting, please let us know by March 18. For more information about our
tech talks, and the slides and recording for the previous presentation,
please visit

https://phoenix.apache.org/tech_talks.html.

I look forward to your suggestions for the next meeting.

Thanks,
Kadir






Re: Topic suggestions for April Tech Talk

2021-03-22 Thread Josh Elser

Sounds good. I'll work with folks internally to put together an agenda.

On 3/19/21 2:11 PM, Kadir Ozdemir wrote:

Thank you for volunteering for the April meeting. You are the host for it.
It will be great if you can write a brief abstract/description about the
meeting topic that I can send out with the meeting invitation. I do not
know exactly how you want to structure the meeting but I can think of that
you give a brief overview of Hue, PQS, and the Python library, explain what
specific problems they address, and what challenges they have and/or what
we would like to do for them in near future to initiate discussions. Thank
you again!


Re: Topic suggestions for April Tech Talk

2021-03-18 Thread Josh Elser

What I think we have now is..

* 15mins Hue
* 15mins on PQS and Python

I think a full hour might be too much, depends on amount of Q&A.

On 3/17/21 6:24 PM, Kadir Ozdemir wrote:

Josh,

I will be very interested in a discussion on PQS and would like to learn
about the Python library and its integration with Hue.  Do you suggest to
discuss all in one meeting?

Thanks,
Kadir

On Wed, Mar 17, 2021 at 2:26 PM Josh Elser  wrote:


While we try to figure out who will give it, I think I am safe to say we
can offer a discussion on PQS, the Phoenix Python library, and its
integration with Hue [1].

Is this of interest?

[1]
https://gethue.com/

On 3/9/21 1:54 PM, Kadir Ozdemir wrote:

Hi All,

This is a friendly reminder that our next tech talk meeting will be held
at 9AM PST on April 01. If you like to present a topic for the next
meeting, please let us know by March 18. For more information about our
tech talks, and the slides and recording for the previous presentation,
please visit

https://phoenix.apache.org/tech_talks.html.

I look forward to your suggestions for the next meeting.

Thanks,
Kadir






[jira] [Updated] (PHOENIX-6414) Access to Phoenix from Python using SPNEGO

2021-03-17 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-6414:

Description: 
When connecting to Phoenix from Python using "SPNEGO" as the authentication 
mechanism an exception occurs:

{noformat}
import phoenixdb
import phoenixdb.cursor
database_url = 'http://myphoenixdb:8765/'
conn = phoenixdb.connect(database_url, autocommit=True, authentication="SPNEGO")
{noformat}

Causes this exception:

{noformat}
>>> conn = phoenixdb.connect(database_url, autocommit=True, 
>>> authentication="SPNEGO")
venv/lib/python3.6/site-packages/phoenixdb/avatica/client.py:121: 
RuntimeWarning: Unexpected end-group tag: Not all data was converted
 if not err.ParseFromString(message.wrapped_message):
Traceback (most recent call last):
 File "", line 1, in 
 File "venv/lib/python3.6/site-packages/phoenixdb/_init_.py", line 121, in 
connect
 return Connection(client, **kwargs)
 File "venv/lib/python3.6/site-packages/phoenixdb/connection.py", line 53, in 
_init_
 self.open()
 File "venv/lib/python3.6/site-packages/phoenixdb/connection.py", line 98, in 
open
 self._client.open_connection(self._id, info=self._phoenix_props)
 File "venv/lib/python3.6/site-packages/phoenixdb/avatica/client.py", line 363, 
in open_connection
 response_data = self._apply(request)
 File "venv/lib/python3.6/site-packages/phoenixdb/avatica/client.py", line 215, 
in _apply
 parse_error_protobuf(response_body)
 File "venv/lib/python3.6/site-packages/phoenixdb/avatica/client.py", line 128, 
in parse_error_protobuf
 raise_sql_error(err.error_code, err.sql_state, err.error_message)
 File "venv/lib/python3.6/site-packages/phoenixdb/avatica/client.py", line 96, 
in raise_sql_error
 raise errors.InternalError(message, code, sqlstate)
phoenixdb.errors.InternalError: ('', 0, '', None)
{noformat}

This problem is caused by the authentication mechanism because phoenixdb is 
using Kerberos 5 instead of SPNEGO.

To resolve the issue we have patched the package applying the idea behind the 
"Explicit Mechanism" described in [https://pypi.org/project/requests-gssapi/] 
when the authentication is SPNEGO. The attached file has the patch applied.

If you want, I can create a branch and pull request this change.

  was:
When connecting to Phoenix from Python using "SPNEGO" as the authentication 
mechanism an exception occurs:

{{import phoenixdb}}
{{ import phoenixdb.cursor}}
{{ database_url = 'http://myphoenixdb:8765/'}}
{{ conn = phoenixdb.connect(database_url, autocommit=True, 
authentication="SPNEGO")}}

Causes this exception:

{{>>> conn = phoenixdb.connect(database_url, autocommit=True, 
authentication="SPNEGO")}}
{{venv/lib/python3.6/site-packages/phoenixdb/avatica/client.py:121: 
RuntimeWarning: Unexpected end-group tag: Not all data was converted}}
{{ if not err.ParseFromString(message.wrapped_message):}}
{{Traceback (most recent call last):}}
{{ File "<stdin>", line 1, in <module>}}
{{ File "venv/lib/python3.6/site-packages/phoenixdb/__init__.py", line 121, in 
connect}}
{{ return Connection(client, **kwargs)}}
{{ File "venv/lib/python3.6/site-packages/phoenixdb/connection.py", line 53, in 
__init__}}
{{ self.open()}}
{{ File "venv/lib/python3.6/site-packages/phoenixdb/connection.py", line 98, in 
open}}
{{ self._client.open_connection(self._id, info=self._phoenix_props)}}
{{ File "venv/lib/python3.6/site-packages/phoenixdb/avatica/client.py", line 
363, in open_connection}}
{{ response_data = self._apply(request)}}
{{ File "venv/lib/python3.6/site-packages/phoenixdb/avatica/client.py", line 
215, in _apply}}
{{ parse_error_protobuf(response_body)}}
{{ File "venv/lib/python3.6/site-packages/phoenixdb/avatica/client.py", line 
128, in parse_error_protobuf}}
{{ raise_sql_error(err.error_code, err.sql_state, err.error_message)}}
{{ File "venv/lib/python3.6/site-packages/phoenixdb/avatica/client.py", line 
96, in raise_sql_error}}
{{ raise errors.InternalError(message, code, sqlstate)}}
{{phoenixdb.errors.InternalError: ('', 0, '', None)}}

This problem is caused by the authentication mechanism because phoenixdb is 
using Kerberos 5 instead of SPNEGO.

To resolve the issue we have patched the package applying the idea behind the 
"Explicit Mechanism" described in [https://pypi.org/project/requests-gssapi/] 
when the authentication is SPNEGO. The attached file has the patch applied.

If you want, I can create a branch and pull request this change.


> Access to Phoenix from Python using SPNEGO
> --
>
> Key: PHOENIX-6414
> URL: https://issues.apache.org/jira/browse/PHOENIX-6414
> Proje
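
For reference, the "Explicit Mechanism" idea from the requests-gssapi documentation that the description refers to looks roughly like this; the URL matches the example above, and how the resulting auth object gets threaded into phoenixdb is left out of the sketch.

    import gssapi
    import requests
    from requests_gssapi import HTTPSPNEGOAuth

    # Explicitly request the SPNEGO mechanism instead of the default Kerberos 5 OID.
    spnego = gssapi.mechs.Mechanism.from_sasl_name("SPNEGO")
    auth = HTTPSPNEGOAuth(mech=spnego)

    resp = requests.get("http://myphoenixdb:8765/", auth=auth)
    print(resp.status_code)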

Re: Topic suggestions for April Tech Talk

2021-03-17 Thread Josh Elser
While we try to figure out who will give it, I think I am safe to say we 
can offer a discussion on PQS, the Phoenix Python library, and its 
integration with Hue [1].


Is this of interest?

[1] https://gethue.com/

On 3/9/21 1:54 PM, Kadir Ozdemir wrote:

Hi All,

This is a friendly reminder that our next tech talk meeting will be held 
at 9AM PST on April 01. If you like to present a topic for the next 
meeting, please let us know by March 18. For more information about our 
tech talks, and the slides and recording for the previous presentation, 
please visit https://phoenix.apache.org/tech_talks.html 
.


I look forward to your suggestions for the next meeting.

Thanks,
Kadir


Re: [VOTE] Release of Apache Phoenix 4.16.0 RC3

2021-02-21 Thread Josh Elser
Be sure to update 
https://dist.apache.org/repos/dist/release/phoenix/KEYS with your key 
prior to releasing this.


+1 (binding)

* Sigs/xsums are good
* No unexpected files in source release
* rat:check passes
* Build from src, ran against 1.4.13

On 2/16/21 12:08 AM, Xinyi Yan wrote:

Hello Everyone,

This is a call for a vote on Apache Phoenix 4.16.0 RC3. This is the next
minor release of Phoenix 4, compatible with Apache HBase 1.3, 1.4, 1.5
and 1.6.

The VOTE will remain open for at least 72 hours.

[ ] +1 Release this package as Apache phoenix 4.16.0
[ ] -1 Do not release this package because ...

The tag to be voted on is 4.16.0RC3
https://github.com/apache/phoenix/tree/4.16.0RC3

The release files, including signatures, digests, as well as CHANGES.md
and RELEASENOTES.md included in this RC can be found at:

https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.16.0RC3

For a complete list of changes, see:

https://dist.apache.org/repos/dist/dev/phoenix/phoenix-4.16.0RC3/CHANGES.md

Artifacts are signed with my "CODE SIGNING KEY":
E4882DD3AB711587

KEYS file available here:
https://dist.apache.org/repos/dist/dev/phoenix/KEYS


Thanks,
Xinyi



Re: [Discuss] Phoenix Tech Talks

2021-02-04 Thread Josh Elser
Love it! I'll do my best to join in and listen (and participate later 
on, too ;))


I joined one from Calcite a week or two ago. They did a signup via 
Meetup.com and hosted it through Zoom. It felt very professional.


On 2/4/21 12:10 PM, Kadir Ozdemir wrote:

We are very excited to propose an idea that brings the Phoenix community
together to have technical discussions on a recurring basis. The goal is to
have a forum where we share technical knowledge we have acquired by working
on various aspects of Phoenix and to continue to bring innovation and
improvements as a community into Phoenix. We’d love to get feedback on this
idea and determine the logistics for these meetings.

Here is what we were thinking:

- Come together as a community by hosting *Phoenix tech talks* once a
month
- The topics for these meetings can be any technical subject related to
Phoenix, including the architecture, internals, features and interfaces of
Phoenix, its operational aspects in the first party data centers and cloud,
the technologies that it leverages (e.g., HBase and Zookeeper), and
technologies it can possibly leverage, adapt or follow

*Logistics*:

- *When*: First Thursday of each month at 9AM PST
- *Duration*: 90 minutes (to allow the audience to participate and ask
questions)
- We will conduct these meetings over a video conference and make the
recordings available (we are sorting out the specifics)
- The meeting agenda and past recordings will be available on the Apache
Phoenix site

We need a coordinator for these meetings to set the agenda and manage its
logistics. I will volunteer to organize these meetings and curate the
topics for the tech talks, at least initially. To get the ball rolling, I
will present the strongly consistent global indexes in the first meeting.
What do you think about this proposal?

Thanks,
Kadir



Re: [Discuss] Dropping support for older HBase version

2021-01-29 Thread Josh Elser
I'd request that we keep hbase-2.2 support around for a while longer. If 
we drop that, it's going to cause us some major headache whereas I'd 
rather see us able to keep pushing our dayjob efforts directly into 
upstream.


On 1/28/21 11:56 PM, Viraj Jasani wrote:

+1(non-binding) to EOLing the support for HBase 1.3 and 2.1 at least since
both were EOLed last year (1.4 and 2.2 can also be dropped).

Moreover, b/ 2.4.0 and 2.4.1 we have some compat issue in IA.Private class
(we need some utility from HStore which is refactored in 2.4.1), hence we
will need new compat module to support 2.4.1+ releases in Phoenix 5.2.0+
releases mostly.


On Fri, 29 Jan 2021 at 6:54 AM, Geoffrey Jacoby  wrote:


+1. Following 4.16 and 5.1's releases I'd suggest EOLing support for HBase
1.3, 1.4, 2.1 and 2.2, I believe all of which have been EOLed by the HBase
community. All of those versions also require special compatibility lib
support currently.

Geoffrey

On Thu, Jan 28, 2021 at 6:35 PM Xinyi Yan  wrote:


Hi,

I'm thinking to drop the number of supported HBase versions for future
releases. For example, HBase 1.3 was EOM'd in August 2020, do we still
consider supporting it for 4.17.0? Similarly, our current master branch also
supports EOM'd HBase versions. If Phoenix users have already upgraded their
HBase, we should not spend time supporting these old versions IMO.

I think we should do it after 4.16.0 and 5.1.0, thoughts?


Thanks,
Xinyi







Re: [Discuss] Releasing Phoenix 4.16

2021-01-12 Thread Josh Elser
IMO, a slow build is better than forcing users to run their own dev 
build of a release.


Very proud of all of the work y'all have been doing on 4.16. Keep 
pushing on the release :)


On 1/12/21 12:35 PM, Istvan Toth wrote:

I've been working this, and came up with the following:

* We no longer generate phoenix-client.jar and phoenix-server.jar, we call
them phoenix-client-hbase-X.Y.jar and phoenix-server-hbase-X.Y.jar instead.
* These file names are used in the binary assemblies, as well as maven
coordinates for consistency.
* We generate four release binaries, for each HBase profile
* We build and publish to maven the phoenix-client-hbase-X.Y and
phoenix-server-hbase-X.Y artifacts for each profile (Actually, we deploy
four times, but the rest of the artifacts are identical)

* On the downside, running the release script on a Macbook takes several
hours, as we build HBase 4 times and Phoenix 8 times on the slow and kludgy Mac
Docker filesystem.

The almost finished patch to the build system is linked to
https://issues.apache.org/jira/browse/PHOENIX-6307

4.x would have the same build system changes once this is approved.

Please check it out, and let me know if this solution is satisfactory, or
if you have a better idea.

regards
Istvan


On Fri, Jan 8, 2021 at 4:22 PM Istvan Toth  wrote:


As I started to work on this I realized that while providing binary
tarballs for each HBase profile is fine,
this does not solve the maven artifact issue.

Are we OK with publishing a single phoenix-client maven artifact (for the
oldest HBase),
or do we want to publish a separate one for each HBase version ?

I looked at publishing multiple client versions, but none of them are
particularly easy or attractive.

The best I could come up with is adding a separate maven module for (each
version x embedded) (i.e. 6 for 4.x, 8 for master),
and activating them according to hbase.profile.

This would also mean that we need to add the hbase version to the artifact
id. i.e.: phoenix-client-hbase-2.1

Once we publish separate binaries for the HBase profiles, we can undo the
change that excludes compat-module from phoenix-server,
and shade it in again for the binary assemblies.

In this case we'd also have to do something about the phoenix-server maven
artifacts. Either they get the same treatment as phoenix-client,
or we simply skip publishing them. I personally do not see anyone getting
phoenix-server.jar from maven.

The easiest version is
* publish the oldest profile phoenix-client to maven
* do not publish phoenix-server to maven
* add compat-module to phoenix-server for the binary artifact


regards
Istvan

On Fri, Jan 8, 2021 at 6:56 AM Istvan Toth  wrote:


On the release binaries:

The current solution (which the default profile change has broken) was
based on Lars's idea at

https://issues.apache.org/jira/browse/PHOENIX-5902?focusedCommentId=17125122&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17125122
However, I agree that providing separate assemblies for each HBase
profile is better for our users, as they won't have to rebuild Phoenix to
take advantage of any new features, and to get the general  improvements in
later HBase releases.
I have opened https://issues.apache.org/jira/browse/PHOENIX-6307 to
track this.

On the 5.1 release:

Yes, I do want to release 5.1 shortly. In fact $dayjob desperately needs
it ASAP.
I have a few older PRs waiting for review that fell between the cracks,
but as soon as those are merged, I want to cut the first RC.
It would be nice to have 4.16 and 5.1 as close as possible, and
PHOENIX-6211 seems to be ready, so I hope to include it in 5.1 too.
I will start an official 5.1 release thread and volunteer to be the
release manager soon. (unless you want to take that up too, Xinyi).



On Fri, Jan 8, 2021 at 1:08 AM Xinyi Yan  wrote:


If we can modify the dev/create-release scripts and make them work for
the
4.16 release with this hbase.profile option, it would make our life much
easier to release multiple HBase profiles from the single branch in the
future too (the master branch will have a release shortly, right?).
Geoffrey
and Istvan, what do you think?

Thanks,
Xinyi

On Thu, Jan 7, 2021 at 11:28 AM Geoffrey Jacoby 
wrote:


Thanks for bringing up the default branch issue, Istvan, I've been meaning to start a conversation about it on this list.

As part of PHOENIX-5435, I changed the default 4.x HBase release to 1.5 and the default 5.x HBase to 2.3 because the WAL annotation feature introduced by 5435 only works with HBase 1.5+ or 2.3+. (It depends on a coproc hook introduced in HBASE-22623). That means that all tests of that feature must no-op when run against an earlier HBase, which means that it would never be tested in our CI pipelines if the default was 1.3 or 2.1.

This has come up quite a few times recently. In particular, the new
indexing framework runs in a degraded state against HBase 2.1 and 2.2
(still better than the old indexing 

Re: [VOTE] Release of Apache Tephra 0.16.0RC2 as Apache Tephra 0.16.0

2020-11-30 Thread Josh Elser

+1 (binding)

* xsums/sigs OK
* CHANGES has reasonable content
* mvn apache-rat:check passes and `find . -type f` doesn't show anything 
unreasonable

* Can build the code
* Ran many unit tests, but cancelled at ~1hr mark.
* Can build Phoenix master against 0.16.0rc2

On 11/30/20 8:15 AM, Istvan Toth wrote:

Please vote on this Apache Phoenix Tephra release candidate,
phoenix-tephra-0.16.0RC2

The VOTE will remain open for at least 72 hours.

[ ] +1 Release this package as Apache phoenix tephra 0.16.0
[ ] -1 Do not release this package because ...

The tag to be voted on is 0.16.0RC2

   https://github.com/apache/phoenix-tephra/tree/0.16.0RC2

The release files, including signatures, digests, as well as CHANGES.md
and RELEASENOTES.md included in this RC can be found at:

   https://dist.apache.org/repos/dist/dev/phoenix/phoenix-tephra-0.16.0RC2/

Maven artifacts are available in a staging repository at:

   https://repository.apache.org/content/repositories/orgapachephoenix-1206/

Artifacts were signed with the 0x794433C7 key which can be found in:

   https://dist.apache.org/repos/dist/release/phoenix/KEYS

To learn more about Apache Phoenix Tephra, please see

   https://tephra.incubator.apache.org/

Thanks,
Istvan



Re: Please describe website update procedure for updating Phoenix and Omid

2020-11-30 Thread Josh Elser
Should we just incorporate the omid and tephra websites into phoenix.a.o 
going forward?


I'm surprised infra didn't kill the old websites when the IPMC 
"graduated" them.


ASF websites are either updated via svn pub-sub, git pub-sub, or via ASF 
CMS (which might be EOL, I forget).


svn/git pub-sub essentially work the same way: when you commit to a 
certain branch in a repository, magic happens on the infra side to 
update the corresponding website. For SVN pubsub, this is just some 
repo. For Git pubsub, the website branch defaults to "asf-site".


ASF CMS was a home-grown content management system in which you would go 
into your website, click a special bookmark, use a WYSIWYG editor, 
stage, and publish changes.


If we don't know what Omid (and Tephra) are using, we can ask infra.

On 11/25/20 2:35 AM, Istvan Toth wrote:

I have documented my unsuccessful attempt to update the Omid website in
https://issues.apache.org/jira/browse/OMID-190

Still looking for any information on update procedure for both websites.

regards
Istvan

On Mon, Nov 9, 2020 at 1:30 PM Istvan Toth  wrote:


Hi!

We have documented how to update the Phoenix website on the website itself.
However, I could not find similar documentation for Omid and Tephra.

If you have updated either website in the past, or know the procedure to
update either, please describe the procedure, or link the relevant
documentation.

thanks in advance
Istvan





Re: [VOTE] Release of Apache Omid 1.0.2

2020-11-20 Thread Josh Elser

(Because I accidentally sent just to Istvan the first time)

On 11/19/20 3:27 PM, Josh Elser wrote:

+1 binding

* NOTICE has out of date copyright year. Fix for later. License appears 
fine (some copyright retained by Yahoo on some benchmark files, it 
seems, but still apache licensed)

* xsums/sigs OK
* Some errors reported by `mvn apache-rat:check`. Fine to ship in this, 
IMO. Let's get fixed for next release.


```
   misc/findbugs-exclude.xml
   misc/omid_checks.xml
   bintray-settings.xml
   doc/images/ModuleDependencies.graffle
   doc/site/site.xml
   .travis.yml
```

* Can run unit tests
* Can build Phoenix's master branch with 1.0.2rc0 (have to use the 
profile `-Phbase-2` on phoenix-omid when building). A smell we can fix 
later in omid.

* Phoenix unit tests pass with 1.0.2rc0

Looks great to me by that! Nice work, Istvan.

- Josh

On 11/17/20 10:24 AM, Istvan Toth wrote:

Please vote on this Apache phoenix omid release candidate,
phoenix-omid-1.0.2RC0

The VOTE will remain open for at least 72 hours.

[ ] +1 Release this package as Apache phoenix omid 1.0.2
[ ] -1 Do not release this package because ...

The tag to be voted on is 1.0.2RC0:

   https://github.com/apache/phoenix-omid/tree/1.0.2RC0

The release files, including signatures, digests, as well as CHANGES.md
and RELEASENOTES.md included in this RC can be found at:

   https://dist.apache.org/repos/dist/dev/phoenix/phoenix-omid-1.0.2RC0/

Maven artifacts are available in a staging repository at:

   
https://repository.apache.org/content/repositories/orgapachephoenix-1203/


Artifacts were signed with the 0x794433C7 key which can be found in:

   https://dist.apache.org/repos/dist/release/phoenix/KEYS

To learn more about Apache phoenix omid, please see

   https://omid.incubator.apache.org/

Thanks,
Istvan



Re: Phoenix connection issue

2020-10-23 Thread Josh Elser

(-to: dev@phoenix, +bcc: dev@phoenix, +to: user@phoenix)

I've taken the liberty of moving this over to the user list.

Typically, such an exception is related to Kerberos authentication, when 
the HBase service denies an incoming, non-authenticated client. 
However, since you're running inside of a VM, I'd suggest you validate 
that your environment is sane first.


HDP 2.6 never shipped an HBase 1.5 release, so it seems like you have 
chosen to own your own set of versions. I'll assume that you have 
already validated that the versions of Hadoop, HBase, and Phoenix that 
you are running are compatible with one another. A common debugging 
trick is to strip away the layers of complexity so that you can isolate 
your problem. Can you talk to HBase or Phoenix directly, outside of Presto?
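
If it helps, here is a minimal sketch of such a direct check over JDBC. It assumes the phoenix-client jar is on the classpath and reuses the connection URL from your catalog config below; adjust host/znode as needed:

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Minimal sanity check: talk to Phoenix directly, with Presto taken out of the picture.
// Reuses the ZK quorum and znode from the catalog config below.
public class PhoenixSmokeTest {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:phoenix:localhost:2181:/hbase-unsecure";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 5")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}
```

If that works, the problem is likely in the Presto-to-Phoenix layer; if it fails the same way, the issue is in your HBase/Phoenix setup itself.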


On 10/23/20 8:58 AM, Varun Rajavelu wrote:

Hi,

I'm currently using HDP 2.6 with HBase 1.5 and Phoenix 4.7, and a Presto server
(Presto 344). I'm facing an issue while connecting the Presto server to HBase:

*./presto --server localhost:14100 --catalog phoenix*
*Issue:*
Fri Oct 23 18:11:06 IST 2020, null, java.net.SocketTimeoutException:
callTimeout=6, callDuration=69066: Call to
sandbox-hdp.hortonworks.com/127.0.0.1:16020 failed on local exception:
java.io.EOFException row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at
region=hbase:meta,,1.1588230740,
hostname=sandbox-hdp.hortonworks.com,16020,1603447852070,
seqNum=0

*My presto catalog Config:*
connector.name=phoenix
phoenix.connection-url=jdbc:phoenix:localhost:2181:/hbase-unsecure
phoenix.config.resources=/home/desa/Downloads/hbase_schema/hbase-site.xml

Kindly please have a look and help me to resolve this issue.



Re: [DISCUSS] Phoenix bot account for GitHub

2020-10-21 Thread Josh Elser

Sounds good to me. Yes, would say that private SVN space is good.

I can't find any existing private space, so I think it would be an ask 
to infra, e.g. https://issues.apache.org/jira/browse/INFRA-15461



On 10/21/20 1:23 AM, Istvan Toth wrote:

Hi!

I've recently implemented the Yetus PR checks for GitHub PR, which for the
most part seem to work well.

However, it seems that none of the available GitHub credentials in Jenkins
let the job comment on the PR, so ATM the jobs are using my GitHub account.
It is not very professional looking, and I get spammed with mail on every
entry on every ticket that has a PR, which makes life difficult for me.

Looking at HBase (as ever), they have created a bot account, and are using
it for the same purpose.

I propose that we do similarly. The GitHub docs seem to indicate that it is OK.

One open question is how to share the credentials, so that I am not the
only one with access. I seem to recall that we have a private SVN or git
repo for Phoenix members somewhere, that we could use to store the
login/password for it, but I cannot find it now.

Please share your opinion, and point me to the private repo, or the docs
that describe it.

regards
Istvan



Re: [VOTE] Release of phoenix-thirdparty 1.0.0

2020-10-21 Thread Josh Elser

+1 (binding)

* L&N seem fine
* RELEASENOTES.md is empty, maybe we'll have content in there later? 
(Mentioning in case there should have been content)

* phoenix-shaded-guava jar looks sane
* apache-rat:check passes
* Nice content in CHANGES.md
* Built phoenix.git with this release

On 10/14/20 3:05 AM, Istvan Toth wrote:

Please vote on this Apache phoenix-thirdparty release candidate,
phoenix-thirdparty-1.0.0RC0

The VOTE will remain open for at least 72 hours.

[ ] +1 Release this package as Apache phoenix-thirdparty 1.0.0
[ ] -1 Do not release this package because ...

The tag to be voted on is 1.0.0RC0

   https://github.com/apache/phoenix-thirdparty/tree/1.0.0RC0

The release files, including signatures, digests, as well as CHANGES.md
and RELEASENOTES.md included in this RC can be found at:


https://dist.apache.org/repos/dist/dev/phoenix/phoenix-thirdparty-1.0.0RC0/

Artifacts were signed with the ${GPG_KEY} key which can be found in:

   https://dist.apache.org/repos/dist/release/phoenix/KEYS

To learn more about Apache Phoenix, please see

   http://phoenix.apache.org/

Thanks,
Istvan



Re: [DISCUSS] Remove Omid and Tephra server component from Phoenix distribution

2020-09-11 Thread Josh Elser

Sounds reasonable to me.

We have the same kind of thing going since phoenix-queryserver was moved 
out to its own repository. Maybe we can come up with some conventions as 
to how we "overlay" things so that Omid, PQS, and Tephra can all have 
some semblance of familiarity?


I think that would mean, daemons and such that Omid/Tephra need to run 
would be launched via an assembly in their respective code-bases. I 
guess the difference to PQS is that each of them has jar(s) that would 
also need to be added to the HBase RegionServer classpath?


On 9/11/20 11:09 AM, Istvan Toth wrote:

Hi!

We are currently shipping startup scripts and [some of] the JARs necessary
to start the server components of Omid and Tephra in the Phoenix
distribution.

The JARs for OMID are manually enumerated to be added to the distribution,
and have to be updated whenever the Omid dependencies change (and are
currently not enough to start Omid TSO), while the JARS for Tephra seem to
be completely missing.

I propose that we remove both from the Phoenix assembly, and instead
document (link to the corresponding project documentations) how to install
and run the server components for Omid and Tephra.

This would free us from the burden of having to duplicate and maintain the
Omid and Tephra server runtimes in our distribution, and save our users the
frustration when we fail to do so.

Looking forward to hearing your thoughts on this.

best regards
Istvan



Re: [VOTE] Release of phoenixdb 1.0.0 RC0

2020-09-10 Thread Josh Elser

+1 (binding)

* Xsums/sigs are good
* dist/dev/phoenix/KEYS was updated but dist/release/phoenix is the 
official KEYS file that you need to add your key to, Istvan

* dev-support's RAT check passed
* No unexpected files in the source release
* Was able to run tests against all Python versions in tox.ini
- I did run into issues with the QueryServerBasicsIT going into a 
fail-loop state (where HBase was stuck in ConnectionLoss), but I think 
that's just stressing the tiny JVM, not a product defect.


Great work Istvan!

On 9/9/20 5:09 AM, Istvan Toth wrote:

Hello Everyone,

This is a call for a vote on phoenixdb 1.0.0 RC0.

PhoenixDB is native Python driver for accessing Phoenix via Phoenix Query
Server.

This version is the first version to be released by Apache Phoenix Project,
and contains the following improvements compared to the previous 0.7
release by the original author.

- Replaced bundled requests_kerberos with request_gssapi library
- Use default SPNEGO Auth settings from request_gssapi
- Refactored authentication code
- Added support for specifying server certificate
- Added support for BASIC and DIGEST authentication
- Fixed HTTP error parsing
- Added transaction support
- Added list support
- Rewritten type handling
- Refactored test suite
- Removed shell example, as it was python2 only
- Updated documentation
- Added SQLAlchemy dialect
- Implemented Avatica Metadata API
- Misc fixes
- Licensing cleanup

The source release consists of the contents of the python-phoenixdb
directory of the phoenix-queryserver repository.

The source tarball, including signatures, digests, etc can be found at:

https://dist.apache.org/repos/dist/dev/phoenix/python-phoenixdb-1.0.0-rc0/src/python-phoenixdb-1.0.0-src.tar.gz
https://dist.apache.org/repos/dist/dev/phoenix/python-phoenixdb-1.0.0-rc0/src/python-phoenixdb-1.0.0-src.tar.gz.asc
https://dist.apache.org/repos/dist/dev/phoenix/python-phoenixdb-1.0.0-rc0/src/python-phoenixdb-1.0.0-src.tar.gz.sha256
https://dist.apache.org/repos/dist/dev/phoenix/python-phoenixdb-1.0.0-rc0/src/python-phoenixdb-1.0.0-src.tar.gz.sha512

Artifacts are signed with my "CODE SIGNING KEY":
825203A70405BC83AECF5F7D97351C1B794433C7

KEYS file available here:
https://dist.apache.org/repos/dist/dev/phoenix/KEYS


The hash and tag to be voted upon:
https://gitbox.apache.org/repos/asf?p=phoenix-queryserver.git;a=commit;h=3360154858e27cabe258dfb33b37ec31ed3bd210
https://gitbox.apache.org/repos/asf?p=phoenix-queryserver.git;a=tag;h=refs/tags/python-phoenixdb-1.0.0-rc0

Vote will be open for at least 72 hours. Please vote:

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Thanks,
The Apache Phoenix Team



[jira] [Updated] (PHOENIX-5881) Port MaxLookbackAge logic to 5.x

2020-09-09 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-5881:

Attachment: PHOENIX-5881.v4.patch

> Port MaxLookbackAge logic to 5.x
> 
>
> Key: PHOENIX-5881
> URL: https://issues.apache.org/jira/browse/PHOENIX-5881
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Blocker
> Fix For: 5.1.0
>
> Attachments: PHOENIX-5881.v1.patch, PHOENIX-5881.v2.patch, 
> PHOENIX-5881.v3.patch, PHOENIX-5881.v4.patch
>
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> PHOENIX-5645 wasn't included in the master (5.x) branch because an HBase 2.x 
> change prevented the logic from being useful in the case of deletes, since 
> HBase 2.x no longer allows us to show deleted cells on an SCN query before 
> the point of deletion. Unfortunately, PHOENIX-5645 wound up requiring a lot 
> of follow-up work in the IndexTool and IndexScrutinyTool to deal with its 
> implications, and because of that, the 4.x and 5.x codebases around indexes 
> have diverged a good bit. 
> This work item is to get them back in sync, even though the behavior in the 
> face of deletes will be somewhat different, and so most likely some tests 
> will have to be changed or Ignored. 





[jira] [Resolved] (PHOENIX-6084) Error in public package when builded

2020-08-25 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-6084.
-
Resolution: Invalid

It seems like your complaint is that the _Guava_ code {{InternetDomainName}} 
does not work correctly when it has been shaded into the Phoenix client jar.

First and most importantly, we do not package Guava for downstream users to 
leverage. It is present for Phoenix to use internally. You should not be 
relying on any bundled dependencies that come with Phoenix jars as they are 
subject to change across _any_ release.

Second, if this Guava code does not work for some reason which is due to how we 
shade Guava into the phoenix-client jar, I think we would generally be 
interested in fixing that. However, as Phoenix does not use anything from this 
class, it is likely not a priority of any of the regular developers. If you 
have interest in debugging why your example doesn't work and can create a fix, 
please open a new Jira issue which explains the issue, includes a change, and 
demonstrates how the change fixes your issue.

> Error in public package when builded
> 
>
> Key: PHOENIX-6084
> URL: https://issues.apache.org/jira/browse/PHOENIX-6084
> Project: Phoenix
>  Issue Type: Bug
>  Components: core
>Affects Versions: 5.0.0
>Reporter: Alejandro Anadon
>Priority: Major
>
> It is very easy to reproduce. Just download the 5.0.0-HBase-2.0 
> ([https://phoenix.apache.org/download.html)] and make the next  (and very 
> simple) class in java using the "phoenix-5.0.0-HBase-2.0-client.jar" that is 
> inside of the package :
> -
> import com.google.common.net.InternetDomainName;
> public class TestBug{
>  public static void main(String[] args){
>  InternetDomainName domainName;
>  domainName=InternetDomainName.from("www.mydomain.com"); 
>  System.out.println(domainName.isUnderPublicSuffix());
>  
>  domainName=InternetDomainName.from("www.mydomain.net");
>  System.out.println(domainName.isUnderPublicSuffix());
>  
>  
> domainName=InternetDomainName.from("nonopublicsufix.org.apache.phoenix.shaded.net.al");
>  System.out.println(domainName.isUnderPublicSuffix());
>  
>  domainName=InternetDomainName.from("org.apache.phoenix.shaded.net.al");
>  System.out.println(domainName.isPublicSuffix());
>  }
>  }
> --
>  
> Expected result:
>  
> true   -> [www.mydomain.com|http://www.mydomain.com/] is under .com public 
> suffix
> true -> [www.mydomain.net|http://www.mydomain.com/] is under .net public 
> suffix
> false -> nonopublicsufix.org.apache.phoenix.shaded.net.al is under NON public 
> suffix org.apache.phoenix.shaded.net.al
> false -> org.apache.phoenix.shaded.net.al is NOT a public sufix
>  
> Actual result:
> true  -> ok
> false   -> Error
> true  -> Error
> true  -> Error
>  
> I found the error in jar (I had not the knowledge to find it in the source 
> code).
> The error is in the actually class "com.google.common.net.TldPatterns" that 
> is in the current jar  "phoenix-5.0.0-HBase-2.0-client.jar".
>  
> If you decompile it , you can see something like this:
>EXACT = ImmutableSet.of((Object)"ac", (Object)"com.ac", 
> (Object)"edu.ac", (Object)"gov.ac", 
> (Object)"org.apache.phoenix.shaded.net.ac", (Object)"mil.ac", (Object[])new 
> String[] { "org.ac", "ad", "nom.ad", "ae", "co.ae", 
> "org.apache.phoenix.shaded.net.ae", " 
> and it seems that all domains that should begings with "net" (p.e. "net.ac"), 
> when building the file (it is a generated file), it makes a mistake and add 
> "org.apache.phoenix.shaded" leaving it as "org.apache.phoenix.shaded.net.ac" 
> and this makes the error.
> I guess that as this is a generated class, there should be something bad when 
> building the complete package.
>  
> I didn't test it in other versions (I solved it building my own classes; but 
> it is not a clean solution).
>  
>  
> I know that this bug is NOT from the core of Phoenix; but in my case, I uses 
> the "phoenix-5.0.0-HBase-2.0-client.jar" for access to phenix, and the 
> "com.google.common.net.InternetDomainName" class for domains funtions.
> So I do not know if this is the right place to create the issue.
> (I take the opportunity to ask something:
> I have been able to conect phoenix using hibernate ORM satisfactorily .
> I created a "Dialect" and make 3 o 4 changes in hibernate core to change 
> "insert" to "upsert".
> It is large to splain how I did it and it is not correct to do it here, but 
> if somebody tell me the right place to do it, I'll do it).
>  
>  
>  
>  





[jira] [Updated] (PHOENIX-6068) (5.x) Read repair reduces the number of rows returned for LIMIT queries

2020-08-10 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-6068:

Priority: Blocker  (was: Major)

> (5.x) Read repair reduces the number of rows returned for LIMIT queries
> ---
>
> Key: PHOENIX-6068
> URL: https://issues.apache.org/jira/browse/PHOENIX-6068
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.3
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Blocker
> Fix For: 4.16.0
>
>
> Phoenix uses HBase PageFilter to limit the number of rows returned by scans. 
> If a scanned index row is unverified, GlobalIndexChecker repairs this row. 
> This repair operation leads to either skipping the unverified row or scanning 
> its repaired version. Every scanned row, including unverified rows, is counted 
> by the page filter. Since unverified rows are counted but not returned for 
> the query, the actual number of rows returned for a LIMIT query becomes less 
> than the set limit (i.e., page size) for the query.  





[jira] [Assigned] (PHOENIX-6068) (5.x) Read repair reduces the number of rows returned for LIMIT queries

2020-08-10 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned PHOENIX-6068:
---

Assignee: (was: Kadir OZDEMIR)

> (5.x) Read repair reduces the number of rows returned for LIMIT queries
> ---
>
> Key: PHOENIX-6068
> URL: https://issues.apache.org/jira/browse/PHOENIX-6068
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.3
>Reporter: Kadir OZDEMIR
>Priority: Blocker
> Fix For: 5.1.0
>
>
> Phoenix uses HBase PageFilter to limit the number of rows returned by scans. 
> If a scanned index row is unverified, GlobalIndexChecker repairs this row. 
> This repair operation leads to either skipping the unverified row or scanning 
> its repaired version. Every scanned row, including unverified rows, is counted 
> by the page filter. Since unverified rows are counted but not returned for 
> the query, the actual number of rows returned for a LIMIT query becomes less 
> than the set limit (i.e., page size) for the query.  





[jira] [Updated] (PHOENIX-6068) (5.x) Read repair reduces the number of rows returned for LIMIT queries

2020-08-10 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-6068:

Fix Version/s: (was: 4.16.0)
   5.1.0

> (5.x) Read repair reduces the number of rows returned for LIMIT queries
> ---
>
> Key: PHOENIX-6068
> URL: https://issues.apache.org/jira/browse/PHOENIX-6068
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.3
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Blocker
> Fix For: 5.1.0
>
>
> Phoenix uses HBase PageFilter to limit the number of rows returned by scans. 
> If a scanned index row is unverified, GlobalIndexChecker repairs this row. 
> This repair operation leads to either skipping the unverified row or scanning 
> its repaired version. Every scanned row, including unverified rows, is counted 
> by the page filter. Since unverified rows are counted but not returned for 
> the query, the actual number of rows returned for a LIMIT query becomes less 
> than the set limit (i.e., page size) for the query.  





[jira] [Created] (PHOENIX-6068) (5.x) Read repair reduces the number of rows returned for LIMIT queries

2020-08-10 Thread Josh Elser (Jira)
Josh Elser created PHOENIX-6068:
---

 Summary: (5.x) Read repair reduces the number of rows returned for 
LIMIT queries
 Key: PHOENIX-6068
 URL: https://issues.apache.org/jira/browse/PHOENIX-6068
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 5.0.0, 4.14.3
Reporter: Kadir OZDEMIR
Assignee: Kadir OZDEMIR
 Fix For: 4.16.0


Phoenix uses HBase PageFilter to limit the number of rows returned by scans. If 
a scanned index row is unverified, GlobalIndexChecker repairs this row. This 
repair operation leads to either skipping the unverified row or scanning its 
repaired version. Every scanned row, including unverified rows, is counted by 
the page filter. Since unverified rows are counted but not returned for the 
query, the actual number of rows returned for a LIMIT query becomes less than 
the set limit (i.e., page size) for the query.  
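
As a rough illustration of the mechanism described above, written against the plain HBase client API rather than the actual Phoenix internals (table name and limit are placeholders):

```
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.PageFilter;

// Sketch only: PageFilter limits the rows it evaluates, not the rows the client
// ultimately receives. If a coprocessor later suppresses some of the rows the filter
// already counted (e.g. skipped unverified index rows), fewer than 'limit' rows come back.
public class PageFilterSketch {
    static long scanWithLimit(Connection conn, String tableName, long limit) throws Exception {
        Scan scan = new Scan();
        scan.setFilter(new PageFilter(limit)); // counts every row the filter sees
        long returned = 0;
        try (Table table = conn.getTable(TableName.valueOf(tableName));
             ResultScanner scanner = table.getScanner(scan)) {
            for (Result r : scanner) {
                returned++; // can end up below 'limit' when counted rows were dropped later
            }
        }
        return returned;
    }
}
```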





[jira] [Updated] (PHOENIX-6067) (5.x) IndexTool's inline verification should not verify rows beyond max lookback age

2020-08-10 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-6067:

Priority: Blocker  (was: Major)

> (5.x) IndexTool's inline verification should not verify rows beyond max 
> lookback age
> 
>
> Key: PHOENIX-6067
> URL: https://issues.apache.org/jira/browse/PHOENIX-6067
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Swaroopa Kadam
>Priority: Blocker
> Fix For: 4.15.1
>
>
> IndexTool's inline verification should not verify rows beyond max lookback age
> Similar to Phoenix-5734





[jira] [Updated] (PHOENIX-6067) (5.x) IndexTool's inline verification should not verify rows beyond max lookback age

2020-08-10 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-6067:

Fix Version/s: (was: 4.15.1)
   5.1.0

> (5.x) IndexTool's inline verification should not verify rows beyond max 
> lookback age
> 
>
> Key: PHOENIX-6067
> URL: https://issues.apache.org/jira/browse/PHOENIX-6067
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Swaroopa Kadam
>Priority: Blocker
> Fix For: 5.1.0
>
>
> IndexTool's inline verification should not verify rows beyond max lookback age
> Similar to Phoenix-5734





[jira] [Assigned] (PHOENIX-6067) (5.x) IndexTool's inline verification should not verify rows beyond max lookback age

2020-08-10 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-6067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned PHOENIX-6067:
---

Assignee: (was: Weiming Wang)

> (5.x) IndexTool's inline verification should not verify rows beyond max 
> lookback age
> 
>
> Key: PHOENIX-6067
> URL: https://issues.apache.org/jira/browse/PHOENIX-6067
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Swaroopa Kadam
>Priority: Major
> Fix For: 4.15.1
>
>
> IndexTool's inline verification should not verify rows beyond max lookback age
> Similar to Phoenix-5734





Re: Roadmap to releasing Phoenix 5.1, PQS 6.0 and Connectors 6.0

2020-08-10 Thread Josh Elser

Agreed with Istvan on the importance of the new secondary indexing.

Thanks for the super-clear list, Geoffrey. I'll spin out clones of each 
of the BLOCKERs you mention and tag them for 5.1.0 (so that they don't get lost).


Thank you, Istvan, for your very thoughtful write-up. Looks like a 
great plan.


On 8/5/20 1:11 AM, Istvan Toth wrote:

Hi!

Strongly Consistent Global Indexes is indeed the killer new feature in 5.1,
and it's important to have it working as well as possible.
I was only vaguely aware of the deficiencies in master, thanks for
explaining it (and working on fixing them)

Thanks
Istvan

On Tue, Aug 4, 2020 at 9:22 PM Geoffrey Jacoby  wrote:


Thanks, Istvan, Richard, Josh and Rajeshbabu (and anyone else I missed :-)
) for all your hard work getting master and the ancillary projects into a
releasable state.

An additional task that I think should happen before 5.1 can be released is
getting the indexing code in master up to parity with 4.x. As you may know,
between 4.14 and the upcoming 4.16, a lot of work has been done to build a
new, self-repairing consistent secondary index framework for Phoenix. Most
of that work has been ported to the master branch, but there are still some
significant gaps.

The biggest gap comes from the new indexing framework relying heavily on
Phoenix's SCN "lookback" feature to do point-in-time selects during index
creation and verification. SCN has in the past been an unreliable tool,
because HBase's flush and major compaction code cleans up expired versions
and removes chunks of prior history. To work around this, in 4.16 we've
introduced "max lookback age" in PHOENIX-5645, which allows operators to
configure a moving window during which compactions and flushes will not
purge versions for any history.
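
To make the configuration aspect concrete, here is a minimal sketch of what enabling such a window could look like on the server side; the property name and value are assumptions for illustration, not something specified in this thread:

```
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Illustration only (assumed property name): keep roughly one hour of row history that
// flushes and major compactions are not allowed to purge, so SCN/lookback reads and
// index verification have a stable window to work against.
public class MaxLookbackConfigSketch {
    public static Configuration withLookbackWindow() {
        Configuration conf = HBaseConfiguration.create();
        conf.setLong("phoenix.max.lookback.age.seconds", 3600L); // assumed key and value
        return conf;
    }
}
```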

PHOENIX-5645 doesn't exist in master, because the coprocessor changes in
HBase 2.0 made implementing the needed compaction hooks impossible.
HBASE-24321, released in HBase 2.3, makes it possible again, though only
for Phoenix builds using the 2.3 profile. (Since it adds to an interface,
HBase compatibility guidelines prevent HBASE-24321 from being backported to
2.1 or 2.2.)

So, for 5.1 I believe we need forward ports for :
PHOENIX-5881 (implementing max lookback age for 2.3) - BLOCKER
PHOENIX-5735 (IndexTool verification distinguishes between inconsistencies
before or after max lookback age) - BLOCKER
PHOENIX-5928 (Simplification / perf improvement for index builds)
PHOENIX-5969 (Bug fix for querying indexes with limit clauses) - BLOCKER
PHOENIX-5951 (Configurable failure logging for past-max-lookback rows)
PHOENIX-6058 (Better behavior on verification when max lookback is disabled
-- needed for HBase 2.1 and 2.2 profiles) - BLOCKER

Kadir, Swaroopa, Abhishek, and Gokcen, please add in any that I missed. :-)
  Happy to discuss what is and isn't a blocker.

I plan to do PHOENIX-5881 next week, and the rest depends on it and will
follow afterward.

Thanks,

Geoffrey


On Mon, Aug 3, 2020 at 7:24 AM Istvan Toth  wrote:


Hi!

It's been more than two years since we've released 5.0.0, and almost as
long since Connectors and PQS have been split from the main repo.

I believe that we are now at the point where we've solved, or are close to solving, the issues that have prevented us from releasing a useful and relevant 5.1.0, as well as making actual releases of PQS and Connectors that are usable with both 5.x and 4.x.

The two major blockers that are still open are

- PHOENIX-6010 Create phoenix-thirdparty, and consume guava through it
- PHOENIX-5784 Phoenix-connectors doesn't work with phoenix master
branch

but I hope that we can wrap those up in the next few weeks.

This is going to be a complex process, as we'll have to release new
versions of ALL of our components. To recap, the affected projects (and
their dependencies):

- phoenix-thirdparty
- tephra
- omid (phoenix-thirdparty)
- phoenix (tephra?, omid, phoenix-thirdparty)
- PQS
- Connectors

The 5.1 release is also a point where we can revisit the decision to support Tephra. We have inherited those projects because of low developer interest, and it hasn't increased visibly since we've adopted them. Rajeshbabu and Josh have done some analysis and, as a part of our day job, are investing time first with Omid to ensure it's functional with the rest of Phoenix in its new home/packaging.

Tephra also carries the technical debt of being dependent on the discontinued Twill library, which in turn is locked to old Guava versions.

In TEPHRA-308 I am implementing the stopgap solution of shading both away, so it is not a blocker for 5.1, but concentrating on one library would probably be a smarter use of the almost non-existent developer time that goes into maintaining our transactional solution.

I plan to add a profile to build Phoenix without Tephra, thus avoiding the problematic dependencies that it has. (Alternatively, the default can be omitting Tephra, and defining a

[jira] [Created] (PHOENIX-6067) (5.x) IndexTool's inline verification should not verify rows beyond max lookback age

2020-08-10 Thread Josh Elser (Jira)
Josh Elser created PHOENIX-6067:
---

 Summary: (5.x) IndexTool's inline verification should not verify 
rows beyond max lookback age
 Key: PHOENIX-6067
 URL: https://issues.apache.org/jira/browse/PHOENIX-6067
 Project: Phoenix
  Issue Type: Improvement
Reporter: Swaroopa Kadam
Assignee: Weiming Wang
 Fix For: 4.15.1


IndexTool's inline verification should not verify rows beyond max lookback age

Similar to Phoenix-5734





Re: [DISCUSS] Solving the Guava situation by creating phoenix-thirdparty

2020-07-17 Thread Josh Elser

+1 Looks like a good plan, and happy to see PR's up for it already :)

On 7/16/20 8:44 AM, rajeshb...@apache.org wrote:

Sounds like a good plan and good work Istvan! Having a pre-shaded third-party
repo and using it in most of the dependent components like Omid and Tephra avoids
a lot of headaches with compatibility.


On Wed, Jul 15, 2020 at 11:07 AM Istvan Toth  wrote:


Hi!

I've just opened https://issues.apache.org/jira/browse/PHOENIX-6010, which
introduces an HBase-style phoenix-thirdparty repo with pre-shaded Guava.

Please check it out, and share your thoughts on it!

Copying most of the ticket here, in the hope of getting more eyes on it:

We have long-standing and well-documented problems with Guava, just like
the rest of the Hadoop components.

Adopt the solution used by HBase:

- create phoenix-thirdparty repo
- create a pre-shaded phoenix-shaded-guava artifact in it
- Use the pre-shaded Guava in every phoenix component

The advantages are well-known, but to name a few:

- Phoenix will work with Hadoop 3.1.3+
- One less CVE in our direct dependencies
- No more conflict with our consumer's Guava versions
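
To illustrate the "use the pre-shaded Guava in every phoenix component" point above, a minimal sketch of what consuming it could look like; the relocation prefix shown is an assumption for illustration, the real package is whatever phoenix-shaded-guava relocates to:

```
// Sketch: Phoenix code would import Guava through the relocated package provided by
// phoenix-shaded-guava instead of com.google.common directly, so it no longer conflicts
// with whatever Guava version Hadoop, HBase, or downstream users bring in.
// The relocation prefix below is an assumption for illustration.
import org.apache.phoenix.thirdparty.com.google.common.base.Preconditions;
import org.apache.phoenix.thirdparty.com.google.common.collect.ImmutableList;

public class ShadedGuavaUsageSketch {
    public static ImmutableList<String> pair(String a, String b) {
        Preconditions.checkNotNull(a, "a must not be null");
        Preconditions.checkNotNull(b, "b must not be null");
        return ImmutableList.of(a, b);
    }
}
```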


Notes:

- I've chosen 29.0-android for the thirdparty Guava version, as we need
Java 7 compatibility.
   - The alternative would be Guava 20 (the last non-android release
   that supports Java 7), which has CVEs.
- Tephra doesn't use phoenix-thirdparty, instead it is shaded with Twill
and Guava 13, as its Twill dependency doesn't work with recent Guavas.
   - The long-term solution would be removing the EOL twill dependency
   from it, and then converting to thirdparty, but that's quite a
lot of work,
   and I wanted to have something that works now.
- This is less of an issue for 4.x, where every component is on Guava 13-ish,
but I think once it's done, it'd be worth backporting this to 4.x as well,
if only to make backporting easier.
- If/when we agree on doing this, and have worked out the details, I'll
add the sub-tasks for getting this in master:
   - create a new repo for phoenix-thirdparty and release it
   - update and release Tephra with the shaded artifact
   - update and release Omid with the the thirdparty stuff
   - update the Omid and Tephra dependencies in Phoenix, and convert it
   to use thirdparty as well.

Please share your thoughts, opinion, and questions!





[jira] [Created] (PHOENIX-5999) Have executemany leverage ExecuteBatchRequest

2020-07-09 Thread Josh Elser (Jira)
Josh Elser created PHOENIX-5999:
---

 Summary: Have executemany leverage ExecuteBatchRequest
 Key: PHOENIX-5999
 URL: https://issues.apache.org/jira/browse/PHOENIX-5999
 Project: Phoenix
  Issue Type: Improvement
  Components: python
Reporter: Josh Elser
Assignee: Josh Elser


After some testing years ago, I wrote ExecuteBatch bindings for avatica. The 
observation was that we spent more time executing the HTTP call and parsing the 
tiny protobuf than we did in sending the update to HBase.

ExecuteBatch was a dirt-simple idea in that instead of sending one row's worth 
of parameters to bind to a statement, send many rows' worth.

e.g. before we would do:
{noformat}
execute(stmt, ['a', 'b']); execute(stmt, ['b', 'c']), ... {noformat}
but with executeBatch we can do
{noformat}
executeBatch(stmt, [['a', 'b'], ['b', 'c'], ...]) {noformat}
and send exactly one HTTP call instead of multiple. Obviously this is a huge 
saving.
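
For comparison, the same batching idea expressed through plain JDBC against Phoenix. This is only a sketch (the table and columns are made up), not the Avatica or python-phoenixdb implementation this issue is about:

```
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Sketch of the batching pattern: bind many rows' worth of parameters to one statement
// and flush them with a single executeBatch() instead of one round trip per row.
public class BatchUpsertSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:phoenix:localhost:2181"; // adjust for your cluster
        String[][] rows = { { "a", "b" }, { "b", "c" }, { "c", "d" } };
        try (Connection conn = DriverManager.getConnection(url);
             PreparedStatement stmt = conn.prepareStatement("UPSERT INTO T (K, V) VALUES (?, ?)")) {
            for (String[] row : rows) {
                stmt.setString(1, row[0]);
                stmt.setString(2, row[1]);
                stmt.addBatch();
            }
            stmt.executeBatch();
            conn.commit(); // Phoenix buffers mutations until commit
        }
    }
}
```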





[jira] [Resolved] (PHOENIX-5778) Remove the dependency of KeyStoreTestUtil

2020-07-03 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-5778.
-
Resolution: Fixed

Thanks, Guanghao! Sorry again for the delay in applying this. Always good to 
keep an eye on these kinds of things.

> Remove the dependency of KeyStoreTestUtil
> -
>
> Key: PHOENIX-5778
> URL: https://issues.apache.org/jira/browse/PHOENIX-5778
> Project: Phoenix
>  Issue Type: Improvement
>  Components: queryserver
>Affects Versions: queryserver-1.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: queryserver-1.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If I am not wrong, phoenix should reduce its dependency on hbase 
> classes/interfaces which are not marked IA.Public. KeyStoreTestUtil is just a 
> static util class. I thought phoenix query server could copy a new one and not 
> depend on hbase's KeyStoreTestUtil.





[jira] [Updated] (PHOENIX-5778) Remove the dependency of KeyStoreTestUtil

2020-07-01 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-5778:

Component/s: queryserver

> Remove the dependency of KeyStoreTestUtil
> -
>
> Key: PHOENIX-5778
> URL: https://issues.apache.org/jira/browse/PHOENIX-5778
> Project: Phoenix
>  Issue Type: Improvement
>  Components: queryserver
>Affects Versions: queryserver-1.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: queryserver-1.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If I am not wrong, phoenix should reduce its dependency on hbase 
> classes/interfaces which are not marked IA.Public. KeyStoreTestUtil is just a 
> static util class. I thought phoenix query server could copy a new one and not 
> depend on hbase's KeyStoreTestUtil.





[jira] [Updated] (PHOENIX-4844) Refactor queryserver tests to use QueryServerTestUtil

2020-07-01 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4844:

Fix Version/s: (was: queryserver-1.0.0)

> Refactor queryserver tests to use QueryServerTestUtil
> -
>
> Key: PHOENIX-4844
> URL: https://issues.apache.org/jira/browse/PHOENIX-4844
> Project: Phoenix
>  Issue Type: Improvement
>  Components: queryserver
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Minor
>
> See related JIRA: PHOENIX-4750





[jira] [Updated] (PHOENIX-4844) Refactor queryserver tests to use QueryServerTestUtil

2020-07-01 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-4844:

Component/s: queryserver

> Refactor queryserver tests to use QueryServerTestUtil
> -
>
> Key: PHOENIX-4844
> URL: https://issues.apache.org/jira/browse/PHOENIX-4844
> Project: Phoenix
>  Issue Type: Improvement
>  Components: queryserver
>Reporter: Alex Araujo
>Assignee: Alex Araujo
>Priority: Minor
> Fix For: queryserver-1.0.0
>
>
> See related JIRA: PHOENIX-4750





[jira] [Resolved] (PHOENIX-5901) Add LICENSE and NOTICE files to phoenix-queryserver

2020-07-01 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-5901.
-
Fix Version/s: queryserver-1.0.0
   Resolution: Fixed

> Add LICENSE and NOTICE files to phoenix-queryserver
> ---
>
> Key: PHOENIX-5901
> URL: https://issues.apache.org/jira/browse/PHOENIX-5901
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
>Affects Versions: queryserver-1.0.0
>Reporter: Istvan Toth
>Assignee: Josh Elser
>Priority: Blocker
> Fix For: queryserver-1.0.0
>
>
> The phoenix-queryserver repo doesn't include LICENSE or NOTICE files.
> Make sure the repo conforms to the legal requirements and the ASF standards.
> Recent changes that should be reflected in the NOTICE file (not exhaustive):
> * phoenixdb SQLAlchemy driver copyright
> * Guava HostAndPort class copyright





Re: No builds on Phoenix-Master jenkins since Feb 19th?

2020-06-23 Thread Josh Elser

Thanks Chenglei!!

On 6/21/20 12:00 PM, cheng...@apache.org wrote:













I observed that when the IT tests hang, the ZK connection exhaustion is caused 
by ViewUtil.dropChildViews, and I opened a JIRA, PHOENIX-5970, to solve it; 
it seems to work.





At 2020-06-19 14:58:27, "Istvan Toth"  wrote:

The hanging test suite issue has been fixed by Richard Antal's PHOENIX-5962.
Tests should be back to normal (that is, they're still as flakey as fresh snow).

Rajeshbabu, is there an easy way (specific message in the logs) for
identifying the ZK connection exhaustion situation?

Istvan

On Thu, Jun 18, 2020 at 2:49 AM rajeshb...@apache.org <
chrajeshbab...@gmail.com> wrote:


Just my observation related to tests hanging or flakiness: when connections are
created that involve a ZooKeeper connection, then at some point ZooKeeper sees
too many connections and does not allow further ones to be created; the cluster
then won't be terminated immediately and the tests hang. We can identify such
cases and close the connections properly, if any.

Thanks,
Rajeshbabu.

On Tue, Jun 16, 2020, 12:12 PM Istvan Toth  wrote:


What we really need is getting our test suite (and infra) sorted out.
Your suggestions are already part of the process, it's just that lately
everyone ignores them, because the precommit tests have been next to
useless for a good while.

I think that if developers can be reasonably sure that the precommit
failure that they see is not some kind of random/known flake, they (we)
will take them seriously, and not commit until they (we) get it right.

Blocking further commits until the tests are fixed is one drastic, but
possibly necessary, measure to achieve this. And once we get them fixed,
we will indeed need to be more vigilant in not letting the situation
deteriorate.

Istvan

On Mon, Jun 15, 2020 at 7:30 PM Andrew Purtell 
wrote:


So it's time to enforce an automatic revert-if-build-fails policy?
Ideally, no commit before a precommit passes.
If that fails, and someone discovers failing tests and can git bisect to the
cause, automatic revert of the offending commit.

WDYT?


On Sun, Jun 14, 2020 at 10:20 PM Istvan Toth wrote:


Unfortunately you are not missing anything, Lars :(

What's worse, the tests are not only failing, they even hang since Jun 2.
Master is in a similar state.

regards
Istvan

On Sun, Jun 14, 2020 at 12:03 AM la...@apache.org 
wrote:


Thanks Istvan.

Just checked the builds. Looks like the 4.x builds have not passed a single time since May 20th.

I hope I am missing something... Otherwise this would be pretty frustrating. :)
(Since pleading doesn't appear to help, maybe we should automatically block all commits until all tests pass...?)

-- Lars


On Tuesday, April 21, 2020, 5:33:52 AM PDT, Istvan Toth <st...@apache.org> wrote:





I've deleted all but the four Phoenix jobs listed in the JIRA.




--
*István Tóth* | Staff Software Engineer
st...@cloudera.com 

--




--
Best regards,
Andrew

Words like orphans lost among the crosstalk, meaning torn from truth's
decrepit hands
- A23, Crosstalk







[jira] [Created] (PHOENIX-5967) phoenix-client transitively pulling in phoenix-core

2020-06-19 Thread Josh Elser (Jira)
Josh Elser created PHOENIX-5967:
---

 Summary: phoenix-client transitively pulling in phoenix-core
 Key: PHOENIX-5967
 URL: https://issues.apache.org/jira/browse/PHOENIX-5967
 Project: Phoenix
  Issue Type: Bug
Reporter: Josh Elser
Assignee: Josh Elser


Looks like something happened in master where phoenix-client is now 
transitively pulling in phoenix-core, even though all of phoenix-core and its 
dependencies are included in the phoenix-client shaded artifact.

4.15.0 looks OK, so maybe something inadvertent with the hbase version 
classifier stuff, [~stoty]?





Re: [DISCUSS] Renaming phoenix-queryserver maven artifacts (and jars)

2020-06-19 Thread Josh Elser
Fine by me. I think keeping `phoenix-queryserver-client` the same is the 
one that's most important. However, since the move to the separate repo, 
and since we've not had a real release from it yet, we have the 
flexibility to make the change now.


On 6/19/20 3:31 AM, Istvan Toth wrote:

Hi!

I'd like to change the phoenix-queryserver artifact  and jar names, and
possibly rename the subproject directories as well:

This is what I have in mind:

phoenix-queryserver-parent
phoenix-queryserver-assembly
phoenix-queryserver-load-balancer
phoenix-queryserver
phoenix-queryserver-client
phoenix-queryserver-it
phoenix-queryserver-orchestrator

This would match the pre-unbundling artifact names for the client and
server,
follow the hadoop conventions, and make the resulting JARs easily
identifiable.

I've opened https://issues.apache.org/jira/browse/PHOENIX-5964 for this.

Since this is a highly visible change, I'd like to have some feedback from
the community before committing to this.

Since we haven't made a release yet, this would be a great time to get this
in order.

looking forward to hearing from you

Istvan



Re: Master branch does not compile

2020-06-19 Thread Josh Elser
I see no value in supporting 2.0. I have mixed feelings on 2.1. We 
should default to 2.2, especially with 2.3 releases incoming "soon".


We're in a good position to help HBase drive adoption of certain 
versions. While this is always sort of "implicit" (it just happens), I 
see value in it via advertising what users should pick up.


On 6/17/20 8:42 AM, cheng...@apache.org wrote:




Should we only support HBase 2.2, which is a stable 2.x release?














At 2020-06-17 16:04:58, "Istvan Toth"  wrote:

The 4.x branch has historically deprecated unsupported branches.

I'd be fine with removing support for EOL 2.x branches in master,
https://issues.apache.org/jira/browse/PHOENIX-5716 is open for this very
issue,
but would welcome some community input before working on it.

I also like Josh's idea, https://issues.apache.org/jira/browse/PHOENIX-5828 is
quite similar to it.


On Wed, Jun 17, 2020 at 2:49 AM Guanghao Zhang  wrote:


HBase 2.0 and 2.1 are EOL now. Does phoenix need to support them?

swaroopa kadam wrote on Wednesday, June 17, 2020 at 12:32 AM:


yes, that’s a good idea.

On Tue, Jun 16, 2020 at 9:29 AM Josh Elser  wrote:


Sounds like we should try to update precommit to at least compile
against _a version_ in each line 2.1/2.2/2.3 for master. Thoughts?

On 6/16/20 11:57 AM, swaroopa kadam wrote:

Thank you for the replies everyone!

It puts me at ease knowing the issue has been identified and is being
fixed.

Thanks!

On Tue, Jun 16, 2020 at 4:11 AM rajeshb...@apache.org <
chrajeshbab...@gmail.com> wrote:


I am on the PHOENIX-5905 compilation issue and will fix it today.

On Tue, Jun 16, 2020 at 3:07 PM cheng...@apache.org <cheng...@apache.org> wrote:





PHOENIX-5905 caused the master branch compile to break, because
org.apache.hadoop.hbase.security.access.GetUserPermissionsRequest is only
available in HBase 2.2.x, so with hbase.profile=2.0 or hbase.profile=2.1
the compile is broken. Should we revert PHOENIX-5905 for the moment?


The error messages are :


[ERROR] COMPILATION ERROR :
[INFO]

-

[ERROR] <








https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java

:[39,47]
cannot find symbol
symbol:   class GetUserPermissionsRequest
location: package org.apache.hadoop.hbase.security.access
[ERROR] <








https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java

:[1452,48]
cannot find symbol
symbol:   method hasUserName()
location: variable request of type








org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest

[ERROR] <








https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java

:[1452,72]
cannot find symbol
symbol:   method getUserName()
location: variable request of type








org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest

[ERROR] <








https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java

:[1458,32]
cannot find symbol
symbol:   method hasColumnFamily()
location: variable request of type








org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest

[ERROR] <








https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java

:[1458,60]
cannot find symbol
symbol:   method getColumnFamily()
location: variable request of type








org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest

[ERROR] <








https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java

:[1460,32]
cannot find symbol
symbol:   method hasColumnQualifier()
location: variable request of type








org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest

[ERROR] <








https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java

:[1460,63]
cannot find symbol
symbol:   method getColumnQualifier()
location: variable request of type








org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest

[ERROR] <








https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java

:[1461,17]
cannot find symbol
symbol:   class GetUserPermissionsRequest
location: class
org.apac

[jira] [Created] (PHOENIX-5966) Include poms in embedded maven repo

2020-06-19 Thread Josh Elser (Jira)
Josh Elser created PHOENIX-5966:
---

 Summary: Include poms in embedded maven repo
 Key: PHOENIX-5966
 URL: https://issues.apache.org/jira/browse/PHOENIX-5966
 Project: Phoenix
  Issue Type: Improvement
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 1.0.0


[~psomogyi] happened to notice that the embedded maven repo doesn't have 
pom.xml's copied into it by the dependency-plugin. There's an option for this, 
but it's false by default. Without these, pulling the artifacts for the n+1'th 
time fails (as maven can't check its freshness).





[jira] [Assigned] (PHOENIX-5901) Add LICENSE and NOTICE files to phoenix-queryserver

2020-06-16 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned PHOENIX-5901:
---

Assignee: Josh Elser  (was: Istvan Toth)

> Add LICENSE and NOTICE files to phoenix-queryserver
> ---
>
> Key: PHOENIX-5901
> URL: https://issues.apache.org/jira/browse/PHOENIX-5901
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
>Affects Versions: queryserver-1.0.0
>Reporter: Istvan Toth
>Assignee: Josh Elser
>Priority: Blocker
>
> The phoenix-queryserver repo doesn't include LICENSE or NOTICE files.
> Make sure the repo conforms to the legal requirements and the ASF standards.
> Recent changes that should be reflected in the NOTICE file (not exhaustive):
> * phoenixdb SQLAlchemy driver copyright
> * Guava HostAndPort class copyright





Re: Master branch does not compile

2020-06-16 Thread Josh Elser
Sounds like we should try to update precommit to at least compile 
against _a version_ in each line 2.1/2.2/2.3 for master. Thoughts?


On 6/16/20 11:57 AM, swaroopa kadam wrote:

Thank you for the replies everyone!

It puts me at ease knowing the issue has been identified and is being
fixed.

Thanks!

On Tue, Jun 16, 2020 at 4:11 AM rajeshb...@apache.org <
chrajeshbab...@gmail.com> wrote:


I am on the PHOENIX-5905 compilation issue and will fix it today.

On Tue, Jun 16, 2020 at 3:07 PM cheng...@apache.org 
wrote:





PHOENIX-5905 caused the master branch compile to break, because
org.apache.hadoop.hbase.security.access.GetUserPermissionsRequest is only
available in HBase 2.2.x,
so with hbase.profile=2.0 or hbase.profile=2.1 the compile is broken.
Should we revert PHOENIX-5905 for the moment?


The error messages are :


[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] https://builds.apache.org/job/Phoenix-master-matrix/HBASE_PROFILE=2.1/ws/phoenix-core/src/it/java/org/apache/phoenix/end2end/BasePermissionsIT.java:[39,47] cannot find symbol
   symbol:   class GetUserPermissionsRequest
   location: package org.apache.hadoop.hbase.security.access
[ERROR] BasePermissionsIT.java:[1452,48] cannot find symbol
   symbol:   method hasUserName()
   location: variable request of type org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest
[ERROR] BasePermissionsIT.java:[1452,72] cannot find symbol
   symbol:   method getUserName()
   location: variable request of type org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest
[ERROR] BasePermissionsIT.java:[1458,32] cannot find symbol
   symbol:   method hasColumnFamily()
   location: variable request of type org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest
[ERROR] BasePermissionsIT.java:[1458,60] cannot find symbol
   symbol:   method getColumnFamily()
   location: variable request of type org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest
[ERROR] BasePermissionsIT.java:[1460,32] cannot find symbol
   symbol:   method hasColumnQualifier()
   location: variable request of type org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest
[ERROR] BasePermissionsIT.java:[1460,63] cannot find symbol
   symbol:   method getColumnQualifier()
   location: variable request of type org.apache.hadoop.hbase.protobuf.generated.AccessControlProtos.GetUserPermissionsRequest
[ERROR] BasePermissionsIT.java:[1461,17] cannot find symbol
   symbol:   class GetUserPermissionsRequest
   location: class org.apache.phoenix.end2end.BasePermissionsIT.CustomAccessController
[ERROR] BasePermissionsIT.java:[1463,49] cannot find symbol
   symbol:   variable GetUserPermissionsRequest
   location: class org.apache.phoenix.end2end.BasePermissionsIT.CustomAccessController
[ERROR] BasePermissionsIT.java:[1467,29] cannot find symbol
   symbol:   variable GetUserPermissionsRequest
   location: class org.apache.phoenix.end2end.BasePermissionsIT.Custo
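
The usual way out of this kind of break is to keep anything that touches 2.2-only APIs such as GetUserPermissionsRequest out of the shared sources and behind a small compatibility interface, with one implementation per supported HBase minor line. A minimal sketch of that idea, using hypothetical names rather than Phoenix's actual compat modules or their Maven wiring:

    // Hypothetical names; a sketch of the compat-interface idea only, not Phoenix's real classes.
    package example.compat;

    import java.util.ServiceLoader;

    /** Version-neutral view of the permissions API; shared code depends only on this. */
    interface PermissionsCompat {
        /** True when the runtime HBase exposes GetUserPermissionsRequest (2.2+). */
        boolean supportsGetUserPermissionsRequest();
    }

    /** Each hbase-compat module would ship one implementation, discovered at runtime. */
    public final class CompatLoader {
        private CompatLoader() {}

        public static PermissionsCompat load() {
            // The compat module on the classpath registers its implementation via META-INF/services.
            for (PermissionsCompat impl : ServiceLoader.load(PermissionsCompat.class)) {
                return impl;
            }
            throw new IllegalStateException("No HBase compatibility module on the classpath");
        }
    }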


















At 2020-06-16 14:26:23, "Istvan Toth"  wrote:

Hi!

In some cases specifying non-default HBase and Hadoop versions can cause
this.
Please report your full maven command line, and I'll look into it.
In the meantime you can disable the dependency check with -D
mdep.analyze.skip=true

Istvan

On Mon, Jun 15, 2020 at 7:57 PM swaroopa kadam <

swaroopa.kada...@gmail.com>

wrote:


Thank you for the response, Andrew and Geoffrey. Below is the error message I see:

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-dependency-plugin:3.1.1:analyze-only
(enforce-dependencies) on project phoenix-core: Dependency problems found


Re: [DISCUSS] Public Python PhoenixDB releases

2020-06-16 Thread Josh Elser
Yup, let's just draw a number (whatever we'd like, as long as it follows 
the standard PEP for versioning). When we release it, we can advertise 
whatever level of stability we expect it to have (i.e. "This is an 
alpha, you can try it if you want").


ASF is very strict when it comes to releases in source form. In the 
legalese for the foundation, _only_ source releases are actually 
considered a release. That means, all of the foundation documentation 
about release policies applies to just the source release, further, 
that's really all we have to vote on.


My current interpretation of the policy is that we should also try to 
validate any binaries we build to make sure that they still follow 
proper licensing guidelines for the foundation (but this is a "should" 
and not a "must" because they are not technically real releases).


All that to say, when we do a vote in the ASF, it must be on source. If 
we have an egg to also use for testing, great. But, the pushing an 
approved ASF release to PyPi is a post-vote step.


Shout if this doesn't make sense. It reads like you're on the right path :)

On 6/16/20 3:06 AM, Istvan Toth wrote:

The Beam guide looks interesting, we could certainly borrow from it.

I'm fine with your suggested process. Here's my interpretation of it:

- No dev releases on PyPI
- Decoupling PhoenixDB versioning from PQS versioning
- Making a proper minor/patch release when important features/fixes land
- Testing the release process /making RC releases on TestPyPI

One more detail that we haven't discussed yet:
The previous PyPI releases were source packages.
I suggest that we can keep releasing PhoenixDB as a source package, as it
is pure python.

Istvan

On Tue, Jun 16, 2020 at 3:32 AM Josh Elser  wrote:


ASF releases do not have to be on the order of years. As long as we have
three people to vote, we can do releases as often as we'd like. The
lower-bound on release cadence is probably on the order of "weeks" (as
opposed to days) but that's primarily limited by bandwidth of our
volunteers :)

Anyways, if that's the major worry, let's just push to doing proper
votes. We should not have the problem in having people turn out to vote.

I'll put L review to the top of my list to do (my) tomorrow morning.
Sorry I haven't gotten to it yet.

On 6/15/20 1:42 PM, Istvan Toth wrote:

TestPyPi certainly seems useful to practice the release process, and avoid
mistakes with it.

However, I don't think that it is a substitute for a .devN release, which
(in my mind) is meant to give the users something to use until we get all
our ducks in a row to do a proper 1.0 release. The primary user in this
case would be Hue, which needs the new features, and is currently mirroring
our dev sources into their own repo. (TBH, that's their normal modus
operandi anyway, but for their Python 3 version, they'd like to switch to
PyPI modules.)

To use a dev package, the users have to explicitly specify the exact
development version, (or go all-out, and use development versions of all
packages), so there is no chance of them accidentally upgrading from a
stable release. Making them specify another repo as well to install a dev
release sounds like too much to me.

If on the other hand we plan on releasing the current state of PhoenixDB
as 1.0 soon (like in a month), and then do frequent-ish releases as new
features/fixes are (hopefully) added, then we can skip the dev releases
altogether. The Phoenix norm so far is more like yearly releases (if we are
lucky).

regards
István

On Mon, Jun 15, 2020 at 7:08 PM Josh Elser  wrote:


Did a little searching..

* Found a 2011 blog post from sqlalchemy which said (as a project) they
would not post devN releases to pypi
* There's a TestPyPi [1] instead which seems to be for staging work.

Could we play with staging there? And then push to pypi (real) after we
do a normal vote?

I think we can keep our python release super-low friction :)

[1] https://packaging.python.org/guides/using-testpypi/

On 6/15/20 1:02 PM, Josh Elser wrote:

Hey Istvan,

Great of you to drive this work!

I do have one concern about pushing the dev releases to PyPi (I'm
assuming that's what you mean). I understand that in the Python world a
"dev" release indicates that this isn't an "official" release [1].

At the ASF, you're correct that we, developers, are empowered to make
"builds" of our product(s) for the sake of our development. There is a
clear line when that "build" is published in a location to which a user
may find it and begin to use it. There has been at least one example in
recent memory of a project which made "developer only releases" without
proper voting, but published them in a high-visibility location and
(inadvertently or intentionally) circumvented the ASF release

requirements.


My biggest concern is: would a user who ran a `pip install phoenixdb`
after we make a

Re: [DISCUSS] Public Python PhoenixDB releases

2020-06-15 Thread Josh Elser
ASF releases do not have to be on the order of years. As long as we have 
three people to vote, we can do releases as often as we'd like. The 
lower-bound on release cadence is probably on the order of "weeks" (as 
opposed to days) but that's primarily limited by bandwidth of our 
volunteers :)


Anyways, if that's the major worry, let's just push to doing proper 
votes. We should not have the problem in having people turn out to vote.


I'll put L review to the top of my list to do (my) tomorrow morning. 
Sorry I haven't gotten to it yet.


On 6/15/20 1:42 PM, Istvan Toth wrote:

TestPyPi certainly seems useful to practice the release process, and avoid
mistakes with it.

However, I don't think that it is a substitute for a .devN release, which
(in my mind) is meant to give the users something to use until we get all
our ducks in a row to do a proper 1.0 release. The primary user in this
case would be Hue, which needs the new features, and is currently mirroring
our dev sources into their own repo. (TBH, that's their normal modus
operandi anyway, but for their Python 3 version, they'd like to switch to
PyPI  modules)

To use a dev package, the users have to explicitly specify the exact
development version, (or go all-out, and use development versions of all
packages), so there is no chance of them accidentally upgrading from a
stable release. Making them specify another repo as well to install a dev
release sounds like too much to me.

If on the other hand we plan on releasing the current state of PhoenixDB
as 1.0 soon (like in a month), and then do frequent-ish releases as new
features/fixes are (hopefully) added, then we can skip the dev releases
altogether. The Phoenix norm so far is more like yearly releases (if we are
lucky).

regards
István

On Mon, Jun 15, 2020 at 7:08 PM Josh Elser  wrote:


Did a little searching..

* Found a 2011 blog post from sqlalchemy which said (as a project) they
would not post devN releases to pypi
* There's a TestPyPi [1] instead which seems to be for staging work.

Could we play with staging there? And then push to pypi (real) after we
do a normal vote?

I think we can keep our python release super-low friction :)

[1] https://packaging.python.org/guides/using-testpypi/

On 6/15/20 1:02 PM, Josh Elser wrote:

Hey Istvan,

Great of you to drive this work!

I do have one concern about pushing the dev releases to PyPi (I'm
assuming that's what you mean). I understand that in the Python world a
"dev" release indicates that this isn't an "official" release [1].

At the ASF, you're correct that we, developers, are empowered to make
"builds" of our product(s) for the sake of our development. There is a
clear line when that "build" is published in a location to which a user
may find it and begin to use it. There has been at least one example in
recent memory of a project which made "developer only releases" without
proper voting, but published them in a high-visibility location and
(inadvertently or intentionally) circumvented the ASF release

requirements.


My biggest concern is: would a user who ran a `pip install phoenixdb`
after we make a devN release get the last-stable release (0.7) or the
new devN release? If they would get the dev release, does PyPi give us
any flexibility to prevent this from happening? I believe that publishing
to the "official" location should be treated as a release if it means that
a user could begin to use it with "low friction".


- Josh


[1] https://www.python.org/dev/peps/pep-0440/#id24

On 6/12/20 7:11 AM, Istvan Toth wrote:

Hi!

Even though we have adopted the PhoenixDB driver in 2018, there hasn't
been
much activity on it, and the version available on PyPI is still the
old 0.7
release by Lukas.

Recently I have worked on it quite a bit, adding fixes and new features,
and adopting the partial SQLAlchemy driver from pyPhoenix, thus enabling
Hue support.

I plan to start releasing the driver publicly on PyPI. Lukas has kindly
shared control of the PyPI phoenixdb project with us, so we are good
to go.

The short-term plan is to release 1.0.0.dev0 and later 1.0.0.devN
releases
from the current HEAD of phoenix-queryserver. As these will be dev
releases, I am not planning to follow a formal release process for these.


When and how to release 1.0.0 final, and the versioning scheme/process to
use after that, are still not finalized.

Please join the discussion here, or in
https://issues.apache.org/jira/browse/PHOENIX-5939 if you have any
questions or suggestions!

regards
Istvan








Re: [DISCUSS] Public Python PhoenixDB releases

2020-06-15 Thread Josh Elser

One more thing :)

Apache Beam appears to have an excellent release guide which includes 
their process which involves PyPi -- 
https://beam.apache.org/contribute/release-guide/


Maybe we can copy them?

On 6/15/20 1:08 PM, Josh Elser wrote:

Did a little searching..

* Found a 2011 blog post from sqlalchemy which said (as a project) they 
would not post devN releases to pypi

* There's a TestPyPi [1] instead which seems to be for staging work.

Could we play with staging there? And then push to pypi (real) after we 
do a normal vote?


I think we can keep our python release super-low friction :)

[1] https://packaging.python.org/guides/using-testpypi/

On 6/15/20 1:02 PM, Josh Elser wrote:

Hey Istvan,

Great of you to drive this work!

I do have one concern about pushing the dev releases to PyPi (I'm 
assuming that's what you mean). I understand that in the Python world 
a "dev" release indicates that this isn't an "official" release [1].


At the ASF, you're correct that we, developers, are empowered to make 
"builds" of our product(s) for the sake of our development. There is a 
clear line when that "build" is published in a location to which a 
user may find it and begin to use it. There has been at least one 
example in recent memory of a project which made "developer only 
releases" without proper voting, but published them in a 
high-visibility location and (inadvertently or intentionally) 
circumvented the ASF release requirements.


My biggest concern is: would a user who ran a `pip install phoenixdb` 
after we make a devN release get the last-stable release (0.7) or the 
new devN release? If they would get the dev release, does PyPi give us 
any flexibility to prevent this from happening? I believe that publishing
to the "official" location should be treated as a release if it means that
a user could begin to use it with "low friction".


- Josh


[1] https://www.python.org/dev/peps/pep-0440/#id24

On 6/12/20 7:11 AM, Istvan Toth wrote:

Hi!

Even though we have adopted the PhoenixDB driver in 2018, there 
hasn't been
much activity on it, and the version available on PyPI is still the 
old 0.7

release by Lukas.

Recently I have worked on it quite a bit, adding fixes and new features,
and adopting the partial SQLAlchemy driver from pyPhoenix, thus enabling
Hue support.

I plan to start releasing the driver publicly on PyPI. Lukas has kindly
shared control of the PyPI phoenixdb project with us, so we are good 
to go.


The short-term plan is to release 1.0.0.dev0 and later 1.0.0.devN releases
from the current HEAD of phoenix-queryserver. As these will be dev
releases, I am not planning to follow a formal release process for 
these.


When and how to release 1.0.0 final, and the versioning scheme/process to
use after that, are still not finalized.

Please join the discussion here, or in
https://issues.apache.org/jira/browse/PHOENIX-5939 if you have any
questions or suggestions!

regards
Istvan



Re: [DISCUSS] Public Python PhoenixDB releases

2020-06-15 Thread Josh Elser

Did a little searching..

* Found a 2011 blog post from sqlalchemy which said (as a project) they 
would not post devN releases to pypi

* There's a TestPyPi [1] instead which seems to be for staging work.

Could we play with staging there? And then push to pypi (real) after we 
do a normal vote?


I think we can keep our python release super-low friction :)

[1] https://packaging.python.org/guides/using-testpypi/

On 6/15/20 1:02 PM, Josh Elser wrote:

Hey Istvan,

Great of you to drive this work!

I do have one concern about pushing the dev releases to PyPi (I'm 
assuming that's what you mean). I understand that in the Python world a 
"dev" release indicates that this isn't an "official" release [1].


At the ASF, you're correct that we, developers, are empowered to make 
"builds" of our product(s) for the sake of our development. There is a 
clear line when that "build" is published in a location to which a user 
may find it and begin to use it. There has been at least one example in 
recent memory of a project which made "developer only releases" without 
proper voting, but published them in a high-visibility location and 
(inadvertently or intentionally) circumvented the ASF release requirements.


My biggest concern is: would a user who ran a `pip install phoenixdb` 
after we make a devN release get the last-stable release (0.7) or the 
new devN release? If they would get the dev release, does PyPi give us 
any flexibility to prevent this from happening? I believe that publishing
to the "official" location should be treated as a release if it means that
a user could begin to use it with "low friction".


- Josh


[1] https://www.python.org/dev/peps/pep-0440/#id24

On 6/12/20 7:11 AM, Istvan Toth wrote:

Hi!

Even though we have adopted the PhoenixDB driver in 2018, there hasn't 
been
much activity on it, and the version available on PyPI is still the 
old 0.7

release by Lukas.

Recently I have worked on it quite a bit, adding fixes and new features,
and adopting the partial SQLAlchemy driver from pyPhoenix, thus enabling
Hue support.

I plan to start releasing the driver publicly on PyPI. Lukas has kindly
shared control of the PyPI phoenixdb project with us, so we are good 
to go.


The short-term plan is to release 1.0.0.dev0 and later 1.0.0.devN 
releases

from the current HEAD of phoenix-queryserver. As these will be dev
releases, I am not planning to follow a formal release process for these.

When and how to release 1.0.0 final, and the versioning scheme/process to
use  after that are still not finalized.

Please join the discussion here, or in
https://issues.apache.org/jira/browse/PHOENIX-5939 if you have any
questions or suggestions!

regards
Istvan



Re: [DISCUSS] Public Python PhoenixDB releases

2020-06-15 Thread Josh Elser

Hey Istvan,

Great of you to drive this work!

I do have one concern about pushing the dev releases to PyPi (I'm 
assuming that's what you mean). I understand that in the Python world a 
"dev" release indicates that this isn't an "official" release [1].


At the ASF, you're correct that we, developers, are empowered to make 
"builds" of our product(s) for the sake of our development. There is a 
clear line when that "build" is published in a location to which a user 
may find it and begin to use it. There has been at least one example in 
recent memory of a project which made "developer only releases" without 
proper voting, but published them in a high-visibility location and 
(inadvertently or intentionally) circumvented the ASF release requirements.


My biggest concern is: would a user who ran a `pip install phoenixdb` 
after we make a devN release get the last-stable release (0.7) or the 
new devN release? If they would get the dev release, does PyPi give us 
any flexibility to prevent this from happening? I believe that publishing
to the "official" location should be treated as a release if it means that
a user could begin to use it with "low friction".


- Josh


[1] https://www.python.org/dev/peps/pep-0440/#id24

On 6/12/20 7:11 AM, Istvan Toth wrote:

Hi!

Even though we have adopted the PhoenixDB driver in 2018, there hasn't been
much activity on it, and the version available on PyPI is still the old 0.7
release by Lukas.

Recently I have worked on it quite a bit, adding fixes and new features,
and adopting the partial SQLAlchemy driver from pyPhoenix, thus enabling
Hue support.

I plan to start releasing the driver publicly on PyPI. Lukas has kindly
shared control of the PyPI phoenixdb project with us, so we are good to go.

The short-term plan is to release 1.0.0.dev0 and later 1.0.0.devN releases
from the current HEAD of phoenix-queryserver. As these will be dev
releases, I am not planning to follow a formal release process for these.

When and how to release 1.0.0 final, and the versioning scheme/process to
use  after that are still not finalized.

Please join the discussion here, or in
https://issues.apache.org/jira/browse/PHOENIX-5939 if you have any
questions or suggestions!

regards
Istvan



[jira] [Updated] (PHOENIX-5939) Publish PhoenixDB to PyPI

2020-06-05 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-5939:

Summary: Publish PhoenixDB to PyPI  (was: Plublish PhoenixDB to PyPI)

> Publish PhoenixDB to PyPI
> -
>
> Key: PHOENIX-5939
> URL: https://issues.apache.org/jira/browse/PHOENIX-5939
> Project: Phoenix
>  Issue Type: Task
>  Components: queryserver
>Affects Versions: queryserver-1.0.0
>Reporter: Istvan Toth
>Priority: Major
>
> The original PhoenixDB driver was published to PyPI.
> The improved version in phoenix-queryserver is only available from that repo.
> We should start publishing the driver again.
> Some questions to answer:
>  * Can we take over the old PyPI project ?
>  * Do we want to ?
>  * What should be the project/artifact name (if not the old one)
>  * Version numbering ?
>  * Do we want to publish development versions ?
>  * What is the process / who should do the publishing ?
>  * Any blockers before we start publishing ?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSS] TimeZone handling (PHOENIX-5066)

2020-06-02 Thread Josh Elser

Hiya,

Richard (and Istvan) had a chat with me the other day about the change 
Richard has started making here.


Given what I know so far, I think Richard is trying to fix a 
long-standing "it just is that way"-ism from Phoenix.


Please give it a glance and make sure we're working towards a proper 
long-term fix :). Thanks!



 Forwarded Message 
Subject: [jira] [Comment Edited] (PHOENIX-5066) The TimeZone is 
incorrectly used during writing or reading data

Date: Tue, 2 Jun 2020 12:51:00 + (UTC)
From: Richard Antal (Jira) 
To: els...@apache.org


[ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123679#comment-17123679 
]

Richard Antal edited comment on PHOENIX-5066 at 6/2/20, 12:50 PM:
--

I created a [pull request|https://github.com/apache/phoenix/pull/796]
to make it easier to see the differences.
 In the latest patch I changed the static functions in the DateUtil
class to non-static. We can get the DateUtil instance by calling
getDateUtilContext on PhoenixConnection; this way we can set the
timezone attribute for the DateUtil when we create the connection and
use it later.


This change looks huge because DateUtil was replaced with
getDateUtilContext everywhere.


There are a lot of failing tests outside of GMT time zones, because this
patch introduces new behaviour for timezone handling. Strings that are
parsed to times are no longer interpreted in GMT/UTC but in the local
timezone, and we store the data in GMT, or in
QueryServices.DATE_FORMAT_TIMEZONE_ATTRIB if that is set to some other value.


I would like to hear other opinions about this change before doing any 
further modification.
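
A small standalone java.time illustration (hypothetical example code, not Phoenix's DateUtil API) of the behavioural difference being discussed: the same timestamp string interpreted in GMT versus the client's local time zone.

    // Illustration only (plain java.time, not Phoenix's DateUtil).
    import java.time.Instant;
    import java.time.LocalDateTime;
    import java.time.ZoneId;
    import java.time.ZoneOffset;
    import java.time.format.DateTimeFormatter;

    public class TimeZoneParseExample {
        public static void main(String[] args) {
            DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
            LocalDateTime parsed = LocalDateTime.parse("2020-06-02 12:50:00", fmt);

            // Old behaviour described above: the string is interpreted as GMT/UTC.
            Instant asGmt = parsed.toInstant(ZoneOffset.UTC);

            // Proposed behaviour: interpret it in the client's (or configured) time zone,
            // then store the resulting instant in GMT.
            Instant asLocalZone = parsed.atZone(ZoneId.systemDefault()).toInstant();

            System.out.println("Interpreted as GMT:        " + asGmt);
            System.out.println("Interpreted as local zone: " + asLocalZone);
        }
    }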





Re: [Discuss] Twill removal and Guava update plan

2020-06-02 Thread Josh Elser

Sounds like a well-thought-out plan to me.

If we're going through and changing Guava, it may also be worthwhile to 
try to eliminate the use of Guava in our "public API". While the shaded 
guava eliminates classpath compatibility issues, Guava could (at any 
point) drop a class that we're using in our API and still break us. That 
could be a "later" thing.
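
One possible shape for such a Guava-free surface, as a minimal sketch with hypothetical names (not the actual Omid/Tephra interfaces): a lifecycle contract that mirrors Guava's Service without exposing any Guava type, with the Guava-based implementation hidden behind an adapter.

    // Hypothetical sketch only; not the real Omid/Tephra API.
    package example.api;

    /** Self-contained lifecycle contract: same shape as Guava's Service, but no Guava types leak out. */
    public interface TransactionServiceLifecycle extends AutoCloseable {
        void start();      // an adapter could delegate to Service.startAsync().awaitRunning()
        void stop();       // an adapter could delegate to Service.stopAsync().awaitTerminated()
        boolean isRunning();

        @Override
        default void close() {
            stop();
        }
    }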


The only thing I think differently is that 4.x could (at some point) 
pick up the shaded guava artifact you describe and make the change. 
However, that's just for the future -- the call can be made if/when 
someone wants to do that :)


On 6/2/20 10:01 AM, Istvan Toth wrote:

Hi!

There are two related dependency issues that I believe should be solved in
Phoenix to keep it healthy and supportable.

The Twill project has been officially terminated. Both Tephra and Omid
depend on it, and so transitively Phoenix does as well.

Hadoop 3.3 has updated its Guava to 29, while Phoenix (master) is still on
13.
None of Twill, Omid, Tephra, or Phoenix will run or compile against recent
Guava versions, which are pulled in by Hadoop 3.3.

If we want to work with Hadoop 3.3, we either have to update all
dependencies to a recent Guava version, or we have to build our artifacts
with shaded Guava.
Since Guava 13 has known vulnerabilities, including in the classpath causes
a barrier to adaptation. Some potential Phoenix users consider including
dependencies with
known vulnerabilities a show-stopper, they do not care if the vulnerability
affects Phoenix or not.

I propose that we take following steps to ensure compatibility with
upcoming Hadoop versions:

*1. Remove the Twill dependency from Omid and Tephra*
It is generally not healthy to depend on abandoned projects, but the fact
Twill also depends (heavily) on Guava 13, makes removal the best solution.
As far as I can see, Omid and Tephra mostly use the ZK client from Twill,
as well as the (transitively included) Guava service model.
Refactoring to use the native ZK client, and to use the Guava service
classes directly shouldn't be too difficult.

*2. Create a shaded guava artifact for Omid and Tephra*
Since Omid and Tephra needs to work with Hadoop2 and Hadoop3 (including the
upcoming Hadoop 3.3), which already pull in Guava, we need to use different
Guava internally.
(similar to the HBase-thirdparty solution, but we need a separate one).
This artifact could live under the Phoenix groupId, but we'll need to be
careful with the circular dependencies.

*3. Update the Omid and Tephra to use the shaded Guava artifact*
Apart from handling the mostly trivial, "let's break API compatibility for
the heck of it" Guava changes, the Guava Service API that both Omid and
Tephra build on has changed significantly.
This will mean changes in the public (Phoenix facing) APIs. All Guava
references will have to be replaced with the shaded guava classes from step
2.

*3. Define self-contained public APIs for Omid and Tephra*
To break the public API's dependency on Guava, redefine the public APIs in
such a way that they do not have Guava classes as ancestors.
This doesn't mean that we decouple the internal implementation from Guava,
simply defining a set of java Interfaces that matches the existing (updated
to recent Guava Service API)
interface's signature, but is self-contained under the Tephra/Omid
namespace should do the trick.

*4. Update Phoenix to use new Omid/Tephra API*
i.e. use the new Interface that we defined in step 3.

*5. Update Phoenix to work with Guava 13-29.*
We need to somehow get Phoenix work with both old and new Guava.
Probably the least disruptive way to do this is reduce the Guava use to the
common subset of 13.0 and 29.0, and replace/reimplement the parts that
cannot be resolved.
Alternatively, we could rebase to the same shaded guava-thirdparty library
that we use for Omid and Tephra.

For *4.x*, since we cannot get rid of Guava 13 ever, *Step 5 *is not
necessary

I am very interested in your opinion on the above plan.
Does anyone have any objections ?
Does anyone have a better solution ?
Is there some hidden pitfall that I hadn't considered (there certainly
is...) ?

best regards

Istvan



[jira] [Reopened] (PHOENIX-5922) IndexUpgradeTool should always re-enable tables on failure

2020-05-28 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reopened PHOENIX-5922:
-

This breaks compilation on master: there are two StringUtils imports, and 
{{isWaitComplete}} is redefined.

I'll revert the offending commit and re-open this for Geoffrey to fix.

> IndexUpgradeTool should always re-enable tables on failure
> --
>
> Key: PHOENIX-5922
> URL: https://issues.apache.org/jira/browse/PHOENIX-5922
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Major
> Fix For: 5.1.0, 4.16.0
>
> Attachments: PHOENIX-5922-4.x.v1.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> If an error occurs while doing an index upgrade, the IndexUpgradeTool will 
> try to rollback the upgrade operations to leave the cluster in its initial 
> state. However, if the rollback fails, it will give up. This can leave tables 
> disabled for long periods until operators manually re-enable them.
> If rollback fails, the IndexUpgradeTool should always re-enable the affected 
> tables at the HBase level.
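
A minimal, self-contained sketch of the "always re-enable on failure" control flow described above; the interface and class names are hypothetical, not the actual IndexUpgradeTool code:

    // Hypothetical sketch; the real tool works against the HBase Admin API, not this interface.
    import java.util.List;

    interface TableAdmin {
        void disable(String table) throws Exception;
        void enable(String table) throws Exception;
    }

    class UpgradeWithGuaranteedReEnable {
        static void upgrade(TableAdmin admin, List<String> tables,
                            Runnable doUpgrade, Runnable rollback) throws Exception {
            for (String t : tables) {
                admin.disable(t);
            }
            try {
                doUpgrade.run();
            } catch (RuntimeException upgradeFailure) {
                try {
                    rollback.run();
                } finally {
                    // Even if the rollback itself fails, re-enable the tables at the HBase level.
                    for (String t : tables) {
                        admin.enable(t);
                    }
                }
                throw upgradeFailure;
            }
            for (String t : tables) {
                admin.enable(t);
            }
        }
    }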



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5831) Make Phoenix queryserver scripts work with Python 3

2020-05-12 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned PHOENIX-5831:
---

Assignee: Richard Antal

> Make Phoenix queryserver scripts work with Python 3
> ---
>
> Key: PHOENIX-5831
> URL: https://issues.apache.org/jira/browse/PHOENIX-5831
> Project: Phoenix
>  Issue Type: Bug
>  Components: queryserver
>Affects Versions: queryserver-1.0.0
>Reporter: Richard Antal
>Assignee: Richard Antal
>Priority: Critical
> Fix For: queryserver-1.0.0
>
> Attachments: PHOENIX-5831.master.v1.patch, 
> PHOENIX-5831.master.v2.patch, PHOENIX-5831.master.v3.patch
>
>
> Python 2 is being retired in some environments now. We should make sure that 
> the Phoenix queryserver scripts work with Python 2 and 3.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5656) Make Phoenix scripts work with Python 3

2020-05-12 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned PHOENIX-5656:
---

Assignee: Richard Antal

> Make Phoenix scripts work with Python 3
> ---
>
> Key: PHOENIX-5656
> URL: https://issues.apache.org/jira/browse/PHOENIX-5656
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Assignee: Richard Antal
>Priority: Critical
> Fix For: 5.1.0, 4.16.0
>
> Attachments: 5656-4.x-HBase-1.5-untested.txt, 
> 5656-4.x-HBase-1.5-v3.txt, 5656-4.x-HBase-1.5-v4.txt, 
> PHOENIX-5656.4.x.v1.patch, PHOENIX-5656.4.x.v2.patch, 
> PHOENIX-5656.master.v1.patch, PHOENIX-5656.master.v2.patch
>
>
> Python 2 is being retired in some environments now. We should make sure that 
> the Phoenix scripts work with Python 2 and 3.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5869) Use symlinks to reduce size of phoenix queryserver assembly

2020-04-24 Thread Josh Elser (Jira)
Josh Elser created PHOENIX-5869:
---

 Summary: Use symlinks to reduce size of phoenix queryserver 
assembly
 Key: PHOENIX-5869
 URL: https://issues.apache.org/jira/browse/PHOENIX-5869
 Project: Phoenix
  Issue Type: Improvement
Reporter: Josh Elser
Assignee: Josh Elser


On the heels of PHOENIX-5827, we've increased the size of the installation by a 
bit.

[~stoty] had the good suggestion of using some symlinks to try to keep a single 
copy of the jars across the installation.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-5827) Let PQS act as a maven repo

2020-04-22 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-5827.
-
Resolution: Fixed

Thanks for your reviews, Istvan!

> Let PQS act as a maven repo
> ---
>
> Key: PHOENIX-5827
> URL: https://issues.apache.org/jira/browse/PHOENIX-5827
> Project: Phoenix
>  Issue Type: Improvement
>  Components: queryserver
>    Reporter: Josh Elser
>    Assignee: Josh Elser
>Priority: Major
> Fix For: queryserver-1.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> PQS is already an HTTP server and we have the Phoenix client jars for PQS to 
> operate.
> How about we just let PQS host these jars as a normal Maven repository?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[ANNOUNCE] New VP Apache Phoenix

2020-04-16 Thread Josh Elser
I'm pleased to announce that the ASF board has just approved the 
transition of VP Phoenix from myself to Ankit. As with all things, this 
comes with the approval of the Phoenix PMC.


The ASF defines the responsibilities of the VP to be largely oversight 
and secretarial. That is, a VP should be watching to make sure that the 
project is following all foundation-level obligations and writing the 
quarterly project reports about Phoenix to summarize the happenings. Of 
course, a VP can choose to use this title to help drive movement and 
innovation in the community, as well.


With this VP rotation, the PMC has also implicitly agreed to focus on a 
more regular rotation schedule of the VP role. The current plan is to 
revisit the VP role in another year.


Please join me in congratulating Ankit on this new role and thank him 
for volunteering.


Thank you all for the opportunity to act as VP these last years.

- Josh


[jira] [Created] (PHOENIX-5827) Let PQS act as a maven repo

2020-04-08 Thread Josh Elser (Jira)
Josh Elser created PHOENIX-5827:
---

 Summary: Let PQS act as a maven repo
 Key: PHOENIX-5827
 URL: https://issues.apache.org/jira/browse/PHOENIX-5827
 Project: Phoenix
  Issue Type: Improvement
  Components: queryserver
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: queryserver-1.0.0


PQS is already an HTTP server and we have the Phoenix client jars for PQS to 
operate.

How about we just let PQS host these jars as a normal Maven repository?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Maven repo inside PQS for client jars

2020-04-08 Thread Josh Elser
Going to start working on this for PQS. It will be behind a feature-flag 
that folks would have to opt-in to.


On 4/7/20 4:47 PM, Josh Elser wrote:

Hi,

Over in HBASE-24066, I had hacked together a POC which shows the HBase 
UI hosting shaded client jars for Maven-based users. Nick had raised 
some concerns about inadvertently impacting the HBase Master, so it's 
stalled out at this point.


https://issues.apache.org/jira/browse/HBASE-24066

I'm curious what folks in Phoenix think about doing this inside of PQS. 
The idea would be that you can specify a path beneath the PQS url in 
your Maven application and automatically pull jars from there. The 
benefit would be that clients can automatically write their code and 
pull the exact client JAR for the cluster they're talking to.


If there aren't concerns, I'll implement the same idea over here. I want 
to avoid doing the work if it will just sit in limbo.


- Josh


Re: No builds on Phoenix-Master jenkins since Feb 19th?

2020-04-08 Thread Josh Elser
It looks like the Phoenix-Master[1] build is now defunct after Istvan's 
hbase profile work. New job over in [2] which is a matrix job for all of 
HBase 2.0, 2.1, and 2.2.


I think the question is whether or not we want to keep the old job 
around? I think the answer is "no".


[1] https://builds.apache.org/view/M-R/view/Phoenix/job/Phoenix-master/
[2] 
https://builds.apache.org/view/M-R/view/Phoenix/job/Phoenix-master-matrix/


On 4/8/20 2:00 PM, la...@apache.org wrote:

Just looking at the Phoenix Jenkins jobs, I noticed that there has been no build on 
master for 3 weeks.
Is that on purpose? There were clearly changes on the master branch since then.

Cheers.

-- Lars



[jira] [Created] (PHOENIX-5823) Clean up phoenix-client vestiges from a non-attached jar

2020-04-07 Thread Josh Elser (Jira)
Josh Elser created PHOENIX-5823:
---

 Summary: Clean up phoenix-client vestiges from a non-attached jar
 Key: PHOENIX-5823
 URL: https://issues.apache.org/jira/browse/PHOENIX-5823
 Project: Phoenix
  Issue Type: Task
Reporter: Josh Elser
Assignee: Josh Elser


Noticed that phoenix-client is doing some goofy stuff still as a result of the 
old way where we didn't attach the phoenix-client.jar to the Maven project 
(e.g. created the jar in target but didn't publish the jar out of our build).

Can remove some unnecessary stuff in the pom.xml



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[DISCUSS] Maven repo inside PQS for client jars

2020-04-07 Thread Josh Elser

Hi,

Over in HBASE-24066, I had hacked together a POC which shows the HBase 
UI hosting shaded client jars for Maven-based users. Nick had raised 
some concerns about inadvertently impacting the HBase Master, so it's 
stalled out at this point.


https://issues.apache.org/jira/browse/HBASE-24066

I'm curious what folks in Phoenix think about doing this inside of PQS. 
The idea would be that you can specify a path beneath the PQS url in 
your Maven application and automatically pull jars from there. The 
benefit would be that clients can automatically write their code and 
pull the exact client JAR for the cluster they're talking to.


If there aren't concerns, I'll implement the same idea over here. I want 
to avoid doing the work if it will just sit in limbo.


- Josh


[jira] [Resolved] (PHOENIX-5146) Phoenix missing class definition: java.lang.NoClassDefFoundError: org/apache/phoenix/shaded/org/apache/http/Consts

2020-04-05 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-5146.
-
Resolution: Incomplete

No clear problem statement or reproduction was provided. The user list is a better 
place to ask non-specific questions.

> Phoenix missing class definition: java.lang.NoClassDefFoundError: 
> org/apache/phoenix/shaded/org/apache/http/Consts
> --
>
> Key: PHOENIX-5146
> URL: https://issues.apache.org/jira/browse/PHOENIX-5146
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
> Environment: 3 node kerberised cluster.
> Hbase 2.0.2
>Reporter: Narendra Kumar
>Priority: Major
>
> While running a SparkCompatibility check for Phoniex hitting this issue:
> {noformat}
> 2019-02-15 09:03:38,470|INFO|MainThread|machine.py:169 - 
> run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|RUNNING: echo "
>  import org.apache.spark.graphx._;
>  import org.apache.phoenix.spark._;
>  val rdd = sc.phoenixTableAsRDD(\"EMAIL_ENRON\", Seq(\"MAIL_FROM\", 
> \"MAIL_TO\"), 
> zkUrl=Some(\"huaycloud012.l42scl.hortonworks.com:2181:/hbase-secure\"));
>  val rawEdges = rdd.map
> { e => (e(\"MAIL_FROM\").asInstanceOf[VertexId], 
> e(\"MAIL_TO\").asInstanceOf[VertexId])}
> ;
>  val graph = Graph.fromEdgeTuples(rawEdges, 1.0);
>  val pr = graph.pageRank(0.001);
>  pr.vertices.saveToPhoenix(\"EMAIL_ENRON_PAGERANK\", Seq(\"ID\", \"RANK\"), 
> zkUrl = Some(\"huaycloud012.l42scl.hortonworks.com:2181:/hbase-secure\"));
>  " | spark-shell --master yarn --jars 
> /usr/hdp/current/hadoop-client/lib/hadoop-lzo-0.6.0.3.1.0.0-75.jar 
> --properties-file 
> /grid/0/log/cluster/run_phoenix_secure_ha_all_1/artifacts/spark_defaults.conf 
> 2>&1 | tee 
> /grid/0/log/cluster/run_phoenix_secure_ha_all_1/artifacts/Spark_clientLogs/phoenix-spark.txt
>  2019-02-15 09:03:38,488|INFO|MainThread|machine.py:184 - 
> run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|SPARK_MAJOR_VERSION is set 
> to 2, using Spark2
>  2019-02-15 09:03:39,901|INFO|MainThread|machine.py:184 - 
> run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|SLF4J: Class path contains 
> multiple SLF4J bindings.
>  2019-02-15 09:03:39,902|INFO|MainThread|machine.py:184 - 
> run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.1.0.0-75/phoenix/phoenix-5.0.0.3.1.0.0-75-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  2019-02-15 09:03:39,902|INFO|MainThread|machine.py:184 - 
> run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|SLF4J: Found binding in 
> [jar:file:/usr/hdp/3.1.0.0-75/spark2/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>  2019-02-15 09:03:39,902|INFO|MainThread|machine.py:184 - 
> run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|SLF4J: See 
> [http://www.slf4j.org/codes.html#multiple_bindings] for an explanation.
>  2019-02-15 09:03:41,400|INFO|MainThread|machine.py:184 - 
> run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|Setting default log level to 
> "WARN".
>  2019-02-15 09:03:41,400|INFO|MainThread|machine.py:184 - 
> run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|To adjust logging level use 
> sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
>  2019-02-15 09:03:54,837|INFO|MainThread|machine.py:184 - 
> run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84{color:#ff}*|java.lang.NoClassDefFoundError:
>  org/apache/phoenix/shaded/org/apache/http/Consts*{color}
>  2019-02-15 09:03:54,838|INFO|MainThread|machine.py:184 - 
> run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at 
> org.apache.phoenix.shaded.org.apache.http.client.utils.URIBuilder.digestURI(URIBuilder.java:181)
>  2019-02-15 09:03:54,839|INFO|MainThread|machine.py:184 - 
> run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at 
> org.apache.phoenix.shaded.org.apache.http.client.utils.URIBuilder.(URIBuilder.java:82)
>  2019-02-15 09:03:54,839|INFO|MainThread|machine.py:184 - 
> run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.createURL(KMSClientProvider.java:468)
>  2019-02-15 09:03:54,839|INFO|MainThread|machine.py:184 - 
> run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at 
> org.apache.hadoop.crypto.key.kms.KMSClientProvider.getDelegationToken(KMSClientProvider.java:1023)
>  2019-02-15 09:03:54,840|INFO|MainThread|machine.py:184 - 
> run()||GUID=1566a829-b1df-4757-8c3d-73a7fa302b84|at 
> org.apache.hado

Re: [DISCUSS] client/server jar naming, post hbase-compat changes

2020-03-26 Thread Josh Elser

On 3/26/20 5:22 AM, Istvan Toth wrote:

On Thu, Mar 26, 2020 at 5:39 AM Guanghao Zhang  wrote:



I thought users still need to care about which HBase version is in use?
phoenix-server-xxx-hbase-2.1.jar does not work with an HBase 2.2.x cluster now, right?



They do need to care when they are referencing maven artifacts.

However there are separate assemblies (tar balls) for each HBase version,
and the assemblies only contain the shaded artifacts for the specific HBase
version that the assembly was built for.



Yes, as Istvan says. Sorry I was not more clear in my explanation :)


[DISCUSS] client/server jar naming, post hbase-compat changes

2020-03-25 Thread Josh Elser
Background: IstvanT has done a lot of really great work to clean up the 
HBase 2.x compatibility issues for us. This lets us move away from the 
HBase-version-tagged releases of Phoenix (e.g. HBase-1.3, HBase-1.4, 
etc), and keep a single branch which can build all of these.


Building master locally, I noticed the following in my tarball, 
specifically the jars



  phoenix-5.1.0-SNAPSHOT-hbase-2.2-client.jar -> 
phoenix-client-5.1.0-SNAPSHOT-hbase-2.2.jar

  phoenix-5.1.0-SNAPSHOT-hbase-2.2-server.jar
  phoenix-5.1.0-SNAPSHOT-server.jar
  phoenix-client-5.1.0-SNAPSHOT-hbase-2.2.jar


I think there are two things happening here. One is that the 
phoenix-5.1.0-SNAPSHOT-server.jar is "empty" -- it's not the shaded 
server jar, but the hbase-2.2-server.jar is the correct jar. I think 
this is just a bug (you agree, Istvan?)


The other thing I notice is that it feels like Istvan was trying to 
simplify some things via symlinks. My feeling was that we could take 
this a step further. What if, instead of just having "hbase-x.y" named 
jars, we give symlinked jars as well. Creating something like...



  phoenix-5.1.0-SNAPSHOT-client.jar -> 
phoenix-client-5.1.0-SNAPSHOT-hbase-2.2-client.jar

  phoenix-client-5.1.0-SNAPSHOT-hbase-2.2-client.jar
  phoenix-5.1.0-SNAPSHOT-server.jar -> 
phoenix-server-5.1.0-SNAPSHOT-hbase-2.2-server.jar

  phoenix-server-5.1.0-SNAPSHOT-hbase-2.2-server.jar


This would make downstream applications/users a little more simple -- 
not having to worry about the HBase version in use (since their concerns 
are what version of Phoenix is being used, instead). We could even 
introduce non-Phoenix-versioned symlinks for these jars (e.g. 
phoenix-client.jar and phoenix-server.jar). I think this also moves us a 
little closer to what we used to have.


Sounds like a good idea to others?


[jira] [Assigned] (PHOENIX-5778) Remove the dependency of KeyStoreTestUtil

2020-03-17 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned PHOENIX-5778:
---

Assignee: Guanghao Zhang

> Remove the dependency of KeyStoreTestUtil
> -
>
> Key: PHOENIX-5778
> URL: https://issues.apache.org/jira/browse/PHOENIX-5778
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: queryserver-1.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Fix For: queryserver-1.0.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If I am not wrong, Phoenix should reduce its dependency on HBase 
> classes/interfaces that are not marked IA.Public. KeyStoreTestUtil is just a 
> static util class. I thought the Phoenix Query Server could copy its own version and not 
> depend on HBase's KeyStoreTestUtil.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Unifying the 4.x branches

2020-03-02 Thread Josh Elser
our coprocessors tend to be giant monoliths, trying to create
release-based versions of them selectable by maven profile would
either require lots of developer copy/paste for each change, or a
significant (probably long overdue) refactor to make the coprocs small
shims that call out to smaller, fine-grained classes that can
occasionally be release-specific.

Geoffrey

On Fri, Feb 7, 2020 at 9:31 AM Josh Elser 
<mailto:els...@apache.org>> wrote:



Sounds like a good idea to me.

On 2/6/20 8:40 AM, Istvan Toth wrote:

Hello!

Now that we have a working solution in master for handling different HBase
minor versions, I think that we should think about applying the same
template to 4.x, and unifying the 4.x-HBase-1.3, 1.4, and 1.5 branches.

Are there any intentional differences between the branches, apart from
having to conform to the slightly different APIs?
If there are, what are they, and are they considered blockers?

Any other reasons not to do this?

I expect that based on my experience with the master branch, I can do this
in a few days, but I don't want to put in the effort if there is no
interest in it.

My plan is to take the 1.5 branch as a base.

best regards
Istvan








--
*István Tóth* | Sr. Software Engineer
t. (36) 70 283-1788
st...@cloudera.com
--











[jira] [Assigned] (PHOENIX-5699) Investigate reducing chore intervals in MiniCluster to speed up tests

2020-02-26 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser reassigned PHOENIX-5699:
---

Assignee: Richard Antal

> Investigate reducing chore intervals in MiniCluster to speed up tests
> -
>
> Key: PHOENIX-5699
> URL: https://issues.apache.org/jira/browse/PHOENIX-5699
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Istvan Toth
>Assignee: Richard Antal
>Priority: Major
> Attachments: PHOENIX-5699.master.v1.patch
>
>
> Some tests take a long time to run not because they are 
> computationally/memory/IO intensive, but simply because they are waiting for 
> some HBase chore to be run.
> One such test is MutableIndexSplitIT where we must wait for 
> CompactedHFilesDischarger for the requested split to happen.
> Try to identify these cases, and reduce the chore intervals in the 
> MiniCluster setup to speed up  test executions.
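
A rough sketch of the approach; the property name used here for the CompactedHFilesDischarger period is an assumption and should be verified against the HBase version in use:

    // Assumption: "hbase.hfile.compaction.discharger.interval" is the chore period
    // for CompactedHFilesDischarger; check the key name for the HBase version in use.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseTestingUtility;

    public class FastChoreMiniCluster {
        public static void main(String[] args) throws Exception {
            HBaseTestingUtility util = new HBaseTestingUtility();
            Configuration conf = util.getConfiguration();
            // Run the discharger every second instead of the multi-minute default.
            conf.setInt("hbase.hfile.compaction.discharger.interval", 1000);
            util.startMiniCluster();
            try {
                // ... run the test that waits on compacted-file cleanup before a split ...
            } finally {
                util.shutdownMiniCluster();
            }
        }
    }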



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [DISCUSS] Feb 2020 board report

2020-02-12 Thread Josh Elser
With no input from the community, please find a copy of the board report 
I have submitted. In the future, I would appreciate at least an acknowledgement 
that my "points of interest" are in line with what you all think. I am not the 
one defining what things are interesting to Apache Phoenix, simply 
trying to summarize what I observe.



## Description:
The mission of Phoenix is the creation and maintenance of software related to
High performance relational database layer over Apache HBase for low latency
applications

## Issues:
No issues to report to the board at this time.

## Membership Data:
Apache Phoenix was founded 2014-05-20 (6 years ago)
There are currently 51 committers and 31 PMC members in this project.
The Committer-to-PMC ratio is roughly 7:4.

Community changes, past quarter:
- No new PMC members. Last addition was Chinmay Kulkarni on 2019-09-09.
- Andreas Neumann was added as committer on 2019-12-03
- Terence Yim was added as committer on 2019-12-03
- Gokcen Iskender was added as committer on 2020-02-07
- Gokul Gunasekaran was added as committer on 2019-12-03
- Istvan Toth was added as committer on 2019-12-02
- Xinyi Yan was added as committer on 2019-12-26
- Yoni Gottesman was added as committer on 2019-12-03

## Project Activity:

Following up from the previous report, I'm happy to report that both the
former podlings Omid and Tephra have been successfully "adopted" under
the Apache Phoenix PMC. The PMC voted to grant committership to all
PPMC who desired it, transitioned all infrastructure (e.g. Jira projects,
Git repositories) under the Phoenix role, and did some basic updates
to our public facing user-documentation to make sure our users can be
aware of how these (now) sub-projects will continue to exist at the ASF.

I'm also happy to report that Phoenix 4.15.0 was released in December. As is
normal, we are also approaching a 4.15.1 bug-fix release in that release line.
Activity on the 4.x release line continues at the usual cadence thanks to the
dedicated work of the committers.

On the 5.x release line, we were largely blocked because upstream Apache
HBase changes caused us some API and runtime compatibility issues. Thankfully,
after some more discussion on the matter, we got traction by a developer to
chase down the problem and implement a solution. At this point, we are largely
unblocked to work towards a long-overdue 5.1.0 release.

## Community Health:

We've added 7 new committers since our last report, which is fantastic. We have
not, however, added any new PMC members. We should take this as an action item
as a project.

I find the mailing list traffic largely status quo; user lists have a drop
year-over-year but the dev and issues list have an increase of a similar
percentage magnitude year-over-year. In general, I observe a steady stream of
user questions and developers creating and resolving Jira issues.



On 2/6/20 8:11 PM, Josh Elser wrote:

Yo,

It's that time which is so nice, it comes four times a year: board 
report time!


Things that are jumping out at me:

* Omid & Tephra "adoption"
* 4.15.0 released
* HBase2 compat stuff landed to unblock next 5.x release

Please tell me what else you think is worth mentioning.

If you forgot what was in the last board report in November 2019, you 
can re-read it here[1]


- Josh

[1] 
https://www.apache.org/foundation/records/minutes/2019/board_minutes_2019_11_20.txt 



Re: [ANNOUNCE] New Phoenix committer Gokcen Iskender

2020-02-12 Thread Josh Elser

Congratulations and welcome, Gokcen!

On 2/10/20 1:55 PM, Geoffrey Jacoby wrote:

On behalf of the Apache Phoenix PMC, I'm pleased to announce that Gokcen
Iskender has accepted our invitation to become a committer on the Phoenix
project. Gokcen has contributed many features and bug fixes as part of our
rewrite of secondary global indexes, and presented on these changes at the
NoSQL Day of last year's DataWorks Summit. She's also been an active
reviewer and tester on other's patches.

We appreciate Gokcen's many contributions and look forward to her continued
involvement. Welcome!

Geoffrey Jacoby



Re: [DISCUSS] Unifying the 4.x branches

2020-02-07 Thread Josh Elser

Sounds like a good idea to me.

On 2/6/20 8:40 AM, Istvan Toth wrote:

Hello!

Now that we have a working solution in master for handling different HBase
minor versions, I think that we should think about applying the same
template to 4.x., and unifying the 4.x-HBase-1.3, 1.4, and 1.5 branches.

Are there any intentional differences between the branches, apart from
having to conform to the slightly different APIs ?
If there are, what are they, and are they considered blockers?

Any other reasons not to do this?

I expect that based on my experience with the master branch, I can do this
in a few days, but I don't want to put in the effort if there is no
interest in it.

My plan is to take the 1.5 branch as a base.

best regards
Istvan



[DISCUSS] Feb 2020 board report

2020-02-06 Thread Josh Elser

Yo,

It's that time which is so nice, it comes four times a year: board 
report time!


Things that are jumping out at me:

* Omid & Tephra "adoption"
* 4.15.0 released
* HBase2 compat stuff landed to unblock next 5.x release

Please tell me what else you think is worth mentioning.

If you forgot what was in the last board report in November 2019, you 
can re-read it here[1]


- Josh

[1] 
https://www.apache.org/foundation/records/minutes/2019/board_minutes_2019_11_20.txt


Reminder: update reporter.a.o after a release

2020-02-06 Thread Josh Elser
If you are the release manager for Phoenix, make sure you are adding the 
new release here as a final step: 
https://reporter.apache.org/addrelease.html?phoenix


This is important in that it gives folks (e.g. ASF members and ASF board 
members) insight into the releases we're doing. It looks like the last 
two 4.x releases were not included.


Thanks!

- Josh

PS: if you committed a release to svn.a.o, you get a nasty-gram from 
reporter.a.o to do the above. Please don't ignore it ;)


Re: Moving Phoenix master to Hbase 2.2

2020-02-04 Thread Josh Elser

Thanks for sending out the reminder, Istvan!

Those interested: please take a look ASAP or give a shout for more time 
to review. On its current trajectory, I think the PR will be in a place 
to merge tomorrow (2020/02/05, US times).


On 1/30/20 2:49 PM, Istvan Toth wrote:

I have received a ton of invaluable feedback from Josh, and some good
questions from Guanghao Zhang.

I am at the point where I am polishing the whitespaces and preparing to
update the documentation.
It would be great to hear some reviews from the SFDC side as well.
(even/especially if it is "this is great, carry on!", or a +1 :) )

It is likely that the very same approach could be used for unifying the 4.x
branches,
so that we'd end up with two development branches,
instead of the current four, which would be a huge win for maintainability.


On Thu, Jan 23, 2020 at 4:14 PM Istvan Toth  wrote:


I have updated https://github.com/apache/phoenix/pull/687

I  consider this version mostly finished. (Still only for HBase 2.x)
I have abandoned the idea of a runtime compatibility solution, as the
all-important shaded thick client would become unmanageable with two
different HBase client runtimes.

Please review and comment!

On Wed, Jan 22, 2020 at 12:41 PM István Tóth  wrote:


Hi!

In case not everyone on thread watches the ticket, I have put up a POC PR
for the build-time compatibility module solution.

It is for master/HBase 2.x. I did not investigate how well this approach
would fit incorporating HBase 1.x compatibility.

I also plan to investigate how easily this can be converted to selecting
the compatibility layer implementation at runtime, and have a single
artifact.

On Wed, Jan 15, 2020 at 6:22 PM Andrew Purtell 
wrote:


I suppose so, but release building is scripted. The build script can
iterate over a set of desired HBase version targets and drive the build by
setting parameters on the maven command line.



On Jan 15, 2020, at 2:01 AM, Guanghao Zhang 

wrote:






Anyway let's assume for now you want to unify all the branches for HBase
1.x. Start with the lowest HBase version you want to support. Then iterate
up to the highest HBase version you want to support. Whenever you run into
compile problems, make a new version specific maven module, add logic to
the parent POM that chooses the right one. Then for each implicated file,
move it into the version specific maven modules, duplicating as needed, and
finally fixing up where needed.


+1. So we want to use one branch to handle all hbase branches? But we still
need to release multiple src/bin tars for the different hbase versions?

Andrew Purtell  wrote on Wed, Jan 15, 2020 at 10:55 AM:


Take PhoenixAccessController as an example. Over time the HBase interfaces
change in minor ways. You'll need different compilation units for this
class to be able to compile it across a wide range of 1.x. However the
essential Phoenix functionality does not change. The logic that makes up
the method bodies can be factored into a class that groups together static
helper methods which come to contain this common logic. The common class
can remain in the core module. Then all you have in the version specific
modules is scaffolding. In that scaffolding, calls to the static methods in
core. It's not a clever refactor but is DRY. Over time this can be made
cleaner case by case where the naive transformation has a distasteful
result.
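
A minimal Java sketch of that shape follows. The class names, method names,
and parameter types are invented stand-ins; the real PhoenixAccessController
works against HBase coprocessor interfaces, which are deliberately not
reproduced here.

// Core module: the shared logic, compiled once against a common API surface.
final class CompatAccessControllerUtil {

    private CompatAccessControllerUtil() {
    }

    /** Essential Phoenix logic that stays the same across HBase 1.x releases. */
    static void checkAccess(String user, String table) {
        if (user == null || user.isEmpty()) {
            throw new IllegalArgumentException("No user for access check on " + table);
        }
        // ... the common permission checks would live here ...
    }
}

// Version-specific module (e.g. a hypothetical phoenix-hbase1.4-compat):
// thin scaffolding that matches whatever observer signature that HBase
// release expects and simply delegates to the static helper in core.
class PhoenixAccessControllerShim {

    public void preGetOp(String user, String table) {
        CompatAccessControllerUtil.checkAccess(user, table);
    }
}

Only the structure matters here: the method bodies live once in core, and
each version-specific module contributes scaffolding that compiles against
its own HBase release.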



On Jan 14, 2020, at 6:40 PM, Andrew Purtell <andrew.purt...@gmail.com> wrote:

--
*István Tóth* | Sr. Software Engineer
t. (36) 70 283-1788
st...@cloudera.com 

--







[jira] [Resolved] (PHOENIX-5693) Phoenix connectors jar doesn't seem to be served as said in the doc

2020-01-22 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-5693.
-
Fix Version/s: (was: connectors-1.0.0)
   (was: 4.15.0)
   Resolution: Incomplete

> Phoenix connectors jar doesn't seem to be served as said in the doc
> ---
>
> Key: PHOENIX-5693
> URL: https://issues.apache.org/jira/browse/PHOENIX-5693
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.15.0, connectors-1.0.0
>Reporter: Ahmed Adnane
>Priority: Major
>
> As described in the documentation here 
> "https://phoenix.apache.org/phoenix_spark.html": the connectors have their 
> own releases, but the dependency included in the doc doesn't work, and the 
> jar is nowhere to be found.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (PHOENIX-5687) Phoenix-client-5.0.0 can not run on jdk13

2020-01-17 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved PHOENIX-5687.
-
Resolution: Later

> Phoenix-client-5.0.0 can not run on jdk13
> -
>
> Key: PHOENIX-5687
> URL: https://issues.apache.org/jira/browse/PHOENIX-5687
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0
> Environment: Spring Boot: 2.0.6
> JDK: 13
> phoenix-client: 5.0.0
>Reporter: zhengchuxiong
>Priority: Major
> Attachments: HbaseServiceTest.java
>
>
> Using phoenix-client 5.0.0 to connect to HBase, with the project running on 
> JDK 13, I got the exception "java.lang.IncompatibleClassChangeError: 
> Inconsistent constant pool data in classfile for class 
> org/apache/hadoop/hbase/client/Row. Method 'int 
> lambda$static$28(org.apache.hadoop.hbase.client.Row, 
> org.apache.hadoop.hbase.client.Row)' at index 57 is CONSTANT_MethodRef and 
> should be CONSTANT_InterfaceMethodRef".
> When I switch to JDK 8, the program works fine.
> Can anybody give me some suggestions?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5683) Invalid pom for phoenix-connectors

2020-01-15 Thread Josh Elser (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5683?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated PHOENIX-5683:

Description: 
Multiple warnings/errors from Maven about the pom structure of the project
 * Duplicate maven-compiler-plugin definitions in phoenix-spark
 * Invalid parent element in presto-phoenix-shaded
 * Incorrect phoenix version set
 * Tephra version not defined

  was:
Multiple warnings/errors from Maven about the pom structure of the project
 * Duplicate maven-compiler-plugin definitions in phoenix-spark
 * Invalid parent element in presto-phoenix-shaded


> Invalid pom for phoenix-connectors
> --
>
> Key: PHOENIX-5683
> URL: https://issues.apache.org/jira/browse/PHOENIX-5683
> Project: Phoenix
>  Issue Type: Bug
>    Reporter: Josh Elser
>    Assignee: Josh Elser
>Priority: Major
> Fix For: connectors-1.0.0
>
>
> Multiple warnings/errors from Maven about the pom structure of the project
>  * Duplicate maven-compiler-plugin definitions in phoenix-spark
>  * Invalid parent element in presto-phoenix-shaded
>  * Incorrect phoenix version set
>  * Tephra version not defined



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

