[jira] [Updated] (PHOENIX-5211) Consistent Immutable Global Indexes for Non-Transactional Tables

2019-06-27 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5211:
-
Attachment: PHOENIX-5211.4.x-HBase-1.3.001.patch

> Consistent Immutable Global Indexes for Non-Transactional Tables
> 
>
> Key: PHOENIX-5211
> URL: https://issues.apache.org/jira/browse/PHOENIX-5211
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.0, 5.0.0, 4.14.1
>Reporter: Kadir OZDEMIR
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5211.4.x-HBase-1.3.001.patch, PHOENIX-5211.patch
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> Without transactional tables, immutable global indexes can easily get out of 
> sync with their data tables in Phoenix. Transactional tables require a 
> separate transaction manager and come with some restrictions and performance 
> penalties. This issue is to provide consistent immutable global indexes 
> without the need for transactional tables.
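For context, a minimal sketch of the kind of table/index pair this issue targets. The JDBC URL, table, and columns are invented for illustration; IMMUTABLE_ROWS marks the rows as write-once so the global index takes the immutable code path:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ImmutableGlobalIndexExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // An immutable (write-once) data table.
            stmt.execute("CREATE TABLE EVENTS (ID BIGINT NOT NULL PRIMARY KEY, "
                + "PAYLOAD VARCHAR) IMMUTABLE_ROWS=true");
            // A global index lives in its own HBase table, so without
            // transactions its write can fail independently of the data table
            // write -- the out-of-sync case this issue addresses.
            stmt.execute("CREATE INDEX EVENTS_IDX ON EVENTS (PAYLOAD)");
            conn.commit();
        }
    }
}
{code}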



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5211) Consistent Immutable Global Indexes for Non-Transactional Tables

2019-06-27 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5211:
-
Attachment: (was: PHOENIX-5211-4.x.patch)

> Consistent Immutable Global Indexes for Non-Transactional Tables
> 
>
> Key: PHOENIX-5211
> URL: https://issues.apache.org/jira/browse/PHOENIX-5211
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.0, 5.0.0, 4.14.1
>Reporter: Kadir OZDEMIR
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5211.4.x-HBase-1.3.001.patch, PHOENIX-5211.patch
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> Without transactional tables, immutable global indexes can easily get out of 
> sync with their data tables in Phoenix. Transactional tables require a 
> separate transaction manager and come with some restrictions and performance 
> penalties. This issue is to provide consistent immutable global indexes 
> without the need for transactional tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5211) Consistent Immutable Global Indexes for Non-Transactional Tables

2019-06-27 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5211:
-
Attachment: PHOENIX-5211-4.x.patch

> Consistent Immutable Global Indexes for Non-Transactional Tables
> 
>
> Key: PHOENIX-5211
> URL: https://issues.apache.org/jira/browse/PHOENIX-5211
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.0, 5.0.0, 4.14.1
>Reporter: Kadir OZDEMIR
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5211-4.x.patch, PHOENIX-5211.patch
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> Without transactional tables, immutable global indexes can easily get out of 
> sync with their data tables in Phoenix. Transactional tables require a 
> separate transaction manager and come with some restrictions and performance 
> penalties. This issue is to provide consistent immutable global indexes 
> without the need for transactional tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5211) Consistent Immutable Global Indexes for Non-Transactional Tables

2019-06-27 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5211:
-
Attachment: PHOENIX-5211.patch

> Consistent Immutable Global Indexes for Non-Transactional Tables
> 
>
> Key: PHOENIX-5211
> URL: https://issues.apache.org/jira/browse/PHOENIX-5211
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.0, 5.0.0, 4.14.1
>Reporter: Kadir OZDEMIR
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5211.patch
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> Without transactional tables, immutable global indexes can easily get out of 
> sync with their data tables in Phoenix. Transactional tables require a 
> separate transaction manager and come with some restrictions and performance 
> penalties. This issue is to provide consistent immutable global indexes 
> without the need for transactional tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5211) Consistent Immutable Global Indexes for Non-Transactional Tables

2019-06-27 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5211:
-
Attachment: (was: PHOENIX-5211.patch)

> Consistent Immutable Global Indexes for Non-Transactional Tables
> 
>
> Key: PHOENIX-5211
> URL: https://issues.apache.org/jira/browse/PHOENIX-5211
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.0, 5.0.0, 4.14.1
>Reporter: Kadir OZDEMIR
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5211.patch
>
>  Time Spent: 11.5h
>  Remaining Estimate: 0h
>
> Without transactional tables, immutable global indexes can easily get out of 
> sync with their data tables in Phoenix. Transactional tables require a 
> separate transaction manager and come with some restrictions and performance 
> penalties. This issue is to provide consistent immutable global indexes 
> without the need for transactional tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


4.x-HBase-1.4 jenkins runs

2019-06-27 Thread la...@apache.org
Hi all,
in the past weeks we have had mostly successful test runs on 4.x-HBase-1.4.

Yet when you look at that build it's all red. The reason is that the tests all run 
and finish, and then Jenkins just hangs trying to archive the test artifacts for 
an hour or more. That only happens on the -1.4 branch (it does not happen on the 
-1.3, -1.5, or master branches).
Does anybody have an idea about this? Do we need to restrict which Jenkins VMs 
the builds run on? Or something else?

Thanks.
-- Lars


Re: 4.15.0 and 5.1.0 releases

2019-06-27 Thread la...@apache.org
 Thanks Geoffrey.
The damage is already done. We messed up and let it slide (multiple times, this 
is by no means the first time) and thus are in exactly the situation you 
outlined: No confidence in the code base.
Now we can only look forward and get the code into a releasable state. The most 
important aspects are - as you point out, and I agree - getting confidence in 
splittable syscat and finishing the indexing work.

In hindsight we should have done a release right before splittable syscat and 
perhaps one right after. Oh well. :)

Could you mark the Jiras you remember with 4.15.0 and 5.1.0 fix versions (or 
are you saying you already did)?
Since you say that we can release 4.14.3 with just the index changes, does that 
imply that you are mostly concerned about splittable syscat in 4.15.0 and 5.1.0?

I'm not a fan of a "beta" release, honestly. We can only do the best we can 
and release a version that we believe, in good conscience, has no 
major issues. All releases will contain some bugs that are found later.
It seems we are not even at that point yet... the good conscience part. :)

How about we institute an immediate absolutely-no-new-feature policy for *all* 
of Phoenix until we have a releasable project? I'd be happy to enforce that. 
One cannot add new features to a code base that is not releasable/stable 
anyway. Until a few weeks ago we *never* had a passing test run. I really don't 
understand how we get here over and over again. But whatever, it's too late, 
and whining surely doesn't help.

Lemme propose the following action plan then based on this and what you said:
1. We release 4.14.3 with just the index changes. Soon.

2. We immediately stop all new feature development in all branches (including 
5.x, i.e. master)

3. We harden/test/etc splittable syscat as well as other accumulated tech debt 
that we identify.

4. After we release 4.15.0 and 5.1.0 we allow feature work again.
5. Following those releases we do strictly monthly releases on all branches 
(and if we cannot do that, declare a branch dead).
Some of these (especially #5) might be radical, but if we want to avoid this 
situation again we need to apply some rigor. As is, Phoenix has been turning 
into an almost unmaintainable project over the past years; we need to actively 
counter that.

Cheers!

-- Lars

On Thursday, June 27, 2019, 1:36:37 PM PDT, Geoffrey Jacoby 
 wrote:  
 
 Lars,

I agree 100% that we should have smaller, more frequent releases going
forward. As for this release, I have two concerns.

The first is indexes. I've added several JIRAs to 4.15 / 5.1 that had
incorrectly not been marked with a Fix Version. These are all part of the
Self-Repairing Index project, which spans several JIRAs and whose first
major one (PHOENIX-5156, allowing newly created mutable indexes to
self-repair inconsistencies at read time) is already in 4.15 and 5.1.
Outstanding JIRAs include PHOENIX-5211 to extend the logic to immutable
indexes, and PHOENIX-5333, to give users a tool to convert their legacy
indexes to the new model. These are all under review and should land very
soon.

Especially given the multiple reports on the user list of operators
encountering index consistency problems (which I have also seen in my own
environments), I think it's important that our next release include these
fixes, and that they go out in a unified way.

The second concern is testing, particularly upgrade, perf and chaos
testing. In addition to the large index changes (for which I know some perf
work and live-cluster testing has been done, with more planned), there are
other major changes in 4.15 such as the splittable system catalog. If all
the issues on the current list were fixed, I'd still be reluctant to put
the bits into production without more due diligence. We've released
binaries with significant regressions in them that were missed in our test
suites before, and it's important to avoid that this time.

Yet Lars's point that we've waited far too long to release is of course
correct. Perhaps the solution is to do what the HBase community did when
the 2.x branch dragged out too long, and after the listed issues are Fixed,
we release an explicit beta, closed to new features, from which a final
release can graduate. In parallel, we could release a 4.14.3 with just the
index changes and the current diff from 4.14.2 so users get those faster.

Or maybe our testing's advanced further than I know about, and we're closer
to green than I think. Happy to hear everyone's thoughts.

Geoffrey

On Thu, Jun 27, 2019 at 10:26 AM la...@apache.org  wrote:

> Hi all,
> we're getting close. The test suite is passing fairly reliably now (minus
> some strange failure to archive the artifact in -1.4 and PartialCommitIT
> failing in -1.3 only).
> I put a lot of effort into speeding up the tests and making them pass.
> Let's please (pretty please :) ) keep it that way. A passing, comprehensive
> test suite is key to frequent releases.
>
> I also 

[jira] [Updated] (PHOENIX-5156) Consistent Mutable Global Indexes for Non-Transactional Tables

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5156:
---
Fix Version/s: 5.1.0
   4.15.0

> Consistent Mutable Global Indexes for Non-Transactional Tables
> --
>
> Key: PHOENIX-5156
> URL: https://issues.apache.org/jira/browse/PHOENIX-5156
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.0, 5.0.0, 4.14.1
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5156.4.x-HBase-1.4.001.patch, 
> PHOENIX-5156.4.x-HBase-1.4.002.patch, PHOENIX-5156.4.x-HBase-1.4.005.patch, 
> PHOENIX-5156.master.001.patch, PHOENIX-5156.master.002.patch, 
> PHOENIX-5156.master.003.patch, PHOENIX-5156.master.004.patch, 
> PHOENIX-5156.master.005.patch, PHOENIX-5156.master.006.patch, 
> PHOENIX-5156.master.007.patch, PHOENIX-5156.master.008.patch, 
> PHOENIX-5156.master.009.patch, PHOENIX-5156.master.010.patch, 
> PHOENIX-5156.master.011.patch, PHOENIX-5156.master.012.patch, 
> PHOENIX-5156.master.013.patch, PHOENIX-5156.master.014.patch, 
> PHOENIX-5156.master.015.patch, PHOENIX-5156.master.016.patch, 
> PHOENIX-5156.master.017.patch, PHOENIX-5156.master.019.patch, 
> PHOENIX-5156.master.021.patch, PHOENIX-5156.master.022.patch
>
>  Time Spent: 31h 10m
>  Remaining Estimate: 0h
>
> Without transactional tables, mutable global indexes can easily get out of 
> sync with their data tables in Phoenix. Transactional tables require a 
> separate transaction manager and come with some restrictions and performance 
> penalties. This issue is to provide consistent mutable global indexes 
> without the need for transactional tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4871) Query parser throws exception on parameterized join

2019-06-27 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-4871:
-
Fix Version/s: 5.1.0
   4.15.0

> Query parser throws exception on parameterized join
> ---
>
> Key: PHOENIX-4871
> URL: https://issues.apache.org/jira/browse/PHOENIX-4871
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
> Environment: This issue exists on version 4 and I could reproduce it 
> on the current git repo version 
>Reporter: Mehdi Salarkia
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4871-repo.patch, PHOENIX-4871.master.v1.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When a join select statement has a parameter, the Phoenix query parser fails 
> to create the query metadata and rejects this query:
> {code:java}
> SELECT "A"."a2" FROM "A" JOIN "B" ON ("A"."a1" = "B"."b1" ) WHERE "B"."b2" = 
> ? 
> {code}
> with the following exception: 
>  
> {code:java}
> org.apache.calcite.avatica.AvaticaSqlException: Error -1 (0) : while 
> preparing SQL: SELECT "A"."a2" FROM "A" JOIN "B" ON ("A"."a1" = "B"."b1") 
> WHERE ("B"."b2" = ?) 
> at org.apache.calcite.avatica.Helper.createException(Helper.java:54)
> at org.apache.calcite.avatica.Helper.createException(Helper.java:41)
> at 
> org.apache.calcite.avatica.AvaticaConnection.prepareStatement(AvaticaConnection.java:358)
> at 
> org.apache.calcite.avatica.AvaticaConnection.prepareStatement(AvaticaConnection.java:175)
> at 
> org.apache.phoenix.end2end.QueryServerBasicsIT.testParameterizedJoin(QueryServerBasicsIT.java:377)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
> at 
> com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)
> at 
> com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)
> at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
> java.lang.RuntimeException: java.sql.SQLException: ERROR 2004 (INT05): 
> Parameter value unbound. Parameter at index 1 is unbound
> at org.apache.calcite.avatica.jdbc.JdbcMeta.propagate(JdbcMeta.java:700)
> at org.apache.calcite.avatica.jdbc.JdbcMeta.prepare(JdbcMeta.java:726)
> at org.apache.calcite.avatica.remote.LocalService.apply(LocalService.java:195)
> at 
> org.apache.calcite.avatica.remote.Service$PrepareRequest.accept(Service.java:1215)
> at 
> org.apache.calcite.avatica.remote.Service$PrepareRequest.accept(Service.java:1186)
> at 
> org.apache.calcite.avatica.remote.AbstractHandler.apply(AbstractHandler.java:94)
> at 
> org.apache.calcite.avatica.remote.ProtobufHandler.apply(ProtobufHandler.java:46)
> at 
> org.apache.calcite.avatica.server.AvaticaProtobufHandler.handle(AvaticaProtobufHandler.java:127)
> at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
> at 
> 
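For reference, a minimal JDBC sketch of the reproduction, based on the query quoted above. The thin-client URL is an assumption (the stack trace goes through the Phoenix Query Server), and the bound value is arbitrary:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class ParameterizedJoinRepro {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:phoenix:thin:url=http://localhost:8765;serialization=PROTOBUF")) {
            // Preparing the join is what fails: the parser cannot create the
            // query metadata, so the error fires before the parameter is bound.
            PreparedStatement ps = conn.prepareStatement(
                "SELECT \"A\".\"a2\" FROM \"A\" JOIN \"B\" "
                    + "ON (\"A\".\"a1\" = \"B\".\"b1\") WHERE \"B\".\"b2\" = ?");
            ps.setString(1, "some-value");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
{code}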

[jira] [Updated] (PHOENIX-5211) Consistent Immutable Global Indexes for Non-Transactional Tables

2019-06-27 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5211:
-
Attachment: PHOENIX-5211.patch

> Consistent Immutable Global Indexes for Non-Transactional Tables
> 
>
> Key: PHOENIX-5211
> URL: https://issues.apache.org/jira/browse/PHOENIX-5211
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.0, 5.0.0, 4.14.1
>Reporter: Kadir OZDEMIR
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5211.patch
>
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> Without transactional tables, immutable global indexes can easily get out of 
> sync with their data tables in Phoenix. Transactional tables require a 
> separate transaction manager and come with some restrictions and performance 
> penalties. This issue is to provide consistent immutable global indexes 
> without the need for transactional tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5211) Consistent Immutable Global Indexes for Non-Transactional Tables

2019-06-27 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5211:
-
Attachment: (was: PHOENIX-5211.patch)

> Consistent Immutable Global Indexes for Non-Transactional Tables
> 
>
> Key: PHOENIX-5211
> URL: https://issues.apache.org/jira/browse/PHOENIX-5211
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.0, 5.0.0, 4.14.1
>Reporter: Kadir OZDEMIR
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5211.patch
>
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> Without transactional tables, immutable global indexes can easily get out of 
> sync with their data tables in Phoenix. Transactional tables require a 
> separate transaction manager and come with some restrictions and performance 
> penalties. This issue is to provide consistent immutable global indexes 
> without the need for transactional tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5211) Consistent Immutable Global Indexes for Non-Transactional Tables

2019-06-27 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5211:
-
Attachment: PHOENIX-5211.patch

> Consistent Immutable Global Indexes for Non-Transactional Tables
> 
>
> Key: PHOENIX-5211
> URL: https://issues.apache.org/jira/browse/PHOENIX-5211
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.0, 5.0.0, 4.14.1
>Reporter: Kadir OZDEMIR
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5211.patch
>
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> Without transactional tables, immutable global indexes can easily get out of 
> sync with their data tables in Phoenix. Transactional tables require a 
> separate transaction manager and come with some restrictions and performance 
> penalties. This issue is to provide consistent immutable global indexes 
> without the need for transactional tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5211) Consistent Immutable Global Indexes for Non-Transactional Tables

2019-06-27 Thread Gokcen Iskender (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5211:
-
Attachment: (was: PHOENIX-5211.patch)

> Consistent Immutable Global Indexes for Non-Transactional Tables
> 
>
> Key: PHOENIX-5211
> URL: https://issues.apache.org/jira/browse/PHOENIX-5211
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.0, 5.0.0, 4.14.1
>Reporter: Kadir OZDEMIR
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5211.patch
>
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> Without transactional tables, immutable global indexes can easily get out of 
> sync with their data tables in Phoenix. Transactional tables require a 
> separate transaction manager and come with some restrictions and performance 
> penalties. This issue is to provide consistent immutable global indexes 
> without the need for transactional tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: 4.15.0 and 5.1.0 releases

2019-06-27 Thread Geoffrey Jacoby
Lars,

I agree 100% that we should have smaller, more frequent releases going
forward. As for this release, I have two concerns.

The first is indexes. I've added several JIRAs to 4.15 / 5.1 that had
incorrectly not been marked with a Fix Version. These are all part of the
Self-Repairing Index project, which spans several JIRAs and whose first
major one (PHOENIX-5156, allowing newly created mutable indexes to
self-repair inconsistencies at read time) is already in 4.15 and 5.1.
Outstanding JIRAs include PHOENIX-5211 to extend the logic to immutable
indexes, and PHOENIX-5333, to give users a tool to convert their legacy
indexes to the new model. These are all under review and should land very
soon.

Especially given the multiple reports on the user list of operators
encountering index consistency problems (which I have also seen in my own
environments), I think it's important that our next release include these
fixes, and that they go out in a unified way.

The second concern is testing, particularly upgrade, perf and chaos
testing. In addition to the large index changes (for which I know some perf
work and live-cluster testing has been done, with more planned), there are
other major changes in 4.15 such as the splittable system catalog. If all
the issues on the current list were fixed, I'd still be reluctant to put
the bits into production without more due diligence. We've released
binaries with significant regressions in them that were missed in our test
suites before, and it's important to avoid that this time.

Yet Lars's point that we've waited far too long to release is of course
correct. Perhaps the solution is to do what the HBase community did when
the 2.x branch dragged out too long, and after the listed issues are Fixed,
we release an explicit beta, closed to new features, from which a final
release can graduate. In parallel, we could release a 4.14.3 with just the
index changes and the current diff from 4.14.2 so users get those faster.

Or maybe our testing's advanced further than I know about, and we're closer
to green than I think. Happy to hear everyone's thoughts.

Geoffrey

On Thu, Jun 27, 2019 at 10:26 AM la...@apache.org  wrote:

> Hi all,
> we're getting close. The test suite is passing fairly reliably now (minus
> some strange failure to archive the artifact in -1.4 and PartialCommitIT
> failing in -1.3 only).
> I put a lot of effort into speeding up the tests and making them pass.
> Let's please (pretty please :) ) keep it that way. A passing, comprehensive
> test suite is key to frequent releases.
>
> I also committed and pushed some issues to 4.15.1 and 5.1.1 already. But I
> can't do it alone.
>
> There are 14 items to go for 4.15.0. Some of those are potentially serious.
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20PHOENIX%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20%22Patch%20Available%22)%20AND%20fixVersion%20%3D%204.15.0
>
> And 26 items for 5.1.0
>
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20PHOENIX%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20fixVersion%20%3D%205.1.0
>
> Let's make a final push and get these done (or moved to 4.15.1/5.1.1,
> resp.). If you have any issues open, please either get them committed or
> move them to the next release.
>
> And then let's try to never get into this situation again where we have a
> huge unreleased (and unreleasable) code base with 100's or 1000's of
> unreleased changes.
> Thanks!
> -- Lars
>


[jira] [Updated] (PHOENIX-5371) SystemCatalogCreationOnConnectionIT is slow

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5371:
---
Attachment: 5371-v3.txt

> SystemCatalogCreationOnConnectionIT is slow
> ---
>
> Key: PHOENIX-5371
> URL: https://issues.apache.org/jira/browse/PHOENIX-5371
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 4.15.1, 5.1.1
>
> Attachments: 5371-v2.txt, 5371-v3.txt, 5371.txt
>
>
> It's not immediately clear how to fix that, just creating a Jira for this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5333) A tool to upgrade existing tables/indexes to use self-consistent global indexes design

2019-06-27 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5333:
-
Fix Version/s: 5.1.0
   4.15.0

> A tool to upgrade existing tables/indexes to use self-consistent global 
> indexes design
> --
>
> Key: PHOENIX-5333
> URL: https://issues.apache.org/jira/browse/PHOENIX-5333
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Swaroopa Kadam
>Assignee: Swaroopa Kadam
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5333.master.v1.patch
>
>  Time Spent: 11h 10m
>  Remaining Estimate: 0h
>
> A tool to upgrade existing tables/indexes to use the self-consistent global 
> index design introduced in PHOENIX-5156.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5373) GlobalIndexChecker should treat the rows created by the previous design as unverified

2019-06-27 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5373:
-
Fix Version/s: 5.1.0
   4.15.0

> GlobalIndexChecker should treat the rows created by the previous design as 
> unverified 
> --
>
> Key: PHOENIX-5373
> URL: https://issues.apache.org/jira/browse/PHOENIX-5373
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.2
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5373.4.x-HBase-1.4.001.patch, 
> PHOENIX-5373.master.001.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> For the ease of transition from the old global secondary index design to the 
> new one (without having read performance impact), GlobalIndexChecker treats 
> existing index rows (i.e., the rows created by the previous design) as 
> verified. We have discovered that this would lead to keeping stale index rows 
> around forever and including them in the result of queries. A stale index row 
> is a row for which we do not have the corresponding data table row. We do not 
> have the data table row either because the row was deleted (but not the 
> corresponding index row(s)) or because the data table and index rows were 
> written with different timestamps. The assumption was that such rows would be 
> fixed by index rebuild. Unfortunately, without dropping or truncating index 
> tables, these stale rows may not be fixed by index rebuild. Thus, 
> GlobalIndexChecker should treat the rows created by the previous design as 
> unverified.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5211) Consistent Immutable Global Indexes for Non-Transactional Tables

2019-06-27 Thread Geoffrey Jacoby (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Geoffrey Jacoby updated PHOENIX-5211:
-
Fix Version/s: 5.1.0
   4.15.0

> Consistent Immutable Global Indexes for Non-Transactional Tables
> 
>
> Key: PHOENIX-5211
> URL: https://issues.apache.org/jira/browse/PHOENIX-5211
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.13.0, 4.14.0, 5.0.0, 4.14.1
>Reporter: Kadir OZDEMIR
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5211.patch
>
>  Time Spent: 10h 20m
>  Remaining Estimate: 0h
>
> Without transactional tables, immutable global indexes can easily get out of 
> sync with their data tables in Phoenix. Transactional tables require a 
> separate transaction manager and come with some restrictions and performance 
> penalties. This issue is to provide consistent immutable global indexes 
> without the need for transactional tables.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (PHOENIX-4893) Move parent column combining logic of view and view indexes from server to client

2019-06-27 Thread Thomas D'Silva (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas D'Silva reassigned PHOENIX-4893:
---

Assignee: Thomas D'Silva

> Move parent column combining logic of view and view indexes from server to 
> client
> -
>
> Key: PHOENIX-4893
> URL: https://issues.apache.org/jira/browse/PHOENIX-4893
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
>
> In a stack trace I see that phoenix.coprocessor.ViewFinder.findRelatedViews() 
> scans SYSCAT. With splittable SYSCAT, this now involves regions from other 
> servers.
> We should really push this to the client now.
> Related to HBASE-21166
> [~tdsilva]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4968) Dependency upgrade/cleanup

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-4968:
---
Fix Version/s: (was: 5.1.0)
   5.1.1

> Dependency upgrade/cleanup
> --
>
> Key: PHOENIX-4968
> URL: https://issues.apache.org/jira/browse/PHOENIX-4968
> Project: Phoenix
>  Issue Type: Task
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 5.1.1
>
>
> Another round of dependency upgrades for hygiene as well as removal of 
> unneeded, declared dependencies.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5066) The TimeZone is incorrectly used during writing or reading data

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5066:
---
Fix Version/s: (was: 5.1.0)
   5.1.1

> The TimeZone is incorrectly used during writing or reading data
> ---
>
> Key: PHOENIX-5066
> URL: https://issues.apache.org/jira/browse/PHOENIX-5066
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 5.0.0, 4.14.1
>Reporter: jaanai
>Assignee: jaanai
>Priority: Critical
> Fix For: 4.15.1, 5.1.1
>
> Attachments: DateTest.java
>
>
> We have two methods to write data when using the JDBC API:
> #1. Use the _executeUpdate_ method to execute a string that is an upsert SQL 
> statement.
> #2. Use the _prepareStatement_ method to bind objects and execute.
> The _string_ data needs to be converted to a new object based on the schema 
> information of the table. We use date formatters to convert string data to 
> objects for Date/Time/Timestamp types when writing data, and the same 
> formatters are used when reading data as well.
>  
> *Default timezone test*
>  Writing 3 records in the different ways:
> {code:java}
> UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
> 15:40:47','2018-12-10 15:40:47') 
> UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
> 15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
> stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
> time);stmt.setTimestamp(4, ts);
> {code}
> Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
> {code:java}
> 1 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
> 2 | 2018-12-10 | 23:45:07 | 2018-12-10 23:45:07.0 
> 3 | 2018-12-10 | 15:45:07 | 2018-12-10 15:45:07.66 
> {code}
> Reading the table by the getString methods 
> {code:java}
> 1 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 
> 15:45:07.000 
> 2 | 2018-12-10 15:45:07.000 | 2018-12-10 15:45:07.000 | 2018-12-10 
> 15:45:07.000 
> 3 | 2018-12-10 07:45:07.660 | 2018-12-10 07:45:07.660 | 2018-12-10 
> 07:45:07.660
> {code}
>  *GMT+8 test*
>  Writing 3 records in the different ways:
> {code:java}
> UPSERT INTO date_test VALUES (1,'2018-12-10 15:40:47','2018-12-10 
> 15:40:47','2018-12-10 15:40:47')
> UPSERT INTO date_test VALUES (2,to_date('2018-12-10 
> 15:40:47'),to_time('2018-12-10 15:40:47'),to_timestamp('2018-12-10 15:40:47'))
> stmt.setInt(1, 3);stmt.setDate(2, date);stmt.setTime(3, 
> time);stmt.setTimestamp(4, ts);
> {code}
> Reading the table by the getObject(getDate/getTime/getTimestamp) methods.
> {code:java}
> 1 | 2018-12-10 | 23:40:47 | 2018-12-10 23:40:47.0 
> 2 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.0 
> 3 | 2018-12-10 | 15:40:47 | 2018-12-10 15:40:47.106 {code}
> Reading the table by the getString methods
> {code:java}
>  1 | 2018-12-10 23:40:47.000 | 2018-12-10 23:40:47.000 | 2018-12-10 
> 23:40:47.000
> 2 | 2018-12-10 15:40:47.000 | 2018-12-10 15:40:47.000 | 2018-12-10 
> 15:40:47.000
> 3 | 2018-12-10 15:40:47.106 | 2018-12-10 15:40:47.106 | 2018-12-10 
> 15:40:47.106
> {code}
>  
> We have a historical problem: in #1 we parse the string into 
> Date/Time/Timestamp objects with the timezone applied, which means the actual 
> data is changed when it is stored in the HBase table.
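To make the two write paths concrete, a minimal sketch assuming the date_test table from the report and a local Phoenix connection; only standard JDBC calls are used:

{code:java}
import java.sql.Connection;
import java.sql.Date;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;
import java.sql.Time;
import java.sql.Timestamp;

public class DateWritePaths {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
            // Path #1: string literals. Phoenix parses the strings with the
            // client timezone, so the stored value can shift (the historical
            // problem described above).
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate("UPSERT INTO date_test VALUES (1,"
                    + "'2018-12-10 15:40:47','2018-12-10 15:40:47','2018-12-10 15:40:47')");
            }
            // Path #2: bound java.sql objects. No string parsing is involved,
            // so no timezone conversion is applied on write.
            Timestamp ts = Timestamp.valueOf("2018-12-10 15:40:47");
            try (PreparedStatement stmt = conn.prepareStatement(
                    "UPSERT INTO date_test VALUES (?,?,?,?)")) {
                stmt.setInt(1, 3);
                stmt.setDate(2, new Date(ts.getTime()));
                stmt.setTime(3, new Time(ts.getTime()));
                stmt.setTimestamp(4, ts);
                stmt.executeUpdate();
            }
            conn.commit();
        }
    }
}
{code}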



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4910) Improvements to spooled MappedByteBufferQueue files

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-4910:
---
Fix Version/s: (was: 5.1.0)
   5.1.1

> Improvements to spooled MappedByteBufferQueue files
> ---
>
> Key: PHOENIX-4910
> URL: https://issues.apache.org/jira/browse/PHOENIX-4910
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Major
> Fix For: 4.15.1, 5.1.1
>
> Attachments: PHOENIX-4910.001.patch, PHOENIX-4910.002.patch
>
>
> A user ran into a JVM bug which appears to have caused a RegionServer to 
> crash while running a topN aggregate query. This left a large number of files 
> in {{/tmp}} after the RS had gone away (due to a JVM SIGBUS crash). 
> MappedByteBufferQueue will buffer results in memory up to 20MB by default 
> (controlled by {{phoenix.query.spoolThresholdBytes}}) and then start 
> appending them to a file. I'm seeing two things which could be improved:
>  * If the RS exits abnormally, there is no process to clean up files - would 
> be nice to register the {{deleteOnExit()}} hook to try to clean these up.
>  * There is no ability to control where MappedByteBufferQueue writes its 
> spool file - would be nice to use something other than /tmp (I think we have 
> a property to control this already in our config..)
> FYI [~an...@apache.org]
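A minimal sketch of the two suggested improvements, assuming a configurable spool directory property (the property name used here is an assumption) and best-effort cleanup via deleteOnExit():

{code:java}
import java.io.File;
import java.io.IOException;
import java.util.Properties;

public class SpoolFileSketch {
    // Assumed property name for the spool directory; the real key may differ.
    private static final String SPOOL_DIR_PROP = "phoenix.spool.directory";

    static File createSpoolFile(Properties conf) throws IOException {
        // Honor a configurable directory instead of hard-coding /tmp.
        String dir = conf.getProperty(SPOOL_DIR_PROP,
            System.getProperty("java.io.tmpdir"));
        File spool = File.createTempFile("ResultSpooler", ".bin", new File(dir));
        // Best-effort cleanup on normal JVM exit; an abnormal exit (like the
        // SIGBUS crash described above) still needs external cleanup.
        spool.deleteOnExit();
        return spool;
    }
}
{code}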



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5129) Evaluate using same cell as the data cell for storing dynamic column metadata

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5129:
---
Fix Version/s: (was: 5.1.0)
   5.1.1

> Evaluate using same cell as the data cell for storing dynamic column metadata
> -
>
> Key: PHOENIX-5129
> URL: https://issues.apache.org/jira/browse/PHOENIX-5129
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.15.1, 5.1.1
>
>
> In PHOENIX-374 we use shadow cells to store metadata for dynamic columns in 
> order to be able to project these columns for wildcard queries. More details 
> outlined in the [design 
> doc|https://docs.google.com/document/d/1-N6Z6Id0LzJ457BHT542cxqdKfeZgkFvKGW4xKDPtqs/edit].
> This Jira is to discuss changing the approach so that we can store the 
> metadata in the same cell as the dynamic column data, instead of separate 
> shadow cells. This will help reduce the size of store files since we don't 
> have to store additional rows corresponding to the shadow cell.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4830) order by primary key desc return wrong results

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-4830:
---
Fix Version/s: (was: 5.1.0)
   5.1.1

> order by primary key desc return wrong results
> --
>
> Key: PHOENIX-4830
> URL: https://issues.apache.org/jira/browse/PHOENIX-4830
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
> Environment: phoenix-4.14-hbase-1.2
>Reporter: JieChen
>Assignee: Xu Cang
>Priority: Major
>  Labels: DESC
> Fix For: 4.15.1, 5.1.1
>
> Attachments: PHOENIX-4830-4.x-HBase-1.3.001.patch, 
> PHOENIX-4830-4.x-HBase-1.3.002.patch, PHOENIX-4830-4.x-HBase-1.3.003.patch, 
> PHOENIX-4830-4.x-HBase-1.3.004.patch, PHOENIX-4830-4.x-HBase-1.3.005.patch, 
> PHOENIX-4830-4.x-HBase-1.3.006.patch, PHOENIX-4830-4.x-HBase-1.3.007.patch, 
> PHOENIX-4830-4.x-HBase-1.3.007.patch, PHOENIX-4830-4.x-HBase-1.3.008.patch
>
>
> {code:java}
> 0: jdbc:phoenix:localhost>  create table test(id bigint not null primary key, 
> a bigint);
> No rows affected (1.242 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(1,11);
> 1 row affected (0.01 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(2,22);
> 1 row affected (0.007 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(3,33);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> select * from test;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 1   | 11  |
> | 2   | 22  |
> | 3   | 33  |
> +-+-+
> 3 rows selected (0.015 seconds)
> 0: jdbc:phoenix:localhost> select * from test order by id desc limit 2 offset 
> 0;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 3   | 33  |
> | 2   | 22  |
> +-+-+
> 2 rows selected (0.018 seconds)
> 0: jdbc:phoenix:localhost> select * from test where id in (select id from 
> test ) order by id desc limit 2 offset 0;
> +-+-+
> | ID  |  A  |
> +-+-+
> | 2   | 22  |
> | 1   | 11  |
> +-+-+
> wrong results. 
> {code}
> There may be an error in the ScanUtil.setupReverseScan code.
>  Then:
> {code:java}
> 0: jdbc:phoenix:localhost> upsert into test values(4,33);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(5,23);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(6,23);
> 1 row affected (0.005 seconds)
> 0: jdbc:phoenix:localhost> upsert into test values(7,33);
> 1 row affected (0.006 seconds)
> {code}
> Execute this SQL:
> {code:java}
> select * from test where id in (select id from test where a=33) order by id 
> desc;
> {code}
> It throws this exception:
> {code:java}
> Error: org.apache.phoenix.exception.PhoenixIOException: 
> org.apache.hadoop.hbase.DoNotRetryIOException: 
> TEST,,1533266754845.b8e521d4dc8e8b8f18c69cc7ef76973d.: The next hint must 
> come after previous hint 
> (prev=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=\x80\x00\x00\x00\x00\x00\x00\x06/0:\x00\x00\x00\x00/1533266778944/Put/vlen=1/seqid=9)
> at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:96)
> at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:62)
> at 
> org.apache.phoenix.iterate.RegionScannerFactory$1.nextRaw(RegionScannerFactory.java:212)
> at 
> org.apache.phoenix.coprocessor.HashJoinRegionScanner.nextRaw(HashJoinRegionScanner.java:264)
> at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
> at 
> org.apache.phoenix.coprocessor.DelegateRegionScanner.nextRaw(DelegateRegionScanner.java:82)
> at 
> org.apache.phoenix.coprocessor.BaseScannerRegionObserver$RegionScannerHolder.nextRaw(BaseScannerRegionObserver.java:293)
> at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2541)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2183)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:183)
> at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:163)
> Caused by: java.lang.IllegalStateException: The next hint must come after 
> previous hint 
> (prev=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> next=\x80\x00\x00\x00\x00\x00\x00\x07//LATEST_TIMESTAMP/Maximum/vlen=0/seqid=0,
>  
> kv=\x80\x00\x00\x00\x00\x00\x00\x06/0:\x00\x00\x00\x00/1533266778944/Put/vlen=1/seqid=9)
> at 
> org.apache.phoenix.filter.SkipScanFilter.setNextCellHint(SkipScanFilter.java:171)
> at 
> 

[jira] [Updated] (PHOENIX-5020) PhoenixMRJobSubmitter should use a long timeout when getting candidate jobs

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5020:
---
Fix Version/s: (was: 5.1.0)
   5.1.1

> PhoenixMRJobSubmitter should use a long timeout when getting candidate jobs
> ---
>
> Key: PHOENIX-5020
> URL: https://issues.apache.org/jira/browse/PHOENIX-5020
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 5.0.0, 4.14.1
>Reporter: Geoffrey Jacoby
>Assignee: Geoffrey Jacoby
>Priority: Minor
>  Labels: SFDC
> Fix For: 4.15.1, 5.1.1
>
>
> If an environment has a huge SYSTEM.CATALOG (such as one with many views), the 
> query in getCandidateJobs can time out. Because of PHOENIX-4936, this makes it 
> look like there are no indexes that need an async rebuild. In addition to 
> fixing PHOENIX-4936, we should extend the timeout. 
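A sketch of one way a client could apply a longer timeout, assuming the standard phoenix.query.timeoutMs client property governs this query:

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class LongTimeoutConnection {
    static Connection connectWithLongTimeout(String url) throws Exception {
        Properties props = new Properties();
        // A generous timeout so a scan over a huge SYSTEM.CATALOG does not
        // time out and falsely report that no indexes need an async rebuild.
        props.setProperty("phoenix.query.timeoutMs", "600000"); // 10 minutes
        return DriverManager.getConnection(url, props);
    }
}
{code}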



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5258) Add support to parse header from the input CSV file as input columns for CsvBulkLoadTool

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5258:
---
Fix Version/s: (was: 5.1.0)
   5.1.1

> Add support to parse header from the input CSV file as input columns for 
> CsvBulkLoadTool
> 
>
> Key: PHOENIX-5258
> URL: https://issues.apache.org/jira/browse/PHOENIX-5258
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Prashant Vithani
>Assignee: Prashant Vithani
>Priority: Minor
> Fix For: 4.15.1, 5.1.1
>
> Attachments: PHOENIX-5258-4.x-HBase-1.4.001.patch, 
> PHOENIX-5258-4.x-HBase-1.4.patch, PHOENIX-5258-master.001.patch, 
> PHOENIX-5258-master.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently, CsvBulkLoadTool does not support reading the header from the input 
> CSV and expects the content of the CSV to match the table schema. Support 
> for a header can be added to dynamically map the schema to the header.
> The proposed solution is to introduce another option for the tool, 
> `--parse-header`. If this option is passed, the input column list is 
> constructed by reading the first line of the input CSV file:
>  * If there is only one file, read the header from the first line and 
> generate the `ColumnInfo` list.
>  * If there are multiple files, read the header from all the files, and throw 
> an error if the headers across files do not match (see the sketch below).
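A sketch of that header-parsing rule; the helper below is illustrative and not the actual CsvBulkLoadTool API:

{code:java}
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class CsvHeaderParser {
    static List<String> parseHeaders(List<String> inputFiles) throws IOException {
        List<String> header = null;
        for (String file : inputFiles) {
            try (BufferedReader reader = Files.newBufferedReader(Paths.get(file))) {
                String firstLine = reader.readLine();
                if (firstLine == null) {
                    throw new IOException("Empty input file: " + file);
                }
                List<String> columns = Arrays.asList(firstLine.split(","));
                if (header == null) {
                    header = columns; // the first file defines the column list
                } else if (!header.equals(columns)) {
                    // Headers must match across all input files.
                    throw new IllegalArgumentException(
                        "CSV headers differ between input files: " + file);
                }
            }
        }
        return header; // later used to build the ColumnInfo list
    }
}
{code}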



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5362) Mappers should use the queryPlan from the driver rather than regenerating the plan

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5362:
---
Fix Version/s: (was: 5.1.0)
   5.1.1

> Mappers should use the queryPlan from the driver rather than regenerating the 
> plan
> --
>
> Key: PHOENIX-5362
> URL: https://issues.apache.org/jira/browse/PHOENIX-5362
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.15.1, 5.1.1
>
>
> Currently, PhoenixInputFormat#getQueryPlan already generates a queryPlan and 
> we use this plan to get the scans and splits for the MR job. In 
> PhoenixInputFormat#createRecordReader which is called inside each mapper, we 
> again create a queryPlan and pass this to the PhoenixRecordReader instance.
> There are multiple problems with this approach:
> # The mappers already have information about the scans from the driver code. 
> We potentially just need to wrap these scans in an iterator and create a 
> subsequent ResultSet.
> # The mappers don't need most of the information embedded within a queryPlan, 
> so they shouldn't need to regenerate the plan.
> # There are weird corner cases that can occur if we replan the query in each 
> mapper. For example, if there is an index creation or metadata change between 
> when the MR job was created and when the mappers actually launch. In this 
> case, the mappers have the scans created for the first queryPlan, but the 
> mappers will use iterators created for the second queryPlan. In such cases, 
> the issued scans would not match the queryPlan embedded in the mappers' 
> iterators/ResultSet. We could potentially miss some scans, or look for more 
> than we actually require, since we check the original scans for this size. The 
> resolved table would be as per the new queryPlan, and there could be a 
> mismatch here as well (considering the index creation case you mentioned). 
> There are potentially other repercussions in case of intermediary metadata 
> changes as well.
> Serializing a subset of the information (like the projector, which iterator 
> to use, etc.) of a QueryPlan and passing it from the driver to the mappers 
> without having them regenerate the plans seems like the best way forward.
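An illustrative sketch (not the actual Phoenix MR code) of that direction: the driver serializes the planned scan into the job configuration, and the mapper restores it instead of replanning. The configuration key is invented; the TableMapReduceUtil helpers are existing HBase utilities.

{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;

public class ScanHandoff {
    // Invented key for illustration.
    static final String SERIALIZED_SCAN_KEY = "phoenix.mr.serialized.scan";

    // Driver side: store the scan produced by the query plan.
    static void storeScan(Configuration conf, Scan scan) throws IOException {
        conf.set(SERIALIZED_SCAN_KEY, TableMapReduceUtil.convertScanToString(scan));
    }

    // Mapper side: restore the exact scan without regenerating the query plan,
    // avoiding the metadata-change races described above.
    static Scan loadScan(Configuration conf) throws IOException {
        return TableMapReduceUtil.convertStringToScan(conf.get(SERIALIZED_SCAN_KEY));
    }
}
{code}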



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5296) Ensure store file reader refcount is zero at end of relevant unit tests

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5296:
---
Fix Version/s: (was: 5.1.0)
   5.1.1

> Ensure store file reader refcount is zero at end of relevant unit tests
> ---
>
> Key: PHOENIX-5296
> URL: https://issues.apache.org/jira/browse/PHOENIX-5296
> Project: Phoenix
>  Issue Type: Test
>Reporter: Andrew Purtell
>Priority: Major
> Fix For: 4.15.1, 5.1.1
>
>
> Unit and integration tests for functional areas where the implementation 
> wraps scanners and uses a delegate pattern should check at the end of the 
> test that no scanners were leaked (check that all scanners including the 
> inner scanners were properly closed) by testing that the store reader 
> reference count is zero on all stores. 
> HBASE-22459 will offer a new metric and API which exposes this information. 
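A sketch of what such a check could look like in a test base class; getStoreReaderRefCount() is a hypothetical stand-in for whatever metric/API HBASE-22459 ends up exposing:

{code:java}
import static org.junit.Assert.assertEquals;
import org.junit.After;

public abstract class ScannerLeakCheckBaseIT {
    // Hypothetical accessor; the real API is whatever HBASE-22459 exposes.
    protected abstract int getStoreReaderRefCount(String tableName);

    // Name of the table exercised by the concrete test.
    protected abstract String tableUnderTest();

    @After
    public void assertNoLeakedScanners() {
        // If all scanners (including wrapped inner scanners) were closed,
        // every store reader's reference count is back to zero.
        assertEquals(0, getStoreReaderRefCount(tableUnderTest()));
    }
}
{code}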



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


4.15.0 and 5.1.0 releases

2019-06-27 Thread la...@apache.org
Hi all,
we're getting close. The test suite is passing fairly reliably now (minus some 
strange failure to archive the artifact in -1.4 and PartialCommitIT failing in 
-1.3 only).
I put a lot of effort into speeding up the tests and making them pass. Let's 
please (pretty please :) ) keep it that way. A passing, comprehensive test suite 
is key to frequent releases.

I also committed and pushed some issues to 4.15.1 and 5.1.1 already. But I can't 
do it alone.

There are 14 items to go for 4.15.0. Some of those are potentially serious.
https://issues.apache.org/jira/issues/?jql=project%20%3D%20PHOENIX%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20%22Patch%20Available%22)%20AND%20fixVersion%20%3D%204.15.0

And 26 items for 5.1.0
https://issues.apache.org/jira/issues/?jql=project%20%3D%20PHOENIX%20AND%20status%20in%20(Open%2C%20%22In%20Progress%22%2C%20Reopened%2C%20%22Patch%20Available%22)%20AND%20fixVersion%20%3D%205.1.0

Let's make a final push and get these done (or moved to 4.15.1/5.1.1, resp.). If 
you have any issues open, please either get them committed or move them to the 
next release.

And then let's try to never get into this situation again where we have a huge 
unreleased (and unreleasable) code base with 100's or 1000's of unreleased 
changes.
Thanks!
-- Lars


[jira] [Updated] (PHOENIX-5228) use slf4j for logging in phoenix project

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5228:
---
Fix Version/s: (was: 4.15.1)
   4.15.0

> use slf4j for logging in phoenix project
> 
>
> Key: PHOENIX-5228
> URL: https://issues.apache.org/jira/browse/PHOENIX-5228
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.14.1, 5.1.0
>Reporter: Mihir Monani
>Assignee: Xinyi Yan
>Priority: Trivial
>  Labels: SFDC
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5228-4.x-HBase-1.3.patch, 
> PHOENIX-5228-4.x-HBase-1.4.patch, PHOENIX-5228-4.x-HBase-1.5.patch, 
> PHOENIX-5228-FIX-4.x-HBase-1.3.patch, PHOENIX-5228-FIX-4.x-HBase-1.4.patch, 
> PHOENIX-5228-FIX-4.x-HBase-1.4.patch, PHOENIX-5228-FIX-4.x-HBase-1.5.patch, 
> PHOENIX-5228-FIX-master.patch, PHOENIX-5228.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> It would be good to use slf4j for logging in the Phoenix project. Here is a 
> list of files that don't use slf4j (a conversion sketch follows the lists). 
> phoenix-core :-
> {noformat}
> WALRecoveryRegionPostOpenIT.java
> WALReplayWithIndexWritesAndCompressedWALIT.java
> BasePermissionsIT.java
> ChangePermissionsIT.java
> IndexRebuildIncrementDisableCountIT.java
> InvalidIndexStateClientSideIT.java
> MutableIndexReplicationIT.java
> FailForUnsupportedHBaseVersionsIT.java
> SecureUserConnectionsIT.java
> PhoenixMetricsIT.java
> BaseTracingTestIT.java
> PhoenixTracingEndToEndIT.java
> PhoenixRpcSchedulerFactory.java
> IndexHalfStoreFileReaderGenerator.java
> BinaryCompatibleBaseDecoder.java
> ServerCacheClient.java
> CallRunner.java
> MetaDataRegionObserver.java
> PhoenixAccessController.java
> ScanRegionObserver.java
> TaskRegionObserver.java
> DropChildViewsTask.java
> IndexRebuildTask.java
> BaseQueryPlan.java
> HashJoinPlan.java
> CollationKeyFunction.java
> Indexer.java
> LockManager.java
> BaseIndexBuilder.java
> IndexBuildManager.java
> NonTxIndexBuilder.java
> IndexMemStore.java
> BaseTaskRunner.java
> QuickFailingTaskRunner.java
> TaskBatch.java
> ThreadPoolBuilder.java
> ThreadPoolManager.java
> IndexManagementUtil.java
> IndexWriter.java
> IndexWriterUtils.java
> KillServerOnFailurePolicy.java
> ParallelWriterIndexCommitter.java
> RecoveryIndexWriter.java
> TrackingParallelWriterIndexCommitter.java
> PhoenixIndexFailurePolicy.java
> PhoenixTransactionalIndexer.java
> SnapshotScanner.java
> PhoenixEmbeddedDriver.java
> PhoenixResultSet.java
> QueryLogger.java
> QueryLoggerDisruptor.java
> TableLogWriter.java
> PhoenixInputFormat.java
> PhoenixOutputFormat.java
> PhoenixRecordReader.java
> PhoenixRecordWriter.java
> PhoenixServerBuildIndexInputFormat.java
> PhoenixMRJobSubmitter.java
> PhoenixConfigurationUtil.java
> Metrics.java
> DefaultStatisticsCollector.java
> StatisticsScanner.java
> PhoenixMetricsSink.java
> TraceReader.java
> TraceSpanReceiver.java
> TraceWriter.java
> Tracing.java
> EquiDepthStreamHistogram.java
> PhoenixMRJobUtil.java
> QueryUtil.java
> ServerUtil.java
> ZKBasedMasterElectionUtil.java
> IndexTestingUtils.java
> StubAbortable.java
> TestIndexWriter.java
> TestParalleIndexWriter.java
> TestParalleWriterIndexCommitter.java
> TestWALRecoveryCaching.java
> LoggingSink.java
> ParameterizedPhoenixCanaryToolIT.java
> CoprocessorHConnectionTableFactoryTest.java
> TestUtil.java{noformat}
> phoenix-tracing-webapp :-
> {noformat}
> org/apache/phoenix/tracingwebapp/http/Main.java
> {noformat}
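A minimal before/after sketch of the conversion, using one of the listed files (QueryUtil) as the example class name:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class QueryUtil {
    // Before (typically): a direct commons-logging or log4j logger.
    // After: the slf4j facade.
    private static final Logger LOGGER = LoggerFactory.getLogger(QueryUtil.class);

    void logConnection(String url) {
        // Parameterized logging avoids string concatenation when the level is off.
        LOGGER.info("Creating connection with the jdbc url: {}", url);
    }
}
{code}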



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5374) Incorrect exception thrown in some cases when client does not have Exec permissions on SYSTEM:CATALOG

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5374:
---
Fix Version/s: (was: 4.15.1)
   4.15.0

> Incorrect exception thrown in some cases when client does not have Exec 
> permissions on SYSTEM:CATALOG
> -
>
> Key: PHOENIX-5374
> URL: https://issues.apache.org/jira/browse/PHOENIX-5374
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Chinmay Kulkarni
>Assignee: Chinmay Kulkarni
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-5374-4.x-HBase-1.3.patch, 
> PHOENIX-5374-master-v1.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Scenario:
> * Server and client-side namespace-mapping is enabled.
> * "hbase.security.exec.permission.checks" is true on the server. Thus, EXEC 
> permission checking will be performed during coprocessor endpoint 
> invocations. 
> * Client does not have EXEC permissions on SYSTEM:CATALOG
> * Client calls DriverManager.getConnection(..) and this fails with the 
> following exception:
> {noformat}
> java.sql.SQLException: ERROR 2006 (INT08): Incompatible jars detected between 
> client and server. Ensure that phoenix-[version]-server.jar is put on the 
> classpath of HBase in every region server: 
> org.apache.hadoop.hbase.security.AccessDeniedException: Insufficient 
> permissions (user=unprivilegedUser_N07, scope=SYSTEM:CATALOG, 
> params=[table=SYSTEM:CATALOG],action=EXEC)
>   at 
> org.apache.hadoop.hbase.security.access.AccessChecker.requirePermission(AccessChecker.java:281)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.requirePermission(AccessController.java:446)
>   at 
> org.apache.hadoop.hbase.security.access.AccessController.preEndpointInvocation(AccessController.java:2015)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$64.call(RegionCoprocessorHost.java:1707)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$64.call(RegionCoprocessorHost.java:1704)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost$ObserverOperationWithResult.callObserver(CoprocessorHost.java:578)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperation(CoprocessorHost.java:614)
>   at 
> org.apache.hadoop.hbase.coprocessor.CoprocessorHost.execOperationWithResult(CoprocessorHost.java:592)
>   at 
> org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preEndpointInvocation(RegionCoprocessorHost.java:1703)
>   at 
> org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8011)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2409)
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2391)
>   at 
> org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:409)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
> {noformat}
> The above is encountered as a result of a call to _getVersion_ from 
> _ConnectionQueryServicesImpl#checkClientServerCompatibility_, but can occur 
> anywhere we check client-server compatibility. 
> The exception bubbles up as an *INCOMPATIBLE_CLIENT_SERVER_JAR* exception, 
> which is wrong and leads to misleading logs for clients.
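> A minimal sketch of the kind of fix this suggests, assuming we unwrap the 
> remote exception chain before classifying the failure (the method name and 
> messages below are illustrative, not the committed patch):
> {code:java}
> import java.sql.SQLException;
> import org.apache.hadoop.hbase.security.AccessDeniedException;
> 
> public class CompatibilityFailureMapper {
>     // Surface access-control failures as such instead of mapping every
>     // failed getVersion call to INCOMPATIBLE_CLIENT_SERVER_JAR.
>     static SQLException mapCompatibilityFailure(Throwable t) {
>         for (Throwable cause = t; cause != null; cause = cause.getCause()) {
>             if (cause instanceof AccessDeniedException) {
>                 return new SQLException("Insufficient permissions to check "
>                     + "client/server compatibility: " + cause.getMessage(), cause);
>             }
>         }
>         return new SQLException("ERROR 2006 (INT08): Incompatible jars "
>             + "detected between client and server.", t);
>     }
> }
> {code}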



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5339) Refactor IndexBuildManager and IndexBuilder to eliminate usage of Pair

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5339:
---
Fix Version/s: (was: 5.1.0)
   (was: 4.15.0)
   5.1.1
   4.15.1

> Refactor IndexBuildManager and IndexBuilder to eliminate usage of Pair
> --
>
> Key: PHOENIX-5339
> URL: https://issues.apache.org/jira/browse/PHOENIX-5339
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Kadir OZDEMIR
>Assignee: Gokcen Iskender
>Priority: Major
> Fix For: 4.15.1, 4.14.3, 5.1.1
>
>
> Some methods of the IndexBuildManager and IndexBuilder classes return a 
> collection of pairs or triplets (implemented as a pair of a pair and a 
> single value). This makes the Indexer and especially the IndexRegionObserver 
> code difficult to read. We can replace the pair structures with a class, say 
> IndexUpdate, to make the code more readable.
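> As a rough sketch of the proposed value class (field names are illustrative; 
> the actual shape would follow whatever the triplet currently carries):
> {code:java}
> import org.apache.hadoop.hbase.client.Mutation;
> 
> // Replaces Pair<Pair<Mutation, byte[]>, ...>-style return values with a
> // named structure, so call sites read as update.getIndexTableName()
> // rather than pair.getFirst().getSecond().
> public class IndexUpdate {
>     private final Mutation mutation;      // the index mutation to apply
>     private final byte[] indexTableName;  // the index table it targets
> 
>     public IndexUpdate(Mutation mutation, byte[] indexTableName) {
>         this.mutation = mutation;
>         this.indexTableName = indexTableName;
>     }
> 
>     public Mutation getMutation() { return mutation; }
>     public byte[] getIndexTableName() { return indexTableName; }
> }
> {code}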



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5338) Test the empty column

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5338:
---
Fix Version/s: (was: 5.1.0)
   (was: 4.15.0)
   5.1.1
   4.15.1

> Test the empty column
> -
>
> Key: PHOENIX-5338
> URL: https://issues.apache.org/jira/browse/PHOENIX-5338
> Project: Phoenix
>  Issue Type: Test
>Reporter: Kadir OZDEMIR
>Assignee: Jacob Isaac
>Priority: Major
> Fix For: 4.15.1, 4.14.3, 5.1.1
>
>
> Every Phoenix table includes a shadow column called the empty column. We 
> need an integration test to verify the following properties of the empty 
> column (a sketch of such a check follows the list):
>  # Every Phoenix table (data or index) should have the empty column
>  # Every HBase mutation (full or partial row) for a Phoenix table should 
> include the empty column cell
>  # Removing/adding columns from/to a Phoenix table should not impact the 
> above empty column properties
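> A minimal sketch of the first two checks, assuming the default column family 
> "0" and the default empty column qualifier "_0" (encoded-column tables may 
> use different qualifiers, so a real test should read them from the PTable):
> {code:java}
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.client.ResultScanner;
> import org.apache.hadoop.hbase.client.Scan;
> import org.apache.hadoop.hbase.client.Table;
> import org.apache.hadoop.hbase.util.Bytes;
> 
> public class EmptyColumnCheck {
>     static void assertEveryRowHasEmptyColumn(Connection hbaseConn, String table)
>             throws Exception {
>         byte[] family = Bytes.toBytes("0");    // assumed default column family
>         byte[] emptyCq = Bytes.toBytes("_0");  // assumed default empty column
>         try (Table t = hbaseConn.getTable(TableName.valueOf(table));
>              ResultScanner scanner = t.getScanner(new Scan())) {
>             for (Result row : scanner) {
>                 // Properties 1 and 2: every row carries the empty column cell.
>                 if (!row.containsColumn(family, emptyCq)) {
>                     throw new AssertionError("missing empty column in row "
>                         + Bytes.toStringBinary(row.getRow()));
>                 }
>             }
>         }
>     }
> }
> {code}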



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4810) Send parent->child link mutations to SYSTEM.CHILD_LINK table in MetaDataClient.createTableInternal

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-4810:
---
Fix Version/s: 5.1.0
   4.15.0

> Send parent->child link mutations to SYSTEM.CHILD_LINK table in 
> MetaDataClient.createTableInternal 
> --
>
> Key: PHOENIX-4810
> URL: https://issues.apache.org/jira/browse/PHOENIX-4810
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Thomas D'Silva
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
>
> Instead of sending the parent->child link mutations through 
> MetaDataEndpointImpl.createTable, write them directly to SYSTEM.CHILD_LINK.
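> A hedged sketch of the direction, not the committed change; the row key 
> layout, qualifier, and link-type encoding below are placeholders:
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.Put;
> import org.apache.hadoop.hbase.client.Table;
> import org.apache.hadoop.hbase.util.Bytes;
> 
> public class ChildLinkWriter {
>     // Write the parent->child link row straight to SYSTEM.CHILD_LINK rather
>     // than routing it through the MetaDataEndpointImpl.createTable RPC.
>     static void writeChildLink(Connection hbaseConn, byte[] childLinkRowKey)
>             throws IOException {
>         try (Table childLink =
>                 hbaseConn.getTable(TableName.valueOf("SYSTEM.CHILD_LINK"))) {
>             Put linkRow = new Put(childLinkRowKey); // parent key + child view key
>             linkRow.addColumn(Bytes.toBytes("0"),   // default Phoenix family
>                 Bytes.toBytes("LINK_TYPE"),         // placeholder qualifier
>                 new byte[] { 4 });                  // placeholder link-type value
>             childLink.put(linkRow);
>         }
>     }
> }
> {code}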



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5332) Fix slow running Integration Tests

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5332:
---
Fix Version/s: (was: 5.1.0)
   (was: 4.15.0)

> Fix slow running Integration Tests
> --
>
> Key: PHOENIX-5332
> URL: https://issues.apache.org/jira/browse/PHOENIX-5332
> Project: Phoenix
>  Issue Type: Test
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 1.5-test-times.txt, master-test-times.txt
>
>
> I picked out the slow-running tests from the last (only) successful Jenkins 
> run.
> The overall run took 3 hr 9 min (on H26)
> I picked the slow ones in the following way:
> * More than 60s in the default tests
> * More than 120s in the StatsEnabledTests
> * And more than 200s in StatsDisabledTests, etc.
> {code:java}
> [INFO] --- maven-surefire-plugin:2.20:test (default-test) @ phoenix-core ---
> [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.455 
> s - in org.apache.phoenix.iterate.ChunkedResultIteratorTest
> [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 66.869 
> s - in org.apache.phoenix.util.CoprocessorHConnectionTableFactoryTest
> [INFO] --- maven-failsafe-plugin:2.20:integration-test 
> (ParallelStatsEnabledTest) @ phoenix-core ---
> [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 108.031 s - in org.apache.phoenix.end2end.QueryWithTableSampleIT
> [INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 120.746 s - in org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT
> [INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 222.043 s - in org.apache.phoenix.end2end.TenantSpecificTablesDDLIT
> [INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 212.731 s - in org.apache.phoenix.end2end.TenantSpecificTablesDMLIT
> [INFO] --- maven-failsafe-plugin:2.20:integration-test 
> (ParallelStatsDisabledTest) @ phoenix-core ---
> [INFO] Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 501.869 s - in org.apache.phoenix.end2end.AlterTableIT
> [INFO] Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 453.287 s - in org.apache.phoenix.end2end.CastAndCoerceIT
> [INFO] Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 576.443 s - in org.apache.phoenix.end2end.CaseStatementIT
> [INFO] Tests run: 57, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 395.982 s - in org.apache.phoenix.end2end.DateTimeIT
> [INFO] Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 560.619 s - in org.apache.phoenix.end2end.DeleteIT
> [INFO] Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 596.039 s - in org.apache.phoenix.end2end.InQueryIT
> [INFO] Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 330.19 s - in org.apache.phoenix.end2end.NotQueryWithGlobalImmutableIndexesIT
> [INFO] Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 672.726 s - in org.apache.phoenix.end2end.GroupByIT
> [INFO] Tests run: 70, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 664.045 s - in org.apache.phoenix.end2end.IntArithmeticIT
> [INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 383.414 s - in org.apache.phoenix.end2end.NotQueryWithLocalImmutableIndexesIT
> [INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 230.93 s - in 
> org.apache.phoenix.end2end.OrderByWithServerClientSpoolingDisabledIT
> [INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 227.712 s - in org.apache.phoenix.end2end.OrderByIT
> [INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 228.137 s - in org.apache.phoenix.end2end.OrderByWithSpillingIT
> [INFO] Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 537.589 s - in org.apache.phoenix.end2end.NullIT
> [INFO] Tests run: 48, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 446.558 s - in org.apache.phoenix.end2end.OnDuplicateKeyIT
> [INFO] Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 1,262.998 s - in org.apache.phoenix.end2end.InListIT
> [INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 582.677 s - in org.apache.phoenix.end2end.OrphanViewToolIT
> [INFO] Tests run: 55, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 254.14 s - in org.apache.phoenix.end2end.ProductMetricsIT
> [INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 300.546 s - in org.apache.phoenix.end2end.PropertiesInSyncIT
> [INFO] Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 410.912 s - in org.apache.phoenix.end2end.PointInTimeQueryIT
> [INFO] Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 400.997 s - in 

[jira] [Resolved] (PHOENIX-5332) Fix slow running Integration Tests

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-5332.

Resolution: Implemented

Closing this parent issue as implemented.

> Fix slow running Integration Tests
> --
>
> Key: PHOENIX-5332
> URL: https://issues.apache.org/jira/browse/PHOENIX-5332
> Project: Phoenix
>  Issue Type: Test
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 1.5-test-times.txt, master-test-times.txt
>
>
> I picked out the slow-running tests from the last (only) successful Jenkins 
> run.
> The overall run took 3 hr 9 min (on H26)
> I picked the slow ones in the following way:
> * More than 60s in the default tests
> * More than 120s in the StatsEnabledTests
> * And more than 200s in StatsDisabledTests, etc.
> {code:java}
> [INFO] --- maven-surefire-plugin:2.20:test (default-test) @ phoenix-core ---
> [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 57.455 
> s - in org.apache.phoenix.iterate.ChunkedResultIteratorTest
> [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 66.869 
> s - in org.apache.phoenix.util.CoprocessorHConnectionTableFactoryTest
> [INFO] --- maven-failsafe-plugin:2.20:integration-test 
> (ParallelStatsEnabledTest) @ phoenix-core ---
> [INFO] Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 108.031 s - in org.apache.phoenix.end2end.QueryWithTableSampleIT
> [INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 120.746 s - in org.apache.phoenix.end2end.ExplainPlanWithStatsEnabledIT
> [INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 222.043 s - in org.apache.phoenix.end2end.TenantSpecificTablesDDLIT
> [INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 212.731 s - in org.apache.phoenix.end2end.TenantSpecificTablesDMLIT
> [INFO] --- maven-failsafe-plugin:2.20:integration-test 
> (ParallelStatsDisabledTest) @ phoenix-core ---
> [INFO] Tests run: 54, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 501.869 s - in org.apache.phoenix.end2end.AlterTableIT
> [INFO] Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 453.287 s - in org.apache.phoenix.end2end.CastAndCoerceIT
> [INFO] Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 576.443 s - in org.apache.phoenix.end2end.CaseStatementIT
> [INFO] Tests run: 57, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 395.982 s - in org.apache.phoenix.end2end.DateTimeIT
> [INFO] Tests run: 64, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 560.619 s - in org.apache.phoenix.end2end.DeleteIT
> [INFO] Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 596.039 s - in org.apache.phoenix.end2end.InQueryIT
> [INFO] Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 330.19 s - in org.apache.phoenix.end2end.NotQueryWithGlobalImmutableIndexesIT
> [INFO] Tests run: 63, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 672.726 s - in org.apache.phoenix.end2end.GroupByIT
> [INFO] Tests run: 70, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 664.045 s - in org.apache.phoenix.end2end.IntArithmeticIT
> [INFO] Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 383.414 s - in org.apache.phoenix.end2end.NotQueryWithLocalImmutableIndexesIT
> [INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 230.93 s - in 
> org.apache.phoenix.end2end.OrderByWithServerClientSpoolingDisabledIT
> [INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 227.712 s - in org.apache.phoenix.end2end.OrderByIT
> [INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 228.137 s - in org.apache.phoenix.end2end.OrderByWithSpillingIT
> [INFO] Tests run: 56, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 537.589 s - in org.apache.phoenix.end2end.NullIT
> [INFO] Tests run: 48, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 446.558 s - in org.apache.phoenix.end2end.OnDuplicateKeyIT
> [INFO] Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 1,262.998 s - in org.apache.phoenix.end2end.InListIT
> [INFO] Tests run: 28, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 582.677 s - in org.apache.phoenix.end2end.OrphanViewToolIT
> [INFO] Tests run: 55, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 254.14 s - in org.apache.phoenix.end2end.ProductMetricsIT
> [INFO] Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 300.546 s - in org.apache.phoenix.end2end.PropertiesInSyncIT
> [INFO] Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 410.912 s - in org.apache.phoenix.end2end.PointInTimeQueryIT
> [INFO] Tests run: 42, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 
> 400.997 s - in 

[jira] [Updated] (PHOENIX-4768) Re-enable testCompactUpdatesStats and testCompactUpdatesStatsWithMinStatsUpdateFreq of StatsCollectorIT

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-4768:
---
Fix Version/s: (was: 4.15.0)
   4.15.1

> Re-enable testCompactUpdatesStats and 
> testCompactUpdatesStatsWithMinStatsUpdateFreq of StatsCollectorIT
> ---
>
> Key: PHOENIX-4768
> URL: https://issues.apache.org/jira/browse/PHOENIX-4768
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0
>Reporter: Ankit Singhal
>Assignee: Ankit Singhal
>Priority: Major
> Fix For: 4.15.1
>
>
> Re-enable these tests, as TEPHRA-208 has been committed for a long time now.
> StatsCollectorIT#testCompactUpdatesStats
> StatsCollectorIT#testCompactUpdatesStatsWithMinStatsUpdateFreq
>  
> Requires some fixes for the namespace-enabled cases.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5371) SystemCatalogCreationOnConnectionIT is slow

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5371:
---
Fix Version/s: 5.1.1
   4.15.1

> SystemCatalogCreationOnConnectionIT is slow
> ---
>
> Key: PHOENIX-5371
> URL: https://issues.apache.org/jira/browse/PHOENIX-5371
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Major
> Fix For: 4.15.1, 5.1.1
>
> Attachments: 5371-v2.txt, 5371.txt
>
>
> It's not immediately clear how to fix this; filing a JIRA to track it for now.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5117) Return the count of rows scanned in HBase

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5117:
---
Fix Version/s: (was: 5.1.0)
   (was: 4.15.0)
   5.1.1
   4.15.1

> Return the count of rows scanned in HBase
> -
>
> Key: PHOENIX-5117
> URL: https://issues.apache.org/jira/browse/PHOENIX-5117
> Project: Phoenix
>  Issue Type: New Feature
>Affects Versions: 4.14.1
>Reporter: Chen Feng
>Assignee: Chen Feng
>Priority: Minor
> Fix For: 4.15.1, 5.1.1
>
> Attachments: PHOENIX-5117-v1.patch
>
>
> HBASE-5980 provides the ability to return the number of rows scanned. Such 
> metrics should also be returned by Phoenix.
> HBASE-21815 is required.
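> For reference, a sketch of the underlying HBase client API this would build 
> on (HBASE-5980 adds the rows-scanned counter; HBASE-21815 makes the metrics 
> readable from the scanner). This is plain HBase code, not the proposed 
> Phoenix surface:
> {code:java}
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.Result;
> import org.apache.hadoop.hbase.client.ResultScanner;
> import org.apache.hadoop.hbase.client.Scan;
> import org.apache.hadoop.hbase.client.Table;
> import org.apache.hadoop.hbase.client.metrics.ScanMetrics;
> 
> public class RowsScannedExample {
>     static long countRowsScanned(Connection conn, String tableName)
>             throws Exception {
>         Scan scan = new Scan();
>         scan.setScanMetricsEnabled(true); // ask the client to track metrics
>         try (Table table = conn.getTable(TableName.valueOf(tableName));
>              ResultScanner scanner = table.getScanner(scan)) {
>             for (Result r : scanner) {
>                 // consume all results so the metrics are complete
>             }
>             ScanMetrics metrics = scanner.getScanMetrics();
>             return metrics.countOfRowsScanned.get();
>         }
>     }
> }
> {code}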



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5325) Fix some pherf tests that are failing.

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5325:
---
Fix Version/s: (was: 5.1)
   5.1.0

> Fix some pherf tests that are failing.
> --
>
> Key: PHOENIX-5325
> URL: https://issues.apache.org/jira/browse/PHOENIX-5325
> Project: Phoenix
>  Issue Type: Test
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: 5325-4.x-HBase-1.5.txt
>
>
> Exposed by PHOENIX-5316, [~tdsilva]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-4846) WhereOptimizer.pushKeyExpressionsToScan() does not work correctly if the sort order of pk columns being filtered on changes

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-4846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-4846:
---
Priority: Critical  (was: Major)

> WhereOptimizer.pushKeyExpressionsToScan() does not work correctly if the sort 
> order of pk columns being filtered on changes
> ---
>
> Key: PHOENIX-4846
> URL: https://issues.apache.org/jira/browse/PHOENIX-4846
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.14.0, 5.0.0
>Reporter: Thomas D'Silva
>Priority: Critical
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-4846-wip.patch
>
>
> {{ExpressionComparabilityWrapper}} should set the sort order based on 
> {{childPart.getColumn()}} or else the attached test throws an 
> IllegalArgumentException:
> {code}
> java.lang.IllegalArgumentException: 4 > 3
> at java.util.Arrays.copyOfRange(Arrays.java:3519)
> at 
> org.apache.hadoop.hbase.io.ImmutableBytesWritable.copyBytes(ImmutableBytesWritable.java:272)
> at 
> org.apache.phoenix.compile.WhereOptimizer.getTrailingRange(WhereOptimizer.java:329)
> at 
> org.apache.phoenix.compile.WhereOptimizer.clipRight(WhereOptimizer.java:350)
> at 
> org.apache.phoenix.compile.WhereOptimizer.pushKeyExpressionsToScan(WhereOptimizer.java:237)
> at org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:157)
> at org.apache.phoenix.compile.WhereCompiler.compile(WhereCompiler.java:108)
> at 
> org.apache.phoenix.compile.QueryCompiler.compileSingleFlatQuery(QueryCompiler.java:556)
> {code}
> Also, in {{pushKeyExpressionsToScan()}} we cannot extract PK column nodes from 
> the WHERE clause if the sort order of the columns changes (see the sketch 
> below).
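> A hedged illustration of the shape of query that can hit this path (not the 
> attached test; the table and values are made up):
> {code:java}
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.SQLException;
> import java.sql.Statement;
> 
> public class SortOrderRepro {
>     // A PK whose sort order flips (B is DESC) plus a row value constructor
>     // filter spanning both columns exercises the clipRight/getTrailingRange
>     // path shown in the stack trace above.
>     public static void main(String[] args) throws SQLException {
>         try (Connection conn =
>                 DriverManager.getConnection("jdbc:phoenix:localhost");
>              Statement stmt = conn.createStatement()) {
>             stmt.execute("CREATE TABLE T (A VARCHAR NOT NULL, B VARCHAR NOT NULL "
>                 + "CONSTRAINT PK PRIMARY KEY (A, B DESC))");
>             stmt.executeQuery("SELECT * FROM T WHERE (A, B) > ('a', 'b')").close();
>         }
>     }
> }
> {code}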



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5176) KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-5176.

Resolution: Fixed

Done. :)

> KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when 
> two key ranges have the same upper bound values but one is inclusive and 
> another is exclusive 
> -
>
> Key: PHOENIX-5176
> URL: https://issues.apache.org/jira/browse/PHOENIX-5176
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Bin Shi
>Assignee: Bin Shi
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> In KeyRange.java:
> {code:java}
> public static int compareUpperRange(KeyRange rowKeyRange1, KeyRange rowKeyRange2) {
>     int result = Boolean.compare(rowKeyRange1.upperUnbound(), rowKeyRange2.upperUnbound());
>     if (result != 0) {
>         return result;
>     }
>     result = Bytes.BYTES_COMPARATOR.compare(rowKeyRange1.getUpperRange(), rowKeyRange2.getUpperRange());
>     if (result != 0) {
>         return result;
>     }
>     return Boolean.compare(rowKeyRange2.isUpperInclusive(), rowKeyRange1.isUpperInclusive());
> }
> {code}
> The last line should be {{return Boolean.compare(*rowKeyRange1*.isUpperInclusive(), 
> *rowKeyRange2*.isUpperInclusive());}}. Given rowKeyRange1 [3, 5) and 
> rowKeyRange2 [3, 5], the function should return -1, but it currently returns 1 
> because the arguments are swapped.
>  
> KeyRange.compareUpperRange is only used in 
> KeyRange.intersect(List<KeyRange> rowKeyRanges1, List<KeyRange> rowKeyRanges2). 
> Given rowKeyRanges1 \{[3, 5), [5, 6)} and rowKeyRanges2 \{[3, 5], [6, 7]}, the 
> function should return \{[3, 5), [5, 5]}, i.e., \{[3, 5]}, but it currently 
> returns \{[3, 5)} due to the bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5176) KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when two key ranges have the same upper bound values but one is inclusive and another is exclusive

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5176:
---
Fix Version/s: 5.1.0

> KeyRange.compareUpperRange(KeyRange 1, KeyRange 2) returns wrong result when 
> two key ranges have the same upper bound values but one is inclusive and 
> another is exclusive 
> -
>
> Key: PHOENIX-5176
> URL: https://issues.apache.org/jira/browse/PHOENIX-5176
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Bin Shi
>Assignee: Bin Shi
>Priority: Blocker
> Fix For: 4.15.0, 5.1.0
>
>  Time Spent: 4h
>  Remaining Estimate: 0h
>
> In KeyRange.java:
> {code:java}
> public static int compareUpperRange(KeyRange rowKeyRange1, KeyRange rowKeyRange2) {
>     int result = Boolean.compare(rowKeyRange1.upperUnbound(), rowKeyRange2.upperUnbound());
>     if (result != 0) {
>         return result;
>     }
>     result = Bytes.BYTES_COMPARATOR.compare(rowKeyRange1.getUpperRange(), rowKeyRange2.getUpperRange());
>     if (result != 0) {
>         return result;
>     }
>     return Boolean.compare(rowKeyRange2.isUpperInclusive(), rowKeyRange1.isUpperInclusive());
> }
> {code}
> The last line should be {{return Boolean.compare(*rowKeyRange1*.isUpperInclusive(), 
> *rowKeyRange2*.isUpperInclusive());}}. Given rowKeyRange1 [3, 5) and 
> rowKeyRange2 [3, 5], the function should return -1, but it currently returns 1 
> because the arguments are swapped.
>  
> KeyRange.compareUpperRange is only used in 
> KeyRange.intersect(List<KeyRange> rowKeyRanges1, List<KeyRange> rowKeyRanges2). 
> Given rowKeyRanges1 \{[3, 5), [5, 6)} and rowKeyRanges2 \{[3, 5], [6, 7]}, the 
> function should return \{[3, 5), [5, 5]}, i.e., \{[3, 5]}, but it currently 
> returns \{[3, 5)} due to the bug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5378) We are looking for a solution to connect to our Apache Phoenix cluster from a node.js external IP.

2019-06-27 Thread Ahmed (JIRA)
Ahmed created PHOENIX-5378:
--

 Summary: We are looking for a solution to connect to our Apache 
Phoenix cluster from a node.js external IP.
 Key: PHOENIX-5378
 URL: https://issues.apache.org/jira/browse/PHOENIX-5378
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.7.0
Reporter: Ahmed
 Fix For: 4.7.0
 Attachments: Screenshot from 2019-06-27 10-31-52.png

We are able to connect to Apache Phoenix using node.js from inside the 
cluster, which contains more than 30 virtual machines. We are now trying to 
connect from a local node.js backend to Apache Phoenix and extract the data.



We did the following:

 - Imported the configuration files "hbase-site.xml", "core-site.xml", and 
"hdfs-site.xml" from the server.

 - Added the following lines to put the client jar and those files on the 
classpath:

jinst.setupClasspath([
  '/usr/local/HBase/lib/phoenix-4.7.0.2.6.4.0-91-client.jar',
  './hdp'
])

 - Changed the internal IP address to the external IP address of the ZooKeeper 
server, like the following: url: 
'jdbc:phoenix::2181:/hbase-unsecure'



When we start the node.js backend, the connection with the main ZooKeeper is 
established, but it is unable to establish a connection with one of the 
nodes, failing with the following error:
2019-06-26 13:09:06,153 INFO  [hconnection-0xea4a92b-shared--pool1-t1] 
client.RpcRetryingCaller: Call exception, tries=10, retries=35, started=166981 
ms ago, cancelled=false, msg=1 millis timeout while waiting for channel to 
be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending 
remote=namenode/10.0.0.4:16020] row 'SYSTEM:CATALOG,,' on table 'hbase:meta' at 
region=hbase:meta,,1.1588230740, hostname=namenode,16020,1560527811592, 
seqNum=0 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (PHOENIX-5377) SpeedUp LocalIndexSplitMergeIT

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved PHOENIX-5377.

   Resolution: Fixed
 Assignee: Lars Hofhansl
Fix Version/s: 5.1.0
   4.15.0

> SpeedUp LocalIndexSplitMergeIT
> --
>
> Key: PHOENIX-5377
> URL: https://issues.apache.org/jira/browse/PHOENIX-5377
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Lars Hofhansl
>Priority: Major
> Fix For: 4.15.0, 5.1.0
>
> Attachments: 5377.txt
>
>
> I noticed that 95% of the time is spent in testLocalIndexScanAfterRegionSplit.
> In that test we split the table 4 times, and each time wait until the parent 
> region is removed (in a major compaction).
> The test is just as valid if we split the table only twice. That effectively 
> cuts almost 50% off the runtime. Since we test for the number of chunks we 
> have in the explain plan, I cannot do the same trick I did in PHOENIX-5336.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (PHOENIX-5377) SpeedUp LocalIndexSplitMergeIT

2019-06-27 Thread Lars Hofhansl (JIRA)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated PHOENIX-5377:
---
Attachment: 5377.txt

> SpeedUp LocalIndexSplitMergeIT
> --
>
> Key: PHOENIX-5377
> URL: https://issues.apache.org/jira/browse/PHOENIX-5377
> Project: Phoenix
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 5377.txt
>
>
> I noticed that 95% of the time is spent in testLocalIndexScanAfterRegionSplit.
> In that test we split the table 4 times, and each time wait until the parent 
> region is removed (in a major compaction).
> The test is just as valid if we split the table only twice. That effectively 
> cuts almost 50% off the runtime. Since we test for the number of chunks we 
> have in the explain plan, I cannot do the same trick I did in PHOENIX-5336.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (PHOENIX-5377) SpeedUp LocalIndexSplitMergeIT

2019-06-27 Thread Lars Hofhansl (JIRA)
Lars Hofhansl created PHOENIX-5377:
--

 Summary: SpeedUp LocalIndexSplitMergeIT
 Key: PHOENIX-5377
 URL: https://issues.apache.org/jira/browse/PHOENIX-5377
 Project: Phoenix
  Issue Type: Sub-task
Reporter: Lars Hofhansl


I noticed that 95% of the time is spent in testLocalIndexScanAfterRegionSplit.

In that test we split the table 4 times, and each time wait until the parent 
region is removed (in a major compaction).

The test is just as valid if we split the table only twice. That effectively 
cuts almost 50% off the runtime. Since we test for the number of chunks we 
have in the explain plan, I cannot do the same trick I did in PHOENIX-5336.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)