[jira] [Updated] (PHOENIX-5769) Phoenix precommit Flapping HadoopQA Tests in master

2020-03-31 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-5769:
-
Attachment: PHOENIX-5769.master.v3.patch

> Phoenix precommit Flapping HadoopQA Tests in master 
> 
>
> Key: PHOENIX-5769
> URL: https://issues.apache.org/jira/browse/PHOENIX-5769
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Daniel Wong
>Assignee: Istvan Toth
>Priority: Major
> Attachments: PHOENIX-5769.master.v1.patch, 
> PHOENIX-5769.master.v3.patch, consoleFull (1).html, consoleFull (2).html, 
> consoleFull (3).html, consoleFull (4).html, consoleFull (5).html, consoleFull 
> (6).html, consoleFull (7).html, consoleFull (8).html, consoleFull.html
>
>
> I was recently trying to commit changes to Phoenix for multiple issues and 
> was asked to get clean HadoopQA runs.  However, this took a huge effort, as I 
> had to resubmit the same patch multiple times in order to get one "clean" run.  
> Looking at the errors, the most common were 3 "Multiple regions on " failures, 
> 3 Apache infra issues (host shutdown), 1 
> org.apache.hadoop.hbase.NotServingRegionException, and 1 
> SnapshotDoesNotExistException.  See builds 
> [https://builds.apache.org/job/PreCommit-PHOENIX-Build/] from the 3540s to the 
> 3560s.  In addition, I see multiple builds running simultaneously; limiting 
> tests to running on one host at a time should be configurable, right?
> In addition, [~yanxinyi] suggested that master was less likely than 4.x to 
> have issues getting a clean run.  FYI [~ckulkarni]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5814) disable trimStackTrace

2020-03-31 Thread Istvan Toth (Jira)
Istvan Toth created PHOENIX-5814:


 Summary: disable trimStackTrace
 Key: PHOENIX-5814
 URL: https://issues.apache.org/jira/browse/PHOENIX-5814
 Project: Phoenix
  Issue Type: Improvement
  Components: connectors, core, omid, queryserver, tephra
Reporter: Istvan Toth
Assignee: Istvan Toth


The default trimStackTrace=true Maven Surefire setting is quite effective at 
making test output useless for all but the most trivial failures.

I propose setting it to false everywhere (i.e. in all five repos).
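
A minimal sketch of what this could look like, assuming it goes into the 
surefire/failsafe plugin configuration of each repo's parent pom (the plugin 
entries below are illustrative, not the exact existing pom content):

{code:xml}
<!-- in pluginManagement of the parent pom(s) -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- keep full stack traces in unit test output instead of trimmed ones -->
    <trimStackTrace>false</trimStackTrace>
  </configuration>
</plugin>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <!-- same for the integration tests -->
    <trimStackTrace>false</trimStackTrace>
  </configuration>
</plugin>
{code}

For a one-off run, -DtrimStackTrace=false on the mvn command line should have 
the same effect.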



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5793) Support parallel init and fast null return for SortMergeJoinPlan.

2020-03-31 Thread Chen Feng (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Feng updated PHOENIX-5793:
---
Attachment: (was: PHOENIX-5793-v5.patch)

> Support parallel init and fast null return for SortMergeJoinPlan.
> -
>
> Key: PHOENIX-5793
> URL: https://issues.apache.org/jira/browse/PHOENIX-5793
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Chen Feng
>Assignee: Chen Feng
>Priority: Minor
> Attachments: PHOENIX-5793-v2.patch, PHOENIX-5793-v3.patch, 
> PHOENIX-5793-v4.patch
>
>
> For a join SQL like A join B, the SortMergeJoinPlan implementation currently 
> initializes the two child iterators A and B one after the other.
> By initializing A and B in parallel, we can improve performance in two ways:
> 1) by overlapping the initialization time of the two children;
> 2) if one child query returns no rows, the other child query can be canceled, 
> since the final result must be empty (see the sketch below).
>  
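> A rough, hypothetical sketch of the idea (ChildPlan and openBoth are 
> placeholder names, not the actual SortMergeJoinPlan API):
> {code:java}
> import java.util.Arrays;
> import java.util.Collections;
> import java.util.Iterator;
> import java.util.List;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.Future;
> 
> // Placeholder for whatever the real plan uses to open a child result iterator.
> interface ChildPlan {
>     Iterator<Object> open() throws Exception;   // potentially slow (scan, sort, ...)
> }
> 
> final class ParallelJoinInit {
>     // Open both children concurrently; if the left side is empty, cancel the
>     // right side, since the join result must then be empty as well.
>     // A symmetric check for the right side is omitted for brevity.
>     static List<Iterator<Object>> openBoth(ChildPlan lhs, ChildPlan rhs) throws Exception {
>         ExecutorService pool = Executors.newFixedThreadPool(2);
>         try {
>             Future<Iterator<Object>> left = pool.submit(lhs::open);
>             Future<Iterator<Object>> right = pool.submit(rhs::open);
>             Iterator<Object> l = left.get();
>             if (!l.hasNext()) {
>                 right.cancel(true);   // fast empty return, skip the other init
>                 return Arrays.asList(l, Collections.<Object>emptyIterator());
>             }
>             return Arrays.asList(l, right.get());
>         } finally {
>             pool.shutdownNow();
>         }
>     }
> }
> {code}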



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5719) testIndexRebuildTask test is failing on pre-commit and master build

2020-03-31 Thread Gokcen Iskender (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5719:
-
Attachment: PHOENIX-5719.master.004.patch

> testIndexRebuildTask test is failing on pre-commit and master build
> ---
>
> Key: PHOENIX-5719
> URL: https://issues.apache.org/jira/browse/PHOENIX-5719
> Project: Phoenix
>  Issue Type: Test
>Reporter: Xinyi Yan
>Assignee: Gokcen Iskender
>Priority: Major
> Attachments: PHOENIX-5719.master.001.patch, 
> PHOENIX-5719.master.002.patch, PHOENIX-5719.master.003.patch, 
> PHOENIX-5719.master.004.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> testIndexRebuildTask has been failing for a few days on the PreCommit build, 
> as well as on the master build (first failing on Jan 31).
> [https://builds.apache.org/job/PreCommit-PHOENIX-Build/3401/testReport/]
> [https://builds.apache.org/job/PreCommit-PHOENIX-Build/3393/]
> [https://builds.apache.org/job/PreCommit-PHOENIX-Build/3400/]
> [https://builds.apache.org/view/M-R/view/Phoenix/job/Phoenix-master/2638/]
> [https://builds.apache.org/view/M-R/view/Phoenix/job/Phoenix-master/2639/testReport/]
> Can someone take a look at this flapping test? Thanks.
> [~kadir] [~gjacoby] [~swaroopa]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5813) Index read repair should not interfere with concurrent updates

2020-03-31 Thread Kadir OZDEMIR (Jira)
Kadir OZDEMIR created PHOENIX-5813:
--

 Summary: Index read repair should not interfere with concurrent 
updates 
 Key: PHOENIX-5813
 URL: https://issues.apache.org/jira/browse/PHOENIX-5813
 Project: Phoenix
  Issue Type: Bug
Affects Versions: 4.14.3, 5.0.0
Reporter: Kadir OZDEMIR


Let \{1, a, x, y} be a row in the data table. Let the first column be the only 
pk column and the second column be the only indexed column of the table, and 
finally let the fourth column be the only column covered by the index for this 
table. The corresponding row in the index table would be \{a, 1, y}. 

Now, let the same data table row be mutated and the new state of the row be 
\{1, b, x, y}. The index row \{a, 1, y} is not valid any more in the index 
table and needs to be deleted. Thus, the prepared index mutations will include 
the delete row mutation for the row key \{a, 1} and a put mutation, that is, 
put \{b, 1, y} for the new row.  

Let \{1, c, x, y} be another mutation on the same row that arrives before the 
previous mutation updates the data table. This means that the prepared index 
mutations will include the delete row mutation for the row key \{a, 1} and a 
put mutation, that is, put \{c, 1, y}. However, the last update should have 
deleted index row \{b, 1} instead of \{a, 1}. To prevent this, 
IndexRegionObserver maintains a collection of data table row keys for each 
pending data table row update in order to detect concurrent updates, and skips 
the third write phase for them. In the first update phase, index rows are made 
unverified and in the third update phase, they are verified or deleted. The 
read-repair operation on these unverified rows will lead to proper resolution 
of these concurrent updates. 

Two or more pending updates from different batches on the same data row are 
concurrent if and only if, for all of these updates, the data table row state 
has been read from HBase under a Phoenix-level row lock, and for none of them 
has the row lock been acquired a second time to update the data table. In 
other words, all of them are in the first update phase at the same time. For 
concurrent updates, the first two update phases are done but the last update 
phase is skipped. This means the data table row will be updated by these 
updates, but the corresponding index table rows will be left in the unverified 
state. The read repair process will then repair these unverified index rows 
during scans.

For the example given above, \{1, b, x, y} and \{1, c, x, y} are concurrent 
updates (on the same data table row). As explained above, the index rows 
generated for these updates should be left unverified. Now assume that a scan 
on the index table detects that index row \{b, 1, y} is unverified while the 
concurrent updates are in progress, and the index row is repaired from the 
data table. It is possible that the read repair gets the row \{1, b, x, y} from 
the data table. It will then rebuild the corresponding index row, which is 
\{b, 1, y}, and make the row verified. This rebuild may happen just after the 
row \{b, 1, y} is made unverified by the concurrent updates. This means that 
the repair will overwrite the result of the concurrent updates. 

This scan will return \{b, 1, y} to the client. The same scan may then also 
detect that \{c, 1, y} is unverified. By the time this row is repaired, the 
data table row could be \{1, c, x, y}. This means the corresponding index row 
\{c, 1, y} will be made verified by the read repair and also returned to the 
client for the same scan. However, only one of these index rows should have 
been returned to the client.
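
A toy sketch of the concurrent-update bookkeeping described above, just to 
make the mechanism concrete (String row keys stand in for the real byte[] 
keys; this is not the actual IndexRegionObserver code):

{code:java}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Count pending updates per data table row key. A row with more than one
// pending update has concurrent updates: the third (verify/delete) phase is
// skipped for it and its index rows stay unverified for read repair to resolve.
final class PendingRowTracker {
    private final ConcurrentMap<String, Integer> pending = new ConcurrentHashMap<>();

    // first update phase, under the Phoenix-level row lock
    void beginUpdate(String dataRowKey) {
        pending.merge(dataRowKey, 1, Integer::sum);
    }

    // checked before the third update phase
    boolean isConcurrent(String dataRowKey) {
        return pending.getOrDefault(dataRowKey, 0) > 1;
    }

    // called when a batch is done with the row
    void endUpdate(String dataRowKey) {
        pending.computeIfPresent(dataRowKey,
                (k, n) -> (n > 1) ? Integer.valueOf(n - 1) : null);
    }
}
{code}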



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5812) Automatically Close "Idle" Long Open Connections

2020-03-31 Thread Daniel Wong (Jira)
Daniel Wong created PHOENIX-5812:


 Summary: Automatically Close "Idle" Long Open Connections
 Key: PHOENIX-5812
 URL: https://issues.apache.org/jira/browse/PHOENIX-5812
 Project: Phoenix
  Issue Type: Improvement
Reporter: Daniel Wong


Phoenix keeps open at most a default maximum number of connections.  Badly 
performing client calls or internal errors (see PHOENIX-5802) can cause the 
number of available connections to drop to 0.  I propose a client connection 
monitor with a reaper-like task that closes idle connections.

Definition of "Idle"

A simple approach may be purely time based: if a connection has been open for a 
configurable number of minutes, simply close it.

A more sophisticated solution may keep track of the last interaction time on 
each connection (sketched below).
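
A rough sketch of the "last interaction time" variant, with placeholder names; 
how connections get registered/touched and what close() must do in Phoenix is 
left open:

{code:java}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical reaper: tracks the last interaction time of each registered
// connection and closes the ones that have been idle for too long.
final class ConnectionReaper {
    private final Map<AutoCloseable, Long> lastUsed = new ConcurrentHashMap<>();
    private final long maxIdleMillis;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    ConnectionReaper(long maxIdleMillis, long scanIntervalMillis) {
        this.maxIdleMillis = maxIdleMillis;
        scheduler.scheduleAtFixedRate(this::reap,
                scanIntervalMillis, scanIntervalMillis, TimeUnit.MILLISECONDS);
    }

    void register(AutoCloseable conn)   { lastUsed.put(conn, System.currentTimeMillis()); }
    void touch(AutoCloseable conn)      { lastUsed.replace(conn, System.currentTimeMillis()); }
    void unregister(AutoCloseable conn) { lastUsed.remove(conn); }

    private void reap() {
        long now = System.currentTimeMillis();
        lastUsed.forEach((conn, last) -> {
            if (now - last > maxIdleMillis) {
                try { conn.close(); } catch (Exception ignored) { }
                lastUsed.remove(conn);
            }
        });
    }
}
{code}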



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5719) testIndexRebuildTask test is failing on pre-commit and master build

2020-03-31 Thread Gokcen Iskender (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gokcen Iskender updated PHOENIX-5719:
-
Attachment: PHOENIX-5719.master.003.patch

> testIndexRebuildTask test is failing on pre-commit and master build
> ---
>
> Key: PHOENIX-5719
> URL: https://issues.apache.org/jira/browse/PHOENIX-5719
> Project: Phoenix
>  Issue Type: Test
>Reporter: Xinyi Yan
>Assignee: Gokcen Iskender
>Priority: Major
> Attachments: PHOENIX-5719.master.001.patch, 
> PHOENIX-5719.master.002.patch, PHOENIX-5719.master.003.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> testIndexRebuildTask has been failing for a few days on the PreCommit build, 
> as well as on the master build (first failing on Jan 31).
> [https://builds.apache.org/job/PreCommit-PHOENIX-Build/3401/testReport/]
> [https://builds.apache.org/job/PreCommit-PHOENIX-Build/3393/]
> [https://builds.apache.org/job/PreCommit-PHOENIX-Build/3400/]
> [https://builds.apache.org/view/M-R/view/Phoenix/job/Phoenix-master/2638/]
> [https://builds.apache.org/view/M-R/view/Phoenix/job/Phoenix-master/2639/testReport/]
> Can someone take a look at this flapping test? Thanks.
> [~kadir] [~gjacoby] [~swaroopa]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5780) Add mvn dependency:analyze to build process

2020-03-31 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-5780:
-
Attachment: PHOENIX-5780.master.v5.patch

> Add mvn dependency:analyze to build process
> ---
>
> Key: PHOENIX-5780
> URL: https://issues.apache.org/jira/browse/PHOENIX-5780
> Project: Phoenix
>  Issue Type: Task
>  Components: core
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Attachments: PHOENIX-5780.master.v1.patch, 
> PHOENIX-5780.master.v2.patch, PHOENIX-5780.master.v3.patch, 
> PHOENIX-5780.master.v4.patch, PHOENIX-5780.master.v5.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> mvn dependency:analyze has shown that the dependency definitions in Phoenix 
> are in bad shape.
> Include it in the build process, so that we can keep the dependency 
> definitions accurate and up to date (see the sketch below).
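> A rough sketch of how this could be wired into the build (plugin version 
> handling, failOnWarning, and the exact pom location are up to the patch):
> {code:xml}
> <plugin>
>   <groupId>org.apache.maven.plugins</groupId>
>   <artifactId>maven-dependency-plugin</artifactId>
>   <executions>
>     <execution>
>       <id>check-dependencies</id>
>       <!-- analyze-only works on the already-compiled classes, no forked build -->
>       <goals>
>         <goal>analyze-only</goal>
>       </goals>
>       <configuration>
>         <!-- fail on used-but-undeclared and declared-but-unused dependencies -->
>         <failOnWarning>true</failOnWarning>
>       </configuration>
>     </execution>
>   </executions>
> </plugin>
> {code}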



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5780) Add mvn dependency:analyze to build process

2020-03-31 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-5780:
-
Attachment: PHOENIX-5780.master.v4.patch

> Add mvn dependency:analyze to build process
> ---
>
> Key: PHOENIX-5780
> URL: https://issues.apache.org/jira/browse/PHOENIX-5780
> Project: Phoenix
>  Issue Type: Task
>  Components: core
>Reporter: Istvan Toth
>Assignee: Istvan Toth
>Priority: Major
> Attachments: PHOENIX-5780.master.v1.patch, 
> PHOENIX-5780.master.v2.patch, PHOENIX-5780.master.v3.patch, 
> PHOENIX-5780.master.v4.patch
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> mvn dependency:analyze has shown that the dependency definitions in Phoenix 
> are in bad shape.
> Include it in the build process, so that we can keep the dependency 
> definitions accurate and up to date.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5769) Phoenix precommit Flapping HadoopQA Tests in master

2020-03-31 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-5769:
-
Attachment: (was: PHOENIX-5769.master.v2.patch)

> Phoenix precommit Flapping HadoopQA Tests in master 
> 
>
> Key: PHOENIX-5769
> URL: https://issues.apache.org/jira/browse/PHOENIX-5769
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Daniel Wong
>Assignee: Istvan Toth
>Priority: Major
> Attachments: PHOENIX-5769.master.v1.patch, consoleFull (1).html, 
> consoleFull (2).html, consoleFull (3).html, consoleFull (4).html, consoleFull 
> (5).html, consoleFull (6).html, consoleFull (7).html, consoleFull (8).html, 
> consoleFull.html
>
>
> I was recently trying to commit changes to Phoenix for multiple issues and 
> was asked to get clean HadoopQA runs.  However, this took a huge effort, as I 
> had to resubmit the same patch multiple times in order to get one "clean" run.  
> Looking at the errors, the most common were 3 "Multiple regions on " failures, 
> 3 Apache infra issues (host shutdown), 1 
> org.apache.hadoop.hbase.NotServingRegionException, and 1 
> SnapshotDoesNotExistException.  See builds 
> [https://builds.apache.org/job/PreCommit-PHOENIX-Build/] from the 3540s to the 
> 3560s.  In addition, I see multiple builds running simultaneously; limiting 
> tests to running on one host at a time should be configurable, right?
> In addition, [~yanxinyi] suggested that master was less likely than 4.x to 
> have issues getting a clean run.  FYI [~ckulkarni]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5769) Phoenix precommit Flapping HadoopQA Tests in master

2020-03-31 Thread Istvan Toth (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Istvan Toth updated PHOENIX-5769:
-
Attachment: PHOENIX-5769.master.v2.patch

> Phoenix precommit Flapping HadoopQA Tests in master 
> 
>
> Key: PHOENIX-5769
> URL: https://issues.apache.org/jira/browse/PHOENIX-5769
> Project: Phoenix
>  Issue Type: Improvement
>Reporter: Daniel Wong
>Assignee: Istvan Toth
>Priority: Major
> Attachments: PHOENIX-5769.master.v1.patch, 
> PHOENIX-5769.master.v2.patch, consoleFull (1).html, consoleFull (2).html, 
> consoleFull (3).html, consoleFull (4).html, consoleFull (5).html, consoleFull 
> (6).html, consoleFull (7).html, consoleFull (8).html, consoleFull.html
>
>
> I was recently trying to commit changes to Phoenix for multiple issues and 
> was asked to get clean HadoopQA runs.  However, this took a huge effort, as I 
> had to resubmit the same patch multiple times in order to get one "clean" run.  
> Looking at the errors, the most common were 3 "Multiple regions on " failures, 
> 3 Apache infra issues (host shutdown), 1 
> org.apache.hadoop.hbase.NotServingRegionException, and 1 
> SnapshotDoesNotExistException.  See builds 
> [https://builds.apache.org/job/PreCommit-PHOENIX-Build/] from the 3540s to the 
> 3560s.  In addition, I see multiple builds running simultaneously; limiting 
> tests to running on one host at a time should be configurable, right?
> In addition, [~yanxinyi] suggested that master was less likely than 4.x to 
> have issues getting a clean run.  FYI [~ckulkarni]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5810) PhoenixMRJobSubmitter is not working on a cluster with a single yarn RM

2020-03-31 Thread Richard Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Antal reassigned PHOENIX-5810:
--

Assignee: Richard Antal

> PhoenixMRJobSubmitter is not working on a cluster with a single yarn RM
> ---
>
> Key: PHOENIX-5810
> URL: https://issues.apache.org/jira/browse/PHOENIX-5810
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Richard Antal
>Assignee: Richard Antal
>Priority: Major
>
> {code:java}
> Exception in thread "main" 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /yarn-leader-election{code}
> The error happens when we want to run scheduleIndexBuilds. In 
> getSubmittedYarnApps, getActiveResourceManagerHost uses ZooKeeper to 
> determine the active Resource Manager.
>  But /yarn-leader-election only exists if YARN is in HA mode.
> I think this function should also work when we have a single YARN RM, by 
> reading its address from the config (see the sketch below).
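> A rough sketch of the proposed fallback (method names are illustrative, not 
> the actual PhoenixMRJobSubmitter signatures):
> {code:java}
> import org.apache.hadoop.conf.Configuration;
> 
> final class RmAddressResolver {
>     // Only consult ZooKeeper's /yarn-leader-election node when YARN HA is
>     // enabled; on a single-RM cluster read the RM address from the config.
>     static String resolveRmWebAddress(Configuration conf) throws Exception {
>         boolean haEnabled = conf.getBoolean("yarn.resourcemanager.ha.enabled", false);
>         if (haEnabled) {
>             // existing behavior: find the active RM via the ZK leader election node
>             return getActiveResourceManagerHostFromZk(conf);
>         }
>         // single RM: /yarn-leader-election does not exist, use the configured address
>         return conf.get("yarn.resourcemanager.webapp.address",
>                 conf.get("yarn.resourcemanager.hostname", "localhost") + ":8088");
>     }
> 
>     private static String getActiveResourceManagerHostFromZk(Configuration conf) {
>         throw new UnsupportedOperationException("ZK lookup omitted from this sketch");
>     }
> }
> {code}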



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5810) PhoenixMRJobSubmitter is not working on a cluster with a single yarn RM

2020-03-31 Thread Richard Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Antal updated PHOENIX-5810:
---
Summary: PhoenixMRJobSubmitter is not working on a cluster with a single 
yarn RM  (was: PhoenixMRJobSubmitter is not working a cluster with a single 
yarn RM)

> PhoenixMRJobSubmitter is not working on a cluster with a single yarn RM
> ---
>
> Key: PHOENIX-5810
> URL: https://issues.apache.org/jira/browse/PHOENIX-5810
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Richard Antal
>Priority: Major
>
> {code:java}
> Exception in thread "main" 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /yarn-leader-election{code}
> The error happens when we want to run scheduleIndexBuilds. In 
> getSubmittedYarnApps, getActiveResourceManagerHost uses ZooKeeper to 
> determine the active Resource Manager.
>  But /yarn-leader-election only exists if YARN is in HA mode.
> I think this function should also work when we have a single YARN RM, by 
> reading its address from the config.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (PHOENIX-5811) Synchronise Phoenix dependencies to match Hbase dependency versions

2020-03-31 Thread Richard Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Antal reassigned PHOENIX-5811:
--

Assignee: Richard Antal

> Synchronise Phoenix dependencies to match Hbase dependency versions
> ---
>
> Key: PHOENIX-5811
> URL: https://issues.apache.org/jira/browse/PHOENIX-5811
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Richard Antal
>Assignee: Richard Antal
>Priority: Major
>
> Phoenix uses older versions of some dependencies.
> We could reduce the number of dependency versions we pull in by using the 
> same versions as HBase.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5811) Synchronise Phoenix dependencies to match Hbase dependency versions

2020-03-31 Thread Richard Antal (Jira)
Richard Antal created PHOENIX-5811:
--

 Summary: Synchronise Phoenix dependencies to match Hbase 
dependency versions
 Key: PHOENIX-5811
 URL: https://issues.apache.org/jira/browse/PHOENIX-5811
 Project: Phoenix
  Issue Type: Bug
Reporter: Richard Antal


Phoenix uses older versions of some dependencies.
We could reduce the number of dependency versions we pull in by using the same 
versions as HBase.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (PHOENIX-5810) PhoenixMRJobSubmitter is not working a cluster with a single yarn RM

2020-03-31 Thread Richard Antal (Jira)


 [ 
https://issues.apache.org/jira/browse/PHOENIX-5810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Antal updated PHOENIX-5810:
---
Description: 
{code:java}
Exception in thread "main" 
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /yarn-leader-election{code}
The error happens when we want to run scheduleIndexBuilds. In 
getSubmittedYarnApps, getActiveResourceManagerHost uses ZooKeeper to determine 
the active Resource Manager.
 But /yarn-leader-election only exists if YARN is in HA mode.

I think this function should also work when we have a single YARN RM, by 
reading its address from the config.

  was:
Exception in thread "main" 
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /yarn-leader-election
The error happens when we want to run scheduleIndexBuilds. In 
getSubmittedYarnApps, getActiveResourceManagerHost uses zookeeper to determine 
the active Resource Manager.
But /yarn-leader-election only exists if yarn is in HA mode.

I think this function should work well when we have single yarn RM and read its 
address from the config.


> PhoenixMRJobSubmitter is not working a cluster with a single yarn RM
> 
>
> Key: PHOENIX-5810
> URL: https://issues.apache.org/jira/browse/PHOENIX-5810
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Richard Antal
>Priority: Major
>
> {code:java}
> Exception in thread "main" 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /yarn-leader-election{code}
> The error happens when we want to run scheduleIndexBuilds. In 
> getSubmittedYarnApps, getActiveResourceManagerHost uses ZooKeeper to 
> determine the active Resource Manager.
>  But /yarn-leader-election only exists if YARN is in HA mode.
> I think this function should also work when we have a single YARN RM, by 
> reading its address from the config.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (PHOENIX-5810) PhoenixMRJobSubmitter is not working a cluster with a single yarn RM

2020-03-31 Thread Richard Antal (Jira)
Richard Antal created PHOENIX-5810:
--

 Summary: PhoenixMRJobSubmitter is not working a cluster with a 
single yarn RM
 Key: PHOENIX-5810
 URL: https://issues.apache.org/jira/browse/PHOENIX-5810
 Project: Phoenix
  Issue Type: Bug
Reporter: Richard Antal


Exception in thread "main" 
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /yarn-leader-election
The error happens when we want to run scheduleIndexBuilds. In 
getSubmittedYarnApps, getActiveResourceManagerHost uses ZooKeeper to determine 
the active Resource Manager.
But /yarn-leader-election only exists if YARN is in HA mode.

I think this function should also work when we have a single YARN RM, by 
reading its address from the config.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)